Sample records for area database rogad

  1. 76 FR 30997 - National Transit Database: Amendments to Urbanized Area Annual Reporting Manual

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-27

    ... Transit Database: Amendments to Urbanized Area Annual Reporting Manual AGENCY: Federal Transit Administration (FTA), DOT. ACTION: Notice of Amendments to 2011 National Transit Database Urbanized Area Annual... Administration's (FTA) 2011 National Transit Database (NTD) Urbanized Area Annual Reporting Manual (Annual Manual...

  2. 75 FR 61553 - National Transit Database: Amendments to the Urbanized Area Annual Reporting Manual and to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-05

    ... Transit Database: Amendments to the Urbanized Area Annual Reporting Manual and to the Safety and Security... the 2011 National Transit Database Urbanized Area Annual Reporting Manual and Announcement of... Transit Administration's (FTA) National Transit Database (NTD) reporting requirements, including...

  3. Providing accurate near real-time fire alerts for Protected Areas through NASA FIRMS: Opportunities and Challenges

    NASA Astrophysics Data System (ADS)

    Ilavajhala, S.; Davies, D.; Schmaltz, J. E.; Wong, M.; Murphy, K. J.

    2013-12-01

    The NASA Fire Information for Resource Management System (FIRMS) is at the forefront of providing global near real-time (NRT) MODIS thermal anomalies / hotspot location data to end-users. FIRMS serves the data via an interactive Web GIS named Web Fire Mapper, downloads of NRT active fire data, archived MODIS hotspot data downloads dating back to 1999, and a hotspot email alert system. The FIRMS Email Alerts system has been successfully alerting users of fires in their area of interest in near real-time and/or via daily and weekly email summaries, with an option to receive MODIS hotspot data as a text file (CSV) attachment. Currently, there are more than 7000 email alert subscriptions from more than 100 countries. Specifically, the email alerts system is designed to generate and send an email alert for any region or area on the globe, with a special focus on providing alerts for protected areas worldwide. For many protected areas, email alerts are particularly useful for early fire detection, monitoring ongoing fires, as well as allocating resources to protect wildlife and natural resources of particular value. For protected areas, FIRMS uses the World Database on Protected Areas (WDPA) supplied by United Nations Environment Program - World Conservation Monitoring Centre (UNEP-WCMC). Maintaining the most up-to-date, accurate boundary geometry for the protected areas for the email alerts is a challenge as the WDPA is continuously updated due to changing boundaries, merging or delisting of certain protected areas. Because of this dynamic nature of the protected areas database, the FIRMS protected areas database is frequently out-of-date with the most current version of the WDPA database. To maintain the most up-to-date boundary information for protected areas and to be in compliance with the WDPA terms and conditions, FIRMS needs to constantly update its database of protected areas. Currently, FIRMS strives to keep its database up to date by downloading the most recent WDPA database at regular intervals, processing it, and ingesting it into the FIRMS spatial database. However, due to the large size of the database, the process to download, process and ingest the database is quite time-consuming. The FIRMS team is currently working on developing a method to update the protected areas database via the web at regular intervals or on demand. Using such a solution, FIRMS will be able to access the most up-to-date extents of any protected area and the corresponding spatial geometries in real time. As such, FIRMS can utilize such a service to access the protected areas and their associated geometries to keep users' protected area boundaries in sync with those of the most recent WDPA database, and thus serve more accurate email alerts to users. Furthermore, any client accessing the WDPA protected areas database could potentially use the solution of real-time access to the protected areas database. This talk primarily focuses on the challenges for FIRMS in sending accurate email alerts for protected areas, along with the solution the FIRMS team is developing. This talk also introduces the FIRMS fire information system and its components, with a special emphasis on the FIRMS email alerts system.
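
    The abstract describes a periodic download-process-ingest cycle for keeping a local protected-areas table in sync with WDPA. The sketch below illustrates that pattern under stated assumptions: the export URL, the CSV member name, the column names, and the SQLite table names are placeholders, not FIRMS's actual implementation.

      # Sketch of the periodic refresh loop the abstract describes: download the
      # latest WDPA export, rebuild a local protected-areas table, and swap it in.
      # URL, file layout, and table names are placeholders (assumptions).
      import sqlite3
      import urllib.request
      import zipfile
      import io
      import csv

      WDPA_EXPORT_URL = "https://example.org/wdpa/latest.zip"   # placeholder
      DB_PATH = "firms_protected_areas.sqlite"

      def refresh_protected_areas():
          # 1. Download the most recent WDPA export.
          with urllib.request.urlopen(WDPA_EXPORT_URL) as resp:
              archive = zipfile.ZipFile(io.BytesIO(resp.read()))

          # 2. Parse the boundary records (assumed here to be a CSV member with
          #    an id, name, and WKT geometry column).
          rows = []
          with archive.open("wdpa_boundaries.csv") as fh:
              for rec in csv.DictReader(io.TextIOWrapper(fh, "utf-8")):
                  rows.append((rec["wdpa_id"], rec["name"], rec["geometry_wkt"]))

          # 3. Rebuild a staging table, then replace the live table so alerts
          #    always see a consistent set of boundaries.
          con = sqlite3.connect(DB_PATH)
          with con:
              con.execute("DROP TABLE IF EXISTS protected_areas_staging")
              con.execute("CREATE TABLE protected_areas_staging (wdpa_id TEXT, name TEXT, geometry_wkt TEXT)")
              con.executemany("INSERT INTO protected_areas_staging VALUES (?, ?, ?)", rows)
              con.execute("DROP TABLE IF EXISTS protected_areas")
              con.execute("ALTER TABLE protected_areas_staging RENAME TO protected_areas")
          con.close()

      if __name__ == "__main__":
          refresh_protected_areas()

    Such a job could be scheduled at the "regular intervals" the abstract mentions; the on-demand web-service variant the FIRMS team is developing would replace the bulk download step.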

  4. National Map Data Base On Landslide Prerequisites In Clay and Silt Areas - Development of Prototype

    NASA Astrophysics Data System (ADS)

    Viberg, Leif

    The Swedish Geotechnical Institute (SGI) has, in co-operation with the Swedish Geological Survey, Lantmäteriet (land surveying) and the Swedish Rescue Service, developed a theme database on landslide prerequisites in clay and silt areas. The work was carried out on commission from the Swedish government. A report with suggestions for production of the database has been delivered to the government. The database is a prototype, which has been tested in an area in northern Sweden. The recommended presentation map scale is about 1:50 000. Distribution of the database via the Internet is discussed. The aim of the database is to use it as a modern planning tool in combination with other databases, e.g. databases of flooding prognoses. The main use is expected to be in early planning stages, e.g. for new building and infrastructure development and for risk analyses. The database can also be used in more acute cases, e.g. for risk analyses and rescue operations in connection with flooding over large areas. Users are expected to be municipal and county planners and rescue services, infrastructure planners, consultants and insurance companies. The database is constructed by combining two existing databases: elevation data and soil map data. The investigation area is divided into three zones with different stability criteria: 1. Clay and silt in sloping ground or adjoining water. 2. Clay and silt in flat ground. 3. Rock and soils other than clay and silt. The geometrical and soil criteria for the zones are specified in an algorithm that sorts out the different zones, using data from the elevation and soil databases. The investigation area is divided into cells (raster format) with a 5 x 5 m side length. Different algorithms had to be developed before a reasonable calculation time was reached. The theme may be presented on screen or as a map plot. A prototype map has been produced for the test area; a description accompanies the map. It is suggested that the database be produced for landslide-prone areas in Sweden, requiring approximately 200-300 map sheets (25 x 25 km).
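
    The zoning step lends itself to a raster classification over the 5 x 5 m cells. The following is a minimal sketch of that idea only; the slope threshold and the water-adjacency test are illustrative assumptions, not the criteria actually used by SGI.

      # Combine a soil raster and an elevation raster (5 x 5 m cells) into the
      # three stability zones named in the abstract. Thresholds are assumptions.
      import numpy as np

      CELL = 5.0            # cell size in metres
      SLOPE_LIMIT = 0.10    # assumed "sloping ground" threshold (rise/run)

      def classify_zones(elevation, is_clay_or_silt, is_water):
          # Slope magnitude from central differences on the elevation grid.
          dzdy, dzdx = np.gradient(elevation, CELL)
          slope = np.hypot(dzdx, dzdy)

          # A cell "adjoins water" if any 4-neighbour is water.
          near_water = np.zeros_like(is_water)
          near_water[1:, :]  |= is_water[:-1, :]
          near_water[:-1, :] |= is_water[1:, :]
          near_water[:, 1:]  |= is_water[:, :-1]
          near_water[:, :-1] |= is_water[:, 1:]

          zones = np.full(elevation.shape, 3, dtype=np.uint8)                 # 3: rock / other soils
          zones[is_clay_or_silt] = 2                                          # 2: clay/silt, flat ground
          zones[is_clay_or_silt & ((slope > SLOPE_LIMIT) | near_water)] = 1   # 1: clay/silt, sloping or by water
          return zones

      # Example on a tiny synthetic grid.
      elev = np.outer(np.linspace(0.0, 10.0, 6), np.ones(6))
      soil = np.ones((6, 6), dtype=bool)
      water = np.zeros((6, 6), dtype=bool)
      print(classify_zones(elev, soil, water))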

  5. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  6. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Bertino, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
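
    As a toy illustration of the anomaly-detection idea the chapter surveys, one can profile each database role by the (command, table) pairs it normally issues and flag queries whose features were rarely or never seen for that role. Real systems use much richer query features and classifiers; everything below is an assumption-level sketch, not a method from the chapter.

      from collections import Counter, defaultdict

      def train_profiles(log):
          """log: iterable of (role, command, table) tuples from an audit trail."""
          profiles = defaultdict(Counter)
          for role, command, table in log:
              profiles[role][(command, table)] += 1
          return profiles

      def is_anomalous(profiles, role, command, table, min_share=0.01):
          seen = profiles.get(role)
          if not seen:
              return True                      # unknown role: flag it
          total = sum(seen.values())
          share = seen[(command, table)] / total
          return share < min_share             # rare or unseen action for this role

      audit_log = [
          ("clerk", "SELECT", "orders"), ("clerk", "SELECT", "orders"),
          ("clerk", "UPDATE", "orders"), ("admin", "SELECT", "users"),
      ]
      profiles = train_profiles(audit_log)
      print(is_anomalous(profiles, "clerk", "SELECT", "orders"))   # False (normal)
      print(is_anomalous(profiles, "clerk", "DROP", "users"))      # True  (never seen)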

  7. A Sediment Testing Reference Area Database for the San Francisco Deep Ocean Disposal Site (SF-DODS)

    EPA Pesticide Factsheets

    EPA established and maintains a SF-DODS reference area database of previously-collected sediment test data. Several sets of sediment test data have been successfully collected from the SF-DODS reference area.

  8. Geologic map and map database of parts of Marin, San Francisco, Alameda, Contra Costa, and Sonoma counties, California

    USGS Publications Warehouse

    Blake, M.C.; Jones, D.L.; Graymer, R.W.; digital database by Soule, Adam

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (mageo.txt, mageo.pdf, or mageo.ps), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  9. Scalable Indoor Localization via Mobile Crowdsourcing and Gaussian Process

    PubMed Central

    Chang, Qiang; Li, Qun; Shi, Zesen; Chen, Wei; Wang, Weiping

    2016-01-01

    Indoor localization using Received Signal Strength Indication (RSSI) fingerprinting has been extensively studied for decades. The positioning accuracy is highly dependent on the density of the signal database. In areas without calibration data, however, this algorithm breaks down. Building and updating a dense signal database is labor intensive, expensive, and even impossible in some areas. Researchers are continually searching for better algorithms to create and update dense databases more efficiently. In this paper, we propose a scalable indoor positioning algorithm that works both in surveyed and unsurveyed areas. We first propose Minimum Inverse Distance (MID) algorithm to build a virtual database with uniformly distributed virtual Reference Points (RP). The area covered by the virtual RPs can be larger than the surveyed area. A Local Gaussian Process (LGP) is then applied to estimate the virtual RPs’ RSSI values based on the crowdsourced training data. Finally, we improve the Bayesian algorithm to estimate the user’s location using the virtual database. All the parameters are optimized by simulations, and the new algorithm is tested on real-case scenarios. The results show that the new algorithm improves the accuracy by 25.5% in the surveyed area, with an average positioning error below 2.2 m for 80% of the cases. Moreover, the proposed algorithm can localize the users in the neighboring unsurveyed area. PMID:26999139
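
    The core of the approach is a virtual fingerprint database: a uniform grid of virtual reference points whose RSSI values are estimated from scattered crowdsourced samples. The sketch below only illustrates that structure, using Gaussian-kernel distance weighting as a stand-in for the paper's MID and Local Gaussian Process steps; grid spacing, margin, and length scale are assumptions.

      import numpy as np

      def build_virtual_db(sample_xy, sample_rssi, grid_step=2.0, margin=4.0, length_scale=3.0):
          # Lay a uniform grid of virtual RPs, extending slightly beyond the surveyed area.
          xs = np.arange(sample_xy[:, 0].min() - margin, sample_xy[:, 0].max() + margin, grid_step)
          ys = np.arange(sample_xy[:, 1].min() - margin, sample_xy[:, 1].max() + margin, grid_step)
          grid = np.array([(x, y) for x in xs for y in ys])

          # Gaussian-kernel weights between every virtual RP and every crowdsourced sample.
          d2 = ((grid[:, None, :] - sample_xy[None, :, :]) ** 2).sum(axis=2)
          w = np.exp(-d2 / (2.0 * length_scale ** 2))
          rssi = (w @ sample_rssi) / np.maximum(w.sum(axis=1, keepdims=True), 1e-9)
          return grid, rssi        # virtual RP coordinates and their estimated RSSI vectors

      # Crowdsourced samples: positions (m) and RSSI from two access points (dBm).
      xy = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
      rssi = np.array([[-40.0, -70.0], [-55.0, -65.0], [-52.0, -60.0], [-60.0, -48.0]])
      grid, grid_rssi = build_virtual_db(xy, rssi)
      print(grid.shape, grid_rssi.shape)

    A Bayesian or nearest-neighbour matcher would then compare a live RSSI reading against grid_rssi to estimate position, including in the margin cells outside the surveyed area.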

  10. Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints

    ERIC Educational Resources Information Center

    Philip, George C.

    2007-01-01

    This paper identifies several areas of database modeling and design that have been problematic for students and even are likely to confuse faculty. Major contributing factors are the lack of clarity and inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…

  11. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  12. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information from the bench chemist to biologist, medical practitioner to pharmaceutical scientist in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference centric or compound centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  13. Protocol for the E-Area Low Level Waste Facility Disposal Limits Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swingle, R

    2006-01-31

    A database has been developed to contain the disposal limits for the E-Area Low Level Waste Facility (ELLWF). This database originates in the form of an Excel© workbook. The pertinent sheets are translated to PDF format using Adobe Acrobat©. The PDF version of the database is accessible from the Solid Waste Division web page on SHRINE. In addition to containing the various disposal unit limits, the database also contains hyperlinks to the original references for all limits. It is anticipated that the database will be revised each time there is an addition, deletion or revision of any of the ELLWF radionuclide disposal limits.

  14. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers, etc. (any text information that can be computer stored). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size, and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships and associations. Also, the database tomography process would be a powerful component in the area of competitive intelligence, national security intelligence and patent analysis. User interests and involvement cannot be overemphasized.
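
    The two starting steps the abstract names, word frequency and word proximity, reduce to simple counting. The snippet below is only a minimal sketch of that counting; the actual database tomography algorithms build thematic maps on top of these statistics.

      import re
      from collections import Counter

      def word_frequency(text):
          words = re.findall(r"[a-z]+", text.lower())
          return Counter(words)

      def word_proximity(text, window=5):
          # Count co-occurrences of word pairs within a sliding window.
          words = re.findall(r"[a-z]+", text.lower())
          pairs = Counter()
          for i, w in enumerate(words):
              for v in words[i + 1:i + window]:
                  pairs[tuple(sorted((w, v)))] += 1
          return pairs

      doc = "Database tomography extracts themes from text. Database themes emerge from word proximity."
      print(word_frequency(doc).most_common(3))
      print(word_proximity(doc).most_common(3))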

  15. Feasibility of combining two aquatic benthic macroinvertebrate community databases for water-quality assessment

    USGS Publications Warehouse

    Lenz, Bernard N.

    1997-01-01

    An important part of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) Program is the analysis of existing data in each of the NAWQA study areas. The Wisconsin Department of Natural Resources (WDNR) has an extensive database of aquatic benthic macroinvertebrate communities in streams (benthic invertebrates), maintained by the University of Wisconsin-Stevens Point. This database has data which date back to 1984 and includes data from streams within the Western Lake Michigan Drainages (WMIC) study area (fig. 1). This report looks at the feasibility of USGS scientists supplementing the data they collect with data from the WDNR database when assessing water quality in the study area.

  16. Using Virtual Servers to Teach the Implementation of Enterprise-Level DBMSs: A Teaching Note

    ERIC Educational Resources Information Center

    Wagner, William P.; Pant, Vik

    2010-01-01

    One of the areas where demand has remained strong for MIS students is in the area of database management. Since the early days, this topic has been a mainstay in the MIS curriculum. Students of database management today typically learn about relational databases, SQL, normalization, and how to design and implement various kinds of database…

  17. Patent Databases. . .A Survey of What Is Available from DIALOG, Questel, SDC, Pergamon and INPADOC.

    ERIC Educational Resources Information Center

    Kulp, Carol S.

    1984-01-01

    Presents survey of two groups of databases covering patent literature: patent literature only and general literature that includes patents relevant to subject area of database. Description of databases and comparison tables for patent and general databases (cost, country coverage, years covered, update frequency, file size, and searchable data…

  18. DOE technology information management system database study report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  19. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  20. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

    One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  1. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

    Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers, from sources other than the Puget Sound Environmental Atlas, include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  2. Terrestrial Sediments of the Earth: Development of a Global Unconsolidated Sediments Map Database (GUM)

    NASA Astrophysics Data System (ADS)

    Börker, J.; Hartmann, J.; Amann, T.; Romero-Mujalli, G.

    2018-04-01

    Mapped unconsolidated sediments cover half of the global land surface. They are of considerable importance for many Earth surface processes like weathering, hydrological fluxes or biogeochemical cycles. Ignoring their characteristics or spatial extent may lead to misinterpretations in Earth System studies. Therefore, a new Global Unconsolidated Sediments Map database (GUM) was compiled, using regional maps specifically representing unconsolidated and quaternary sediments. The new GUM database provides insights into the regional distribution of unconsolidated sediments and their properties. The GUM comprises 911,551 polygons and describes not only sediment types and subtypes, but also parameters like grain size, mineralogy, age and thickness where available. Previous global lithological maps or databases lacked detail for reported unconsolidated sediment areas or missed large areas, and reported a global coverage of 25 to 30%, considering the ice-free land area. Here, alluvial sediments cover about 23% of the mapped total ice-free area, followed by aeolian sediments (˜21%), glacial sediments (˜20%), and colluvial sediments (˜16%). A specific focus during the creation of the database was on the distribution of loess deposits, since loess is highly reactive and relevant to understand geochemical cycles related to dust deposition and weathering processes. An additional layer compiling pyroclastic sediment is added, which merges consolidated and unconsolidated pyroclastic sediments. The compilation shows latitudinal abundances of sediment types related to climate of the past. The GUM database is available at the PANGAEA database (https://doi.org/10.1594/PANGAEA.884822).
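
    The per-type coverage figures quoted above (alluvial ~23%, aeolian ~21%, and so on) amount to area sums grouped by sediment type. The snippet below sketches that tabulation; the column names are assumptions about the GUM layout, not its documented schema.

      import pandas as pd

      def coverage_by_type(polygons: pd.DataFrame) -> pd.Series:
          """polygons: one row per GUM polygon with 'sed_type' and 'area_km2' columns (assumed names)."""
          totals = polygons.groupby("sed_type")["area_km2"].sum()
          return (totals / totals.sum()).sort_values(ascending=False)

      gum = pd.DataFrame({
          "sed_type": ["alluvial", "aeolian", "glacial", "alluvial", "colluvial"],
          "area_km2": [120.0, 90.0, 85.0, 30.0, 70.0],
      })
      print(coverage_by_type(gum))   # fraction of the mapped unconsolidated area per type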

  3. Environmental databases and other computerized information tools

    NASA Technical Reports Server (NTRS)

    Clark-Ingram, Marceia

    1995-01-01

    Increasing environmental legislation has brought about the development of many new environmental databases and software application packages to aid in the quest for environmental compliance. These databases and software packages are useful tools and applicable to a wide range of environmental areas from atmospheric modeling to materials replacement technology. The great abundance of such products and services can be very overwhelming when trying to identify the tools which best meet specific needs. This paper will discuss the types of environmental databases and software packages available. This discussion will also encompass the affected environmental areas of concern, product capabilities, and hardware requirements for product utilization.

  4. Comparison between satellite wildfire databases in Europe

    NASA Astrophysics Data System (ADS)

    Amraoui, Malik; Pereira, Mário; DaCamara, Carlos

    2013-04-01

    For Europe, several databases of wildfires based on satellite imagery are currently available and being used to conduct various studies and produce official reports. The European Forest Fire Information System (EFFIS) burned area perimeters database comprises fires with a burnt area greater than 1.0 ha that occurred in European countries during the 2000-2011 period. The MODIS Burned Area Product (MCD45A1) is a monthly global Level 3 gridded 500 m product containing per-pixel burning, quality information, and tile-level metadata. The Burned Area Product was developed by the MODIS Fire Team at the University of Maryland and is available from April 2000 onwards. Finally, for Portugal the National Forest Authority (AFN) discloses the national mapping of burned areas for the years 1990 to 2011, based on Landsat imagery, which accounts for fires larger than 5.0 ha. This study's main objectives are: (i) to provide a comprehensive description of the datasets, their limitations and potential; (ii) to present preliminary statistics on the data; and (iii) to compare the MODIS and EFFIS satellite wildfire databases across the entire European territory, based on indicators such as the spatial location of the burned areas and the extent of area burned annually, and to complement the analysis for Portugal with the inclusion of the AFN database. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).
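
    One of the indicators mentioned, annual burned area per database, reduces to a year-by-year join once each product has been aggregated to a table. The sketch below shows that final comparison step only; the figures are made-up illustrative inputs, and the real EFFIS and MCD45A1 products are polygon and raster datasets that must first be aggregated.

      import pandas as pd

      effis = pd.DataFrame({"year": [2008, 2009, 2010], "burned_ha": [152000, 87000, 133000]})
      modis = pd.DataFrame({"year": [2008, 2009, 2010], "burned_ha": [141000, 95000, 120000]})

      comparison = effis.merge(modis, on="year", suffixes=("_effis", "_modis"))
      comparison["ratio"] = comparison["burned_ha_effis"] / comparison["burned_ha_modis"]
      print(comparison)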

  5. Geology of Point Reyes National Seashore and vicinity, California: a digital database

    USGS Publications Warehouse

    Clark, Joseph C.; Brabb, Earl E.

    1997-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The report does include, however, a PostScript plot file containing an image of the geologic map sheet with explanation, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously published and unpublished data and new mapping by the authors, represents the general distribution of surficial deposits and rock units in Point Reyes and surrounding areas. Together with the accompanying text file (pr-geo.txt or pr-geo.ps), it provides current information on the stratigraphy and structural geology of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:48,000 or smaller.

  6. Potash: a global overview of evaporate-related potash resources, including spatial databases of deposits, occurrences, and permissive tracts: Chapter S in Global mineral resource assessment

    USGS Publications Warehouse

    Orris, Greta J.; Cocker, Mark D.; Dunlap, Pamela; Wynn, Jeff C.; Spanski, Gregory T.; Briggs, Deborah A.; Gass, Leila; Bliss, James D.; Bolm, Karen S.; Yang, Chao; Lipin, Bruce R.; Ludington, Stephen; Miller, Robert J.; Słowakiewicz, Mirosław

    2014-01-01

    This report describes a global, evaporite-related potash deposits and occurrences database and a potash tracts database. Chapter 1 summarizes potash resource history and use. Chapter 2 describes a global potash deposits and occurrences database, which contains more than 900 site records. Chapter 3 describes a potash tracts database, which contains 84 tracts with geology permissive for the presence of evaporite-hosted potash resources, including areas with active evaporite-related potash production, areas with known mineralization that has not been quantified or exploited, and areas with potential for undiscovered potash resources. Chapter 4 describes geographic information system (GIS) data files that include (1) potash deposits and occurrences data, (2) potash tract data, (3) reference databases for potash deposit and tract data, and (4) representative graphics of geologic features related to potash tracts and deposits. Summary descriptive models for stratabound potash-bearing salt and halokinetic potash-bearing salt are included in appendixes A and B, respectively. A glossary of salt- and potash-related terms is contained in appendix C and a list of database abbreviations is given in appendix D. Appendix E describes GIS data files, and appendix F is a guide to using the geodatabase.

  7. Understanding youthful risk taking and driving : database report

    DOT National Transportation Integrated Search

    1995-11-01

    This report catalogs national databases that contain information about adolescents and risk taking behaviors. It contains descriptions of the major areas, unique characteristics, and risk-related aspects of each database. Detailed information is prov...

  8. Understanding Youthful Risk Taking and Driving: Database Report

    DOT National Transportation Integrated Search

    1995-11-01

    This report catalogs national databases that contain information about adolescents and risk taking behaviors. It contains descriptions of the major areas, unique characteristics, and risk-related aspects of each database. Detailed information is prov...

  9. Map-Based Querying for Multimedia Database

    DTIC Science & Technology

    2014-09-01

    ... existing assets in a custom multimedia database based on an area of interest. It also describes the augmentation of an Android Tactical Assault Kit (ATAK)...

    Metu, Somiya - Computational and Information Sciences Directorate, ARL

  10. Geologic map and map database of the Palo Alto 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Brabb, E.E.; Jones, D.L.; Graymer, R.W.

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (pamf.ps, pamf.pdf, pamf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  11. Geologic map and map database of western Sonoma, northernmost Marin, and southernmost Mendocino counties, California

    USGS Publications Warehouse

    Blake, M.C.; Graymer, R.W.; Stamski, R.E.

    2002-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (wsomf.ps, wsomf.pdf, wsomf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  12. Directory of Assistive Technology: Data Sources.

    ERIC Educational Resources Information Center

    Council for Exceptional Children, Reston, VA. Center for Special Education Technology.

    The annotated directory describes in detail both on-line and print databases in the area of assistive technology for individuals with disabilities. For each database, the directory provides the name, address, and telephone number of the sponsoring organization; disability areas served; number of hardware and software products; types of information…

  13. Digital database architecture and delineation methodology for deriving drainage basins, and a comparison of digitally and non-digitally derived numeric drainage areas

    USGS Publications Warehouse

    Dupree, Jean A.; Crowfoot, Richard M.

    2012-01-01

    The drainage basin is a fundamental hydrologic entity used for studies of surface-water resources and during planning of water-related projects. Numeric drainage areas published by the U.S. Geological Survey water science centers in Annual Water Data Reports and on the National Water Information Systems (NWIS) Web site are still primarily derived from hard-copy sources and by manual delineation of polygonal basin areas on paper topographic map sheets. To expedite numeric drainage area determinations, the Colorado Water Science Center developed a digital database structure and a delineation methodology based on the hydrologic unit boundaries in the National Watershed Boundary Dataset. This report describes the digital database architecture and delineation methodology and also presents the results of a comparison of the numeric drainage areas derived using this digital methodology with those derived using traditional, non-digital methods. (Please see report for full Abstract)

  14. Information Management Tools for Classrooms: Exploring Database Management Systems. Technical Report No. 28.

    ERIC Educational Resources Information Center

    Freeman, Carla; And Others

    In order to understand how the database software or online database functioned in the overall curricula, the use of database management systems (DBMSs) was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…

  15. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  16. Alaska IPASS database preparation manual.

    Treesearch

    P. McHugh; D. Olson; C. Schallau

    1989-01-01

    Describes the data, their sources, and the calibration procedures used in compiling a database for the Alaska IPASS (interactive policy analysis simulation system) model. Although this manual is for Alaska, it provides generic instructions for analysts preparing databases for other geographical areas.

  17. Developing Database Files for Student Use.

    ERIC Educational Resources Information Center

    Warner, Michael

    1988-01-01

    Presents guidelines for creating student database files that supplement classroom teaching. Highlights include determining educational objectives, planning the database with computer specialists and subject area specialists, data entry, and creating student worksheets. Specific examples concerning elements of the periodic table and…

  18. Experimental evaluation of dynamic data allocation strategies in a distributed database with changing workloads

    NASA Technical Reports Server (NTRS)

    Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul

    1995-01-01

    Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than complicated and expensive optimization algorithms, a simple heuristic is presented and shown, via an implementation study, to improve system throughput by 30 percent in a local-area-network-based system. Based on artificial wide-area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide-area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
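
    In the spirit of the load-aware heuristic the abstract describes, a minimal sketch is given below: periodically move each partition to the site that issued most of its recent accesses, unless that site is already overloaded. The actual heuristic and thresholds in the paper may differ; this is an illustration only.

      def reallocate(placement, access_counts, site_load, load_cap=0.8):
          """
          placement:     {partition: current_site}
          access_counts: {partition: {site: accesses in the last window}}
          site_load:     {site: utilisation in [0, 1]}
          Returns an updated placement.
          """
          new_placement = dict(placement)
          for part, counts in access_counts.items():
              if not counts:
                  continue
              best_site = max(counts, key=counts.get)
              if best_site != placement[part] and site_load.get(best_site, 0.0) < load_cap:
                  new_placement[part] = best_site     # migrate toward the dominant reader
          return new_placement

      placement = {"P1": "siteA", "P2": "siteA"}
      accesses = {"P1": {"siteA": 5, "siteB": 40}, "P2": {"siteA": 30, "siteB": 2}}
      load = {"siteA": 0.3, "siteB": 0.5}
      print(reallocate(placement, accesses, load))   # P1 moves to siteB, P2 stays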

  19. Multi-window detection for P-wave in electrocardiograms based on bilateral accumulative area.

    PubMed

    Chen, Riqing; Huang, Yingsong; Wu, Jian

    2016-11-01

    P-wave detection is one of the most challenging aspects in electrocardiograms (ECGs) due to its low amplitude, low frequency, and variable waveforms. This work introduces a novel multi-window detection method for P-wave delineation based on the bilateral accumulative area. The bilateral accumulative area is calculated by summing the areas covered by the P-wave curve with left and right sliding windows. The onset and offset of a positive P-wave correspond to the local maxima of the area detector. The position drift and difference in area variation of local extreme points with different windows are used to systematically combine multi-window and 12-lead synchronous detection methods, which are used to screen the optimization boundary points from all extreme points of different window widths and adaptively match the P-wave location. The proposed method was validated with ECG signals from various databases, including the Standard CSE Database, T-Wave Alternans Challenge Database, PTB Diagnostic ECG Database, and the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database. The average sensitivity Se was 99.44% with a positive predictivity P+ of 99.37% for P-wave detection. Standard deviations of 3.7 and 4.3 ms were achieved for the onset and offset of P-waves, respectively, which is in agreement with the accepted tolerances required by the CSE committee. Compared with well-known delineation methods, this method can achieve high sensitivity and positive predictability using a simple calculation process. The experiment results suggest that the bilateral accumulative area could be an effective detection tool for ECG signal analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
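
    One plausible reading of the bilateral accumulative area is sketched below with NumPy: at each sample, accumulate the signal area over a left and a right window and use their imbalance as a boundary detector. The exact detector definition and the multi-window, 12-lead combination rule belong to the paper; this only shows the sliding-window area computation on a synthetic bump.

      import numpy as np

      def bilateral_area(signal, window, fs=500.0):
          dt = 1.0 / fs
          kernel = np.ones(window)
          full = np.convolve(signal, kernel, mode="full")
          # Area under the curve in the 'window' samples ending at / starting from each sample.
          left  = full[:len(signal)] * dt
          right = full[window - 1:window - 1 + len(signal)] * dt
          return np.abs(right - left)      # large where the local area balance changes

      # Synthetic P-like bump on a flat baseline.
      t = np.arange(0, 0.4, 1 / 500.0)
      sig = np.exp(-((t - 0.2) ** 2) / (2 * 0.02 ** 2))
      detector = bilateral_area(sig, window=25)
      print(int(np.argmax(detector)))       # index near a boundary (onset/offset) of the bump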

  20. Geologic Surface Effects of Underground Nuclear Testing, Buckboard Mesa, Climax Stock, Dome Mountain, Frenchman Flat, Rainier/Aqueduct Mesa, and Shoshone Mountain, Nevada Test Site, Nevada

    USGS Publications Warehouse

    Grasso, Dennis N.

    2003-01-01

    Surface effects maps were produced for 72 of 89 underground detonations conducted at the Frenchman Flat, Rainier Mesa and Aqueduct Mesa, Climax Stock, Shoshone Mountain, Buckboard Mesa, and Dome Mountain testing areas of the Nevada Test Site between August 10, 1957 (Saturn detonation, Area 12) and September 18, 1992 (Hunters Trophy detonation, Area 12). The 'Other Areas' Surface Effects Map Database, which was used to construct the maps shown in this report, contains digital reproductions of these original maps. The database is provided in both ArcGIS (v. 8.2) geodatabase format and ArcView (v. 3.2) shapefile format. This database contains sinks, cracks, faults, and other surface effects having a combined (cumulative) length of 136.38 km (84.74 mi). In GIS digital format, the user can view all surface effects maps simultaneously, select and view the surface effects of one or more sites of interest, or view specific surface effects by area or site. Three map layers comprise the database. They are: (1) the surface effects maps layer (oase_n27f), (2) the bar symbols layer (oase_bar_n27f), and (3) the ball symbols layer (oase_ball_n27f). Additionally, an annotation layer, named 'Ball_and_Bar_Labels,' and a polygon features layer, named 'Area12_features_poly_n27f,' are contained in the geodatabase version of the database. The annotation layer automatically labels all 295 ball-and-bar symbols shown on these maps. The polygon features layer displays areas of ground disturbances, such as rock spall and disturbed ground caused by the detonations. Shapefile versions of the polygon features layer in Nevada State Plane and Universal Transverse Mercator projections, named 'area12_features_poly_n27f.shp' and 'area12_features_poly_u83m.shp,' are also provided in the archive.

  1. Geologic map database of the El Mirage Lake area, San Bernardino and Los Angeles Counties, California

    USGS Publications Warehouse

    Miller, David M.; Bedford, David R.

    2000-01-01

    This geologic map database for the El Mirage Lake area describes geologic materials for the dry lake, parts of the adjacent Shadow Mountains and Adobe Mountain, and much of the piedmont extending south from the lake upward toward the San Gabriel Mountains. This area lies within the western Mojave Desert of San Bernardino and Los Angeles Counties, southeastern California. The area is traversed by a few paved highways that service the community of El Mirage, and by numerous dirt roads that lead to outlying properties. An off-highway vehicle area established by the Bureau of Land Management encompasses the dry lake and much of the land north and east of the lake. The physiography of the area consists of the dry lake, flanking mud and sand flats and alluvial piedmonts, and a few sharp craggy mountains. This digital geologic map database, intended for use at 1:24,000-scale, describes and portrays the rock units and surficial deposits of the El Mirage Lake area. The map database was prepared to aid in a water-resource assessment of the area by providing surface geologic information with which deeper groundwater-bearing units may be understood. The area mapped covers the Shadow Mountains SE and parts of the Shadow Mountains, Adobe Mountain, and El Mirage 7.5-minute quadrangles. The map includes detailed geology of surface and bedrock deposits, which represent a significant update from previous bedrock geologic maps by Dibblee (1960) and Troxel and Gunderson (1970), and the surficial geologic map of Ponti and Burke (1980); it incorporates a fringe of the detailed bedrock mapping in the Shadow Mountains by Martin (1992). The map data were assembled as a digital database using ARC/INFO to enable wider applications than traditional paper-product geologic maps and to provide for efficient meshing with other digital databases prepared by the U.S. Geological Survey's Southern California Areal Mapping Project.

  2. Northern Forest Futures reporting tools and database guide

    Treesearch

    Patrick D. Miles; Robert J. Huggett; W. Keith Moser

    2015-01-01

    The Northern Forest Futures database (NFFDB) supports the reporting of both current and projected future forest conditions for the 20 states that make up the U.S. North, an area bounded by Maine, Maryland, Missouri, and Minnesota. The NFFDB database and attendant reporting tools are available to the public as a Microsoft Access™ database. The...

  3. Algorithm to calculate proportional area transformation factors for digital geographic databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, R.

    1983-01-01

    A computer technique is described for determining proportionate-area factors used to transform thematic data between large geographic areal databases. The number of calculations in the algorithm increases linearly with the number of segments in the polygonal definitions of the databases, and increases with the square root of the total number of chains. Experience is presented in calculating transformation factors for two national databases, the USGS Water Cataloging Unit outlines and DOT county boundaries, which consist of 2100 and 3100 polygons respectively. The technique facilitates using thematic data defined on various natural bases (watersheds, landcover units, etc.) in analyses involving economic and other administrative bases (states, counties, etc.), and vice versa.
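
    For orientation, a proportionate-area factor is simply (area of the intersection of a source and a target polygon) / (area of the source polygon), so source-based thematic data can be re-apportioned onto the target units. The sketch below computes such factors by brute-force polygon intersection with shapely; the published algorithm instead works on the chain/segment topology of the databases to avoid the quadratic pairing, and the watershed/county boxes here are purely illustrative.

      from shapely.geometry import box

      def transformation_factors(source_polys, target_polys):
          """Both arguments: dicts of id -> shapely polygon. Returns
          {(source_id, target_id): fraction of the source area inside the target}."""
          factors = {}
          for sid, s in source_polys.items():
              for tid, t in target_polys.items():
                  inter = s.intersection(t).area
                  if inter > 0.0:
                      factors[(sid, tid)] = inter / s.area
          return factors

      watersheds = {"W1": box(0, 0, 10, 10)}
      counties = {"C1": box(0, 0, 6, 10), "C2": box(6, 0, 10, 10)}
      print(transformation_factors(watersheds, counties))   # {('W1','C1'): 0.6, ('W1','C2'): 0.4}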

  4. Object-oriented parsing of biological databases with Python.

    PubMed

    Ramu, C; Gemünd, C; Gibson, T J

    2000-07-01

    While database activities in the biological area are increasing rapidly, rather little is done in the area of parsing them in a simple and object-oriented way. We present here an elegant, simple yet powerful way of parsing biological flat-file databases. We have taken EMBL, SWISS-PROT and GENBANK as examples. EMBL and SWISS-PROT do not differ much in their format structure. GENBANK has a very different format structure from EMBL and SWISS-PROT. Extracting the desired fields in an entry (for example a sub-sequence with an associated feature) for later analysis is a constant need in the biological sequence-analysis community: this is illustrated with tools to make new splice-site databases. The interface to the parser is abstract in the sense that access to all the databases is independent of their different formats, since parsing instructions are hidden.
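
    To show the flat-file idea for the EMBL/SWISS-PROT layout in miniature: lines start with a two-letter code, entries end with '//', and sequence lines follow the 'SQ' header. The parser described in the paper is far richer (feature tables, GENBANK's different layout, sub-sequence extraction); the sketch below only walks records and is not the authors' code.

      def parse_embl_like(handle):
          entry, in_seq = {}, False
          for line in handle:
              if line.startswith("//"):            # end of entry
                  yield entry
                  entry, in_seq = {}, False
              elif line.startswith("SQ"):          # sequence block starts
                  entry["sequence"] = ""
                  in_seq = True
              elif in_seq:
                  # EMBL sequence lines end with a running base count; keep letters only.
                  entry["sequence"] += "".join(tok for tok in line.split() if tok.isalpha())
              else:
                  code, _, rest = line.partition("   ")
                  entry.setdefault(code.strip(), []).append(rest.strip())

      import io
      sample = io.StringIO(
          "ID   TEST1\nDE   Example entry\nSQ   Sequence 12 BP;\n     acgtacgtacgt    12\n//\n"
      )
      for rec in parse_embl_like(sample):
          print(rec["ID"][0], rec["sequence"])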

  5. The 2018 Nucleic Acids Research database issue and the online molecular biology database collection.

    PubMed

    Rigden, Daniel J; Fernández, Xosé M

    2018-01-04

    The 2018 Nucleic Acids Research Database Issue contains 181 papers spanning molecular biology. Among them, 82 are new and 84 are updates describing resources that appeared in the Issue previously. The remaining 15 cover databases most recently published elsewhere. Databases in the area of nucleic acids include 3DIV for visualisation of data on genome 3D structure and RNArchitecture, a hierarchical classification of RNA families. Protein databases include the established SMART, ELM and MEROPS while GPCRdb and the newcomer STCRDab cover families of biomedical interest. In the area of metabolism, HMDB and Reactome both report new features while PULDB appears in NAR for the first time. This issue also contains reports on genomics resources including Ensembl, the UCSC Genome Browser and ENCODE. Update papers from the IUPHAR/BPS Guide to Pharmacology and DrugBank are highlights of the drug and drug target section while a number of proteomics databases including proteomicsDB are also covered. The entire Database Issue is freely available online on the Nucleic Acids Research website (https://academic.oup.com/nar). The NAR online Molecular Biology Database Collection has been updated, reviewing 138 entries, adding 88 new resources and eliminating 47 discontinued URLs, bringing the current total to 1737 databases. It is available at http://www.oxfordjournals.org/nar/database/c/. © The Author(s) 2018. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Mathematical Notation in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Pasterczyk, Catherine E.

    1990-01-01

    Discusses ways in which using mathematical symbols to search online bibliographic databases in scientific and technical areas can improve search results. The representations used for Greek letters, relations, binary operators, arrows, and miscellaneous special symbols in the MathSci, Inspec, Compendex, and Chemical Abstracts databases are…

  7. Seeds in Chernobyl: the database on proteome response on radioactive environment

    PubMed Central

    Klubicová, Katarína; Vesel, Martin; Rashydov, Namik M.; Hajduch, Martin

    2012-01-01

    Two serious nuclear accidents during the last quarter century (Chernobyl, 1986 and Fukushima, 2011) contaminated large agricultural areas with radioactivity. The database “Seeds in Chernobyl” (http://www.chernobylproteomics.sav.sk) contains information about the abundances of hundreds of proteins from the on-going investigation of mature and developing seeds harvested from plants grown in the radioactive Chernobyl area. This database provides a useful source of information concerning the response of the seed proteome to a permanently increased level of ionizing radiation in a user-friendly format. PMID:23087698

  8. The Status of Statewide Subscription Databases

    ERIC Educational Resources Information Center

    Krueger, Karla S.

    2012-01-01

    This qualitative content analysis presents subscription databases available to school libraries through statewide purchases. The results may help school librarians evaluate grade and subject-area coverage, make comparisons to recommended databases, and note potential suggestions for their states to include in future contracts or for local…

  9. Geologic map and map database of northeastern San Francisco Bay region, California, [including] most of Solano County and parts of Napa, Marin, Contra Costa, San Joaquin, Sacramento, Yolo, and Sonoma Counties

    USGS Publications Warehouse

    Graymer, Russell Walter; Jones, David Lawrence; Brabb, Earl E.

    2002-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (nesfmf.ps, nesfmf.pdf, nesfmf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  10. Database of Ground-Water Levels in the Vicinity of Rainier Mesa, Nevada Test Site, Nye County, Nevada, 1957-2005

    USGS Publications Warehouse

    Fenelon, Joseph M.

    2006-01-01

    More than 1,200 water-level measurements from 1957 to 2005 in the Rainier Mesa area of the Nevada Test Site were quality assured and analyzed. Water levels were measured from 50 discrete intervals within 18 boreholes and from 4 tunnel sites. An interpretive database was constructed that describes water-level conditions for each water level measured in the Rainier Mesa area. Multiple attributes were assigned to each water-level measurement in the database to describe the hydrologic conditions at the time of measurement. General quality, temporal variability, regional significance, and hydrologic conditions are attributed for each water-level measurement. The database also includes hydrograph narratives that describe the water-level history of each well.

  11. Databases in the Area of Pharmacogenetics

    PubMed Central

    Sim, Sarah C.; Altman, Russ B.; Ingelman-Sundberg, Magnus

    2012-01-01

    In the area of pharmacogenetics and personalized health care it is obvious that databases, providing important information of the occurrence and consequences of variant genes encoding drug metabolizing enzymes, drug transporters, drug targets, and other proteins of importance for drug response or toxicity, are of critical value for scientists, physicians, and industry. The primary outcome of the pharmacogenomic field is the identification of biomarkers that can predict drug toxicity and drug response, thereby individualizing and improving drug treatment of patients. The drug in question and the polymorphic gene exerting the impact are the main issues to be searched for in the databases. Here, we review the databases that provide useful information in this respect, of benefit for the development of the pharmacogenomic field. PMID:21309040

  12. Decision Support for Emergency Operations Centers

    NASA Technical Reports Server (NTRS)

    Harvey, Craig; Lawhead, Joel; Watts, Zack

    2005-01-01

    The Flood Disaster Mitigation Decision Support System (DSS) is a computerized information system that allows regional emergency-operations government officials to make decisions regarding the dispatch of resources in response to flooding. The DSS implements a real-time model of inundation utilizing recently acquired lidar elevation data as well as real-time data from flood gauges, and other instruments within and upstream of an area that is or could become flooded. The DSS information is updated as new data become available. The model generates realtime maps of flooded areas and predicts flood crests at specified locations. The inundation maps are overlaid with information on population densities, property values, hazardous materials, evacuation routes, official contact information, and other information needed for emergency response. The program maintains a database and a Web portal through which real-time data from instrumentation are gathered into the database. Also included in the database is a geographic information system, from which the program obtains the overlay data for areas of interest as needed. The portal makes some portions of the database accessible to the public. Access to other portions of the database is restricted to government officials according to various levels of authorization. The Flood Disaster Mitigation DSS has been integrated into a larger DSS named REACT (Real-time Emergency Action Coordination Tool), which also provides emergency operations managers with data for any type of impact area such as floods, fires, bomb

  13. Database Software for the 1990s.

    ERIC Educational Resources Information Center

    Beiser, Karl

    1990-01-01

    Examines trends in the design of database management systems for microcomputers and predicts developments that may occur in the next decade. Possible developments are discussed in the areas of user interfaces, database programing, library systems, the use of MARC data, CD-ROM applications, artificial intelligence features, HyperCard, and…

  14. Alternative Databases for Anthropology Searching.

    ERIC Educational Resources Information Center

    Brody, Fern; Lambert, Maureen

    1984-01-01

    Examines online search results of sample questions in several databases covering linguistics, cultural anthropology, and physical anthropology in order to determine if and where any overlap in results might occur, and which files have greatest number of relevant hits. Search results by database are given for each subject area. (EJS)

  15. HOMED-homicides eastern Denmark: an introduction to a forensic medical homicide database.

    PubMed

    Colville-Ebeling, Bonnie; Frisch, Morten; Lynnerup, Niels; Theilade, Peter

    2014-11-01

    An introduction to a forensic medical homicide database established at the Department of Forensic Medicine in Copenhagen. The database contains substantial clinical and demographic data obtained in conjunction with medico-legal autopsies of victims and forensic clinical examinations of perpetrators in homicide cases in eastern Denmark. The database contains information on all homicide cases investigated at the Department of Forensic Medicine in Copenhagen since 1971. Coverage for the catchment area of the department is assumed to be very good because of a medico-legal homicide autopsy rate close to 100%. Regional differences might exist, however, because the catchment area of the department is dominated by the city of Copenhagen. The strengths of the database include a long running time, near-complete regional coverage, and an exhaustive list of registered variables; it is useful for research purposes, although specific data limitations apply. © 2014 the Nordic Societies of Public Health.

  16. An inventory of continental U.S. terrestrial candidate ecological restoration areas based on landscape context.

    PubMed

    Wickham, James; Riitters, Kurt; Vogt, Peter; Costanza, Jennifer; Neale, Anne

    2017-11-01

    Landscape context is an important factor in restoration ecology, but the use of landscape context for site prioritization has not been fully developed. We used morphological image processing to identify candidate ecological restoration areas based on their proximity to existing natural vegetation. We identified 1,102,720 candidate ecological restoration areas across the continental United States. Candidate ecological restoration areas were concentrated in the Great Plains and eastern United States. We populated the database of candidate ecological restoration areas with 17 attributes related to site content and context, including factors such as soil fertility and roads (site content), and number and area of potentially conjoined vegetated regions (site context), to facilitate its use for site prioritization. We demonstrate the utility of the database in the state of North Carolina, U.S.A. for a restoration objective related to restoration of water quality (mandated by the U.S. Clean Water Act), wetlands, and forest. The database will be made publicly available on the U.S. Environmental Protection Agency's EnviroAtlas website (http://enviroatlas.epa.gov) for stakeholders interested in ecological restoration.

  17. An inventory of continental U.S. terrestrial candidate ecological restoration areas based on landscape context

    PubMed Central

    Wickham, James; Riitters, Kurt; Vogt, Peter; Costanza, Jennifer; Neale, Anne

    2018-01-01

    Landscape context is an important factor in restoration ecology, but the use of landscape context for site prioritization has not been fully developed. We used morphological image processing to identify candidate ecological restoration areas based on their proximity to existing natural vegetation. We identified 1,102,720 candidate ecological restoration areas across the continental United States. Candidate ecological restoration areas were concentrated in the Great Plains and eastern United States. We populated the database of candidate ecological restoration areas with 17 attributes related to site content and context, including factors such as soil fertility and roads (site content), and number and area of potentially conjoined vegetated regions (site context), to facilitate its use for site prioritization. We demonstrate the utility of the database in the state of North Carolina, U.S.A. for a restoration objective related to restoration of water quality (mandated by the U.S. Clean Water Act), wetlands, and forest. The database will be made publicly available on the U.S. Environmental Protection Agency's EnviroAtlas website (http://enviroatlas.epa.gov) for stakeholders interested in ecological restoration. PMID:29683130

  18. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  19. Evaluation of Database Coverage: A Comparison of Two Methodologies.

    ERIC Educational Resources Information Center

    Tenopir, Carol

    1982-01-01

    Describes an experiment which compared two techniques ("bibliography" and "subject profile") used for evaluating and comparing database coverage of a subject area. Differences in time, cost, and results achieved are compared by applying the techniques to the field of volcanology using two databases, Geological Reference File and GeoArchive. Twenty…

  20. The relational clinical database: a possible solution to the star wars in registry systems.

    PubMed

    Michels, D K; Zamieroski, M

    1990-12-01

    In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

  1. Computer Security Products Technology Overview

    DTIC Science & Technology

    1988-10-01

    ...this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and...provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a

  2. Subject Retrieval from Full-Text Databases in the Humanities

    ERIC Educational Resources Information Center

    East, John W.

    2007-01-01

    This paper examines the problems involved in subject retrieval from full-text databases of secondary materials in the humanities. Ten such databases were studied and their search functionality evaluated, focusing on factors such as Boolean operators, document surrogates, limiting by subject area, proximity operators, phrase searching, wildcards,…

  3. High-integrity databases for helicopter operations

    NASA Astrophysics Data System (ADS)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

    Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation time, operations in low-level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. These data have been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.

  4. Geologic map of the Grand Canyon 30' x 60' quadrangle, Coconino and Mohave Counties, northwestern Arizona

    USGS Publications Warehouse

    Billingsley, G.H.

    2000-01-01

    This digital map database, compiled from previously published and unpublished data as well as new mapping by the author, represents the general distribution of bedrock and surficial deposits in the map area. Together with the accompanying pamphlet, it provides current information on the geologic structure and stratigraphy of the Grand Canyon area. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller.

  5. NLCD - MODIS albedo data

    EPA Pesticide Factsheets

    The NLCD-MODIS land cover-albedo database integrates high-quality MODIS albedo observations with areas of homogeneous land cover from NLCD. The spatial resolution (pixel size) of the database is 480 m x 480 m, aligned to the standardized USGS Albers Equal-Area projection. The spatial extent of the database is the continental United States. This dataset is associated with the following publication: Wickham, J., C.A. Barnes, and T. Wade. Combining NLCD and MODIS to Create a Land Cover-Albedo Dataset for the Continental United States. REMOTE SENSING OF ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 170(0): 143-153, (2015).

  6. Automated identification of retinal vessels using a multiscale directional contrast quantification (MDCQ) strategy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, Yi; Zhang, Xinyuan; Wang, Ningli, E-mail: wningli@vip.163.com, E-mail: puj@upmc.edu

    2014-09-15

    Purpose: A novel algorithm is presented to automatically identify the retinal vessels depicted in color fundus photographs. Methods: The proposed algorithm quantifies the contrast of each pixel in retinal images at multiple scales and fuses the resulting contrast images in a progressive manner by leveraging their spatial difference and continuity. The multiscale strategy is to deal with the variety of retinal vessels in width, intensity, resolution, and orientation, and the progressive fusion is to combine consecutive images while avoiding a sudden fusion of image noise and/or artifacts in space. To quantitatively assess the performance of the algorithm, we tested it on three publicly available databases, namely, DRIVE, STARE, and HRF. The agreement between the computer results and the manual delineation in these databases was quantified by computing their overlap in both area and length (centerline). The measures include sensitivity, specificity, and accuracy. Results: For the DRIVE database, the sensitivities in identifying vessels in area and length were around 90% and 70%, respectively, the accuracy in pixel classification was around 99%, and the precisions in terms of both area and length were around 94%. For the STARE database, the sensitivities in identifying vessels were around 90% in area and 70% in length, and the accuracy in pixel classification was around 97%. For the HRF database, the sensitivities in identifying vessels were around 92% in area and 83% in length for the healthy subgroup, around 92% in area and 75% in length for the glaucomatous subgroup, and around 91% in area and 73% in length for the diabetic retinopathy subgroup. For all three subgroups, the accuracy was around 98%. Conclusions: The experimental results demonstrate that the developed algorithm is capable of identifying retinal vessels depicted in color fundus photographs in a relatively reliable manner.
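
    The agreement measures named in the abstract (sensitivity, specificity, accuracy) are standard pixel-overlap statistics. The sketch below shows one common way to derive them from a predicted and a manually delineated binary vessel mask; the function and array names are illustrative and not taken from the paper.

      import numpy as np

      def pixel_agreement(pred, truth):
          """Sensitivity, specificity, and accuracy from two binary vessel masks."""
          pred = np.asarray(pred, dtype=bool)
          truth = np.asarray(truth, dtype=bool)
          tp = np.sum(pred & truth)        # vessel pixels found by the algorithm
          tn = np.sum(~pred & ~truth)      # background pixels correctly left out
          fp = np.sum(pred & ~truth)
          fn = np.sum(~pred & truth)
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / truth.size
          return sensitivity, specificity, accuracy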

  7. Challenges in developing medicinal plant databases for sharing ethnopharmacological knowledge.

    PubMed

    Ningthoujam, Sanjoy Singh; Talukdar, Anupam Das; Potsangbam, Kumar Singh; Choudhury, Manabendra Dutta

    2012-05-07

    Major research contributions in ethnopharmacology have generated a vast amount of data associated with medicinal plants. Computerized databases facilitate data management and analysis, making coherent information available to researchers, planners, and other users. Web-based databases also facilitate knowledge transmission and feed the circle of information exchange between ethnopharmacological studies and the public audience. However, despite the development of many medicinal plant databases, a lack of uniformity is still discernible. This calls for defining a common standard to achieve the common objectives of ethnopharmacology. The aim of the study is to review the diversity of approaches in storing ethnopharmacological information in databases and to provide some minimal standards for these databases. A survey for articles on medicinal plant databases was carried out on the Internet using selected keywords. Grey literature and printed materials were also searched for information. The listed resources were critically analyzed for their approaches in content type, focus area, and software technology. A need for rapid incorporation of traditional knowledge by compiling primary data is apparent. While citation collection is a common approach to information compilation, it cannot fully assimilate local literature that reflects traditional knowledge. Standards for systematic evaluation and for checking the quality and authenticity of the data also need to be defined. Databases focusing on thematic areas, viz., traditional medicine systems, regional aspects, disease, and phytochemical information, are analyzed. Issues pertaining to data standards, data linking, and unique identification need to be addressed, in addition to general issues such as lack of updates and sustainability. Against this background, suggestions have been made on some minimum standards for the development of medicinal plant databases. In spite of variations in approaches, the existence of many overlapping features indicates redundancy of resources and efforts. As the development of global data in a single database may not be possible in view of culture-specific differences, efforts can be directed to specific regional areas. The existing scenario calls for a collaborative approach to defining a common standard in medicinal plant databases for knowledge sharing and scientific advancement. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Data-Based Decisions Guidelines for Teachers of Students with Severe Intellectual and Developmental Disabilities

    ERIC Educational Resources Information Center

    Jimenez, Bree A.; Mims, Pamela J.; Browder, Diane M.

    2012-01-01

    Effective practices in student data collection and implementation of data-based instructional decisions are needed for all educators, but are especially important when students have severe intellectual and developmental disabilities. Although research in the area of data-based instructional decisions for students with severe disabilities shows…

  9. 47 CFR 52.32 - Allocation of the shared costs of long-term number portability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...

  10. 47 CFR 52.32 - Allocation of the shared costs of long-term number portability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...

  11. 47 CFR 52.32 - Allocation of the shared costs of long-term number portability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...

  12. 47 CFR 52.32 - Allocation of the shared costs of long-term number portability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...

  13. 77 FR 37869 - Agency Information Collection Activities: Proposed Collection; Comment Request-National Hunger...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-25

    ...: Proposed Collection; Comment Request--National Hunger Clearinghouse Database Form AGENCY: Food and... Database Form. Form: FNS 543. OMB Number: 0584-0474. Expiration Date: 8/31/2012. Type of Request: Revision... Clearinghouse includes a database (FNS-543) of non- governmental, grassroots programs that work in the areas of...

  14. 47 CFR 52.32 - Allocation of the shared costs of long-term number portability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....21(h), of each regional database, as defined in § 52.21(1), shall recover the shared costs of long-term number portability attributable to that regional database from all telecommunications carriers providing telecommunications service in areas that regional database serves. Pursuant to its duties under...

  15. An Evaluation of Online Business Databases.

    ERIC Educational Resources Information Center

    van der Heyde, Angela J.

    The purpose of this study was to evaluate the credibility and timeliness of online business databases. The areas of evaluation were the currency, reliability, and extent of financial information in the databases. These were measured by performing an online search for financial information on five U.S. companies. The method of selection for the…

  16. The roles of nearest neighbor methods in imputing missing data in forest inventory and monitoring databases

    Treesearch

    Bianca N. I. Eskelson; Hailemariam Temesgen; Valerie Lemay; Tara M. Barrett; Nicholas L. Crookston; Andrew T. Hudak

    2009-01-01

    Almost universally, forest inventory and monitoring databases are incomplete, ranging from missing data for only a few records and a few variables, common for small land areas, to missing data for many observations and many variables, common for large land areas. For a wide variety of applications, nearest neighbor (NN) imputation methods have been developed to fill in...

  17. Computer-Assisted Promotion of Recreational Opportunities in Natural Resource Areas: A Demonstration and Case Example

    Treesearch

    Emilyn Sheffield; Leslie Furr; Charles Nelson

    1992-01-01

    Filevision IV is a multilayer imaging and database management system that combines drawing, filing, and extensive report-writing capabilities (Filevision IV, 1988). Filevision IV users access data by attaching graphics to text-oriented database records. Tourist attractions, support services, and geographic features can be located on a base map of an area or region....

  18. Geologic Map and Map Database of Eastern Sonoma and Western Napa Counties, California

    USGS Publications Warehouse

    Graymer, R.W.; Brabb, E.E.; Jones, D.L.; Barnes, J.; Nicholson, R.S.; Stamski, R.E.

    2007-01-01

    Introduction: This report contains a new 1:100,000-scale geologic map, derived from a set of geologic map databases (Arc-Info coverages) containing information at 1:62,500-scale resolution, and a new description of the geologic map units and structural relations in the map area. Prepared as part of the San Francisco Bay Region Mapping Project, the study area includes the north-central part of the San Francisco Bay region, and forms the final piece of the effort to generate new, digital geologic maps and map databases for an area which includes Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Santa Cruz, Solano, and Sonoma Counties. Geologic mapping in Lake County in the north-central part of the map extent was not within the scope of the Project. The map and map database integrate both previously published reports and new geologic mapping and field checking by the authors (see Sources of Data index map on the map sheet or the Arc-Info coverage eswn-so and the textfile eswn-so.txt). This report contains new ideas about the geologic structures in the map area, including the active San Andreas Fault system, as well as the geologic units and their relations. Together, the map (or map database) and the unit descriptions in this report describe the composition, distribution, and orientation of geologic materials and structures within the study area at regional scale. Regional geologic information is important for analysis of earthquake shaking, liquefaction susceptibility, landslide susceptibility, engineering materials properties, mineral resources and hazards, as well as groundwater resources and hazards. These data also assist in answering questions about the geologic history and development of the California Coast Ranges.

  19. The Coral Triangle Atlas: an integrated online spatial database system for improving coral reef management.

    PubMed

    Cros, Annick; Ahamad Fatan, Nurulhuda; White, Alan; Teoh, Shwu Jiau; Tan, Stanley; Handayani, Christian; Huang, Charles; Peterson, Nate; Venegas Li, Ruben; Siry, Hendra Yusran; Fitriana, Ria; Gove, Jamison; Acoba, Tomoko; Knight, Maurice; Acosta, Renerio; Andrew, Neil; Beare, Doug

    2014-01-01

    In this paper we describe the construction of an online GIS database system, hosted by WorldFish, which stores bio-physical, ecological and socio-economic data for the 'Coral Triangle Area' in South-east Asia and the Pacific. The database has been built in partnership with all six (Timor-Leste, Malaysia, Indonesia, The Philippines, Solomon Islands and Papua New Guinea) of the Coral Triangle countries, and represents a valuable source of information for natural resource managers at the regional scale. Its utility is demonstrated using biophysical data, data summarising marine habitats, and data describing the extent of marine protected areas in the region.

  20. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approaching area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approaching procedure, the flight navigation information is linked to the database. The flight approaching area vision can be dynamically displayed according to the designed flight procedure. The flight approaching area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D view of the flight destination approaching area. By using this system in the pilots' preflight preparation procedure, the aircrew can get more vivid information along the flight destination approaching area. This system can improve the aviator's self-confidence before carrying out the flight mission; accordingly, flight safety is improved. This system is also useful for validating the visual flight procedure design, and it helps in the flight procedure design.

  1. Digital geomorphological landslide hazard mapping of the Alpago area, Italy

    NASA Astrophysics Data System (ADS)

    van Westen, Cees J.; Soeters, Rob; Sijmons, Koert

    Large-scale geomorphological maps of mountainous areas are traditionally made using complex symbol-based legends. They can serve as excellent "geomorphological databases", from which an experienced geomorphologist can extract a large amount of information for hazard mapping. However, these maps are not designed to be used in combination with a GIS, due to their complex cartographic structure. In this paper, two methods are presented for digital geomorphological mapping at large scales using GIS and digital cartographic software. The methods are applied to an area with a complex geomorphological setting in the Borsoia catchment, located in the Alpago region, near Belluno in the Italian Alps. The GIS database set-up is presented with an overview of the data layers that have been generated and how they are interrelated. The GIS database was also converted into a paper map, using a digital cartographic package. The resulting large-scale geomorphological hazard map is attached. The resulting GIS database and cartographic product can be used to analyse the hazard type and hazard degree for each polygon, and to find the reasons for the hazard classification.

  2. Coal database for Cook Inlet and North Slope, Alaska

    USGS Publications Warehouse

    Stricker, Gary D.; Spear, Brianne D.; Sprowl, Jennifer M.; Dietrich, John D.; McCauley, Michael I.; Kinney, Scott A.

    2011-01-01

    This database is a compilation of published and nonconfidential unpublished coal data from Alaska. Although coal occurs in isolated areas throughout Alaska, this study includes data only from the Cook Inlet and North Slope areas. The data include entries from and interpretations of oil and gas well logs, coal-core geophysical logs (such as density, gamma, and resistivity), seismic shot hole lithology descriptions, measured coal sections, and isolated coal outcrops.

  3. The database design of LAMOST based on MYSQL/LINUX

    NASA Astrophysics Data System (ADS)

    Li, Hui-Xian; Sang, Jian; Wang, Sha; Luo, A.-Li

    2006-03-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) will be set up in the coming years. A fully automated software system for reducing and analyzing the spectra has to be developed along with the telescope. The database system is an important part of this software system. The requirements for the LAMOST database, the design of the LAMOST database system based on MYSQL/LINUX, and performance tests of this system are described in this paper.

  4. Database for the degradation risk assessment of groundwater resources (Southern Italy)

    NASA Astrophysics Data System (ADS)

    Polemio, M.; Dragone, V.; Mitolo, D.

    2003-04-01

    The characterisation of the risk of quality degradation and availability lowering of groundwater resources has been pursued for a wide coastal plain (Basilicata region, Southern Italy), an area covering 40 km along the Ionian Sea and 10 km inland. The quality degradation is due to two phenomena: pollution due to the discharge of waste water (coming from urban areas) and salt pollution, related to seawater intrusion but not exclusively. The availability lowering is due to overexploitation and also to drought effects. For this purpose, the historical data of 1,130 wells have been collected. The wells, homogeneously distributed in the area, were the source of geological, stratigraphical, hydrogeological, and geochemical data. In order to manage space-related information via a GIS, a database system has been devised to encompass all the surveyed wells and the body of information available per well. Geo-databases were designed to comprise the four types of data collected: a database including geometrical, geological and hydrogeological data on wells (WDB), a database devoted to chemical and physical data on groundwater (CDB), a database including the geotechnical parameters (GDB), and a database concerning piezometric and hydrological (rainfall, air temperature, river discharge) data (HDB). The record pertaining to each well is identified in these databases by the progressive number of the well itself. The databases are organised as follows: a) the WDB contains 1,158 records of up to 31 fields, mainly describing the geometry of the well and of the stratigraphy; b) the CDB encompasses data on the 157 wells on which the chemical and physical analyses of groundwater have been carried out (more than one record is associated with these 157 wells, due to periodic monitoring and analysis); c) the GDB covers 61 wells for which geotechnical parameters were obtained from soil samples taken at various depths; d) the HDB is designed to permit the analysis of long time series (from 1918) of piezometric data, monitored at more than 60 wells, together with temperature, rainfall, and river discharge data. Based on these geo-databases, the geostatistical processing of the data has made it possible to characterise the degradation risk of the groundwater resources of a wide coastal aquifer.
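
    The abstract states that every well's record is linked across the four geo-databases (WDB, CDB, GDB, HDB) by the well's progressive number. A minimal relational sketch of that linkage follows; the table and column names are assumptions, and sqlite3 merely stands in for the GIS-backed storage actually used.

      import sqlite3

      con = sqlite3.connect(":memory:")
      cur = con.cursor()

      # One row per well; the progressive well number links all four databases.
      cur.execute("""CREATE TABLE wdb (
          well_id INTEGER PRIMARY KEY,   -- progressive well number
          latitude REAL, longitude REAL, depth_m REAL, stratigraphy TEXT)""")

      # Repeated chemical/physical analyses per well (periodic monitoring).
      cur.execute("""CREATE TABLE cdb (
          well_id INTEGER REFERENCES wdb(well_id),
          sample_date TEXT, chloride_mg_l REAL, conductivity_us_cm REAL)""")

      # Geotechnical parameters from soil samples at various depths.
      cur.execute("""CREATE TABLE gdb (
          well_id INTEGER REFERENCES wdb(well_id),
          sample_depth_m REAL, parameter TEXT, value REAL)""")

      # Long piezometric and hydrological time series.
      cur.execute("""CREATE TABLE hdb (
          well_id INTEGER REFERENCES wdb(well_id),
          obs_date TEXT, piezometric_level_m REAL, rainfall_mm REAL)""")
      con.commit()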

  5. Draft secure medical database standard.

    PubMed

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of the health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies, and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail. The current national and international efforts in the area are studied, and an overview of the research work in the area is given. The document also presents in detail the most complete set of security guidelines, to our knowledge, for the development and operation of medical database systems.

  6. Historical hydrology and database on flood events (Apulia, southern Italy)

    NASA Astrophysics Data System (ADS)

    Lonigro, Teresa; Basso, Alessia; Gentile, Francesco; Polemio, Maurizio

    2014-05-01

    Historical data about floods represent an important tool for the comprehension of hydrological processes and the estimation of hazard scenarios, as a basis for Civil Protection purposes and for rational land use management, especially in karstic areas, where time series of river flows are not available and river drainage is rare. The research shows the importance of improving an existing flood database with a historical approach, aimed at collecting past or historical flood events in order to better assess the occurrence trend of floods, in this case for the Apulia region (southern Italy). The main source of records of flood events for Apulia was the AVI database (the acronym stands for Italian damaged areas), an existing Italian database that collects data concerning damaging floods from 1918 to 1996. The database was expanded by consulting newspapers, publications, and technical reports from 1996 to 2006. In order to extend the temporal range further, data were collected by searching the archives of regional libraries. About 700 useful news items from 17 different local newspapers were found from 1876 to 1951. From a critical analysis of the 700 news items collected from 1876 to 1952, only 437 were useful for the implementation of the Apulia database. The screening of these news items showed the occurrence of about 122 flood events in the entire region. The district of Bari, the main regional town, represents the area in which the greatest number of events occurred; the historical analysis confirms this area as flood-prone. There is an overlapping period (from 1918 to 1952) between the old AVI database and the new historical dataset obtained from newspapers. With regard to this period, the historical research has highlighted new flood events not reported in the existing AVI database and has also allowed more details to be added to the events already recorded. This study shows that the database is a dynamic instrument, which allows a continuous implementation of data, even in real time. More details on previous results of this research activity were recently published (Polemio, 2010; Basso et al., 2012; Lonigro et al., 2013). References: Basso A., Lonigro T. and Polemio M. (2012) "The improvement of historical database on damaging hydrogeological events in the case of Apulia (Southern Italy)". Rendiconti online della Società Geologica Italiana, 21: 379-380; Lonigro T., Basso A. and Polemio M. (2013) "Historical database on damaging hydrogeological events in Apulia region (Southern Italy)". Rendiconti online della Società Geologica Italiana, 24: 196-198; Polemio M. (2010) "Historical floods and a recent extreme rainfall event in the Murgia karstic environment (Southern Italy)". Zeitschrift für Geomorphologie, 54(2): 195-219.

  7. Database assessment of CMIP5 and hydrological models to determine flood risk areas

    NASA Astrophysics Data System (ADS)

    Limlahapun, Ponthip; Fukui, Hiromichi

    2016-11-01

    Water-related disasters may not be solved with a single scientific method. Based on this premise, we combined logical concepts, the passing of sequential results among models, and database applications in an attempt to analyse historical and future scenarios in the context of flooding. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5) to derive precipitation; (2) the Integrated Flood Analysis System (IFAS) to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model to generate inundated areas. This research focused on integrating data regardless of system-design complexity; database approaches are flexible, manageable, and well supported for transferring data between systems, which makes them suitable for monitoring a flood. The resulting flood map, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.
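
    The chain described above (CMIP5 precipitation feeding IFAS discharge, which in turn feeds HEC inundation mapping) relies on a database to pass intermediate results between stages. The sketch below illustrates only that hand-off pattern; the table layout, stage names, and numeric values are invented for illustration, and sqlite3 stands in for whatever database the authors used.

      import sqlite3, json

      def store_stage(con, stage, payload):
          """Persist one modelling stage's output so the next model can pick it up."""
          con.execute("CREATE TABLE IF NOT EXISTS stages (name TEXT PRIMARY KEY, data TEXT)")
          con.execute("INSERT OR REPLACE INTO stages VALUES (?, ?)", (stage, json.dumps(payload)))
          con.commit()

      def load_stage(con, stage):
          row = con.execute("SELECT data FROM stages WHERE name = ?", (stage,)).fetchone()
          return json.loads(row[0])

      con = sqlite3.connect(":memory:")
      store_stage(con, "precipitation_mm", [12.0, 48.5, 95.2])    # from CMIP5 (illustrative)
      store_stage(con, "discharge_m3s", [150.0, 620.0, 1450.0])   # from IFAS (illustrative)
      print(load_stage(con, "discharge_m3s"))                     # input to the HEC model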

  8. A computer program for calculation of doses and prices of injectable medications based on body weight or body surface area

    PubMed Central

    2004-01-01

    A computer program (CalcAnesth) was developed with Visual Basic for the purpose of calculating the doses and prices of injectable medications on the basis of body weight or body surface area. The drug names, concentrations, and prices are loaded from a drug database. This database is a simple text file that the user can easily create or modify. The animal names and body weights can be loaded from a similar database. After typing the dose and the units into the user interface, the results are automatically displayed. The program is able to open and save anesthetic protocols, and to export or print the results. This CalcAnesth program can be useful in clinical veterinary anesthesiology and research. The rationale for dosing on the basis of body surface area is also discussed in this article. PMID:14979437
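
    The abstract describes reading drug concentrations and prices from a plain text-file database and computing the dose, injection volume, and cost from body weight. A minimal sketch of that weight-based calculation follows; the file layout, drug figures, and dose rate are illustrative assumptions, not values taken from CalcAnesth (which was written in Visual Basic).

      def parse_drug_db(text):
          """Each line: name;concentration_mg_per_ml;price_per_ml  (assumed layout)."""
          drugs = {}
          for line in text.strip().splitlines():
              name, conc, price = line.split(";")
              drugs[name] = {"mg_per_ml": float(conc), "price_per_ml": float(price)}
          return drugs

      def injectable_dose(drug, body_weight_kg, dose_mg_per_kg):
          """Volume to draw up and its cost for a weight-based dose."""
          dose_mg = dose_mg_per_kg * body_weight_kg
          volume_ml = dose_mg / drug["mg_per_ml"]
          return dose_mg, volume_ml, volume_ml * drug["price_per_ml"]

      db = parse_drug_db("ketamine;100;0.50\npropofol;10;0.30")   # made-up values
      print(injectable_dose(db["ketamine"], body_weight_kg=25, dose_mg_per_kg=5))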

  9. Consulting report on the NASA technology utilization network system

    NASA Technical Reports Server (NTRS)

    Hlava, Marjorie M. K.

    1992-01-01

    The purposes of this consulting effort are: (1) to evaluate the existing management and production procedures and workflow as they each relate to the successful development, utilization, and implementation of the NASA Technology Utilization Network System (TUNS) database; (2) to identify, as requested by the NASA Project Monitor, the strengths, weaknesses, areas of bottlenecking, and previously unaddressed problem areas affecting TUNS; (3) to recommend changes or modifications of existing procedures as necessary in order to effect corrections for the overall benefit of NASA TUNS database production, implementation, and utilization; and (4) to recommend the addition of alternative procedures, routines, and activities that will consolidate and facilitate the production, implementation, and utilization of the NASA TUNS database.

  10. Biological Databases for Behavioral Neurobiology

    PubMed Central

    Baker, Erich J.

    2014-01-01

    Databases are, at their core, abstractions of data and their intentionally derived relationships. They serve as a central organizing metaphor and repository, supporting or augmenting nearly all bioinformatics. Behavioral domains provide a unique stage for contemporary databases, as research in this area spans diverse data types, locations, and data relationships. This chapter provides foundational information on the diversity and prevalence of databases and on how data structures support the various needs of behavioral neuroscience analysis and interpretation. The focus is on the classes of databases, data curation, and advanced applications in bioinformatics, using examples largely drawn from research efforts in behavioral neuroscience. PMID:23195119

  11. ALARA database value in future outage work planning and dose management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.W.; Green, W.H.

    1995-03-01

    An ALARA database encompassing job-specific duration and man-rem plant-specific information over three refueling outages represents an invaluable tool for the outage work planner and ALARA engineer. This paper describes dose-management trends emerging from analysis of three refueling outages at Clinton Power Station. The conclusion reached is that hard data available from a relational database dose-tracking system is a valuable tool for planning of future outage work. The system's ability to identify key problem areas during a refueling outage is improving as more outage comparative data becomes available. Trends over a three-outage period are identified in this paper in the categories of number and type of radiation work permits implemented, duration of jobs, projected vs. actual dose rates in work areas, and accuracy of outage person-rem projection. The value of the database in projecting 1- and 5-year station person-rem estimates is discussed.

  12. Comprehensive national database of tree effects on air quality and human health in the United States.

    PubMed

    Hirabayashi, Satoshi; Nowak, David J

    2016-08-01

    Trees remove air pollutants through dry deposition processes depending upon forest structure, meteorology, and air quality that vary across space and time. Employing nationally available forest, weather, air pollution and human population data for 2010, computer simulations were performed for deciduous and evergreen trees with varying leaf area index for rural and urban areas in every county in the conterminous United States. The results populated a national database of annual air pollutant removal, concentration changes, and reductions in adverse health incidences and costs for NO2, O3, PM2.5 and SO2. The developed database enabled a first order approximation of air quality and associated human health benefits provided by trees with any forest configurations anywhere in the conterminous United States over time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A Computational Chemistry Database for Semiconductor Processing

    NASA Technical Reports Server (NTRS)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes did not come with a reliable database of chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database and instead leave the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: 1. Thermal CVD reaction mechanisms and rate constants. 2. Thermochemical properties. 3. Transport properties. 4. Electron-molecule collision cross sections. 5. Gas-surface interactions.

  14. [Databases for surgical specialists in Cancún, Quintana Roo].

    PubMed

    Contla-Hosking, Jorge Eduardo; Ceballos-Martínez, Zoila Inés; Peralta-Bahena, Mónica Esther

    2004-01-01

    Our aim was to identify the level of knowledge of surgical health-area specialists in Cancún, Quintana Roo, Mexico, regarding personal productivity databases. We carried out an investigation of 37 surgical specialists: 24 belonged to the Mexican Social Security Institute (IMSS), while 13 belonged to the Mexican Health Secretariat (SSA). We found that 61% of surgical health-area specialist physicians were familiar with some aspects of the institutional surgical registry, including the following: 54% knew of the existence of a personal registry of surgeries carried out, and 43% kept a record of their personal activities. Of the latter, 69% mentioned keeping their records manually, while 44% used a computer. The results of the research suggest that these physicians would like to have some kind of record of the surgeries carried out by each of them. An important percentage of these specialists do not keep a personal record in a database; because of this, institutional records contain incorrect information about what is actually done. We consider it important to inform surgical specialists about the existence of personal institutional records in database form, or even of records kept manually, as well as about the correct terminology of the International Classification of Diseases (CIE-9 & 10). We also note the need to encourage a culture of records and databases during the formative stage of specialist surgeons.

  15. CropEx Web-Based Agricultural Monitoring and Decision Support

    NASA Technical Reports Server (NTRS)

    Harvey, Craig; Lawhead, Joel

    2011-01-01

    CropEx is a Web-based agricultural Decision Support System (DSS) that monitors changes in crop health over time. It is designed to be used by a wide range of both public and private organizations, including individual producers and regional government offices with a vested interest in tracking vegetation health. The database and data management system automatically retrieve and ingest data for the area of interest. Another database stores results of the processing and supports the DSS. The processing engine allows server-side analysis of imagery with support for image sub-setting and a set of core raster operations for image classification, creation of vegetation indices, and change detection. The system includes the Web-based (CropEx) interface, data ingestion system, server-side processing engine, and a database processing engine. It contains a Web-based interface that has multi-tiered security profiles for multiple users. The interface provides the ability to identify areas of interest to specific users, user profiles, and methods of processing and data types for selected or created areas of interest. A compilation of programs is used to ingest available data into the system, classify that data, profile that data for quality, and make data available for the processing engine immediately upon the data's availability to the system (near real time). The processing engine consists of methods and algorithms used to process the data in a real-time fashion without copying, storing, or moving the raw data. The engine makes results available to the database processing engine for storage and further manipulation. The database processing engine ingests data from the image processing engine, distills those results into numerical indices, and stores each index for an area of interest. This process happens each time new data is ingested and processed for the area of interest, and upon subsequent database entries, the database processing engine qualifies each value for each area of interest and conducts a logical processing of results indicating when and where thresholds are exceeded. Reports are provided at regular, operator-determined intervals that include variances from thresholds and links to view raw data for verification, if necessary. The technology and method of development allow the code base to be easily modified for varied use in real-time and near-real-time processing environments. In addition, the final product will be demonstrated as a means for rapid draft assessment of imagery.
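
    The database processing engine described above distills imagery into numerical indices per area of interest and flags when a new value departs from prior values by more than a threshold. The sketch below shows one plausible form of that qualification step; the threshold logic, field names, and index values are assumptions for illustration only.

      def qualify_indices(history, new_value, drop_threshold=0.15):
          """Flag an area of interest when the latest vegetation index falls more than
          drop_threshold below its running mean (threshold logic is an assumption)."""
          if not history:
              return {"flag": False, "variance": 0.0, "baseline": None}
          baseline = sum(history) / len(history)
          variance = baseline - new_value
          return {"flag": variance > drop_threshold, "variance": variance,
                  "baseline": baseline}

      # Example: index values for one area of interest over successive acquisitions.
      print(qualify_indices([0.62, 0.60, 0.63], new_value=0.41))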

  16. A case study for a digital seabed database: Bohai Sea engineering geology database

    NASA Astrophysics Data System (ADS)

    Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang

    2006-07-01

    This paper discusses the design plan of the ORACLE-based Bohai Sea engineering geology database structure in terms of requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis, and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database, and used the powerful data management functions that the object-oriented and relational database ORACLE provides to organize and manage the storage space and improve its security performance. By this means, the database can provide rapid and highly effective performance in data storage, maintenance, and query to satisfy the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.

  17. 47 CFR 15.707 - Permissible channels of operation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of each such area as set forth in § 15.712(d). These channels will be listed in the TV bands database... on available channels as determined by the TV bands database and in accordance with the interference...

  18. Transport and Environment Database System (TRENDS): Maritime air pollutant emission modelling

    NASA Astrophysics Data System (ADS)

    Georgakaki, Aliki; Coffey, Robert A.; Lock, Graham; Sorenson, Spencer C.

    This paper reports the development of the maritime module within the framework of the Transport and Environment Database System (TRENDS) project. A detailed database has been constructed for the calculation of energy consumption and air pollutant emissions. Based on an in-house database of commercial vessels kept at the Technical University of Denmark, relationships between the fuel consumption and size of different vessels have been developed, taking into account the fleet's age and service speed. The technical assumptions and factors incorporated in the database are presented, including changes from findings reported in Methodologies for Estimating air pollutant Emissions from Transport (MEET). The database operates on statistical data provided by Eurostat, which describe vessel and freight movements from and towards the EU 15 major ports. Data are at port to Maritime Coastal Area (MCA) level, so a bottom-up approach is used. A port-to-MCA distance database has also been constructed for the purpose of the study. This was the first attempt to use Eurostat maritime statistics for emission modelling, and the problems encountered are discussed, since the statistical data collection was not undertaken with this purpose in mind. Examples of the results obtained with the database are presented. These include detailed air pollutant emission calculations for bulk carriers entering the port of Helsinki, as an example of the database operation, and aggregate results for different types of movements for France. Overall estimates of SOx and NOx emissions caused by shipping traffic between the EU 15 countries are in the area of 1 and 1.5 million tonnes, respectively.
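
    The bottom-up calculation outlined in the module links a vessel's service speed and fuel consumption to the distance of each port-to-MCA leg and then applies per-pollutant emission factors. A minimal illustration follows; the speed, fuel rate, and emission factors are placeholder numbers, not the MEET/TRENDS values.

      def voyage_emissions(distance_nm, service_speed_kn, fuel_t_per_day, ef_kg_per_t):
          """Bottom-up estimate for one port-to-MCA leg: time at sea -> fuel burnt ->
          pollutant mass. All numeric inputs are illustrative assumptions."""
          days_at_sea = distance_nm / (service_speed_kn * 24.0)
          fuel_t = days_at_sea * fuel_t_per_day
          return {p: fuel_t * f for p, f in ef_kg_per_t.items()}   # kg per pollutant

      print(voyage_emissions(480, 14.5, 30.0, {"NOx": 87.0, "SOx": 54.0}))  # assumed factors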

  19. Integration of NASA/GSFC and USGS Rock Magnetic Databases.

    NASA Astrophysics Data System (ADS)

    Nazarova, K. A.; Glen, J. M.

    2004-05-01

    A global Magnetic Petrology Database (MPDB) was developed and continues to be updated at NASA/Goddard Space Flight Center. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for a more realistic interpretation of satellite (as well as aeromagnetic and ground) lithospheric magnetic anomalies. The MPDB contains data on rocks from localities around the world (about 19,000 samples), including the Ukrainian and Baltic Shields, Kamchatka, Iceland, the Ural Mountains, etc. The MPDB is designed, managed and presented on the web as a research-oriented database. Several database applications have been specifically developed for data manipulation and analysis of the MPDB. The geophysics unit at the USGS in Menlo Park has over 17,000 rock-property records, largely from sites within the western U.S. This database contains rock-density and rock-magnetic parameters collected for use in gravity and magnetic field modeling, and paleomagnetic studies. Most of these data were taken from surface outcrops and together they span a broad range of rock types. Measurements were made either in-situ at the outcrop, or in the laboratory on hand samples and paleomagnetic cores acquired in the field. The USGS and NASA/GSFC data will be integrated as part of an effort to provide public access to a single, uniformly maintained database. Due to the large number of data and the very large area sampled, the database can yield rock-property statistics on a broad range of rock types; it is thus applicable to study areas beyond the geographic scope of the database. The intent of this effort is to provide incentive for others to further contribute to the database, and a tool with which the geophysical community can entertain studies formerly precluded.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Haeryong; Lee, Eunyong; Jeong, YiYeong

    Korea Radioactive-waste Management Corporation (KRMC), established in 2009, has started a new project to collect information on the long-term stability of deep geological environments on the Korean Peninsula. The information has been built up in an integrated natural barrier database system available on the web (www.deepgeodisposal.kr). The database system also includes socially and economically important information, such as land use, mining areas, natural conservation areas, population density, and industrial complexes, because some of this information is used as exclusionary criteria during the site selection process for a deep geological repository for the safe and secure containment and isolation of spent nuclear fuel and other long-lived radioactive waste in Korea. Although the official site selection process has not yet been started in Korea, it is believed that the current integrated natural barrier database system and socio-economic database will be effectively utilized to narrow down the number of sites where future investigation is most promising in the site selection process for a deep geological repository, and to enhance public acceptance by providing readily available relevant scientific information on deep geological environments in Korea. (authors)

  1. Burn Injury Assessment Tool with Morphable 3D Human Body Models

    DTIC Science & Technology

    2017-04-21

    waist, arms and legs measurements) as stored in most anthropometry databases. To improve on burn area estimations, the burn tool will allow the user to...different algorithm for morphing that relies on searching of an extensive anthropometric database, which is created from thousands of randomly...interpolation methods are required. Develop Patient Database: Patient data entered (name, gender, age, anthropometric measurements), collected (photographic

  2. Field validation of secondary data sources: a novel measure of representativity applied to a Canadian food outlet database.

    PubMed

    Clary, Christelle M; Kestens, Yan

    2013-06-19

    Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a "fair" representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at CT level as ((TPs +|FPs-FNs|)/(TPs+FNs)), with TPs meaning true positives, and |FPs-FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%). Relaxed measures of sensitivity and PPV were respectively 65.5% and 77.3%. The representativity of the EPOI database was 77.7%. The novel measure of representativity might serve as an alternative to traditional validity measures, and could be more appropriate in certain situations, depending on the nature and scale of the research question.
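
    For concreteness, the representativity expression quoted in the abstract, (TPs + |FPs - FNs|) / (TPs + FNs), can be evaluated per outlet category and census tract as in the sketch below; the counts used in the example are invented.

      def representativity(tp, fp, fn):
          """Representativity as written in the abstract: (TP + |FP - FN|) / (TP + FN)."""
          return (tp + abs(fp - fn)) / (tp + fn)

      # Invented example: 40 true positives, 8 false positives, 10 false negatives
      # for one outlet category in one census tract.
      print(round(representativity(tp=40, fp=8, fn=10), 3))   # 0.84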

  3. Determining Faculty Staffing Using Lotus 1-2-3.

    ERIC Educational Resources Information Center

    Ebner, Stanley G.

    1987-01-01

    Discusses how to manipulate a database to create a spreadsheet which can be used to help decide which teaching areas are understaffed and by how much. Focuses on the use of the Lotus 1-2-3 database statistical functions. (TW)

  4. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL

    EPA Science Inventory

    Current mesoscale weather prediction and microscale dispersion models are limited in their ability to perform accurate assessments in urban areas. A project called the National Urban Database with Access Portal Tool (NUDAPT) is beginning to provide urban data and improve the para...

  5. Children's growth: a health indicator and a diagnostic tool.

    PubMed

    Gelander, Lars

    2006-05-01

    The publication of Werner and Bodin in Acta Paediatrica should inspire countries to use the growth of children as an indicator of health. The development of databases that cover all measurements of all children that have contact with healthcare and medical care will provide new knowledge in this area. Such databases will give us the opportunity to explore health in different areas of the country and to evaluate community projects in order to prevent obesity. Growth charts that are used to identify sick children or children that have other causes for growth disturbances must reflect how a healthy child should grow. If such prescriptive growth charts are computerized together with regional databases, they will provide necessary growth data for descriptive health surveys.

  6. [Construction and application of special analysis database of geoherbs based on 3S technology].

    PubMed

    Guo, Lan-ping; Huang, Lu-qi; Lv, Dong-mei; Shao, Ai-juan; Wang, Jian

    2007-09-01

    In this paper, the structure, data sources, and data codes of "the spatial analysis database of geoherbs" based on 3S technology are introduced, and the essential functions of the database, such as data management, remote sensing, spatial interpolation, spatial statistics, spatial analysis and development, are described. Finally, two examples of database usage are given: one is the classification and calculation of the NDVI index from a remote sensing image in the geoherbal area of Atractylodes lancea; the other is an adaptation analysis of A. lancea. These indicate that "the spatial analysis database of geoherbs" has bright prospects for the spatial analysis of geoherbs.

  7. NuSTAR on-ground calibration: II. Effective area

    NASA Astrophysics Data System (ADS)

    Brejnholt, Nicolai F.; Christensen, Finn E.; Westergaard, Niels J.; Hailey, Charles J.; Koglin, Jason E.; Craig, William W.

    2012-09-01

    The Nuclear Spectroscopic Telescope ARray (NuSTAR) was launched in June 2012 carrying the first focusing hard X-ray (5-80 keV) optics to orbit. The multilayer coating was carried out at the Technical University of Denmark (DTU Space). In this article we introduce the NuSTAR multilayer reference database and its implementation in the NuSTAR optic response model. The database and its implementation are validated using on-ground effective area calibration data and used to estimate in-orbit performance.

  8. GIS for the Gulf: A reference database for hurricane-affected areas: Chapter 4C in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Greenlee, Dave

    2007-01-01

    A week after Hurricane Katrina made landfall in Louisiana, a collaboration among multiple organizations began building a database called the Geographic Information System for the Gulf, shortened to "GIS for the Gulf," to support the geospatial data needs of people in the hurricane-affected area. Data were gathered from diverse sources and entered into a consistent and standardized data model in a manner that is Web accessible.

  9. Research information needs on terrestrial vertebrate species of the interior Columbia basin and northern portions of the Klamath and Great Basins: a research, development, and application database.

    Treesearch

    Bruce G. Marcot

    1997-01-01

    Research information needs on selected invertebrates and all vertebrates of the interior Columbia River basin and adjacent areas in the United States were collected into a research, development, and application database as part of the Interior Columbia Basin Ecosystem Management Project. The database includes 482 potential research study topics on 232 individual...

  10. Integrated Database And Knowledge Base For Genomic Prospective Cohort Study In Tohoku Medical Megabank Toward Personalized Prevention And Medicine.

    PubMed

    Ogishima, Soichi; Takai, Takako; Shimokawa, Kazuro; Nagaie, Satoshi; Tanaka, Hiroshi; Nakaya, Jun

    2015-01-01

    The Tohoku Medical Megabank project is a national project for the revitalization of the disaster area in the Tohoku region affected by the Great East Japan Earthquake, and has conducted a large-scale prospective genome-cohort study. Along with the prospective genome-cohort study, we have developed an integrated database and knowledge base which will be a key database for realizing personalized prevention and medicine.

  11. Surviving the Glut: The Management of Event Streams in Cyberphysical Systems

    NASA Astrophysics Data System (ADS)

    Buchmann, Alejandro

    Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects imply collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de

  12. Bibliographic Databases Outside of the United States.

    ERIC Educational Resources Information Center

    McGinn, Thomas P.; And Others

    1988-01-01

    Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…

  13. CATS 1990 household travel survey : technical documentation for the household, person and trip files

    DOT National Transportation Integrated Search

    1994-04-01

    This report contains the database documentation and data dictionary for the Chicago Area Transportation Study's 1990 Household Travel Survey. The database documentation can be found on pages 1 through 25, followed by the data dictionary. Any que...

  14. Difficulties and challenges associated with literature searches in operating room management, complete with recommendations.

    PubMed

    Wachtel, Ruth E; Dexter, Franklin

    2013-12-01

    The purpose of this article is to teach operating room managers, financial analysts, and those with a limited knowledge of search engines, including PubMed, how to locate articles they need in the areas of operating room and anesthesia group management. Many physicians are unaware of current literature in their field and evidence-based practices. The most common source of information is colleagues. Many people making management decisions do not read published scientific articles. Databases such as PubMed are available to search for such articles. Other databases, such as citation indices and Google Scholar, can be used to uncover additional articles. Nevertheless, most people who do not know how to use these databases are reluctant to utilize help resources when they do not know how to accomplish a task. Most people are especially reluctant to use on-line help files. Help files and search databases are often difficult to use because they have been designed for users already familiar with the field. The help files and databases have specialized vocabularies unique to the application. MeSH terms in PubMed are not useful alternatives for operating room management, an important limitation, because MeSH is the default when search terms are entered in PubMed. Librarians or those trained in informatics can be valuable assets for searching unusual databases, but they must possess the domain knowledge relative to the subject they are searching. The search methods we review are especially important when the subject area (e.g., anesthesia group management) is so specific that only 1 or 2 articles address the topic of interest. The materials are presented broadly enough that the reader can extrapolate the findings to other areas of clinical and management issues in anesthesiology.

  15. PACSY, a relational database management system for protein structure and chemical shift analysis.

    PubMed

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
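
    As a rough illustration of how such a relational system might be queried from a client program, the sketch below joins two PACSY-style tables on a shared key identification number; the table and column names are invented for illustration and are not taken from the actual PACSY schema (documented at http://pacsy.nmrfam.wisc.edu), and SQLite is used here only to keep the example self-contained.

        # Illustrative only: joins two hypothetical PACSY-style tables linked by a
        # key identification number. Table and column names are invented.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE coordinates (key_id INTEGER, atom_name TEXT,
                                      x REAL, y REAL, z REAL);
            CREATE TABLE chemical_shifts (key_id INTEGER, atom_name TEXT,
                                          shift_ppm REAL);
            INSERT INTO coordinates VALUES (1, 'CA', 12.3, 4.5, 6.7);
            INSERT INTO chemical_shifts VALUES (1, 'CA', 58.2);
        """)

        # Combine structural and chemical-shift information through the shared key.
        rows = conn.execute("""
            SELECT c.key_id, c.atom_name, c.x, c.y, c.z, s.shift_ppm
            FROM coordinates c
            JOIN chemical_shifts s
              ON s.key_id = c.key_id AND s.atom_name = c.atom_name
        """).fetchall()
        print(rows)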

  16. Data management with a landslide inventory of the Franconian Alb (Germany) using a spatial database and GIS tools

    NASA Astrophysics Data System (ADS)

    Bemm, Stefan; Sandmeier, Christine; Wilde, Martina; Jaeger, Daniel; Schwindt, Daniel; Terhorst, Birgit

    2014-05-01

    The area of the Swabian-Franconian cuesta landscape (Southern Germany) is highly prone to landslides. This was apparent in the late spring of 2013, when numerous landslides occurred as a consequence of heavy and long-lasting rainfalls. The specific climatic situation caused numerous damages with serious impact on settlements and infrastructure. Knowledge of the spatial distribution of landslides, their processes and characteristics is important to evaluate the potential risk that can arise from mass movements in those areas. In the frame of two projects, about 400 landslides were mapped and detailed data sets were compiled between 2011 and 2014 at the Franconian Alb. The studies are related to the project "Slope stability and hazard zones in the northern Bavarian cuesta" (DFG, German Research Foundation) as well as to the LfU (The Bavarian Environment Agency) within the project "Georisks and climate change - hazard indication map Jura". The central goal of the present study is to create a spatial database for landslides. The database should contain all fundamental parameters to characterize the mass movements and should provide the potential for secure data storage and data management, as well as statistical evaluations. The spatial database was created with PostgreSQL, an object-relational database management system, and PostGIS, a spatial database extender for PostgreSQL, which provides the possibility to store spatial and geographic objects and to connect to several GIS applications, like GRASS GIS, SAGA GIS, QGIS and GDAL, a geospatial library (Obe and Hsu 2011). Database access for querying, importing, and exporting spatial and non-spatial data is ensured by using GUI or non-GUI connections. The database allows the use of procedural languages for writing advanced functions in the R, Python or Perl programming languages. It is possible to work directly with the entire (spatial) data content of the database in R. The inventory of the database includes (amongst others) information on location, landslide types and causes, geomorphological positions, geometries, hazards and damages, as well as assessments related to the activity of landslides. Furthermore, spatial objects are stored which represent the components of a landslide, in particular the scarps and the accumulation areas. In addition, waterways, map sheets, contour lines, detailed infrastructure data, digital elevation models, and aspect and slope data are included. Examples of spatial queries to the database are intersections of raster and vector data for calculating slope gradients or aspects of landslide areas, the creation of multiple overlaying sections for the comparison of slopes, and distances to the infrastructure or to the next receiving drainage, as well as queries on landslide magnitudes, distribution and clustering, and potential correlations with geomorphological or geological conditions. The data management concept in this study can be implemented for any academic, public or private use, because it is independent of any obligatory licenses. The created spatial database offers a platform for interdisciplinary research and socio-economic questions, as well as for landslide susceptibility and hazard indication mapping. Obe, R.O., Hsu, L.S., 2011. PostGIS in Action. Manning Publications, Stamford, 492 pp.
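
    To make the kind of spatial query described above concrete, here is a minimal sketch that asks PostGIS for the distance from each landslide main point to the nearest piece of infrastructure; the connection parameters and the table and column names (landslide_points, infrastructure, geom) are assumptions made for illustration, not the project's actual schema.

        # Minimal sketch of a PostGIS nearest-neighbour distance query of the kind
        # described above. Connection settings and table/column names
        # (landslide_points, infrastructure, geom) are hypothetical placeholders.
        import psycopg2

        conn = psycopg2.connect(dbname="landslides", user="gis",
                                password="secret", host="localhost")
        with conn, conn.cursor() as cur:
            cur.execute("""
                SELECT l.id,
                       ST_Distance(l.geom, i.geom) AS dist_to_infrastructure_m
                FROM landslide_points AS l
                CROSS JOIN LATERAL (
                    SELECT geom FROM infrastructure
                    ORDER BY geom <-> l.geom   -- index-assisted nearest neighbour
                    LIMIT 1
                ) AS i;
            """)
            for landslide_id, distance in cur.fetchall():
                print(landslide_id, round(distance, 1), "m")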

  17. Historical reconstructions of California wildfires vary by data source

    USGS Publications Warehouse

    Syphard, Alexandra D.; Keeley, Jon E.

    2016-01-01

    Historical data are essential for understanding how fire activity responds to different drivers. It is important that the source of data is commensurate with the spatial and temporal scale of the question addressed, but fire history databases are derived from different sources with different restrictions. In California, a frequently used fire history dataset is the State of California Fire and Resource Assessment Program (FRAP) fire history database, which circumscribes fire perimeters at a relatively fine scale. It includes large fires on both state and federal lands but only covers fires that were mapped or had other spatially explicit data. A different database is the state and federal governments’ annual reports of all fires. They are more complete than the FRAP database but are only spatially explicit to the level of county (California Department of Forestry and Fire Protection – Cal Fire) or forest (United States Forest Service – USFS). We found substantial differences between the FRAP database and the annual summaries, with the largest and most consistent discrepancy being in fire frequency. The FRAP database missed the majority of fires and is thus a poor indicator of fire frequency or of ignition sources. The FRAP database is also deficient in area burned, especially before 1950. Even in contemporary records, the huge number of smaller fires not included in the FRAP database accounts for substantial cumulative differences in area burned. Wildfires in California account for nearly half of the western United States fire suppression budget. Therefore, the conclusions about data discrepancies and the implications for fire research are of broad importance.

  18. Contaminant exposure and effects--terrestrial vertebrates database: Trends and data gaps for Atlantic Coast estuaries

    USGS Publications Warehouse

    Rattner, B.A.; Pearson, J.L.; Golden, N.H.; Cohen, J.B.; Erwin, R.M.; Ottinger, M.A.

    2000-01-01

    In order to examine the condition of biota in Atlantic coast estuaries, a "Contaminant Exposure and Effects--Terrestrial Vertebrates" database (CEE-TV) has been compiled through computerized search of published literature, review of existing databases, and solicitation of unpublished reports from conservation agencies, private groups, and universities. Summary information has been entered into the database, including species, collection date (1965-present), site coordinates, estuary name, hydrologic unit catalogue code, sample matrix, contaminant concentrations, biomarker and bioindicator responses, and reference source, utilizing a 98-field character and numeric format. Currently, the CEE-TV database contains 3699 georeferenced records representing 190 vertebrate species and >145,000 individuals residing in estuaries from Maine through Florida. This relational database can be directly queried, imported into a Geographic Information System to examine spatial patterns, identify data gaps and areas of concern, generate hypotheses, and focus ecotoxicological field assessments. Information on birds made up the vast majority (83%) of the database, with only a modicum of data on amphibians. Of the >75,000 chemical compounds in commerce, only 118 commonly measured environmental contaminants were quantified in tissues of terrestrial vertebrates. There were no CEE-TV data records in 15 of the 67 estuaries located along the Atlantic coast and Florida Gulf coast. The CEE-TV database has a number of potential applications, including focusing biomonitoring efforts to generate critically needed ecotoxicological data in the numerous "gaps" along the coast, reducing uncertainty about contaminant risk, identifying areas for mitigation, restoration or special management, and ranking ecological conditions of estuaries.

  19. Regulatory administrative databases in FDA's Center for Biologics Evaluation and Research: convergence toward a unified database.

    PubMed

    Smith, Jeffrey K

    2013-04-01

    Regulatory administrative database systems within the Food and Drug Administration's (FDA) Center for Biologics Evaluation and Research (CBER) are essential to supporting its core mission, as a regulatory agency. Such systems are used within FDA to manage information and processes surrounding the processing, review, and tracking of investigational and marketed product submissions. This is an area of increasing interest in the pharmaceutical industry and has been a topic at trade association conferences (Buckley 2012). Such databases in CBER are complex, not for the type or relevance of the data to any particular scientific discipline but because of the variety of regulatory submission types and processes the systems support using the data. Commonalities among different data domains of CBER's regulatory administrative databases are discussed. These commonalities have evolved enough to constitute real database convergence and provide a valuable asset for business process intelligence. Balancing review workload across staff, exploring areas of risk in review capacity, process improvement, and presenting a clear and comprehensive landscape of review obligations are just some of the opportunities of such intelligence. This convergence has been occurring in the presence of usual forces that tend to drive information technology (IT) systems development toward separate stovepipes and data silos. CBER has achieved a significant level of convergence through a gradual process, using a clear goal, agreed upon development practices, and transparency of database objects, rather than through a single, discrete project or IT vendor solution. This approach offers a path forward for FDA systems toward a unified database.

  20. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
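
    The two dissimilarity measures named in the abstract are standard histogram comparisons; the sketch below shows how a nearest-neighbour classifier could use them. The LIAD feature extraction itself is not reproduced here, so the feature vectors are placeholder histograms.

        # Nearest-neighbour matching with the two dissimilarities named above.
        # Feature vectors are placeholder histograms, not actual LIAD descriptors.
        import numpy as np

        def histogram_intersection_dissimilarity(h1, h2):
            # 1 minus the sum of bin-wise minima (histograms assumed L1-normalised)
            return 1.0 - np.minimum(h1, h2).sum()

        def chi_square_dissimilarity(h1, h2, eps=1e-12):
            # Standard chi-square statistic between two histograms
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

        def nearest_neighbor(query, gallery, labels, dissimilarity):
            scores = [dissimilarity(query, g) for g in gallery]
            return labels[int(np.argmin(scores))]

        rng = np.random.default_rng(0)
        gallery = rng.random((5, 64))
        gallery /= gallery.sum(axis=1, keepdims=True)
        labels = ["s1", "s2", "s3", "s4", "s5"]
        query = gallery[2] + 0.01 * rng.random(64)
        query /= query.sum()

        print(nearest_neighbor(query, gallery, labels, chi_square_dissimilarity))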

  1. Heterogeneous database integration in biomedicine.

    PubMed

    Sujansky, W

    2001-08-01

    The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.

  2. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    USGS Publications Warehouse

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  3. The Free Trade Area of the Americas: Can Regional Economic Integration Lead to Greater Cooperation on Security?

    DTIC Science & Technology

    2002-12-01

    Brazilian Air Force has been testing a new surveillance system called Sistema de Vigilancia da Amazonia (SIVAM), designed to...2000 Online Database, 23 April 1998 and “Plan de seguridad para la triple frontera,” Ser en el 2000 Online Database, 01 June...Plan de seguridad para la triple frontera,” Ser en el 2000 Online Database, 01 June 1998. 64 Robert Devlin, Antoni Estevadeordal

  4. Urban roadway congestion : annual report

    DOT National Transportation Integrated Search

    1998-01-01

    The annual traffic congestion study is an effort to monitor roadway congestion in major urban areas in the United States. The comparisons to other areas and to previous experiences in each area are facilitated by a database that begins in 1982 and in...

  5. Geophysical Log Database for the Mississippi Embayment Regional Aquifer Study (MERAS)

    USGS Publications Warehouse

    Hart, Rheannon M.; Clark, Brian R.

    2008-01-01

    The Mississippi Embayment Regional Aquifer Study (MERAS) is an investigation of ground-water availability and sustainability within the Mississippi embayment as part of the U.S. Geological Survey Ground-Water Resources Program. The MERAS area consists of approximately 70,000 square miles and encompasses parts of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. More than 2,600 geophysical logs of test holes and wells within the MERAS area were compiled into a database and were used to develop a digital hydrogeologic framework from land surface to the top of the Midway Group of upper Paleocene age. The purpose of this report is to document, present, and summarize the geophysical log database, as well as to preserve the geophysical logs in a digital image format for online access.

  6. Analysis and Design of a Distributed System for Management and Distribution of Natural Language Assertions

    DTIC Science & Technology

    2010-09-01

    Front-matter excerpt from the report: the list of figures includes the SCIL architecture and assertions, and the acronym list defines LAN (Local Area Network), ODBC (Open Database Connectivity), and SCIL (Social-Cultural Content in Language).

  7. Assessment and mapping of water pollution indices in zone-III of municipal corporation of hyderabad using remote sensing and geographic information system.

    PubMed

    Asadi, S S; Vuppala, Padmaja; Reddy, M Anji

    2005-01-01

    A preliminary survey of the area under Zone-III of MCH was undertaken to assess the ground water quality, demonstrate its spatial distribution and correlate it with land use patterns using advanced techniques of remote sensing and geographic information systems (GIS). Twenty-seven ground water samples were collected and their chemical analysis was done to form the attribute database. A water quality index was calculated from the measured parameters, based on which the study area was classified into five groups with respect to the suitability of water for drinking purposes. Thematic maps, viz. base map, road network, drainage and land use/land cover, were prepared from IRS-1D PAN + LISS III merged satellite imagery, forming the spatial database. The attribute database was integrated with the spatial sampling locations map in Arc/Info, and maps showing the spatial distribution of water quality parameters were prepared in ArcView. Results indicated that high concentrations of total dissolved solids (TDS), nitrates, fluorides and total hardness were observed in a few industrial and densely populated areas, indicating deteriorated water quality, while the other areas exhibited moderate to good water quality.

  8. Measuring Quality of Healthcare Outcomes in Type 2 Diabetes from Routine Data: a Seven-nation Survey Conducted by the IMIA Primary Health Care Working Group.

    PubMed

    Hinton, W; Liyanage, H; McGovern, A; Liaw, S-T; Kuziemsky, C; Munro, N; de Lusignan, S

    2017-08-01

    Background: The Institute of Medicine framework defines six dimensions of quality for healthcare systems: (1) safety, (2) effectiveness, (3) patient centeredness, (4) timeliness of care, (5) efficiency, and (6) equity. Large health datasets provide an opportunity to assess quality in these areas. Objective: To perform an international comparison of the measurability of the delivery of these aims, in people with type 2 diabetes mellitus (T2DM), from large datasets. Method: We conducted a survey to assess healthcare outcomes data quality of existing databases and disseminated this through professional networks. We examined the data sources used to collect the data, the frequency of data uploads, and the data types used for identifying people with T2DM. We compared data completeness across the six areas of healthcare quality, using selected measures pertinent to T2DM management. Results: We received 14 responses from seven countries (Australia, Canada, Italy, the Netherlands, Norway, Portugal, Turkey and the UK). Most databases reported frequent data uploads and would be capable of near real-time analysis of healthcare quality. The majority of recorded data related to safety (particularly medication adverse events) and treatment efficacy (glycaemic control and microvascular disease). Data potentially measuring equity was less well recorded. Recording levels were lowest for patient-centred care, timeliness of care, and system efficiency, with the majority of databases containing no data in these areas. Databases using primary care sources had higher data quality across all areas measured. Conclusion: Data quality could be improved particularly in the areas of patient-centred care, timeliness, and efficiency. Primary care derived datasets may be most suited to healthcare quality assessment.

  9. Information support of monitoring of technical condition of buildings in construction risk area

    NASA Astrophysics Data System (ADS)

    Skachkova, M. E.; Lepihina, O. Y.; Ignatova, V. V.

    2018-05-01

    The paper presents the results of research devoted to the development of a model of information support for monitoring the technical condition of buildings located in a construction risk area. As a result of the visual and instrumental survey, as well as the analysis of existing approaches and techniques, attributive and cartographic databases have been created. These databases allow monitoring of defects and damage to buildings located within a 30-meter risk area from the object under construction. The classification of structures and defects of the buildings under survey is presented. The functional capabilities of the developed model and the field of its practical application are determined.

  10. Field validation of secondary data sources: a novel measure of representativity applied to a Canadian food outlet database

    PubMed Central

    2013-01-01

    Background Validation studies of secondary datasets used to characterize neighborhood food businesses generally evaluate how accurately the database represents the true situation on the ground. Depending on the research objectives, the characterization of the business environment may tolerate some inaccuracies (e.g. minor imprecisions in location or errors in business names). Furthermore, if the number of false negatives (FNs) and false positives (FPs) is balanced within a given area, one could argue that the database still provides a “fair” representation of existing resources in this area. Yet, traditional validation measures do not relax matching criteria, and treat FNs and FPs independently. Through the field validation of food businesses found in a Canadian database, this paper proposes alternative criteria for validity. Methods Field validation of the 2010 Enhanced Points of Interest (EPOI) database (DMTI Spatial®) was performed in 2011 in 12 census tracts (CTs) in Montreal, Canada. Some 410 food outlets were extracted from the database and 484 were observed in the field. First, traditional measures of sensitivity and positive predictive value (PPV) accounting for every single mismatch between the field and the database were computed. Second, relaxed measures of sensitivity and PPV that tolerate mismatches in business names or slight imprecisions in location were assessed. A novel measure of representativity that further allows for compensation between FNs and FPs within the same business category and area was proposed. Representativity was computed at CT level as ((TPs +|FPs-FNs|)/(TPs+FNs)), with TPs meaning true positives, and |FPs-FNs| being the absolute value of the difference between the number of FNs and the number of FPs within each outlet category. Results The EPOI database had a "moderate" capacity to detect an outlet present in the field (sensitivity: 54.5%) or to list only the outlets that actually existed in the field (PPV: 64.4%). Relaxed measures of sensitivity and PPV were respectively 65.5% and 77.3%. The representativity of the EPOI database was 77.7%. Conclusions The novel measure of representativity might serve as an alternative to traditional validity measures, and could be more appropriate in certain situations, depending on the nature and scale of the research question. PMID:23782570

  11. PACSY, a relational database management system for protein structure and chemical shift analysis

    PubMed Central

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  12. Database and online map service on unstable rock slopes in Norway - From data perpetuation to public information

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Nordahl, Bobo; Bunkholt, Halvor; Nicolaisen, Magnus; Jarna, Alexandra; Iversen, Sverre; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.

    2015-11-01

    The unstable rock slope database is developed and maintained by the Geological Survey of Norway as part of the systematic mapping of unstable rock slopes in Norway. This mapping aims to detect catastrophic rock slope failures before they occur. More than 250 unstable slopes with post-glacial deformation have been detected to date. The main aims of the unstable rock slope database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to serve for data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes several general attributes of the unstable rock slopes, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism and others), displacement rates, possible consequences, as well as hazard and risk classification. Feature classes and tables linked to the main feature class include different scenarios of an unstable rock slope, field observation points, sampling points for dating, displacement measurement stations, lineaments, unstable areas, run-out areas, and areas affected by secondary effects, along with tables for hazard and risk classification and URL links to further documentation and references. The database on unstable rock slopes in Norway will be publicly consultable through an online map service. Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site. Areas of possible rock avalanche run-out and their secondary effects displayed in the online map service, along with hazard and risk assessments, will become important tools for land-use planning. The present database will continue to evolve in the coming years as the systematic mapping progresses and as available techniques and tools evolve.

  13. Inferring Network Controls from Topology Using the Chomp Database

    DTIC Science & Technology

    2015-12-03

    AFRL-AFOSR-VA-TR-2016-0033: Inferring Network Controls from Topology Using the CHOMP Database. Final report by John Harer, Duke University, 12/03/2015, under grant FA9550-10-1-0436. The work lies in the area of Topological Data Analysis (TDA) and its application to dynamical systems. The role of this work in the Complex Networks program is based on

  14. Automatic pattern localization across layout database and photolithography mask

    NASA Astrophysics Data System (ADS)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, it is mandatory to create the complete metrology job quickly and reliably. Combining, on one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.

  15. Ecological Functions of Off-Channel Habitats of the Willamette River, Oregon, Database and Documentation (1997-2001)

    EPA Science Inventory

    The database from the Ecological Functions of Off-Channel Habitats of the Willamette River, Oregon project (OCH Project) contains data collected from 1997 through 2001 from multiple research areas of the project, and project documents such as the OCH Research Plan, Quality Assura...

  16. Proteomics: Protein Identification Using Online Databases

    ERIC Educational Resources Information Center

    Eurich, Chris; Fields, Peter A.; Rice, Elizabeth

    2012-01-01

    Proteomics is an emerging area of systems biology that allows simultaneous study of thousands of proteins expressed in cells, tissues, or whole organisms. We have developed this activity to enable high school or college students to explore proteomic databases using mass spectrometry data files generated from yeast proteins in a college laboratory…

  17. Intermodal Passenger Connectivity Database A measurement of connectivity in the U.S. Passenger Transportation System : [2016

    DOT National Transportation Integrated Search

    2016-10-01

    Each database record shows the modes that serve the facility, those that are nearby but not connecting, and includes facility location information. The data can be analyzed on a city, state, zip code, metropolitan area, or modal basis. Geographic coor...

  18. Windshear certification data base for forward-look detection systems

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Hinton, David A.; Proctor, Fred H.

    1994-01-01

    Described is an introduction to a comprehensive database that is to be used for certification testing of airborne forward-look windshear detection systems. The database was developed by NASA Langley Research Center, at the request of the Federal Aviation Administration (FAA), to support the industry initiative to certify and produce forward-looking windshear detection equipment. The database contains high-resolution three-dimensional fields for meteorological variables that may be sensed by forward-looking systems. The database is made up of seven case studies that are generated by the Terminal Area Simulation System, a state-of-the-art numerical system for the realistic modeling of windshear phenomena. The selected cases contained in the certification documentation represent a wide spectrum of windshear events. The database will be used with vendor-developed sensor simulation software and vendor-collected ground-clutter data to demonstrate detection performance in a variety of meteorological conditions using NASA/FAA pre-defined path scenarios for each of the certification cases. A brief outline of the contents and sample plots from the database documentation are included. These plots show fields of hazard factor, or F-factor (Bowles 1990), radar reflectivity, and velocity vectors on a horizontal plane overlaid with the applicable certification paths. In the plot of the F-factor field, regions of 0.105 and above signify hazardous, performance-decreasing windshear, while negative values indicate regions of performance-increasing windshear. The values of F-factor are based on 1-km averaged segments along horizontal flight paths, assuming an airspeed of 150 knots (approx. 75 m/s). The database has been released to vendors participating in the certification process. The database and associated document have been transferred to the FAA for archival storage and distribution.
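
    For orientation, the sketch below shows one way the 1-km averaged F-factor along a horizontal path might be computed from a wind field; the formula used, F ≈ (V/g)·dWx/dx − w/V (after Bowles 1990), and the synthetic wind arrays are assumptions made here for illustration, not details taken from the certification database.

        # Illustrative 1-km averaged F-factor along a horizontal flight path.
        # Assumes F ~= (V/g) * dWx/dx - w/V (after Bowles 1990); the wind arrays
        # and grid spacing below are synthetic placeholders, not database values.
        import numpy as np

        G = 9.81     # gravitational acceleration, m/s^2
        V = 75.0     # airspeed along the path, m/s (approx. 150 knots)
        DX = 100.0   # horizontal grid spacing along the path, m

        # Synthetic along-path horizontal wind Wx and vertical wind w (m/s).
        x = np.arange(0.0, 5000.0, DX)
        wx = -10.0 * np.exp(-((x - 2500.0) / 800.0) ** 2)  # headwind-to-tailwind shear
        w = -4.0 * np.exp(-((x - 2500.0) / 600.0) ** 2)    # downdraft core

        f_local = (V / G) * np.gradient(wx, DX) - w / V

        # Average over 1-km segments (10 grid points per segment).
        pts = int(1000.0 / DX)
        f_1km = f_local[: len(f_local) // pts * pts].reshape(-1, pts).mean(axis=1)
        print("max 1-km averaged F-factor:", round(float(f_1km.max()), 3))
        print("at or above the 0.105 hazard threshold?", bool(f_1km.max() >= 0.105))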

  19. 23 CFR 450.312 - Metropolitan planning area boundaries.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... area. (d) MPA boundaries may be established to coincide with the geography of regional economic... descriptions shall be submitted either as a geo-spatial database or described in sufficient detail to enable...

  20. 23 CFR 450.312 - Metropolitan planning area boundaries.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... area. (d) MPA boundaries may be established to coincide with the geography of regional economic... descriptions shall be submitted either as a geo-spatial database or described in sufficient detail to enable...

  1. 23 CFR 450.312 - Metropolitan planning area boundaries.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... area. (d) MPA boundaries may be established to coincide with the geography of regional economic... descriptions shall be submitted either as a geo-spatial database or described in sufficient detail to enable...

  2. Collaborative WiFi Fingerprinting Using Sensor-Based Navigation on Smartphones.

    PubMed

    Zhang, Peng; Zhao, Qile; Li, You; Niu, Xiaoji; Zhuang, Yuan; Liu, Jingnan

    2015-07-20

    This paper presents a method that trains the WiFi fingerprint database using sensor-based navigation solutions. Since micro-electromechanical systems (MEMS) sensors provide only short-term accuracy and suffer from accuracy degradation over time, we restrict the time length of available indoor navigation trajectories and conduct post-processing to improve the sensor-based navigation solution. Different middle-term navigation trajectories that move in and out of an indoor area are combined to make up the database. Furthermore, we evaluate the effect of WiFi database shifts on WiFi fingerprinting using the database generated by the proposed method. Results show that the fingerprinting errors will not increase linearly according to database (DB) errors in smartphone-based WiFi fingerprinting applications.
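
    As a rough illustration of the training idea, the sketch below tags each WiFi scan with the position interpolated from a sensor-derived trajectory and then positions a new scan by nearest-neighbour matching in RSS space; the data structures and matching rule are simplifications assumed here for illustration, not the authors' implementation.

        # Simplified illustration: build a fingerprint database by attaching
        # trajectory positions to WiFi scans, then locate a query scan by
        # nearest-neighbour matching in RSS space. All data are synthetic.
        import numpy as np

        # Sensor-derived trajectory: timestamps (s) and 2-D positions (m).
        traj_t = np.array([0.0, 5.0, 10.0, 15.0])
        traj_xy = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])

        # WiFi scans: timestamps and RSS vectors over 3 access points (dBm).
        scan_t = np.array([2.0, 7.0, 12.0])
        scan_rss = np.array([[-40.0, -70.0, -80.0],
                             [-55.0, -55.0, -75.0],
                             [-70.0, -45.0, -60.0]])

        # Fingerprint database: interpolate the trajectory at each scan time.
        db_xy = np.column_stack([np.interp(scan_t, traj_t, traj_xy[:, k])
                                 for k in range(2)])

        def locate(query_rss):
            # Nearest neighbour in RSS space (Euclidean distance).
            d = np.linalg.norm(scan_rss - query_rss, axis=1)
            return db_xy[int(np.argmin(d))]

        print(locate(np.array([-52.0, -56.0, -74.0])))  # position near 2nd scan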

  3. Spatial database for a global assessment of undiscovered copper resources: Chapter Z in Global mineral resource assessment

    USGS Publications Warehouse

    Dicken, Connie L.; Dunlap, Pamela; Parks, Heather L.; Hammarstrom, Jane M.; Zientek, Michael L.; Zientek, Michael L.; Hammarstrom, Jane M.; Johnson, Kathleen M.

    2016-07-13

    As part of the first-ever U.S. Geological Survey global assessment of undiscovered copper resources, data common to several regional spatial databases published by the U.S. Geological Survey, including one report from Finland and one from Greenland, were standardized, updated, and compiled into a global copper resource database. This integrated collection of spatial databases provides location, geologic and mineral resource data, and source references for deposits, significant prospects, and areas permissive for undiscovered deposits of both porphyry copper and sediment-hosted copper. The copper resource database allows for efficient modeling on a global scale in a geographic information system (GIS) and is provided in an Esri ArcGIS file geodatabase format.

  4. Collaborative WiFi Fingerprinting Using Sensor-Based Navigation on Smartphones

    PubMed Central

    Zhang, Peng; Zhao, Qile; Li, You; Niu, Xiaoji; Zhuang, Yuan; Liu, Jingnan

    2015-01-01

    This paper presents a method that trains the WiFi fingerprint database using sensor-based navigation solutions. Since micro-electromechanical systems (MEMS) sensors provide only short-term accuracy and suffer from accuracy degradation over time, we restrict the time length of available indoor navigation trajectories and conduct post-processing to improve the sensor-based navigation solution. Different middle-term navigation trajectories that move in and out of an indoor area are combined to make up the database. Furthermore, we evaluate the effect of WiFi database shifts on WiFi fingerprinting using the database generated by the proposed method. Results show that the fingerprinting errors will not increase linearly according to database (DB) errors in smartphone-based WiFi fingerprinting applications. PMID:26205269

  5. Database usage and performance for the Fermilab Run II experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonham, D.; Box, D.; Gallas, E.

    2004-12-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has represented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring the operation and diagnosing problems are also described.

  6. [Research on Zhejiang blood information network and management system].

    PubMed

    Yan, Li-Xing; Xu, Yan; Meng, Zhong-Hua; Kong, Chang-Hong; Wang, Jian-Min; Jin, Zhen-Liang; Wu, Shi-Ding; Chen, Chang-Shui; Luo, Ling-Fei

    2007-02-01

    This research aimed to develop the first province-level centralized blood information database and real-time communication network in China. Multiple technologies were used, including separate operation of local area network databases, a real-time data concentration and distribution mechanism, off-site (allopatric) backup, and an optical fiber virtual private network (VPN). As a result, the centralized blood information database and management system were successfully constructed, covering all of Zhejiang Province, and real-time exchange of blood data was realized. In conclusion, its implementation promotes voluntary blood donation and ensures blood safety in Zhejiang, and in particular strengthens the quick response to public health emergencies. This project lays the foundation for centralized testing and allocation among blood banks in Zhejiang, and can serve as a reference for contemporary blood bank information systems in China.

  7. The round window region and contiguous areas: endoscopic anatomy and surgical implications.

    PubMed

    Marchioni, Daniele; Alicandri-Ciufelli, Matteo; Pothier, David D; Rubini, Alessia; Presutti, Livio

    2015-05-01

    The round window region is a critical area of the middle ear; the aim of this paper is to describe its anatomy from an endoscopic perspective, emphasizing some structures, knowledge of which could have important implications during surgery, as well as to evaluate what involvement cholesteatoma may have with these structures. A retrospective review of video recordings of endoscopic ear surgeries and a retrospective database review were conducted in a tertiary university referral center. Videos from endoscopic middle ear procedures carried out between June 2010 and September 2012 and stored in a shared database were reviewed retrospectively. Surgeries in which an endoscopic magnification of the round window region and the inferior retrotympanum area was performed intraoperatively were included in the study. Involvement by cholesteatoma of those regions was also documented based on information obtained from the surgical database. The conformation of the tegmen of the round window niche may influence the surgical view of the round window membrane. A structure connecting the round window area to the petrous apex, named the subcochlear canaliculus, is described. Cholesteatoma can invade the round window area in some patients. Endoscopic approaches can guarantee a very detailed view and allow the exploration of the round window region. Exact anatomical knowledge of this region can have important advantages during surgery, since some pathology can invade cavities or tunnels not otherwise seen by instrumentation that produces a straight-line view (e.g., a microscope).

  8. MapApp: A Java(TM) Applet for Accessing Geographic Databases

    NASA Astrophysics Data System (ADS)

    Haxby, W.; Carbotte, S.; Ryan, W. B.; OHara, S.

    2001-12-01

    MapApp (http://coast.ldeo.columbia.edu/help/MapApp.html) is a prototype Java(TM) applet that is intended to give easy and versatile access to geographic data sets through a web browser. It was developed initially to interface with the RIDGE Multibeam Synthesis. Subsequently, interfaces with other geophysical databases were added. At present, multibeam bathymetry grids, underway geophysics along ship tracks, and the LDEO Borehole Research Group's ODP well logging database are accessible through MapApp. We plan to add an interface with the Ridge Petrology Database in the near future. The central component of MapApp is a world physiographic map. Users may navigate around the map (zoom/pan) without waiting for HTTP requests to a remote server to be processed. A focus request loads image tiles from the server to compose a new map at the current viewing resolution. Areas in which multibeam grids are available may be focused to a pixel resolution of about 200 m. These areas may be identified by toggling a mask. Databases may be accessed through menus, and selected data objects may be loaded into MapApp by selecting items from tables. Once loaded, a bathymetry grid may be contoured or used to create bathymetric profiles; ship tracks and ODP sites may be overlain on the map and their geophysical data plotted in X-Y graphs. The advantage of applets over traditional web pages is that they permit dynamic interaction with data sets, while limiting time consuming interaction with a remote server. Users may customize the graphics display by modifying the scale, or the symbol or line characteristics of rendered data, contour interval, etc. The ease with which users can select areas, view the physiography of areas, and preview data sets and evaluate them for quality and applicability, makes MapApp a valuable tool for education and research.

  9. Validation of the European Prototype for Integrated Care at Municipal Level in Savona: Updating and Maintenance

    DTIC Science & Technology

    2001-10-25

    within one of the programmes sponsored by the European Commission. The system mainly consists of a shared care database in which each group of...care database in which each community facility, or group of facilities, is supported by a local area network (LAN). Each of these LANs is connected over...functions. The software is layered, so that the client application is not affected by how the servers are implemented or which database system they use

  10. Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report

    DTIC Science & Technology

    2007-02-05

    Created new SQL Server database for "PC Configuration" web application. Added roles for security, closed 4235, and posted application to production. Wrote...and ran SQL Server scripts to migrate production databases to new server. Created backup jobs for new SQL Server databases. Continued...second phase of the TENA demo. Extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about

  11. The economics of medicines optimization: policy developments, remaining challenges and research priorities

    PubMed Central

    Faria, Rita; Barbieri, Marco; Light, Kate; Elliott, Rachel A.; Sculpher, Mark

    2014-01-01

    Background This review scopes the evidence on the effectiveness and cost-effectiveness of interventions to improve suboptimal use of medicines in order to determine the evidence gaps and help inform research priorities. Sources of data Systematic searches of the National Health Service (NHS) Economic Evaluation Database, the Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effects. Areas of agreement The majority of the studies evaluated interventions to improve adherence, inappropriate prescribing and prescribing errors. Areas of controversy Interventions tend to be specific to a particular stage of the pathway and/or to a particular disease and have mostly been evaluated for their effect on intermediate or process outcomes. Growing points Medicines optimization offers an opportunity to improve health outcomes and efficiency of healthcare. Areas timely for developing research The available evidence is insufficient to assess the effectiveness and cost-effectiveness of interventions to address suboptimal medicine use in the UK NHS. Decision modelling, evidence synthesis and elicitation have the potential to address the evidence gaps and help prioritize research. PMID:25190760

  12. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes the unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes the process of presenting the results of the analysis of users' information needs and the rationale for the use of classifiers.

  13. Bibliometric trend and patent analysis in nano-alloys research for period 2000-2013.

    PubMed

    Živković, Dragana; Niculović, Milica; Manasijević, Dragan; Minić, Duško; Ćosović, Vladan; Sibinović, Maja

    2015-05-04

    This paper presents an overview of the current situation in nano-alloys investigations based on bibliometric and patent analysis. Bibliometric analysis data, for the period from 2000 to September 2013, were obtained using the Scopus database as the selected index database; the analyzed parameters were: number of scientific papers per year, authors, countries, affiliations, subject areas and document types. Analysis of nano-alloys patents was done with a specific database, using the International Patent Classification and Patent Scope for the period from 2003 to 2013. Information found in this database was the number of patents, patent classification by country, patent applicants, main inventors and publication date.

  14. Bibliometric trend and patent analysis in nano-alloys research for period 2000-2013.

    PubMed

    Živković, Dragana; Niculović, Milica; Manasijević, Dragan; Minić, Duško; Ćosović, Vladan; Sibinović, Maja

    2015-01-01

    This paper presents an overview of the current situation in nano-alloys investigations based on bibliometric and patent analysis. Bibliometric analysis data, for the period 2000 to 2013, were obtained using the Scopus database as the selected index database; the analyzed parameters were: number of scientific papers per year, authors, countries, affiliations, subject areas and document types. Analysis of nano-alloys patents was done with a specific database, using the International Patent Classification and Patent Scope for the period 2003 to 2013. Information found in this database was the number of patents, patent classification by country, patent applicants, main inventors and publication date.

  15. Problem of Mistakes in Databases, Processing and Interpretation of Observations of the Sun. I.

    NASA Astrophysics Data System (ADS)

    Lozitska, N. I.

    Unnoticed mistakes and misprints can occur in observational databases at any stage of observation, preparation, and processing. Detecting such errors is complicated by the fact that the work of the observer, the database compiler, and the researcher is divided. Data acquisition from a spacecraft requires more researchers than ground-based observations do, so the probability of errors increases. Keeping track of errors at each stage is very difficult, so we use cross-comparison of data from different sources. We revealed some misprints in the typographic and digital results of sunspot group area measurements.
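
    The cross-comparison idea lends itself to a very small sketch (synthetic values, not from the paper): two independent records of sunspot group areas are compared and large relative disagreements are flagged as possible misprints. The group numbers, area values and tolerance below are invented for illustration.

        # Minimal sketch: flag records where two independent sources disagree by
        # more than a tolerance, which often indicates a misprint in one of them.
        areas_src_a = {"NOAA 9393": 2440, "NOAA 9087": 860, "NOAA 8210": 570}  # hypothetical, millionths of solar hemisphere
        areas_src_b = {"NOAA 9393": 2440, "NOAA 9087": 680, "NOAA 8210": 570}

        def suspicious(a, b, tol=0.15):
            return abs(a - b) / max(a, b) > tol

        for group in sorted(set(areas_src_a) & set(areas_src_b)):
            a, b = areas_src_a[group], areas_src_b[group]
            if suspicious(a, b):
                print(f"{group}: {a} vs {b} -- check for a misprint (e.g. transposed digits)")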

  16. An evaluation of FIA's stand age variable

    Treesearch

    John D. Shaw

    2015-01-01

    The Forest Inventory and Analysis Database (FIADB) includes a large number of measured and computed variables. The definitions of measured variables are usually well-documented in FIA field and database manuals. Some computed variables, such as live basal area of the condition, are equally straightforward. Other computed variables, such as individual tree volume,...

  17. FACILITATING ADVANCED URBAN METEOROLOGY AND AIR QUALITY MODELING CAPABILITIES WITH HIGH RESOLUTION URBAN DATABASE AND ACCESS PORTAL TOOLS

    EPA Science Inventory

    Information on urban morphological features at high resolution is needed to properly model and characterize the meteorological and air quality fields in urban areas. We describe a new project called the National Urban Database and Access Portal Tool (NUDAPT) that addresses this nee...

  18. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database (CHIDB), built with a demand-oriented database design method, which brings ground-based spectra, standardized hyperspectral cubes, and spectral analysis together to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; some data mining ideas and functions were incorporated into CHIDB to make it better suited to agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and some advanced spectral image processing techniques, such as parallel processing in C#, are used. Finally, an application case in agricultural disease detection is presented.

  19. A compilation of spatial digital databases for selected U.S. Geological Survey nonfuel mineral resource assessments for parts of Idaho and Montana

    USGS Publications Warehouse

    Carlson, Mary H.; Zientek, Michael L.; Causey, J. Douglas; Kayser, Helen Z.; Spanski, Gregory T.; Wilson, Anna B.; Van Gosen, Bradley S.; Trautwein, Charles M.

    2007-01-01

    This report compiles selected results from 13 U.S. Geological Survey (USGS) mineral resource assessment studies conducted in Idaho and Montana into consistent spatial databases that can be used in a geographic information system. The 183 spatial databases represent areas of mineral potential delineated in these studies and include attributes on mineral deposit type, level of mineral potential, certainty, and a reference. The assessments were conducted for five 1° x 2° quadrangles (Butte, Challis, Choteau, Dillon, and Wallace), several U.S. Forest Service (USFS) National Forests (including Challis, Custer, Gallatin, Helena, and Payette), and one Bureau of Land Management (BLM) Resource Area (Dillon). The data contained in the spatial databases are based on published information: no new interpretations are made. This digital compilation is part of an ongoing effort to provide mineral resource information formatted for use in spatial analysis. In particular, this is one of several reports prepared to address USFS needs for science information as forest management plans are revised in the Northern Rocky Mountains.

  20. New Zealand's National Landslide Database

    NASA Astrophysics Data System (ADS)

    Rosser, B.; Dellow, S.; Haubrook, S.; Glassey, P.

    2016-12-01

    Since 1780, landslides have caused an average of about 3 deaths a year in New Zealand and have cost the economy an average of at least NZ$250M/a (0.1% GDP). To understand the risk posed by landslide hazards to society, a thorough knowledge of where, when and why different types of landslides occur is vital. The main objective for establishing the database was to provide a centralised national-scale, publicly available database to collate landslide information that could be used for landslide hazard and risk assessment. Design of a national landslide database for New Zealand required consideration of both existing landslide data stored in a variety of digital formats, and future data, yet to be collected. Pre-existing databases were developed and populated with data reflecting the needs of the landslide or hazard project, and the database structures of the time. Bringing these data into a single unified database required a new structure capable of storing and delivering data at a variety of scales and accuracy and with different attributes. A "unified data model" was developed to enable the database to hold old and new landslide data irrespective of scale and method of capture. The database contains information on landslide locations and where available: 1) the timing of landslides and the events that may have triggered them; 2) the type of landslide movement; 3) the volume and area; 4) the source and debris tail; and 5) the impacts caused by the landslide. Information from a variety of sources including aerial photographs (and other remotely sensed data), field reconnaissance and media accounts has been collated and is presented for each landslide along with metadata describing the data sources and quality. There are currently nearly 19,000 landslide records in the database that include point locations, polygons of landslide source and deposit areas, and linear features. Several large datasets are awaiting upload which will bring the total number of landslides to over 100,000. The geo-spatial database is publicly available via the Internet. Software components, including the underlying database (PostGIS), Web Map Server (GeoServer) and web application, use open-source software. The hope is that others will add relevant information to the database as well as download the data contained in it.

  1. Forest Conservation Opportunity Areas - Conservative Model (ECO_RES.COA_FORREST66)

    EPA Pesticide Factsheets

    This layer designates areas with potential for forest conservation. These are areas of natural or semi-natural forest land cover patches that are at least 395 meters away from roads and away from patch edges. Opportunity areas (OAs) were modeled by creating distance grids using the National Land Cover Database and the Census Bureau's TIGER road files.

  2. Chicago Area Transportation Study (CATS): Methodological Overview

    DOT National Transportation Integrated Search

    1994-04-01

    This report contains a methodological discussion of the Chicago Area : Transportation Study (CATS) 1990 Household Travel Survey. It was prepared to : assist those who are working with the Household Travel Survey database. This : report concentrates o...

  3. Sprawl in European urban areas

    NASA Astrophysics Data System (ADS)

    Prastacos, Poulicos; Lagarias, Apostolos

    2016-08-01

    In this paper the 2006 edition of the Urban Atlas database is used to tabulate areas of low development density, usually referred to as "sprawl", for many European cities. The Urban Atlas database contains information on the land use distribution in the 305 largest European cities. Twenty different land use types are recognized, with six of them representing urban fabric. Urban fabric classes are residential areas differentiated by the density of development, which is measured by the sealing degree parameter that ranges from 0% to 100% (non-developed, fully developed). Analysis is performed on the distribution of the middle to low density areas defined as those with sealing degree less than 50%. Seven different country groups in which urban areas have similar sprawl characteristics are identified and some key characteristics of sprawl are discussed. Population of an urban area is another parameter considered in the analysis. Two spatial metrics, average patch size and mean distance to the nearest neighboring patch of the same class, are used to describe proximity/separation characteristics of sprawl in the urban areas of the seven groups.

  4. Percentage of Protected Area Amounts within each Watershed Boundary for the Conterminous US

    EPA Science Inventory

    Abstract: This dataset uses spatial information from the Watershed Boundary Dataset (WBD, March 2011) and the Protected Areas Database of the United States (PAD-US Version 1.0). The resulting data layer, with percentages of protected areas by category, was created using the ATtI...

  5. Empirical Determination of Competence Areas to Computer Science Education

    ERIC Educational Resources Information Center

    Zendler, Andreas; Klaudt, Dieter; Seitz, Cornelia

    2014-01-01

    The authors discuss empirically determined competence areas to K-12 computer science education, emphasizing the cognitive level of competence. The results of a questionnaire with 120 professors of computer science serve as a database. By using multi-dimensional scaling and cluster analysis, four competence areas to computer science education…

  6. 77 FR 65546 - Western Area Power Administration; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EF11-7-000] Western Area Power Administration; Notice of Filing Take notice that on July 14, 2011, Western Area Power Administration submitted its revised version of its Tariff Title for the Western Rate Schedules database, to be...

  7. Updating the 2001 National Land Cover Database Impervious Surface Products to 2006 using Landsat imagery change detection methods

    USGS Publications Warehouse

    Xian, George; Homer, Collin G.

    2010-01-01

    A prototype method was developed to update the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 to a nominal date of 2006. NLCD 2001 is widely used as a baseline for national land cover and impervious cover conditions. To enable the updating of this database in an optimal manner, methods are designed to be accomplished by individual Landsat scene. Using conservative change thresholds based on land cover classes, areas of change and no-change were segregated from change vectors calculated from normalized Landsat scenes from 2001 and 2006. By sampling from NLCD 2001 impervious surface in unchanged areas, impervious surface predictions were estimated for changed areas within an urban extent defined by a companion land cover classification. Methods were developed and tested for national application across six study sites containing a variety of urban impervious surface. Results show the vast majority of impervious surface change associated with urban development was captured, with overall RMSE from 6.86 to 13.12% for these areas. Changes of urban development density were also evaluated by characterizing the categories of change by percentile for impervious surface. This prototype method provides a relatively low cost, flexible approach to generate updated impervious surface using NLCD 2001 as the baseline.
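
    A hedged sketch of the change-vector step (synthetic arrays, not the actual NLCD processing chain): the spectral change between the two dates is computed per pixel and compared with a conservative threshold, after which impervious-surface values in changed urban pixels would be predicted from samples of unchanged 2001 values. The array sizes and the threshold are illustrative assumptions.

        # Sketch of change-vector thresholding on synthetic two-date imagery.
        import numpy as np

        rng = np.random.default_rng(0)
        scene_2001 = rng.random((100, 100, 6))            # normalized reflectance, 6 bands
        scene_2006 = scene_2001 + rng.normal(0, 0.02, (100, 100, 6))
        scene_2006[40:60, 40:60] += 0.3                   # simulated new development

        change_magnitude = np.linalg.norm(scene_2006 - scene_2001, axis=2)
        threshold = 0.4                                   # in practice, conservative and class-dependent
        changed = change_magnitude > threshold

        # In changed urban pixels, impervious surface would be predicted from samples
        # of unchanged 2001 impervious values; here we only report the changed area.
        print(f"{changed.sum()} of {changed.size} pixels flagged as changed")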

  8. Protection of Marine Mammals.

    PubMed

    Knoll, Michaela; Ciaccia, Ettore; Dekeling, René; Kvadsheim, Petter; Liddell, Kate; Gunnarsson, Stig-Lennart; Ludwig, Stefan; Nissen, Ivor; Lorenzen, Dirk; Kreimeyer, Roman; Pavan, Gianni; Meneghetti, Nello; Nordlund, Nina; Benders, Frank; van der Zwan, Timo; van Zon, Tim; Fraser, Leanne; Johansson, Torbjörn; Garmelius, Martin

    2016-01-01

    Within the European Defense Agency (EDA), the Protection of Marine Mammals (PoMM) project, a comprehensive common marine mammal database essential for risk mitigation tools, was established. The database, built on an extensive dataset collection with the focus on areas of operational interest for European navies, consists of annual and seasonal distribution and density maps, random and systematic sightings, an encyclopedia providing knowledge on the characteristics of 126 marine mammal species, data on marine mammal protection areas, and audio information including numerous examples of various vocalizations. Special investigations on marine mammal acoustics were carried out to improve the detection and classification capabilities.

  9. Algorithms and methodology used in constructing high-resolution terrain databases

    NASA Astrophysics Data System (ADS)

    Williams, Bryan L.; Wilkosz, Aaron

    1998-07-01

    This paper presents a top-level description of methods used to generate high-resolution 3D IR digital terrain databases using soft photogrammetry. The 3D IR database is derived from aerial photography and is made up of digital ground plane elevation map, vegetation height elevation map, material classification map, object data (tanks, buildings, etc.), and temperature radiance map. Steps required to generate some of these elements are outlined. The use of metric photogrammetry is discussed in the context of elevation map development; and methods employed to generate the material classification maps are given. The developed databases are used by the US Army Aviation and Missile Command to evaluate the performance of various missile systems. A discussion is also presented on database certification which consists of validation, verification, and accreditation procedures followed to certify that the developed databases give a true representation of the area of interest, and are fully compatible with the targeted digital simulators.

  10. Risk model of valve surgery in Japan using the Japan Adult Cardiovascular Surgery Database.

    PubMed

    Motomura, Noboru; Miyata, Hiroaki; Tsukihara, Hiroyuki; Takamoto, Shinichi

    2010-11-01

    Risk models of cardiac valve surgery using a large database are useful for improving surgical quality. In order to obtain accurate, high-quality assessments of surgical outcome, each geographic area should maintain its own database. The study aim was to collect Japanese data and to prepare a risk stratification of cardiac valve procedures, using the Japan Adult Cardiovascular Surgery Database (JACVSD). A total of 6562 valve procedure records from 97 participating sites throughout Japan was analyzed, using a data entry form with 255 variables that was sent to the JACVSD office from a web-based data collection system. The statistical model was constructed using multiple logistic regression. Model discrimination was tested using the area under the receiver operating characteristic curve (C-index). The model calibration was tested using the Hosmer-Lemeshow (H-L) test. Among 6562 operated cases, 15% had diabetes mellitus, 5% were urgent, and 12% involved preoperative renal failure. The observed 30-day and operative mortality rates were 2.9% and 4.0%, respectively. Significant variables with high odds ratios included emergent or salvage status (3.83), reoperation (3.43), and left ventricular dysfunction (3.01). The H-L test and C-index values for 30-day mortality were satisfactory (0.44 and 0.80, respectively). The results obtained in Japan were at least as good as those reported elsewhere. The performance of this risk model also matched that of the STS National Adult Cardiac Database and the European Society Database.
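
    A minimal sketch of this kind of risk model (entirely synthetic data; coefficient values are chosen only to echo the odds ratios quoted above, not taken from JACVSD): a multiple logistic regression is fitted and its discrimination is summarized with the C-index.

        # Synthetic logistic-regression risk model evaluated with the C-index (ROC AUC).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n = 5000
        X = np.column_stack([
            rng.integers(0, 2, n),    # emergent/salvage status (hypothetical encoding)
            rng.integers(0, 2, n),    # reoperation
            rng.integers(0, 2, n),    # left ventricular dysfunction
        ])
        logit = -3.5 + X @ np.array([1.34, 1.23, 1.10])   # log odds ratios ~ ln(3.8), ln(3.4), ln(3.0)
        y = rng.random(n) < 1 / (1 + np.exp(-logit))      # simulated 30-day mortality

        model = LogisticRegression().fit(X, y)
        print("C-index:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))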

  11. A high performance, ad-hoc, fuzzy query processing system for relational databases

    NASA Technical Reports Server (NTRS)

    Mansfield, William H., Jr.; Fleischman, Robert M.

    1992-01-01

    Database queries involving imprecise or fuzzy predicates are currently an evolving area of academic and industrial research. Such queries place severe stress on the indexing and I/O subsystems of conventional database environments since they involve the search of large numbers of records. The Datacycle architecture and research prototype is a database environment that uses filtering technology to perform an efficient, exhaustive search of an entire database. It has recently been modified to include fuzzy predicates in its query processing. The approach obviates the need for complex index structures, provides unlimited query throughput, permits the use of ad-hoc fuzzy membership functions, and provides a deterministic response time largely independent of query complexity and load. This paper describes the Datacycle prototype implementation of fuzzy queries and some recent performance results.
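
    A minimal sketch of the fuzzy-predicate idea, assuming an ad-hoc trapezoidal membership function (this is not the Datacycle implementation): every record is scanned exhaustively, scored for its degree of membership, and returned ranked. The record values and cutoffs are invented.

        # Exhaustive scan with an ad-hoc fuzzy membership function.
        records = [
            {"name": "A", "price": 95_000}, {"name": "B", "price": 140_000},
            {"name": "C", "price": 210_000}, {"name": "D", "price": 310_000},
        ]

        def cheap(price, full=100_000, zero=300_000):
            """Membership in the fuzzy set 'cheap': 1 below `full`, 0 above `zero`."""
            if price <= full:
                return 1.0
            if price >= zero:
                return 0.0
            return (zero - price) / (zero - full)

        hits = sorted(((cheap(r["price"]), r) for r in records), reverse=True, key=lambda t: t[0])
        for degree, rec in hits:
            if degree > 0.2:
                print(f"{rec['name']}: membership {degree:.2f}")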

  12. Database of tsunami scenario simulations for Western Iberia: a tool for the TRIDEC Project Decision Support System for tsunami early warning

    NASA Astrophysics Data System (ADS)

    Armigliato, Alberto; Pagnoni, Gianluca; Zaniboni, Filippo; Tinti, Stefano

    2013-04-01

    TRIDEC is an EU-FP7 project whose main goal is, in general terms, to develop suitable strategies for the management of crises possibly arising in the Earth management field. The general paradigms adopted by TRIDEC to develop those strategies include intelligent information management, the capability of managing dynamically increasing volumes and dimensionality of information in complex events, and collaborative decision making in systems that are typically very loosely coupled. The two areas where TRIDEC applies and tests its strategies are tsunami early warning and industrial subsurface development. In the field of tsunami early warning, TRIDEC aims at developing a Decision Support System (DSS) that integrates 1) a set of seismic, geodetic and marine sensors devoted to the detection and characterisation of possible tsunamigenic sources and to monitoring the time and space evolution of the generated tsunami, 2) large-volume databases of pre-computed numerical tsunami scenarios, 3) a proper overall system architecture. Two test areas are dealt with in TRIDEC: the western Iberian margin and the eastern Mediterranean. In this study, we focus on the western Iberian margin with special emphasis on the Portuguese coasts. The strategy adopted in TRIDEC plans to populate two different databases, called "Virtual Scenario Database" (VSDB) and "Matching Scenario Database" (MSDB), both of which deal only with earthquake-generated tsunamis. In the VSDB we numerically simulate a few large-magnitude events generated by the major known tectonic structures in the study area. Heterogeneous slip distributions on the earthquake faults are introduced to simulate events as "realistically" as possible. The members of the VSDB represent the unknowns that the TRIDEC platform must be able to recognise and match during the early crisis management phase. On the other hand, the MSDB contains a very large number (order of thousands) of tsunami simulations performed starting from many different simple earthquake sources of different magnitudes and located in the "vicinity" of the virtual scenario earthquake. In the DSS perspective, the members of the MSDB have to be suitably combined based on the information coming from the sensor networks, and the results are used during the crisis evolution phase to forecast the degree of exposure of different coastal areas. We provide examples from both databases whose members are computed by means of the in-house software called UBO-TSUFD, implementing the non-linear shallow-water equations and solving them over a set of nested grids that guarantee a suitable spatial resolution (a few tens of meters) in specific, suitably chosen, coastal areas.

  13. Global gridded crop specific agricultural areas from 1961-2014

    NASA Astrophysics Data System (ADS)

    Konar, M.; Jackson, N. D.

    2017-12-01

    Current global cropland datasets are limited in crop specificity and temporal resolution. Time series maps of crop specific agricultural areas would enable us to better understand the global agricultural geography of the 20th century. To this end, we develop a global gridded dataset of crop specific agricultural areas from 1961-2014. To do this, we downscale national cropland information using a probabilistic approach. Our method relies upon gridded Global Agro-Ecological Zones (GAEZ) maps, the History Database of the Global Environment (HYDE), and crop calendars from Sacks et al. (2010). We estimate crop-specific agricultural areas for a 0.25 degree spatial grid and annual time scale for all major crops. We validate our global estimates for the year 2000 with Monfreda et al. (2008) and our time series estimates within the United States using government data. This database will contribute to our understanding of global agricultural change of the past century.
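
    A hedged sketch of the downscaling step (invented numbers): a national crop-area total is allocated to grid cells in proportion to a suitability weight, the role played in the paper by the GAEZ/HYDE-derived layers. The country total and the weight grid below are hypothetical.

        # Proportional allocation of a national total to grid cells.
        import numpy as np

        national_wheat_area_ha = 1_200_000            # hypothetical country total for one year
        suitability = np.array([[0.9, 0.6, 0.0],      # hypothetical per-cell weights
                                [0.4, 0.8, 0.1],
                                [0.0, 0.2, 0.5]])

        weights = suitability / suitability.sum()     # normalize so allocations sum to the total
        gridded_area = weights * national_wheat_area_ha
        print(gridded_area.round(0))
        print("check:", gridded_area.sum())           # equals the national total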

  14. NYC Reservoirs Watershed Areas (HUC 12)

    EPA Pesticide Factsheets

    This NYC Reservoirs Watershed Areas (HUC 12) GIS layer was derived from the 12-Digit National Watershed Boundary Database (WBD) at 1:24,000 for EPA Region 2 and Surrounding States. HUC 12 polygons were selected from the source based on interactively comparing these HUC 12s in our GIS with images of the New York City's Water Supply System Map found at http://www.nyc.gov/html/dep/html/drinking_water/wsmaps_wide.shtml. The 12 digit Hydrologic Units (HUCs) for EPA Region 2 and surrounding states (Northeastern states, parts of the Great Lakes, Puerto Rico and the USVI) are a subset of the National Watershed Boundary Database (WBD), downloaded from the Natural Resources Conservation Service (NRCS) Geospatial Gateway and imported into the EPA Region 2 Oracle/SDE database. This layer reflects 2009 updates to the WBD that included new boundary data for New York and New Jersey.

  15. Toward uniform probabilistic seismic hazard assessments for Southeast Asia

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Wang, Y.; Shi, X.; Ornthammarath, T.; Warnitchai, P.; Kosuwan, S.; Thant, M.; Nguyen, P. H.; Nguyen, L. M.; Solidum, R., Jr.; Irsyam, M.; Hidayati, S.; Sieh, K.

    2017-12-01

    Although most Southeast Asian countries have seismic hazard maps, various methodologies and quality result in appreciable mismatches at national boundaries. We aim to conduct a uniform assessment across the region through standardized earthquake and fault databases, ground-shaking scenarios, and regional hazard maps. Our earthquake database contains earthquake parameters obtained from global and national seismic networks, harmonized by removal of duplicate events and the use of moment magnitude. Our active-fault database includes fault parameters from previous studies and from the databases implemented for national seismic hazard maps. Another crucial input for seismic hazard assessment is proper evaluation of ground-shaking attenuation. Since few ground-motion prediction equations (GMPEs) have used local observations from this region, we evaluated attenuation by comparing instrumental observations and felt intensities for recent earthquakes with predicted ground shaking from published GMPEs. We then incorporate the best-fitting GMPEs and site conditions into our seismic hazard assessments. Based on the database and proper GMPEs, we have constructed regional probabilistic seismic hazard maps. The assessment shows the highest seismic hazard levels near those faults with high slip rates, including the Sagaing Fault in central Myanmar, the Sumatran Fault in Sumatra, the Palu-Koro, Matano and Lawanopo Faults in Sulawesi, and the Philippine Fault across several islands of the Philippines. In addition, our assessment demonstrates the important fact that regions with low earthquake probability may well have a higher aggregate probability of future earthquakes, since they encompass much larger areas than the areas of high probability. The significant irony then is that in areas of low to moderate probability, where building codes usually provide less seismic resilience, seismic risk is likely to be greater. Infrastructural damage in East Malaysia during the 2015 Sabah earthquake offers a case in point.

  16. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2014-08-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases are comprised of 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover project, and 30 arc-sec leaf area index and fraction of absorbed photosynthetically active radiation data from the ESA GlobCarbon project. Simulations are carried out for the metropolitan area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated to each grid. ARPS is initialized using the Global Forecasting System with 0.5°-resolution data from the National Center of Environmental Prediction, which is also used every 3 h as lateral boundary condition. Topographic shading is turned on and two soil layers are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering three periods of time are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, grid resolution, topographic and land-use databases. Our comparisons show overall good agreement between simulated and observational data, mainly for the potential temperature and the wind speed fields, and clearly indicate that the use of high-resolution databases improves significantly our ability to predict the local atmospheric circulation.

  17. Idea and implementation studies of populating TOPO250 component with the data from TOPO10 - generalization of geographic information in the BDG database. (Polish Title: Koncepcja i studium implementacji procesu zasilania komponentu TOPO250 danymi TOPO10 - generalizacja informacji geograficznej w bazie danych BDG )

    NASA Astrophysics Data System (ADS)

    Olszewski, R.; Pillich-Kolipińska, A.; Fiedukowicz, A.

    2013-12-01

    Implementation of the INSPIRE Directive in Poland requires not only legal transposition but also development of a number of technological solutions. One such task, associated with the creation of the Spatial Information Infrastructure in Poland, is the development of a complex georeference database model. Significant funding for the GBDOT project enables development of the national basic topographic database as a multiresolution database (MRDB). Effective implementation of this type of database requires developing procedures for the generalization of geographic information (generalization of the digital landscape model - DLM) which, treating the TOPO10 component as the only source for creating the TOPO250 component, will maintain conceptual and classification consistency between those database elements. Carrying out this task requires implementing the system concept prepared previously for the Head Office of Geodesy and Cartography. Such a system will execute the generalization process using constraint-based modeling and will preserve topological relationships between the objects as well as between the object classes. Full implementation of the designed generalization system requires running comprehensive tests to help calibrate it and to parameterize the generalization procedures according to the character of the generalized area. Parameterization of this process will allow determining the criteria for selecting specific objects, the simplification algorithms, and the order of operations. Tests using generalization parameters differentiated according to the character of the area are therefore the current priority. Parameters are delivered to the system as XML files which, with the help of a dedicated tool, are generated from spreadsheet (XLS) files filled in by the user; using XLS files makes entering and modifying the parameters easier. Among the other elements defined by the external parameter files are the criteria for object selection, the metric parameters of the generalization algorithms (e.g. simplification or aggregation), and the sequence of operations. Testing on trial areas of diverse character will make it possible to develop rules for carrying out and parameterizing the generalization process with the proposed tool within the multiresolution reference database. The authors have attempted to develop a parameterization of the generalization process for a number of different trial areas. Generalizing these results will contribute to the development of a holistic system of generalized reference data stored in the national geodetic and cartographic resources.
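
    A rough sketch of the parameter pipeline described above, using a CSV stream as a stand-in for the XLS spreadsheet and the Python standard library in place of the dedicated tool: user-edited rows are converted into an XML parameter file. The column names, tag names and parameter names are hypothetical.

        # Spreadsheet-style rows (CSV stand-in) converted to an XML parameter file.
        import csv, io
        import xml.etree.ElementTree as ET

        spreadsheet = io.StringIO(
            "parameter,value\n"
            "min_building_area_m2,250\n"
            "simplification_tolerance_m,15\n"
            "aggregation_distance_m,30\n"
        )

        root = ET.Element("generalization_parameters")
        for row in csv.DictReader(spreadsheet):
            ET.SubElement(root, "param", name=row["parameter"]).text = row["value"]

        print(ET.tostring(root, encoding="unicode"))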

  18. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes

    PubMed Central

    Corcoran, Callan C.; Grady, Cameron R.; Pisitkun, Trairak; Parulekar, Jaya

    2017-01-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database (https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database (Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. PMID:27974320

  19. USGS cold-water coral geographic database-Gulf of Mexico and western North Atlantic Ocean, version 1.0

    USGS Publications Warehouse

    Scanlon, Kathryn M.; Waller, Rhian G.; Sirotek, Alexander R.; Knisel, Julia M.; O'Malley, John; Alesandrini, Stian

    2010-01-01

    The USGS Cold-Water Coral Geographic Database (CoWCoG) provides a tool for researchers and managers interested in studying, protecting, and/or utilizing cold-water coral habitats in the Gulf of Mexico and western North Atlantic Ocean.  The database makes information about the locations and taxonomy of cold-water corals available to the public in an easy-to-access form while preserving the scientific integrity of the data.  The database includes over 1700 entries, mostly from published scientific literature, museum collections, and other databases.  The CoWCoG database is easy to search in a variety of ways, and data can be quickly displayed in table form and on a map by using only the software included with this publication.  Subsets of the database can be selected on the basis of geographic location, taxonomy, or other criteria and exported to one of several available file formats.  Future versions of the database are being planned to cover a larger geographic area and additional taxa.

  20. Database for Regional Geology, Phase 1: A Tool for Informing Regional Evaluations of Alternative Geologic Media and Decision Making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, Frank Vinton; Kelley, Richard E.; Birdsell, Suzanne M.

    Reported is progress in the following areas: Phase 1 and 2 websites for the regional geology GIS database; terrane maps of crystalline basement rocks; inventory of shale formations in the US; and rock properties and in-situ conditions for shale estimated from sonic velocity measurements.

  1. INFOSAM: A Sample Database Management System.

    DTIC Science & Technology

    1981-12-01

    ...the Conceptual level, and the External level. The Internal level represents a union of Hsu's proposed Unary and Binary levels. The rationale for combining the

  2. 43 CFR Appendix III to Part 11 - Format for Data Inputs and Modifications to the NRDAM/GLE

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... area represented by its geographic database. Any water within the geographic boundaries of the NRDAM... U.S. Department of Commerce/Bureau of Economic Analysis, 1441 L Street, NW, Washington, D.C., 20230, (202) 606-9900.] Modifications to the NRDAM/GLE Databases (if Any) Documentation of the source of the...

  3. 43 CFR Appendix III to Part 11 - Format for Data Inputs and Modifications to the NRDAM/GLE

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... area represented by its geographic database. Any water within the geographic boundaries of the NRDAM... U.S. Department of Commerce/Bureau of Economic Analysis, 1441 L Street, NW, Washington, D.C., 20230, (202) 606-9900.] Modifications to the NRDAM/GLE Databases (if Any) Documentation of the source of the...

  4. 43 CFR Appendix III to Part 11 - Format for Data Inputs and Modifications to the NRDAM/GLE

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... area represented by its geographic database. Any water within the geographic boundaries of the NRDAM... U.S. Department of Commerce/Bureau of Economic Analysis, 1441 L Street, NW, Washington, D.C., 20230, (202) 606-9900.] Modifications to the NRDAM/GLE Databases (if Any) Documentation of the source of the...

  5. 43 CFR Appendix III to Part 11 - Format for Data Inputs and Modifications to the NRDAM/GLE

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... area represented by its geographic database. Any water within the geographic boundaries of the NRDAM... U.S. Department of Commerce/Bureau of Economic Analysis, 1441 L Street, NW, Washington, D.C., 20230, (202) 606-9900.] Modifications to the NRDAM/GLE Databases (if Any) Documentation of the source of the...

  6. NASA Cold Land Processes Experiment (CLPX 2002/03): Ground-based and near-surface meteorological observations

    Treesearch

    Kelly Elder; Don Cline; Angus Goodbody; Paul Houser; Glen E. Liston; Larry Mahrt; Nick Rutter

    2009-01-01

    A short-term meteorological database has been developed for the Cold Land Processes Experiment (CLPX). This database includes meteorological observations from stations designed and deployed exclusively for CLPX as well as observations available from other sources located in the small regional study area (SRSA) in north-central Colorado. The measured weather parameters...

  7. Alaska digital aeromagnetic database description

    USGS Publications Warehouse

    Connard, G.G.; Saltus, R.W.; Hill, P.L.; Carlson, L.; Milicevic, B.

    1999-01-01

    Northwest Geophysical Associates, Inc. (NGA) was contracted by the U.S. Geological Survey (USGS) to construct a database containing original aeromagnetic data (in digital form) from surveys, maps and grids for the State of Alaska from existing public-domain magnetic data. This database facilitates the detailed study and interpretation of aeromagnetic data along flightline profiles and allows construction of custom grids for selected regions of Alaska. The database is linked to and reflects the work from the statewide gridded compilation completed under a prior contract. The statewide gridded compilation is also described in Saltus and Simmons (1997) and in Saltus and others (1999). The database area generally covers the on-shore portion of the State of Alaska and the northern Gulf of Alaska excluding the Aleutian Islands. The area extends from 54°N to 72°N latitude and 129°W to 169°W longitude. The database includes the 85 surveys that were included in the previous statewide gridded compilation. Figure (1) shows the extents of the 85 individual data sets included in the statewide grids. NGA subcontracted a significant portion of the work described in this report to Paterson, Grant, and Watson Limited (PGW). Prior work by PGW (described in Meyer and Saltus, 1995 and Meyer and others, 1998) for the interior portion of Alaska (INTAK) is included in this present study. The previous PGW project compiled 25 of the 85 surveys included in the statewide grids. PGW also contributed 10 additional data sets that were not included in either of the prior contracts or the statewide grids. These additional data sets are included in the current project in the interest of making the database as complete as possible. Figure (2) shows the location of the additional data sets.

  8. Intra- and Inter-database Study for Arabic, English, and German Databases: Do Conventional Speech Features Detect Voice Pathology?

    PubMed

    Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H

    2017-05-01

    A large population around the world has voice complications. Various approaches for subjective and objective evaluations have been suggested in the literature. The subjective approach strongly depends on the experience and area of expertise of a clinician, and human error cannot be neglected. On the other hand, the objective or automatic approach is noninvasive. Automatically developed systems can provide complementary information that may be helpful for a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas where a general practitioner can use them and may refer the patient to a specialist to avoid complications that may be life threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features such as the linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCC was developed, and three different voice disorder databases were used in this study. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database. The detection rate for the intra-database experiments ranges from 72% to 95%, and that for the inter-database experiments from 47% to 82%. The results conclude that conventional speech features are not correlated with voice quality, and hence are not reliable in pathology detection. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
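
    A simplified sketch of an MFCC-based detector (synthetic signals in place of the clinical databases used in the study; the 'jitter' stand-in for pathology and all parameter values are assumptions): frame-level MFCCs are averaged per recording and fed to a standard classifier.

        # Toy MFCC-based pathology detector on synthetic recordings.
        import numpy as np
        import librosa
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        sr = 16000

        def fake_recording(disordered):
            t = np.arange(sr) / sr
            jitter = 0.02 if disordered else 0.001           # crude stand-in for pathology
            f0 = 150 * (1 + jitter * rng.standard_normal(sr).cumsum() / sr)
            return np.sin(2 * np.pi * f0 * t).astype(np.float32)

        def features(signal):
            mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # (13, n_frames)
            return mfcc.mean(axis=1)                                 # one vector per recording

        X = np.array([features(fake_recording(d)) for d in ([0] * 20 + [1] * 20)])
        y = np.array([0] * 20 + [1] * 20)
        clf = SVC().fit(X, y)
        print("training accuracy:", clf.score(X, y))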

  9. Marine Biodiversity in the Australian Region

    PubMed Central

    Butler, Alan J.; Rees, Tony; Beesley, Pam; Bax, Nicholas J.

    2010-01-01

    The entire Australian marine jurisdictional area, including offshore and sub-Antarctic islands, is considered in this paper. Most records, however, come from the Exclusive Economic Zone (EEZ) around the continent of Australia itself. The counts of species have been obtained from four primary databases (the Australian Faunal Directory, Codes for Australian Aquatic Biota, Online Zoological Collections of Australian Museums, and the Australian node of the Ocean Biogeographic Information System), but even these are an underestimate of described species. In addition, some partially completed databases for particular taxonomic groups, and specialized databases (for introduced and threatened species) have been used. Experts also provided estimates of the number of known species not yet in the major databases. For only some groups could we obtain an (expert opinion) estimate of undiscovered species. The databases provide patchy information about endemism, levels of threat, and introductions. We conclude that there are about 33,000 marine species (mainly animals) in the major databases, of which 130 are introduced, 58 listed as threatened and an unknown percentage endemic. An estimated 17,000 more named species are either known from the Australian EEZ but not in the present databases, or potentially occur there. It is crudely estimated that there may be as many as 250,000 species (known and yet to be discovered) in the Australian EEZ. For 17 higher taxa, there is sufficient detail for subdivision by Large Marine Domains, for comparison with other National and Regional Implementation Committees of the Census of Marine Life. Taxonomic expertise in Australia is unevenly distributed across taxa, and declining. Comments are given briefly on biodiversity management measures in Australia, including but not limited to marine protected areas. PMID:20689847

  10. Development of a database of instruments for resource-use measurement: purpose, feasibility, and design.

    PubMed

    Ridyard, Colin H; Hughes, Dyfrig A

    2012-01-01

    Health economists frequently rely on methods based on patient recall to estimate resource utilization. Access to questionnaires and diaries, however, is often limited. This study examined the feasibility of establishing an open-access Database of Instruments for Resource-Use Measurement, identified relevant fields for data extraction, and outlined its design. An electronic survey was sent to authors of full UK economic evaluations listed in the National Health Service Economic Evaluation Database (2008-2010), authors of monographs of Health Technology Assessments (1998-2010), and subscribers to the JISCMail health economics e-mailing list. The survey included questions on piloting, validation, recall period, and data capture method. Responses were analyzed and data extracted to generate relevant fields for the database. A total of 143 responses to the survey provided data on 54 resource-use instruments for inclusion in the database. All were reliant on patient or carer recall, and a majority (47) were questionnaires. Thirty-seven were designed for self-completion by the patient, carer, or guardian, and the remainder were designed for completion by researchers or health care professionals while interviewing patients. Methods of development were diverse, particularly in areas such as the planning of resource itemization (evident in 25 instruments), piloting (25), and validation (29). On the basis of the present analysis, we developed a Web-enabled Database of Instruments for Resource-Use Measurement, accessible via www.DIRUM.org. This database may serve as a practical resource for health economists, as well as a means to facilitate further research in the area of resource-use data collection. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Geodata Modeling and Query in Geographic Information Systems

    NASA Technical Reports Server (NTRS)

    Adam, Nabil

    1996-01-01

    Geographic information systems (GIS) deal with collecting, modeling, managing, analyzing, and integrating spatial (locational) and non-spatial (attribute) data required for geographic applications. Examples of spatial data are digital maps, administrative boundaries, road networks, and those of non-spatial data are census counts, land elevations and soil characteristics. GIS shares common areas with a number of other disciplines such as computer-aided design, computer cartography, database management, and remote sensing. None of these disciplines, however, can by itself fully meet the requirements of a GIS application. Examples of such requirements include: the ability to use locational data to produce high quality plots, perform complex operations such as network analysis, enable spatial searching and overlay operations, support spatial analysis and modeling, and provide data management functions such as efficient storage, retrieval, and modification of large datasets; independence, integrity, and security of data; and concurrent access to multiple users. It is to the data management issues that we devote our discussion in this monograph. Traditionally, database management technology has been developed for business applications. Such applications require, among other things, capturing the data requirements of high-level business functions and developing machine-level implementations; supporting multiple views of data and yet providing integration that would minimize redundancy and maintain data integrity and security; providing a high-level language for data definition and manipulation; allowing concurrent access to multiple users; and processing user transactions in an efficient manner. The demands on database management systems have been for speed, reliability, efficiency, cost effectiveness, and user-friendliness. Significant progress has been made in all of these areas over the last two decades to the point that many generalized database platforms are now available for developing data-intensive applications that run in real-time. While continuous improvement is still being made at a very fast-paced and competitive rate, new application areas such as computer-aided design, image processing, VLSI design, and GIS have been identified by many as the next generation of database applications. These new application areas pose serious challenges to the currently available database technology. At the core of these challenges is the nature of data that is manipulated. In traditional database applications, the database objects do not have any spatial dimension, and as such, can be thought of as point data in a multi-dimensional space. For example, each instance of an entity EMPLOYEE will have a unique value corresponding to every attribute such as employee id, employee name, employee address and so on. Thus, every Employee instance can be thought of as a point in a multi-dimensional space where each dimension is represented by an attribute. Furthermore, all operations on such data are one-dimensional. Thus, users may retrieve all entities satisfying one or more constraints. Examples of such constraints include employees with addresses in a certain area code, or salaries within a certain range. Even though constraints can be specified on multiple attributes (dimensions), the search for such data is essentially orthogonal across these dimensions.

  12. 2016 American Indian/Alaska Native/Native Hawaiian Areas (AIANNH) Michigan, Minnesota, and Wisconsin

    EPA Pesticide Factsheets

    The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line shapefile is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. The American Indian/Alaska Native/Native Hawaiian (AIANNH) Areas Shapefile includes the following legal entities: federally recognized American Indian reservations and off-reservation trust land areas, state-recognized American Indian reservations, and Hawaiian home lands (HHLs). The statistical entities included are Alaska Native village statistical areas (ANVSAs), Oklahoma tribal statistical areas (OTSAs), tribal designated statistical areas (TDSAs), and state designated tribal statistical areas (SDTSAs). Joint use areas are also included in this shapefile; these refer to areas that are administered jointly and/or claimed by two or more American Indian tribes. The Census Bureau designates both legal and statistical joint use areas as unique geographic entities for the purpose of presenting statistical data. The Bureau of Indian Affairs (BIA) within the U.S. Department of the Interior (DOI) provides the list of federally recognized tribes and only provides legal boundary infor

  13. 76 FR 60004 - Proposed Information Collection; Comment Request; Data Collection and Verification for the Marine...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... Collection; Comment Request; Data Collection and Verification for the Marine Protected Areas Inventory AGENCY... developing a national system of marine protected areas (MPAs). These departments are working closely with... Administration (NOAA) and DOI have created the Marine Protected Areas Inventory, an online spatial database that...

  14. Critical Needs for Robust and Reliable Database for Design and Manufacturing of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Singh, M.

    1999-01-01

    Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high temperature, high performance applications in aerospace and ground based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, joining, and attachment technologies are also discussed. Examples of various ongoing activities in the areas of composite databases, designing to codes and standards, and design for manufacturing are given.

  15. The new Cloud Dynamics and Radiation Database algorithms for AMSR2 and GMI: exploitation of the GPM observational database for operational applications

    NASA Astrophysics Data System (ADS)

    Cinzia Marra, Anna; Casella, Daniele; Martins Costa do Amaral, Lia; Sanò, Paolo; Dietrich, Stefano; Panegrossi, Giulia

    2017-04-01

    Two new precipitation retrieval algorithms for the Advanced Microwave Scanning Radiometer 2 (AMSR2) and for the GPM Microwave Imager (GMI) are presented. The algorithms are based on the Cloud Dynamics and Radiation Database (CDRD) Bayesian approach and represent an evolution of the previous version applied to Special Sensor Microwave Imager/Sounder (SSMIS) observations, and used operationally within the EUMETSAT Satellite Application Facility on support to Operational Hydrology and Water Management (H-SAF). The main innovation of these new products is the use of an entirely empirical extended database, derived from coincident radar and radiometer observations from the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) (Dual-frequency Precipitation Radar, DPR, and GMI). The other new aspects are: 1) a new rain-no-rain screening approach; 2) the use of Empirical Orthogonal Functions (EOF) and Canonical Correlation Analysis (CCA) both in the screening approach and in the Bayesian algorithm; 3) the use of new meteorological and environmental ancillary variables to categorize the database and mitigate the problem of non-uniqueness of the retrieval solution; 4) the development and implementation of specific modules for computational time minimization. The CDRD algorithms for AMSR2 and GMI are able to handle an extremely large observational database available from GPM-CO and provide the rainfall estimate with minimum latency, making them suitable for near-real time hydrological and operational applications. As for CDRD for AMSR2, a verification study over Italy using ground-based radar data, and over the MSG full-disk area using coincident GPM-CO/AMSR2 observations, has been carried out. Results show remarkable AMSR2 capabilities for rainfall rate (RR) retrieval over ocean (for RR > 0.25 mm/h), good capabilities over vegetated land (for RR > 1 mm/h), while for coastal areas the results are less certain. Comparisons with NASA GPM products, and with ground-based radar data, show that CDRD for AMSR2 is able to depict very well the areas of high precipitation over all surface types. Similarly, preliminary results of the application of CDRD for GMI are also shown and discussed, highlighting the advantage of the availability of high frequency channels (> 90 GHz) for precipitation retrieval over land and coastal areas.
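
    The Bayesian step can be sketched generically (synthetic numbers; this is not the operational CDRD code and it omits the EOF/CCA transforms and the ancillary-variable categorization): each database entry pairs brightness temperatures with a rain rate, and the retrieval is the mean of the database rain rates weighted by how close each entry lies to the observation.

        # Generic Bayesian-retrieval sketch on a synthetic brightness-temperature database.
        import numpy as np

        rng = np.random.default_rng(1)
        db_tb = rng.normal(250, 15, size=(10_000, 5))           # database brightness temps (K), 5 channels
        db_rr = np.clip(rng.gamma(2.0, 1.5, 10_000), 0, None)   # associated rain rates (mm/h)
        obs_tb = db_tb[123] + rng.normal(0, 1.5, 5)              # an "observation" near entry 123

        cov_inv = np.linalg.inv(np.cov(db_tb, rowvar=False) * 0.05 + np.eye(5) * 2.0)
        d = db_tb - obs_tb
        weights = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, cov_inv, d))  # Mahalanobis-type weights
        rr_hat = (weights * db_rr).sum() / weights.sum()
        print(f"retrieved rain rate: {rr_hat:.2f} mm/h (database entry 123: {db_rr[123]:.2f})")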

  16. Documentation of the U.S. Geological Survey Stress and Sediment Mobility Database

    USGS Publications Warehouse

    Dalyander, P. Soupy; Butman, Bradford; Sherwood, Christopher R.; Signell, Richard P.

    2012-01-01

    The U.S. Geological Survey Sea Floor Stress and Sediment Mobility Database contains estimates of bottom stress and sediment mobility for the U.S. continental shelf. This U.S. Geological Survey database provides information that is needed to characterize sea floor ecosystems and evaluate areas for human use. The estimates contained in the database are designed to spatially and seasonally resolve the general characteristics of bottom stress over the U.S. continental shelf and to estimate sea floor mobility by comparing critical stress thresholds based on observed sediment texture data to the modeled stress. This report describes the methods used to make the bottom stress and mobility estimates, statistics used to characterize stress and mobility, data validation procedures, and the metadata for each dataset and provides information on how to access the database online.
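
    The mobility statistic implied above reduces to a simple exceedance calculation; the sketch below uses a synthetic stress time series and a made-up critical value rather than the modeled stresses and observed sediment textures in the database.

        # Fraction of time that modeled bottom stress exceeds the critical stress.
        import numpy as np

        rng = np.random.default_rng(7)
        bottom_stress = rng.lognormal(mean=-2.5, sigma=0.8, size=365 * 4)  # Pa, 6-hourly for a year
        critical_stress = 0.12                                             # Pa, hypothetical, from sediment texture

        mobile = bottom_stress > critical_stress
        print(f"mobile {mobile.mean():.1%} of the time; "
              f"median stress {np.median(bottom_stress):.3f} Pa")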

  17. Measuring the extent and effectiveness of protected areas as an indicator for meeting global biodiversity targets

    PubMed Central

    Chape, S; Harrison, J; Spalding, M; Lysenko, I

    2005-01-01

    There are now over 100 000 protected areas worldwide, covering over 12% of the Earth's land surface. These areas represent one of the most significant human resource use allocations on the planet. The importance of protected areas is reflected in their widely accepted role as an indicator for global targets and environmental assessments. However, measuring the number and extent of protected areas only provides a unidimensional indicator of political commitment to biodiversity conservation. Data on the geographic location and spatial extent of protected areas will not provide information on a key determinant for meeting global biodiversity targets: ‘effectiveness’ in conserving biodiversity. Although tools are being devised to assess management effectiveness, there is no globally accepted metric. Nevertheless, the numerical, spatial and geographic attributes of protected areas can be further enhanced by investigation of the biodiversity coverage of these protected areas, using species, habitats or biogeographic classifications. This paper reviews the current global extent of protected areas in terms of geopolitical and habitat coverage, and considers their value as a global indicator of conservation action or response. The paper discusses the role of the World Database on Protected Areas and collection and quality control issues, and identifies areas for improvement, including how conservation effectiveness indicators may be included in the database to improve the value of protected areas data as an indicator for meeting global biodiversity targets. PMID:15814356

  18. Managing Documents in the Wider Area: Intelligent Document Management.

    ERIC Educational Resources Information Center

    Bittleston, Richard

    1995-01-01

    Discusses techniques for managing documents in wide area networks, reviews technique limitations, and offers recommendations to database designers. Presented techniques include: increasing bandwidth, reducing data traffic, synchronizing documentation, partial synchronization, audit trails, navigation, and distribution control and security. Two…

  19. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
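
    As a rough illustration of the composite distribution described above, the sketch below evaluates a linear combination of a Weibull and a log-normal probability density over a flux range. The mixing fraction and distribution parameters are made up for illustration and are not the fitted values from the paper.

```python
import numpy as np
from scipy import stats

def composite_pdf(flux, frac=0.7, k=0.6, lam=5e20, mu=np.log(3e21), sigma=1.0):
    """Linear combination of a Weibull (low-flux end) and a log-normal (high-flux end).
    All parameter values are illustrative placeholders."""
    weibull = stats.weibull_min.pdf(flux, k, scale=lam)
    lognorm = stats.lognorm.pdf(flux, s=sigma, scale=np.exp(mu))
    return frac * weibull + (1.0 - frac) * lognorm

flux = np.logspace(19, 23, 200)   # magnetic flux in Mx
print(composite_pdf(flux)[:3])
```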

  20. The 24th annual Nucleic Acids Research database issue: a look back and upcoming changes

    PubMed Central

    Rigden, Daniel J

    2017-01-01

    Abstract This year's Database Issue of Nucleic Acids Research contains 152 papers that include descriptions of 54 new databases and update papers on 98 databases, of which 16 have not been previously featured in NAR. As always, these databases cover a broad range of molecular biology subjects, including genome structure, gene expression and its regulation, proteins, protein domains, and protein–protein interactions. Following the recent trend, an increasing number of new and established databases deal with the issues of human health, from cancer-causing mutations to drugs and drug targets. In accordance with this trend, three recently compiled databases that have been selected by NAR reviewers and editors as ‘breakthrough’ contributions, denovo-db, the Monarch Initiative, and Open Targets, cover human de novo gene variants, disease-related phenotypes in model organisms, and a bioinformatics platform for therapeutic target identification and validation, respectively. We expect these databases to attract the attention of numerous researchers working in various areas of genetics and genomics. Looking back at the past 12 years, we present here the ‘golden set’ of databases that have consistently served as authoritative, comprehensive, and convenient data resources widely used by the entire community and offer some lessons on what makes a successful database. The Database Issue is freely available online at the https://academic.oup.com/nar web site. An updated version of the NAR Molecular Biology Database Collection is available at http://www.oxfordjournals.org/nar/database/a/. PMID:28053160

  1. Real-time Geographic Information System (GIS) for Monitoring the Area of Potential Water Level Using Rule Based System

    NASA Astrophysics Data System (ADS)

    Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro

    2018-02-01

    Management of water resources based on a Geographic Information System can provide substantial benefits to water availability planning. Monitoring the potential water level is needed in the development sector, agriculture, energy, and other areas. In this research, a water resource information system is developed using a real-time Geographic Information System concept for web-based monitoring of the potential water level of an area by applying a rule-based system method. The GIS consists of hardware, software, and a database. Following a web-based GIS architecture, this study uses a set of computers connected to a network, running on the Apache web server with the PHP programming language and a MySQL database. The Ultrasound Wireless Sensor System is used as the water level data input; each reading also includes time and geographic location information. The GIS maps the five sensor locations. Sensor data are processed through a rule-based system to determine the potential water level of the area. Water level monitoring results can be displayed on thematic maps by overlaying more than one layer, as tables generated from the database, and as graphs based on the timing of events and the water level values.
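
    A minimal sketch of the rule-based step is given below; the thresholds, status labels, and reading structure are hypothetical, since the abstract does not list the actual rules used in the system.

```python
# Hypothetical rule base mapping a sensor reading (cm) to a potential-water-level status.
RULES = [
    (50, "normal"),
    (100, "alert"),
    (150, "warning"),
]

def classify_level(level_cm):
    """Return the first status whose threshold the reading falls below."""
    for threshold, status in RULES:
        if level_cm < threshold:
            return status
    return "danger"

# Example readings as they might arrive from the ultrasonic sensors,
# each carrying time and geographic location for the GIS layer (values invented).
readings = [
    {"sensor": 1, "lat": -7.05, "lon": 110.44, "time": "2018-02-01T10:00", "level_cm": 72},
    {"sensor": 2, "lat": -7.01, "lon": 110.40, "time": "2018-02-01T10:00", "level_cm": 160},
]
for r in readings:
    print(r["sensor"], classify_level(r["level_cm"]))
```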

  2. Dentist shortage: an analysis of dentists, practices, and populations in the underserved areas.

    PubMed

    Voinea-Griffin, Andreea; Solomon, Eric S

    2016-09-01

    The objectives of this study are to identify and describe the characteristics of dental underserved geographic areas. Understanding these characteristics is an important step in addressing barriers to dental care access. Dental underserved areas were identified from the Health Resources and Services Administration (HRSA) database and converted to census tracts for analysis. Characteristics of dental underserved geographic areas were compared with areas not designated as underserved. Dental practices included in the Dun & Bradstreet Business information database were geocoded and analyzed according to the underserved designation of their location and census demographic data. Thus, the relationships between dental underserved status, practice, and population characteristics were evaluated. Dental underserved areas are more likely to comprise individuals with lower socio-economic status (income and education levels) and higher proportions of underrepresented population groups, and to have lower population densities than non-underserved areas. The populations living in dental underserved areas are more likely to experience geographic, financial, and educational barriers to dental care. The study identifies the geographic and financial barriers to dental care access. These findings suggest that the likelihood of a market-driven solution to dental underserved geographic areas is low and support public sector interventions to improve the status quo. © 2016 American Association of Public Health Dentistry.

  3. Citation Help in Databases: The More Things Change, the More They Stay the Same

    ERIC Educational Resources Information Center

    Van Ullen, Mary; Kessler, Jane

    2012-01-01

    In 2005, the authors reviewed citation help in databases and found an error rate of 4.4 errors per citation. This article describes a follow-up study that revealed a modest improvement in the error rate to 3.4 errors per citation, still unacceptably high. The most problematic area was retrieval statements. The authors conclude that librarians…

  4. An Interactive Iterative Method for Electronic Searching of Large Literature Databases

    ERIC Educational Resources Information Center

    Hernandez, Marco A.

    2013-01-01

    PubMed® is an on-line literature database hosted by the U.S. National Library of Medicine. Containing over 21 million citations for biomedical literature--both abstracts and full text--in the areas of the life sciences, behavioral studies, chemistry, and bioengineering, PubMed® represents an important tool for researchers. PubMed® searches return…

  5. Implementing Relational Operations in an Object-Oriented Database

    DTIC Science & Technology

    1992-03-01

    computer aided software engineering (CASE) and computer aided design (CAD) tools. There has been some research done in the area of combining... Prograph Database Engine... in most business applications where the bulk of data being stored and manipulated is simply textual or numeric data that can be stored and manipulated

  6. GIDL: a rule based expert system for GenBank Intelligent Data Loading into the Molecular Biodiversity database

    PubMed Central

    2012-01-01

    Background In the scientific biodiversity community, the need to build a bridge between molecular and traditional biodiversity studies is increasingly perceived. We believe that information technology could have a preeminent role in integrating the information generated by these studies with the large amount of molecular data available in public bioinformatics databases. This work is primarily aimed at building a bioinformatic infrastructure for the integration of public and private biodiversity data through the development of GIDL, an Intelligent Data Loader coupled with the Molecular Biodiversity Database. The system presented here organizes in an ontological way and locally stores the sequence and annotation data contained in the GenBank primary database. Methods The GIDL architecture consists of a relational database and of intelligent data loader software. The relational database schema is designed to manage biodiversity information (Molecular Biodiversity Database) and is organized in four areas: MolecularData, Experiment, Collection and Taxonomy. The MolecularData area is inspired by an established standard in Generic Model Organism Databases, the Chado relational schema. The peculiarity of Chado, and also its strength, is the adoption of an ontological schema which makes use of the Sequence Ontology. The Intelligent Data Loader (IDL) component of GIDL is an Extract, Transform and Load software able to parse data, to discover hidden information in the GenBank entries and to populate the Molecular Biodiversity Database. The IDL is composed of three main modules: the Parser, able to parse GenBank flat files; the Reasoner, which automatically builds CLIPS facts mapping the biological knowledge expressed by the Sequence Ontology; and the DBFiller, which translates the CLIPS facts into ordered SQL statements used to populate the database. In GIDL, Semantic Web technologies have been adopted due to their advantages in data representation, integration and processing. Results and conclusions Entries coming from the Virus (814,122), Plant (1,365,360) and Invertebrate (959,065) divisions of GenBank rel. 180 have been loaded into the Molecular Biodiversity Database by GIDL. Our system, combining the Sequence Ontology and the Chado schema, allows more powerful query expressiveness compared with the most commonly used sequence retrieval systems such as Entrez or SRS. PMID:22536971
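
    The Parser/DBFiller idea can be sketched very simply with Biopython: parse GenBank records and emit SQL INSERT statements. The table and column names below are hypothetical and do not reflect the actual Chado-inspired Molecular Biodiversity Database schema, and the sketch omits the CLIPS-based Reasoner entirely.

```python
from Bio import SeqIO  # Biopython

def genbank_to_sql(path):
    """Yield hypothetical INSERT statements for a simplified 'sequence' table."""
    for rec in SeqIO.parse(path, "genbank"):
        organism = rec.annotations.get("organism", "unknown").replace("'", "''")
        yield (
            "INSERT INTO sequence (accession, organism, seq_length) "
            f"VALUES ('{rec.id}', '{organism}', {len(rec.seq)});"
        )

if __name__ == "__main__":
    # 'example.gb' is a placeholder path to any GenBank flat file.
    for stmt in genbank_to_sql("example.gb"):
        print(stmt)
```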

  7. The National Institute on Disability, Independent Living, and Rehabilitation Research Burn Model System: Twenty Years of Contributions to Clinical Service and Research.

    PubMed

    Goverman, Jeremy; Mathews, Katie; Holavanahalli, Radha K; Vardanian, Andrew; Herndon, David N; Meyer, Walter J; Kowalske, Karen; Fauerbach, Jim; Gibran, Nicole S; Carrougher, Gretchen J; Amtmann, Dagmar; Schneider, Jeffrey C; Ryan, Colleen M

    The National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) established the Burn Model System (BMS) in 1993 to improve the lives of burn survivors. The BMS program includes 1) a multicenter longitudinal database describing the functional and psychosocial recovery of burn survivors; 2) site-specific burn-related research; and 3) a knowledge dissemination component directed toward patients and providers. Output from each BMS component was analyzed. Database structure, content, and access procedures are described. Publications using the database were identified and categorized to illustrate the content area of the work. Unused areas of the database were identified for future study. Publications related to site-specific projects were cataloged. The most frequently cited articles are summarized to illustrate the scope of these projects. The effectiveness of dissemination activities was measured by quantifying website hits and information downloads. There were 25 NIDILRR-supported publications that utilized the database. These articles covered topics related to psychological outcomes, functional outcomes, community reintegration, and burn demographics. There were 172 site-specific publications; highly cited articles demonstrate a wide scope of study. For information dissemination, visits to the BMS website quadrupled between 2013 and 2014, with 124,063 downloads of educational material in 2014. The NIDILRR BMS program has played a major role in defining the course of burn recovery, and making that information accessible to the general public. The accumulating information in the database serves as a rich resource to the burn community for future study. The BMS is a model for collaborative research that is multidisciplinary and outcome focused.

  8. Landslide databases for applied landslide impact research: the example of the landslide database for the Federal Republic of Germany

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2014-05-01

    This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database and outlines its major data sources and the strategy of information retrieval. Furthermore, the contribution exemplifies the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Owing to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon also a focused web crawler, this enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This makes it possible to gather detailed landslide information from, amongst others, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system running on PostgreSQL/PostGIS. This provides advanced functionality for spatial data analysis and forms the basis for future data provision and visualization using a WebGIS application. Analysis of landslide database contents shows that in most parts of Germany landslides primarily affect transportation infrastructure. Although with a distinctly lower frequency, recent landslides are also recorded to cause serious damage to hydraulic facilities and waterways, supply and disposal infrastructure, sites of cultural heritage, as well as forest, agricultural, and mining areas. The main types of landslide damage are failure of cut and fill slopes, destruction of retaining walls, street lights, and forest stocks, burial of roads, backyards, and garden areas, as well as crack formation in foundations, sewer lines, and building walls. Landslide repair and mitigation at transportation infrastructure is dominated by simple solutions such as catch barriers or rock fall drapery. These solutions are often undersized and fail under stress. The use of costly slope stabilization or protection systems has proven to reduce these risks effectively over longer maintenance cycles. The right balancing of landslide mitigation is thus a crucial problem in managing landslide risks. Development and analysis of such landslide databases helps to support decision-makers in finding efficient solutions to minimize landslide risks for human beings, infrastructure, and financial assets.
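
    Once the data are in PostgreSQL/PostGIS, damage statistics of the kind discussed above can be pulled with ordinary spatial SQL. The sketch below is a hypothetical query (the connection string, table, and column names are assumptions, not the actual database schema) counting records by affected object within a bounding box.

```python
import psycopg2  # assumes access to a PostgreSQL/PostGIS instance

# Hypothetical table 'landslide' with columns geom, event_date, damage_object.
SQL = """
SELECT damage_object, COUNT(*) AS n
FROM landslide
WHERE event_date >= %s
  AND ST_Within(geom, ST_MakeEnvelope(%s, %s, %s, %s, 4326))
GROUP BY damage_object
ORDER BY n DESC;
"""

with psycopg2.connect("dbname=landslides user=reader") as conn:
    with conn.cursor() as cur:
        # Bounding box (lon_min, lat_min, lon_max, lat_max) chosen arbitrarily.
        cur.execute(SQL, ("2000-01-01", 8.5, 51.0, 10.5, 52.0))
        for damage_object, n in cur.fetchall():
            print(damage_object, n)
```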

  9. Forest Conservation Opportunity Areas - Liberal Model (ECO_RES.COA_FORREST33)

    EPA Pesticide Factsheets

    This layer designates areas with potential for forest conservation. These are areas of natural or semi-natural forest land cover patches that are at least 75 meters away from roads and away from patch edges. OAs were modeled by creating distance grids using the National Land Cover Database and the Census Bureau's TIGER roads files.

  10. IMPROVING EMISSIONS ESTIMATES WITH COMPUTATIONAL INTELLIGENCE, DATABASE EXPANSION, AND COMPREHENSIVE VALIDATION

    EPA Science Inventory

    The report discusses an EPA investigation of techniques to improve methods for estimating volatile organic compound (VOC) emissions from area sources. Using the automobile refinishing industry for a detailed area source case study, an emission estimation method is being developed...

  11. An Index to PGE-Ni-Cr Deposits and Occurrences in Selected Mineral-Occurrence Databases

    USGS Publications Warehouse

    Causey, J. Douglas; Galloway, John P.; Zientek, Michael L.

    2009-01-01

    Databases of mineral deposits and occurrences are essential to conducting assessments of undiscovered mineral resources. In the U.S. Geological Survey's (USGS) global assessment of undiscovered resources of copper, potash, and the platinum-group elements (PGE), only a few mineral deposit types will be evaluated. For example, only porphyry-copper and sediment-hosted copper deposits will be considered for the copper assessment. To support the global assessment, the USGS prepared comprehensive compilations of the occurrences of these two deposit types in order to develop grade and tonnage models and delineate permissive areas for undiscovered deposits of those types. This publication identifies previously published databases and database records that describe PGE, nickel, and chromium deposits and occurrences. Nickel and chromium were included in this overview because of the close association of PGE with nickel and chromium mineralization. Users of this database will need to refer to the original databases for detailed information about the deposits and occurrences. This information will be used to develop a current and comprehensive global database of PGE deposits and occurrences.

  12. Speech Databases of Typical Children and Children with SLI

    PubMed Central

    Grill, Pavel; Tučková, Jana

    2016-01-01

    The extent of research on children’s speech in general and on disordered speech specifically is very limited. In this article, we describe the process of creating databases of children’s speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been principally compiled for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one for healthy children’s speech (recorded in kindergarten and in the first level of elementary school) and the other for pathological speech of children with a Specific Language Impairment (recorded at a surgery of speech and language therapists and at the hospital). Both databases were sub-divided according to specific demands of medical research. Their utilization can be exoteric, specifically for linguistic research and pedagogical use as well as for studies of speech-signal processing. PMID:26963508

  13. Data base management system for lymphatic filariasis--a neglected tropical disease.

    PubMed

    Upadhyayula, Suryanaryana Murty; Mutheneni, Srinivasa Rao; Kadiri, Madhusudhan Rao; Kumaraswamy, Sriram; Nelaturu, Sarat Chandra Babu

    2012-01-01

    Researchers working in the area of public health are confronted with large volumes of data on various aspects of entomology and epidemiology. Obtaining the relevant information from these data requires a particular database management system. In this paper, we describe the usage of the database we developed on lymphatic filariasis. This database application is developed using the Model View Controller (MVC) architecture, with MySQL as the database and a web-based interface. We have collected and incorporated data on filariasis into the database from the Karimnagar, Chittoor, and East and West Godavari districts of Andhra Pradesh, India. The importance of this database is to store the collected data, retrieve the information, and produce various combinational reports on filarial aspects, which in turn will help public health officials to understand the burden of the disease in a particular locality. This information is likely to have an important role in decision making for effective control of filarial disease and integrated vector management operations.

  14. Recent Progress in the Development of Metabolome Databases for Plant Systems Biology

    PubMed Central

    Fukushima, Atsushi; Kusano, Miyako

    2013-01-01

    Metabolomics has grown greatly as a functional genomics tool, and has become an invaluable diagnostic tool for biochemical phenotyping of biological systems. Over the past decades, a number of databases involving information related to mass spectra, compound names and structures, statistical/mathematical models and metabolic pathways, and metabolite profile data have been developed. Such databases complement each other and support efficient growth in this area, although the data resources remain scattered across the World Wide Web. Here, we review available metabolome databases and summarize the present status of development of related tools, particularly focusing on the plant metabolome. The data sharing discussed here will pave the way for the robust interpretation of metabolomic data and advances in plant systems biology. PMID:23577015

  15. Development and validation of a Database Forensic Metamodel (DBFM)

    PubMed Central

    Al-dhaqm, Arafat; Razak, Shukor; Othman, Siti Hajar; Ngadi, Asri; Ahmed, Mohammed Nazir; Ali Mohammed, Abdulalem

    2017-01-01

    Database Forensics (DBF) is a widespread area of knowledge. It has many complex features and is well known amongst database investigators and practitioners. Several models and frameworks have been created specifically to allow knowledge-sharing and effective DBF activities. However, these are often narrow in focus and address specific database incident types. We have analysed 60 such models in an attempt to uncover how many DBF activities are common across models even when the actions vary. We then generated a unified abstract view of DBF in the form of a metamodel: we identified and extracted common concepts and reconciled their definitions to propose the metamodel. We applied a metamodelling process to guarantee that this metamodel is comprehensive and consistent. PMID:28146585

  16. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships, and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31,900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE), at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix and in selected bivariate trait relationships. MICE yielded imputations which better preserved the variability and covariance structure of the data and provided an estimate of between-imputation uncertainty. We found that adding species identity as a predictor in MICE and kNN improved imputation for all traits, but adding climate did not lead to any appreciable improvement. However, forest structure and spatial structure did reduce imputation errors in maximum height and in leaf biomass to sapwood area ratios, respectively. Although species mean imputations showed the lowest error for 3 of the 5 studied traits, dataset-averaged errors were lowest for MICE imputations with all additional predictors when missing data levels were 50% or lower. Species mean imputations always resulted in larger errors in the correlation matrix and appreciably altered the studied bivariate trait relationships. In conclusion, MICE imputations using species identity, climate, forest structure and spatial structure as predictors emerged as the most suitable method of the ones tested here, but it was also evident that imputation performance deteriorates at high levels of missing data (80%).
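
    The comparison between species-mean and kNN imputation can be illustrated with a small synthetic sketch using scikit-learn. The traits, species structure, and missingness pattern below are invented for illustration, and the sketch does not reproduce the MICE multiple-imputation step.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

rng = np.random.default_rng(1)
n = 500
species = rng.integers(0, 5, n)
trait_a = 10 + 2 * species + rng.normal(0, 1, n)          # e.g., a height-like trait
trait_b = 0.9 - 0.02 * trait_a + rng.normal(0, 0.05, n)   # a covarying trait
df = pd.DataFrame({"species": species, "trait_a": trait_a, "trait_b": trait_b})

# Mask 30% of trait_b to mimic missing data, keeping the truth for evaluation
mask = rng.random(n) < 0.3
truth = df.loc[mask, "trait_b"].to_numpy()
df.loc[mask, "trait_b"] = np.nan

# Species-mean imputation
sp_mean = df.groupby("species")["trait_b"].transform("mean").to_numpy()
rmse_mean = np.sqrt(np.mean((sp_mean[mask] - truth) ** 2))

# kNN imputation, using species identity and the covarying trait as predictors
imputed = KNNImputer(n_neighbors=10).fit_transform(df[["species", "trait_a", "trait_b"]])
rmse_knn = np.sqrt(np.mean((imputed[mask, 2] - truth) ** 2))

print(f"species-mean RMSE: {rmse_mean:.3f}  kNN RMSE: {rmse_knn:.3f}")
```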

  17. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes.

    PubMed

    Corcoran, Callan C; Grady, Cameron R; Pisitkun, Trairak; Parulekar, Jaya; Knepper, Mark A

    2017-03-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database ( https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database ( Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. Copyright © 2017 the American Physiological Society.

  18. All Conservation Opportunity Areas (ECO.RES.ALL_OP_AREAS)

    EPA Pesticide Factsheets

    The All_OP_Areas GIS layer contains all of the Conservation Opportunity Areas identified by MoRAP (produced for EPA Region 7). They designate areas with potential for forest, grassland, and forest/grassland mosaic conservation. These are areas of natural or semi-natural forest land cover that are at least 75 meters away from roads and away from patch edges. OAs were modeled by creating distance grids using the National Land Cover Database and the Census Bureau's TIGER roads files.

  19. Hydrogeologic characterization of the Modesto Area, San Joaquin Valley, California

    USGS Publications Warehouse

    Burow, Karen R.; Shelton, Jennifer L.; Hevesi, Joseph A.; Weissmann, Gary S.

    2004-01-01

    Hydrogeologic characterization was done to develop an understanding of the hydrogeologic setting near Modesto by maximizing the use of existing data and building on previous work in the region. A substantial amount of new lithologic and hydrologic data are available that allow a more complete and updated characterization of the aquifer system. In this report, geologic units are described, a database of well characteristics and lithology is developed and used to update the regional stratigraphy, a water budget is estimated for water year 2000, a three-dimensional spatial correlation map of aquifer texture is created, and recommendations for future data collection are summarized. The general physiography of the study area is reflected in the soils. The oldest soils, which have low permeability, exist in terrace deposits, in the interfan areas between the Stanislaus, Tuolumne, and Merced Rivers, at the distal end of the fans, and along the San Joaquin River floodplain. The youngest soils have high permeability and generally have been forming on the recently deposited alluvium along the major stream channels. Geologic materials exposed or penetrated by wells in the Modesto area range from pre-Cretaceous rocks to recent alluvium; however, water-bearing materials are mostly Late Tertiary and Quaternary in age. A database containing information from more than 3,500 drillers' logs was constructed to organize information on well characteristics and subsurface lithology in the study area. The database was used in conjunction with a limited number of geophysical logs and county soil maps to define the stratigraphic framework of the study area. Sequences of red paleosols were identified in the database and used as stratigraphic boundaries. Associated with these paleosols are very coarse grained incised valley-fill deposits. Some geophysical well logs and other sparse well information suggest the presence of one of these incised valley-fill deposits along and adjacent to the Tuolumne River east of Modesto, a feature that may have important implications for ground-water flow and transport in the region. Although extensive work has been done by earlier investigators to define the structure of the Modesto area aquifer system, this report has resulted in some modification to the lateral extent of the Corcoran Clay and the regional dip of the Mehrten Formation. Well logs in the database indicating the presence of the Corcoran Clay were used to revise the eastern extent of the Corcoran Clay, which lies approximately parallel to the axis of the valley. The Mehrten Formation is distinguished in the well-log database by its characteristic black sands consisting of predominantly andesitic fragments. Black sands in wells listed in the database indicate that the formation may lie as shallow as 120 meters (400 feet) below land surface under Modesto, approximately 90 meters (300 feet) shallower than previously thought. The alluvial aquifer system in the Modesto area comprises an unconfined to semiconfined aquifer above and east of the Corcoran Clay confining unit and a confined aquifer beneath the Corcoran Clay. The unconfined aquifer is composed of alluvial sediments of the Modesto, Riverbank, and upper Turlock Lake formations. The unconfined aquifer east of the Corcoran Clay becomes semiconfined with depth due to the numerous discontinuous clay lenses and extensive paleosols throughout the aquifer thickness.
The confined aquifer is composed primarily of alluvial sediments of the Turlock Lake and upper Mehrten Formations, extending from beneath the Corcoran Clay to the base of fresh water. Ground water in the unconfined to semiconfined aquifer flows to the west and southwest. The primary source of present-day recharge is percolating excess irrigation water. The primary ground-water discharge is extensive ground-water pumping in the unconfined to semiconfined aquifer, imposing a significant component of vertical flow.

  20. A Web-Based GIS for Reporting Water Usage in the High Plains Underground Water Conservation District

    NASA Astrophysics Data System (ADS)

    Jia, M.; Deeds, N.; Winckler, M.

    2012-12-01

    The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Recent rule changes have motivated HPWD to develop a more automated system to allow owners and operators to report well locations, meter locations, meter readings, the association between meters and wells, and contiguous acres. INTERA, Inc. has developed a web-based interactive system for HPWD water users to report water usage and for the district to better manage its water resources. The HPWD web management system utilizes state-of-the-art GIS techniques, including a cloud-based Amazon EC2 virtual machine, ArcGIS Server, ArcSDE and ArcGIS Viewer for Flex, to support web-based water use management. The system enables users to navigate to their area of interest using a well-established base map and perform a variety of operations and inquiries against their spatial features. The application currently has six components: user privilege management, property management, water meter registration, area registration, meter-well association and water use reporting. The system is composed of two main databases: a spatial database and a non-spatial database. With an Adobe Flex application at the front end and ArcGIS Server as middleware, updates to spatial feature geometry and attributes are reflected immediately in the back end. As a result, property owners, along with the HPWD staff, collaborate to weave the fabric of the spatial database. Interactions between the spatial and non-spatial databases are established by Windows Communication Foundation (WCF) services to record water-use reports, user-property associations, owner-area associations, as well as meter-well associations. Mobile capabilities will be enabled in the near future for field workers to collect data and synchronize them to the spatial database. The entire solution is built on a highly scalable cloud server that dynamically allocates computational resources so as to reduce the cost of security and hardware maintenance. In addition to the default capabilities provided by ESRI, customizations include 1) enabling interactions between spatial and non-spatial databases, 2) providing role-based feature editing, 3) dynamically filtering spatial features on the map based on user accounts and 4) comprehensive data validation.

  1. Karst in the United States: a digital map compilation and database

    USGS Publications Warehouse

    Weary, David J.; Doctor, Daniel H.

    2014-01-01

    This report describes new digital maps delineating areas of the United States, including Puerto Rico and the U.S. Virgin Islands, having karst or the potential for development of karst and pseudokarst. These maps show areas underlain by soluble rocks and also by volcanic rocks, sedimentary deposits, and permafrost that have potential for karst or pseudokarst development. All 50 States contain rocks with potential for karst development, and about 18 percent of their area is underlain by soluble rocks having karst or the potential for development of karst features. The areas of soluble rocks shown are based primarily on selection from State geologic maps of rock units containing significant amounts of carbonate or evaporite minerals. Areas underlain by soluble rocks are further classified by general climate setting, degree of induration, and degree of exposure. Areas having potential for volcanic pseudokarst are those underlain chiefly by basaltic-flow rocks no older than Miocene in age. Areas with potential for pseudokarst features in sedimentary rocks are in relatively unconsolidated rocks from which pseudokarst features, such as piping caves, have been reported. Areas having potential for development of thermokarst features, mapped exclusively in Alaska, contain permafrost in relatively thick surficial deposits containing ground ice. This report includes a GIS database with links from the map unit polygons to online geologic unit descriptions.

  2. Geologic map of Chickasaw National Recreation Area, Murray County, Oklahoma

    USGS Publications Warehouse

    Blome, Charles D.; Lidke, David J.; Wahl, Ronald R.; Golab, James A.

    2013-01-01

    This 1:24,000-scale geologic map is a compilation of previous geologic maps and new geologic mapping of areas in and around Chickasaw National Recreation Area. The geologic map includes revisions of numerous unit contacts and faults and a number of previously “undifferentiated” rock units were subdivided in some areas. Numerous circular-shaped hills in and around Chickasaw National Recreation Area are probably the result of karst-related collapse and may represent the erosional remnants of large, exhumed sinkholes. Geospatial registration of existing, smaller scale (1:72,000- and 1:100,000-scale) geologic maps of the area and construction of an accurate Geographic Information System (GIS) database preceded 2 years of fieldwork wherein previously mapped geology (unit contacts and faults) was verified and new geologic mapping was carried out. The geologic map of Chickasaw National Recreation Area and this pamphlet include information pertaining to how the geologic units and structural features in the map area relate to the formation of the northern Arbuckle Mountains and its Arbuckle-Simpson aquifer. The development of an accurate geospatial GIS database and the use of a handheld computer in the field greatly increased both the accuracy and efficiency in producing the 1:24,000-scale geologic map.

  3. The Dartmouth Database of Children’s Faces: Acquisition and Validation of a New Face Stimulus Set

    PubMed Central

    Dalrymple, Kirsten A.; Gomez, Jesse; Duchaine, Brad

    2013-01-01

    Facial identity and expression play critical roles in our social lives. Faces are therefore frequently used as stimuli in a variety of areas of scientific research. Although several extensive and well-controlled databases of adult faces exist, few databases include children’s faces. Here we present the Dartmouth Database of Children’s Faces, a set of photographs of 40 male and 40 female Caucasian children between 6 and 16 years-of-age. Models posed eight facial expressions and were photographed from five camera angles under two lighting conditions. Models wore black hats and black gowns to minimize extra-facial variables. To validate the images, independent raters identified facial expressions, rated their intensity, and provided an age estimate for each model. The Dartmouth Database of Children’s Faces is freely available for research purposes and can be downloaded by contacting the corresponding author by email. PMID:24244434

  4. Application of real-time database to LAMOST control system

    NASA Astrophysics Data System (ADS)

    Xu, Lingzhe; Xu, Xinqi

    2004-09-01

    The QNX-based real-time database is one of the main features of the Large sky Area Multi-Object fiber Spectroscopic Telescope (LAMOST) control system; it serves as a storage platform for data flow, recording and updating in a timely manner the various status values of moving components in the telescope structure as well as environmental parameters around it. The database is integrated harmoniously into the administration of the Telescope Control System (TCS). The paper presents the methodology and technical tips used in designing the EMPRESS database GUI software package, such as the dynamic creation of control widgets, dynamic queries, and shared memory. A seamless connection between EMPRESS and the graphical development tool of QNX's Photon Application Builder (PhAB) has been realized, providing a Windows look and feel under a Unix-like operating system. In particular, the real-time features of the database that satisfy the needs of the control system are analyzed.

  5. NASA aerospace database subject scope: An overview

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Outlined here is the subject scope of the NASA Aerospace Database, a publicly available subset of the NASA Scientific and Technical (STI) Database. Topics of interest to NASA are outlined and placed within the framework of the following broad aerospace subject categories: aeronautics, astronautics, chemistry and materials, engineering, geosciences, life sciences, mathematical and computer sciences, physics, social sciences, space sciences, and general. A brief discussion of the subject scope is given for each broad area, followed by a similar explanation of each of the narrower subject fields that follow. The subject category code is listed for each entry.

  6. 34 CFR 682.210 - Deferment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... school teachers; or (B) A specific grade level or academic, instructional, subject-matter, or discipline... are teaching in academic subject areas other than their area of preparation. (iii) If the total number... information from the other FFEL loan holder or the Secretary or from an authoritative electronic database...

  7. GTAP model and database modification to better handle cropping changes on the intensive margin.

    DOT National Transportation Integrated Search

    2017-07-25

    Previously, induced land use change estimates considered only the extensive margin; that is, adding to harvested area by converting forest or pasture to cropland. However, recent data suggests that some of the increase in global harvested area comes ...

  8. CUNY+ Web: Usability Study of the Web-Based GUI Version of the Bibliographic Database of the City University of New York (CUNY).

    ERIC Educational Resources Information Center

    Oulanov, Alexei; Pajarillo, Edmund J. Y.

    2002-01-01

    Describes the usability evaluation of the CUNY (City University of New York) information system in Web and Graphical User Interface (GUI) versions. Compares results to an earlier usability study of the basic information database available on CUNY's wide-area network and describes the applicability of the previous usability instrument to this…

  9. Design and Implementation of an Operational Database for the Fleet Area Control and Surveillance Facility, NAS North Island, San Diego, California

    DTIC Science & Technology

    1989-03-01

    deficiencies of the present system. Specific objectives critical in developing an understanding of the system are: "Identify all knowledge workers who use... to be negligible. The limited training requirements are primarily due to the users' existing knowledge of the Oracle database system, the Users

  10. X-48B Phase 1 Flight Maneuver Database and ICP Airspace Constraint Analysis

    NASA Technical Reports Server (NTRS)

    Fast, Peter Alan

    2010-01-01

    The work performed during the summer of 2010 by Peter Fast. The main tasks assigned were to update and improve the X-48 Flight Maneuver Database and to conduct an Airspace Constraint Analysis for the Remotely Operated Aircraft Area used to flight test Unmanned Aerial Vehicles. The final task was to develop and demonstrate a working knowledge of flight control theory.

  11. A Quantitative Approach to Determine Analogous Areas Using Environmental Parameters

    DTIC Science & Technology

    2008-03-01

    ...consolidation of a marine database. Out of this effort came the Comprehensive Ocean-Atmosphere Data Set (COADS). The original 17 data sets were... National Oceanic and Atmospheric Administration (NOAA) has compiled a database of total sediment thickness of the global oceans and seas. These data are

  12. Cost Considerations in Cloud Computing

    DTIC Science & Technology

    2014-01-01

    investments. 2. Database Options. The potential promise that "big data" analytics holds for many enterprise mission areas makes relevant the question of the... development of a range of new distributed file systems and databases that have better scalability properties than traditional SQL databases. Hadoop... data. Many systems exist that extend or supplement Hadoop, such as Apache Accumulo, which provides a highly granular mechanism for managing security

  13. Information And Data-Sharing Plan of IPY China Activity

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Cheng, W.

    2007-12-01

    Polar data sharing is an effective approach to global-system and polar science problems and to interdisciplinary and sustainable study, as well as an important means to deal with IPY scientific heritage and realize IPY goals. Corresponding to the IPY data-sharing policies, the Information and Data-Sharing Plan was listed among the five sub-plans of the IPY Chinese Programme launched in March 2007; they are the scientific research program of the Prydz Bay, Amery Ice Shelf and Dome A transects (short title: 'PANDA'), the Arctic Scientific Research Expedition Plan, the International Cooperation Plan, the Information and Data-Sharing Plan, and Education and Outreach. China, since the foundation of the Antarctic Zhongshan Station in 1989, has carried out systematic scientific expeditions and research in the Larsemann Hills, Prydz Bay and the neighbouring sea areas, and has organized 14 Prydz Bay oceanographic investigations, 3 Amery Ice Shelf expeditions, 4 Grove Mountains expeditions and 5 inland ice cap scientific expeditions. Two comprehensive oceanographic investigations in the Arctic Ocean were conducted in 1999 and 2003, acquiring a large amount of data and samples in the PANDA section and the fan areas of the Pacific Ocean in the Arctic Ocean. A mechanism of basic data submission, sharing and archiving has been gradually set up since 2000. Presently, the Polar Science Database and the Polar Sample Resource Sharing Platform of China, established with the aim of sharing polar data and samples, have been initially set up and have begun to provide sharing services to domestic and overseas users. Under the IPY Chinese Activity, 2 scientific expeditions in the Arctic Ocean, 3 in the Southern Ocean, 2 at the Amery Ice Shelf, 1 in the Grove Mountains and 2 inland ice cap expeditions on Dome A will be carried out during the IPY period. Building on the experience accumulated in the past and the work planned for the future, the Information and Data-Sharing Plan will, during 2007-2010, save, archive, and provide exchange and sharing services for the data obtained by scientific expeditions carried out on site under the IPY Chinese Programme. Meanwhile, focusing on areas in the east Antarctic Dome A-Grove Mountains-Zhongshan Station-Amery Ice Shelf-Prydz Bay section and the fan areas of the Pacific Ocean in the Arctic Ocean, the Plan will also collect and integrate IPY data and historical data and establish databases of the PANDA section and the Arctic Ocean. The details are as follows: on the basis of integrating the observational data acquired during the expeditions of China, the Plan will, adopting portal technology, develop 5 subject databases (English versions included): (1) a database of the Zhongshan Station-Dome A inland ice cap section; (2) a database of ocean-ice-atmosphere-ice shelf interaction in east Antarctica; (3) a database of geological and glaciological advance and retreat evolution in the Grove Mountains; (4) a database of solar-terrestrial physics at Zhongshan Station; and (5) an oceanographic database of the fan areas of the Pacific Ocean in the Arctic Ocean. CN-NADC of PRIC is the institute that assumes responsibility for the Plan; specifically, it coordinates and organizes the operation of the Plan, which includes data management, developing the portal for data and information sharing, and international exchanges. The specific assignments under the Plan will be carried out by research institutes under CAS (Chinese Academy of Sciences), SOA (State Oceanic Administration), the State Bureau of Surveying and Mapping and the Ministry of Education.

  14. Nano-enabled drug delivery: a research profile.

    PubMed

    Zhou, Xiao; Porter, Alan L; Robinson, Douglas K R; Shim, Min Suk; Guo, Ying

    2014-07-01

    Nano-enabled drug delivery (NEDD) systems are rapidly emerging as a key area for nanotechnology application. Understanding the status and developmental prospects of this area around the world is important to determine research priorities, and to evaluate and direct progress. Global research publication and patent databases provide a reservoir of information that can be tapped to provide intelligence for such needs. Here, we present a process that allows extraction of NEDD-related information from these databases by involving topical experts. This process incorporates in-depth analysis of NEDD literature review papers to identify key subsystems and major topics. We then use these to structure a global analysis of NEDD research topical trends and collaborative patterns, and to inform future innovation directions. This paper describes the process of deriving nano-enabled drug delivery-related information from global research and patent databases in an effort to perform a comprehensive global analysis of research trends and directions, along with collaborative patterns. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Integrative interactive visualization of crystal structure, band structure, and Brillouin zone

    NASA Astrophysics Data System (ADS)

    Hanson, Robert; Hinke, Ben; van Koevering, Matthew; Oses, Corey; Toher, Cormac; Hicks, David; Gossett, Eric; Plata Ramos, Jose; Curtarolo, Stefano; Aflow Collaboration

    The AFLOW library is an open-access database for high-throughput ab-initio calculations that serves as a resource for the dissemination of computational results in the area of materials science. Our project aims to create an interactive web-based visualization of any structure in the AFLOW database that has associated band structure data, in a way that allows novel simultaneous exploration of the crystal structure, band structure, and Brillouin zone. Interactivity is obtained using two synchronized JSmol implementations, one for the crystal structure and one for the Brillouin zone, along with a D3-based band-structure diagram produced on the fly from data obtained from the AFLOW database. The current website portal (http://aflowlib.mems.duke.edu/users/jmolers/matt/website) allows interactive access and visualization of the crystal structure, Brillouin zone and band structure for more than 55,000 inorganic crystal structures. This work was supported by the US Navy Office of Naval Research through a Broad Area Announcement administered by Duke University.

  16. 75 FR 65229 - Amendment and Establishment of Restricted Areas and Other Special Use Airspace, Razorback Range...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-22

    ... to publishing the final rule, three geographic coordinates along Arkansas State Highway 10 and three... areas R-2402A, R-2402B, and R-2402C changed in the aeronautical database. This action corrects those... in the Federal Register to establish two restricted areas (R-2402B and R-2402C) and amend an existing...

  17. Landslide databases review in the Geological Surveys of Europe

    NASA Astrophysics Data System (ADS)

    Herrera, Gerardo

    2017-04-01

    Landslides are one of the most widespread geohazards in Europe, producing significant social and economic damage. Rapid population growth in urban areas throughout many countries in Europe and extreme climatic scenarios could considerably increase landslide risk in the near future. However, many European countries do not include landslide risk in their legislation. Countries lack official methodological assessment guidelines and knowledge about landslide impacts. Although regional and national landslide databases exist in most countries, they are often not integrated because they are owned by different institutions. Hence, a European Landslides Directive, providing a common legal framework for dealing with landslides, is necessary. With this long-term goal in mind, we present a review of the landslide databases from the Geological Surveys of Europe focusing on their interoperability. The same landslide classification was used for the 849,543 landslide records from the Geological Surveys, of which 36% are slides, 10% falls, 20% flows, 11% complex slides, and 24% remain either unclassified or correspond to another typology. A landslide density map was produced from the available records of the Geological Surveys of 17 countries, showing the variable distribution of landslides. There are 0.2 million km2 of landslide-prone areas. The comparison of this map with the European landslide susceptibility map ELSUS v1 was successful for 73% of the predictions, and permitted identification of 25% of susceptible areas for which landslide records are not available from the Geological Surveys. Taking these results into account, the completeness of these landslide databases was evaluated, revealing different landslide hazard management approaches between surveys and countries.

  18. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground-motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
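
    The table layout named in the abstract (earthquake, station, component, record, references) maps naturally onto a small relational schema from which a ground-motion "flat file" can be joined out. A sketch using SQLite, with the references table omitted and the columns invented for illustration; this is an assumption-laden sketch, not the actual CEUS design:

      import sqlite3

      con = sqlite3.connect("ceus_gm.sqlite")
      con.executescript("""
      CREATE TABLE IF NOT EXISTS earthquake (eq_id INTEGER PRIMARY KEY, origin_time TEXT,
          magnitude REAL, stress_drop_bar REAL, corner_freq_hz REAL);
      CREATE TABLE IF NOT EXISTS station   (sta_id INTEGER PRIMARY KEY, name TEXT,
          lat REAL, lon REAL, vs30_mps REAL, kappa_s REAL, site_geology TEXT);
      CREATE TABLE IF NOT EXISTS record    (rec_id INTEGER PRIMARY KEY, eq_id INTEGER,
          sta_id INTEGER, epi_dist_km REAL,
          FOREIGN KEY (eq_id) REFERENCES earthquake(eq_id),
          FOREIGN KEY (sta_id) REFERENCES station(sta_id));
      CREATE TABLE IF NOT EXISTS component (comp_id INTEGER PRIMARY KEY, rec_id INTEGER,
          orientation TEXT, pga_g REAL,
          FOREIGN KEY (rec_id) REFERENCES record(rec_id));
      """)

      # Extract a flat file of key fields, one row per component, as done for NGA.
      flat = con.execute("""
          SELECT e.magnitude, r.epi_dist_km, s.vs30_mps, s.kappa_s, c.orientation, c.pga_g
          FROM component c
          JOIN record r     ON r.rec_id = c.rec_id
          JOIN earthquake e ON e.eq_id  = r.eq_id
          JOIN station s    ON s.sta_id = r.sta_id
      """).fetchall()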

  19. 23 CFR 450.312 - Metropolitan planning area boundaries.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 23 Highways 1 2011-04-01 2011-04-01 false Metropolitan planning area boundaries. 450.312 Section 450.312 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PLANNING AND RESEARCH... descriptions shall be submitted either as a geo-spatial database or described in sufficient detail to enable...

  20. Methods for structuring scientific knowledge from many areas related to aging research.

    PubMed

    Zhavoronkov, Alex; Cantor, Charles R

    2011-01-01

    Aging and age-related disease represent a substantial share of current natural, social, and behavioral science research efforts. Presently, no centralized system exists for tracking aging research projects across numerous research disciplines. The multidisciplinary nature of this research complicates the understanding of underlying project categories, the establishment of project relations, and the development of a unified project classification scheme. We have developed a highly visual database, the International Aging Research Portfolio (IARP), available at AgingPortfolio.org, to address this issue. The database integrates information on research grants, peer-reviewed publications, and issued patent applications from multiple sources. Additionally, the database uses flexible project classification mechanisms and tools for analyzing project associations and trends. This system enables scientists to search the centralized project database, to classify and categorize aging projects, and to analyze funding across multiple research disciplines. The IARP is designed to improve the allocation and prioritization of scarce research funding, to reduce project overlap, and to improve scientific collaboration, thereby accelerating scientific and medical progress in a rapidly growing area of research. Grant applications often precede publications, and some grants do not result in publications; the system therefore provides an earlier and broader view of research activity in many disciplines. This project is a first attempt to provide a centralized database system for research grants and to categorize aging research projects into multiple subcategories using both automated classification algorithms and a hierarchical environment for scientific collaboration.

  1. Generation And Understanding Of Natural Language Using Information In A Frame Structure

    NASA Astrophysics Data System (ADS)

    Perkins, Walton A.

    1989-03-01

    Many expert systems and relational database systems store factual information in the form of attribute values of objects. Problems arise in transforming that attribute (frame) database representation into English surface structure, and in transforming the English surface structure into a representation that references information in the frame database. In this paper we consider mainly the generation process, as it is the area in which we have made the most significant progress. In its interaction with the user, the expert system must generate questions, declarations, and uncertain declarations. Attributes such as COLOR, LENGTH, and ILLUMINATION can be referenced using the template "<attribute> of <object>" for both questions and declarations. However, many other attributes, such as RATTLES in "What is RATTLES of the light bulb?" and HAS_STREP_THROAT in "HAS_STREP_THROAT of Dan is true.", do not fit this template. We examined over 300 attributes from several knowledge bases and have grouped them into 16 classes. For each class there is one "question" template, one "declaration" template, and one "uncertain declaration" template for generating English surface structure. The internal database identifiers (e.g., HAS_STREP_THROAT and DISEASE_35) must also be replaced by output synonyms. Classifying each attribute, in combination with synonym translation, markedly improved the English surface structure that the system generated. In the area of understanding, synonym translation and knowledge of attribute properties, such as legal values, have resulted in a robust database query capability.
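
    The class-plus-template scheme described above is easy to picture as a lookup table: each attribute class carries a question, declaration, and uncertain-declaration template, and internal identifiers are mapped to output synonyms before the template is filled. A minimal sketch follows, with the class names, templates, and synonym entries invented for illustration; only the general mechanism follows the paper.

      # Hypothetical attribute classes and their surface templates.
      TEMPLATES = {
          "value_attribute": {                      # e.g., COLOR, LENGTH, ILLUMINATION
              "question":    "What is the {attr} of the {obj}?",
              "declaration": "The {attr} of the {obj} is {val}.",
              "uncertain":   "The {attr} of the {obj} may be {val}.",
          },
          "boolean_state": {                        # e.g., HAS_STREP_THROAT, RATTLES
              "question":    "Does the {obj} {attr}?",
              "declaration": "The {obj} {attr}.",
              "uncertain":   "The {obj} possibly {attr}.",
          },
      }

      # Hypothetical synonym table mapping internal identifiers to output words.
      SYNONYMS = {"HAS_STREP_THROAT": "have strep throat", "COLOR": "color", "DISEASE_35": "measles"}

      def generate(kind, attr_class, attr, obj, val=None):
          """Fill the template for the given speech act and attribute class."""
          template = TEMPLATES[attr_class][kind]
          return template.format(attr=SYNONYMS.get(attr, attr.lower()), obj=obj, val=val)

      print(generate("question", "boolean_state", "HAS_STREP_THROAT", "patient"))
      print(generate("declaration", "value_attribute", "COLOR", "light bulb", "white"))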

  2. The choice of normative pediatric reference database changes spine bone mineral density Z-scores but not the relationship between bone mineral density and prevalent vertebral fractures.

    PubMed

    Ma, Jinhui; Siminoski, Kerry; Alos, Nathalie; Halton, Jacqueline; Ho, Josephine; Lentle, Brian; Matzinger, MaryAnn; Shenouda, Nazih; Atkinson, Stephanie; Barr, Ronald; Cabral, David A; Couch, Robert; Cummings, Elizabeth A; Fernandez, Conrad V; Grant, Ronald M; Rodd, Celia; Sbrocchi, Anne Marie; Scharke, Maya; Rauch, Frank; Ward, Leanne M

    2015-03-01

    Our objectives were to assess the magnitude of the disparity in lumbar spine bone mineral density (LSBMD) Z-scores generated by different reference databases and to evaluate whether the relationship between LSBMD Z-scores and vertebral fractures (VF) varies with the choice of database. Children with leukemia underwent LSBMD assessment by cross-calibrated dual-energy x-ray absorptiometry, with Z-scores generated according to the Hologic and Lunar databases. VF were assessed by the Genant method on spine radiographs. Logistic regression was used to assess the association between fractures and LSBMD Z-scores. Net reclassification improvement and area under the receiver operating characteristic curve were calculated to assess the predictive accuracy of LSBMD Z-scores for VF. For the 186 children from 0 to 18 years of age, 6 different age ranges were studied. The Z-scores generated for the 0 to 18 group were highly correlated (r ≥ 0.90), but the proportion of children with LSBMD Z-scores ≤ -2.0 among those with VF varied substantially (from 38% to 66%). Odds ratios (OR) for the association between LSBMD Z-score and VF were similar regardless of database (from OR = 1.92, 95% confidence interval 1.44 to 2.56, to OR = 2.70, 95% confidence interval 1.70 to 4.28). The area under the receiver operating characteristic curve and the net reclassification improvement ranged from 0.71 to 0.75 and from -0.15 to 0.07, respectively. Although use of an LSBMD Z-score threshold as part of the definition of osteoporosis in a child with VF does not appear valid, the study of relationships between BMD and VF is valid regardless of the BMD database that is used.
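
    The core comparison above (an odds ratio per unit of Z-score plus an AUC, repeated for each reference database) can be reproduced in a few lines of scikit-learn. A sketch under the assumption that each database's Z-scores and the fracture indicator are available as arrays; the data below are fabricated placeholders, not the study's measurements:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      vf = rng.integers(0, 2, size=186)                         # 1 = vertebral fracture
      z_hologic = rng.normal(-1.0 - 0.5 * vf, 1.0)              # placeholder Z-scores
      z_lunar   = z_hologic + rng.normal(0.3, 0.2, size=186)    # correlated alternative database

      for name, z in [("Hologic", z_hologic), ("Lunar", z_lunar)]:
          X = z.reshape(-1, 1)
          model = LogisticRegression().fit(X, vf)               # default L2 penalty; fine for a sketch
          odds_ratio = np.exp(-model.coef_[0][0])               # OR per 1-unit decrease in Z-score
          auc = roc_auc_score(vf, model.decision_function(X))   # discrimination of VF by the Z-score
          print(f"{name}: OR per unit lower Z = {odds_ratio:.2f}, AUC = {auc:.2f}")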

  3. Database integration in a multimedia-modeling environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorow, Kevin E.

    2002-09-02

    Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that has to be transferred is kept to a minimum (only the data that fulfill a specific request are provided, as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.

  4. Spatial variation of volcanic rock geochemistry in the Virunga Volcanic Province: Statistical analysis of an integrated database

    NASA Astrophysics Data System (ADS)

    Barette, Florian; Poppe, Sam; Smets, Benoît; Benbakkar, Mhammed; Kervyn, Matthieu

    2017-10-01

    We present an integrated, spatially explicit database of existing major-element geochemical analyses available from (post-)colonial scientific reports, PhD theses, and international publications for the Virunga Volcanic Province, located in the western branch of the East African Rift System. This volcanic province is characterised by alkaline volcanism, including silica-undersaturated, alkaline, and potassic lavas. The database contains a total of 908 geochemical analyses of eruptive rocks for the entire volcanic province, with location information for most samples. A preliminary analysis of the overall consistency of the database, using statistical techniques on sets of geochemical analyses with contrasting analytical methods or dates, demonstrates that the database is consistent. We applied principal component analysis and cluster analysis to the whole-rock major-element compositions included in the database to study the spatial variation of the chemical composition of eruptive products in the Virunga Volcanic Province. These statistical analyses identify spatially distributed clusters of eruptive products. The known geochemical contrasts are highlighted by the spatial analysis, such as the unique geochemical signature of Nyiragongo lavas compared with other Virunga lavas, the geochemical heterogeneity of the Bulengo area, and the trachyte flows of Karisimbi volcano. Most importantly, we identified separate clusters of eruptive products that originate from primitive magmatic sources. These lavas of primitive composition are preferentially located along NE-SW inherited rift structures, often at a distance from the central Virunga volcanoes. Our results illustrate the relevance of spatial analysis of integrated geochemical data for a volcanic province, as a complement to classical petrological investigations. This approach helps to characterise geochemical variations within a complex of magmatic systems and to identify specific petrologic and geochemical investigations that should be tackled within a study area.
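
    The statistical workflow sketched above (standardize the major-element compositions, reduce them with PCA, then cluster and map the cluster labels) is straightforward with scikit-learn. The column names, file name, and number of clusters below are illustrative assumptions, not the values used by the authors:

      import pandas as pd
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      oxides = ["SiO2", "TiO2", "Al2O3", "FeOt", "MgO", "CaO", "Na2O", "K2O"]
      df = pd.read_csv("virunga_geochem.csv")          # hypothetical file: oxides + lat/lon per sample

      X = StandardScaler().fit_transform(df[oxides])   # put all oxides on a comparable scale
      scores = PCA(n_components=3).fit_transform(X)    # principal components of the compositions
      df["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

      # Cluster labels can now be mapped with the sample coordinates to look for
      # spatially coherent groups (e.g., primitive lavas along NE-SW structures).
      print(df.groupby("cluster")[oxides].mean().round(2))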

  5. Influence of high-resolution surface databases on the modeling of local atmospheric circulation systems

    NASA Astrophysics Data System (ADS)

    Paiva, L. M. S.; Bodstein, G. C. R.; Pimentel, L. C. G.

    2013-12-01

    Large-eddy simulations are performed using the Advanced Regional Prediction System (ARPS) code at horizontal grid resolutions as fine as 300 m to assess the influence of detailed and updated surface databases on the modeling of local atmospheric circulation systems of urban areas with complex terrain. Applications to air pollution and wind energy are sought. These databases comprise 3 arc-sec topographic data from the Shuttle Radar Topography Mission, 10 arc-sec vegetation-type data from the European Space Agency (ESA) GlobCover Project, and 30 arc-sec Leaf Area Index and Fraction of Absorbed Photosynthetically Active Radiation data from the ESA GlobCarbon Project. Simulations are carried out for the Metropolitan Area of Rio de Janeiro using six one-way nested-grid domains that allow the choice of distinct parametric models and vertical resolutions associated with each grid. ARPS is initialized using the Global Forecast System with 0.5°-resolution data from the National Centers for Environmental Prediction, which is also used every 3 h as a lateral boundary condition. Topographic shading is turned on, and two soil layers with depths of 0.01 and 1.0 m are used to compute the soil temperature and moisture budgets in all runs. Results for two simulated runs covering the period from 6 to 7 September 2007 are compared to surface and upper-air observational data to explore the dependence of the simulations on initial and boundary conditions, topographic and land-use databases, and grid resolution. Our comparisons show overall good agreement between simulated and observed data and also indicate that the low resolution of the 30 arc-sec soil database from the United States Geological Survey, the soil moisture and skin temperature initial conditions assimilated from the GFS analyses, and the synoptic forcing on the lateral boundaries of the finer grids may limit an adequate spatial description of the meteorological variables.

  6. Guideline.gov: A Database of Clinical Specialty Guidelines.

    PubMed

    El-Khayat, Yamila M; Forbes, Carrie S; Coghill, Jeffrey G

    2017-01-01

    The National Guidelines Clearinghouse (NGC), also known as Guideline.gov, is a database of resources that gives health care providers a central repository of guidelines for clinical specialty areas in medicine. The database is provided free of charge and is sponsored by the U.S. Department of Health and Human Services and the Agency for Healthcare Research and Quality. The treatment guidelines are updated regularly, with new guidelines replacing older ones every five years. There are hundreds of current guidelines, with more added each week. The purpose and goal of the NGC is to provide physicians, nurses and other health care providers, insurance companies, and others in the field of health care with a unified database of the most current, detailed, relevant, and objective clinical practice guidelines.

  7. Has upwelling strengthened along worldwide coasts over 1982-2010?

    NASA Astrophysics Data System (ADS)

    Varela, R.; Álvarez, I.; Santos, F.; Decastro, M.; Gómez-Gesteira, M.

    2015-05-01

    Changes in coastal upwelling strength have been widely studied since 1990 when Bakun proposed that global warming can induce the intensification of upwelling in coastal areas. Whether present wind trends support this hypothesis remains controversial, as results of previous studies seem to depend on the study area, the length of the time series, the season, and even the database used. In this study, temporal and spatial trends in the coastal upwelling regime worldwide were investigated during upwelling seasons from 1982 to 2010 using a single wind database (Climate Forecast System Reanalysis) with high spatial resolution (0.3°). Of the major upwelling systems, increasing trends were only observed in the coastal areas of Benguela, Peru, Canary, and northern California. A tendency for an increase in upwelling-favourable winds was also identified along several less studied regions, such as the western Australian and southern Caribbean coasts.

  8. Has upwelling strengthened along worldwide coasts over 1982-2010?

    PubMed Central

    Varela, R.; Álvarez, I.; Santos, F.;  deCastro, M.; Gómez-Gesteira, M.

    2015-01-01

    Changes in coastal upwelling strength have been widely studied since 1990 when Bakun proposed that global warming can induce the intensification of upwelling in coastal areas. Whether present wind trends support this hypothesis remains controversial, as results of previous studies seem to depend on the study area, the length of the time series, the season, and even the database used. In this study, temporal and spatial trends in the coastal upwelling regime worldwide were investigated during upwelling seasons from 1982 to 2010 using a single wind database (Climate Forecast System Reanalysis) with high spatial resolution (0.3°). Of the major upwelling systems, increasing trends were only observed in the coastal areas of Benguela, Peru, Canary, and northern California. A tendency for an increase in upwelling-favourable winds was also identified along several less studied regions, such as the western Australian and southern Caribbean coasts. PMID:25952477

  9. Mapping the World's Marine Protected and Managed Areas - Promoting Awareness, Compliance, and Enforcement via Open Data and Tools.

    NASA Astrophysics Data System (ADS)

    Vincent, T.; Zetterlind, V.; Tougher, B.

    2016-12-01

    Marine Protected and Managed Areas (MPAs) are a cornerstone of coastal and ocean conservation efforts and reflect years of dedicated effort to protect species and habitats through science-based regulation. When they are effective, biomass can increase dramatically, by up to 14-fold, and MPAs play a significant role in conserving biodiversity. Effective MPAs require enforcement, and enforcement cannot occur without awareness of their locations among ocean stakeholders and the general public. The Anthropocene Institute, in partnership with the NOAA Marine Protected Area Center, is creating an actively managed, free and open, worldwide database of MPAs, including normalized metadata and regulation summaries, full GIS boundaries, revision history, and public-facing interactive web maps. The project employs two full-time lawyers, who first comb through the relevant regulations, two full-time geographers, and a full-time GIS database/web engineer.

  10. Can different primary care databases produce comparable estimates of burden of disease: results of a study exploring venous leg ulceration.

    PubMed

    Petherick, Emily S; Pickett, Kate E; Cullum, Nicky A

    2015-08-01

    Primary care databases from the UK have been widely used to produce evidence on the epidemiology and health service usage of a wide range of conditions. To date there have been few evaluations of the comparability of estimates between different sources of these data. The aim was to estimate the comparability of two widely used primary care databases, The Health Improvement Network (THIN) and the General Practice Research Database (GPRD), using venous leg ulceration as an exemplar condition, in a cross prospective cohort comparison of the GPRD and THIN databases using data from 1998 to 2006. A data set was extracted from both databases containing all persons aged 20 years or greater with a database diagnosis of venous leg ulceration recorded for the period 1998-2006. Annual incidence and prevalence rates of venous leg ulceration were calculated within each database, standardized to the European standard population, and compared using standardized rate ratios. Comparable estimates of venous leg ulcer incidence from the GPRD and THIN databases could be obtained using data from 2000 to 2006, and of prevalence using data from 2001 to 2006. Recent data collected by these two databases are more likely to produce comparable results for the burden of venous leg ulceration. These results require confirmation in other disease areas to enable researchers to have confidence in the comparability of findings from these two widely used primary care research resources.
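
    The comparison hinges on direct standardization: each database's age-specific rates are weighted by the European standard population, and the two standardized rates are then expressed as a ratio. A compact sketch with fabricated illustrative numbers (the age bands and weights are truncated placeholders, not the full standard population):

      # Age-specific rates per 1,000 person-years (fabricated) and standard-population weights.
      age_bands = ["20-39", "40-59", "60-79", "80+"]
      std_weights = [0.35, 0.33, 0.25, 0.07]            # placeholder European standard weights
      rate_gprd = [0.1, 0.5, 2.4, 6.0]
      rate_thin = [0.1, 0.6, 2.2, 5.5]

      def directly_standardized_rate(rates, weights):
          """Weighted average of age-specific rates using the standard population."""
          return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

      dsr_gprd = directly_standardized_rate(rate_gprd, std_weights)
      dsr_thin = directly_standardized_rate(rate_thin, std_weights)
      print(f"GPRD DSR = {dsr_gprd:.2f}, THIN DSR = {dsr_thin:.2f} per 1,000 person-years")
      print(f"Standardized rate ratio (GPRD/THIN) = {dsr_gprd / dsr_thin:.2f}")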

  11. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    NASA Astrophysics Data System (ADS)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their minimum circumscribing and maximum inscribed circles to derive a representative width for these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid, and spatial autocorrelation is studied using Moran's I. We also compare the results between methods as well as to previous research, existing databases, and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
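
    The first two methods boil down to two circle radii per sidewalk polygon: the smallest circle that contains it and the largest circle that fits inside it. A rough sketch with Shapely (assuming version 2.0 or later); the polygon below is a toy stand-in for a streetscape surface, and using the pole of inaccessibility to approximate the inscribed-circle centre is our simplification, not necessarily the authors' exact construction:

      from shapely.geometry import Polygon
      from shapely.ops import polylabel
      import shapely

      # Toy sidewalk polygon: a roughly 3 m x 40 m strip with a small bulge.
      sidewalk = Polygon([(0, 0), (40, 0), (40, 3), (20, 5), (0, 3)])

      # Largest inscribed circle: centre approximated by the pole of inaccessibility,
      # radius as the distance from that point to the polygon boundary.
      centre = polylabel(sidewalk, tolerance=0.05)
      r_inscribed = centre.distance(sidewalk.exterior)

      # Smallest circumscribing circle radius.
      r_circumscribed = shapely.minimum_bounding_radius(sidewalk)

      print(f"representative width from inscribed circle ~ {2 * r_inscribed:.2f} m")
      print(f"circumscribing circle diameter ~ {2 * r_circumscribed:.2f} m")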

  12. An original imputation technique of missing data for assessing exposure of newborns to perchlorate in drinking water.

    PubMed

    Caron, Alexandre; Clement, Guillaume; Heyman, Christophe; Aernout, Eva; Chazard, Emmanuel; Le Tertre, Alain

    2015-01-01

    Incompleteness of epidemiological databases is a major drawback when it comes to analyzing data. We designed an epidemiological study to assess the association between newborn thyroid function and exposure to perchlorates found in the tap water of the mother's home. Perchlorate exposure was known for only 9% of newborns. The aim of our study was to design, test and evaluate an original method for imputing the perchlorate exposure of newborns based on their maternity ward of birth. The first database held an exhaustive collection of newborn thyroid function measurements from systematic neonatal screening; in this database, the mother's municipality of residence was available only for 2012. Between 2004 and 2011, the closest available data was the municipality of the maternity ward of birth. Exposure was assessed using a second database that contained the perchlorate levels for each municipality. We computed the catchment area of every maternity ward based on the French nationwide exhaustive database of inpatient stays. The municipality, and consequently the perchlorate exposure, was imputed by a weighted draw within the catchment area. Missing values for the remaining covariates were imputed by chained equations. A linear mixture model was computed on each imputed dataset. We compared odds ratios (ORs) and 95% confidence intervals (95% CI) estimated on real versus imputed 2012 data. The same model was then carried out on the whole imputed database. The ORs estimated on 36,695 observations by our multiple imputation method are comparable to those from the real 2012 data. On the 394,979 observations of the whole database, the ORs remain stable but the 95% CIs tighten considerably. The model estimates computed on imputed data are similar to those calculated on real data. The main advantage of multiple imputation is to provide unbiased estimates of the ORs while maintaining their variances. Thus, our method will be used to increase the statistical power of future studies by including all 394,979 newborns.
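
    The central imputation step is simple to state: for a newborn whose home municipality is unknown, draw a municipality at random from the catchment area of the maternity ward of birth, with probabilities proportional to how often that ward's patients come from each municipality, then attach that municipality's perchlorate level. A minimal sketch of that draw; the catchment shares and perchlorate levels are fabricated placeholders:

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical catchment area of one maternity ward: municipality -> share of its patients.
      catchment = {"muni_A": 0.55, "muni_B": 0.30, "muni_C": 0.15}
      # Hypothetical tap-water perchlorate levels (micrograms per litre) per municipality.
      perchlorate = {"muni_A": 4.0, "muni_B": 15.0, "muni_C": 1.0}

      def impute_exposure(n_draws):
          """Weighted draw of a municipality, repeated once per imputed dataset."""
          munis = list(catchment)
          probs = np.array([catchment[m] for m in munis])
          draws = rng.choice(munis, size=n_draws, p=probs / probs.sum())
          return [perchlorate[m] for m in draws]

      # One imputed exposure per imputed dataset for a single newborn (e.g., 20 imputations).
      print(impute_exposure(20))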

  13. [The design and implementation of the web typical surface object spectral information system in arid areas based on .NET and SuperMap].

    PubMed

    Xia, Jun; Tashpolat, Tiyip; Zhang, Fei; Ji, Hong-jiang

    2011-07-01

    The characteristics of object spectra are not only the basis of quantitative remote sensing analysis, but also a main subject of basic remote sensing research. A spectral database of typical surface objects in arid-area oases is of great significance for applied remote sensing research on soil salinization. In the present paper, the authors took the Ugan-Kuqa River Delta Oasis as an example, combined .NET and the SuperMap platform with data stored in a SQL Server database, used the B/S pattern and the C# language to design and develop a typical surface object spectral information system, and established a typical surface object spectral database according to the characteristics of arid-area oases. The system implements classified storage and management of typical surface object spectral information and the related attribute data of the study areas; it also implements visualized two-way queries between maps and attribute data, the drawing of surface object spectral response curves, and the processing and plotting of derivative spectral data. In addition, the system initially possesses simple spectral data mining and analysis capabilities, providing an efficient, reliable and convenient data management and application platform for follow-up soil salinization studies in the Ugan-Kuqa River Delta Oasis. Finally, the system is easy to maintain, convenient for secondary development, and has performed well in practical operation.

  14. Recent improvements in the NASA technical report server

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.

    1995-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web (WWW) report distribution service, has been modified to allow parallel database queries (significantly decreasing user access time, by an average factor of 2.3), access from clients behind firewalls and/or proxies that truncate excessively long Uniform Resource Locators (URLs), access to non-Wide Area Information Server (WAIS) databases, and compatibility with the Z39.50 protocol.

  15. Analysis of the Database of Theses and Dissertations from DME/UFSCAR about Astronomy Education

    NASA Astrophysics Data System (ADS)

    Rodrigues Ferreira, Orlando; Voelzke, Marcos Rincon

    2013-11-01

    The paper presents a brief analysis of the "Database of Theses and Dissertations about Astronomy Education" from the Department of Teaching Methodology (DME) of the Federal University of São Carlos (UFSCar). This kind of study made it possible to develop new analyses and statistical data, as well as to rank the Brazilian institutions that produce academic work in the area.

  16. Early ICU Standardized Rehabilitation Therapy for the Critically Injured Burn Patient

    DTIC Science & Technology

    2017-10-01

    The original study was deemed phase I and closed. The second phase proposed to examine medical records within a large national hospital database to identify optimal care delivery patterns, minimizing the... engineering, and the academic world on areas such as: improving public knowledge, attitudes, skills, and abilities; changing behavior, practices...

  17. Geochronology Database for Central Colorado

    USGS Publications Warehouse

    Klein, T.L.; Evans, K.V.; deWitt, E.H.

    2010-01-01

    This database is a compilation of published and some unpublished isotopic and fission track age determinations in central Colorado. The compiled area extends from the southern Wyoming border to the northern New Mexico border and from approximately the longitude of Denver on the east to Gunnison on the west. Data for the tephrochronology of Pleistocene volcanic ash, carbon-14, Pb-alpha, common-lead, and U-Pb determinations on uranium ore minerals have been excluded.

  18. Analysis of User Need with CD-ROM Databases: A Case Study Based on Work Sampling at One University Library.

    ERIC Educational Resources Information Center

    Wells, Amy Tracy

    Analysis of the needs of users of Compact Disk-Read Only Memory (CD-ROM) was performed at the Tampa campus of the University of South Florida. A review of the literature indicated that problems associated with selecting the appropriate database, searching, and requiring technical assistance were the probable areas of user need. The library has 17…

  19. The Disease Portals, disease-gene annotation and the RGD disease ontology at the Rat Genome Database.

    PubMed

    Hayman, G Thomas; Laulederkind, Stanley J F; Smith, Jennifer R; Wang, Shur-Jen; Petri, Victoria; Nigam, Rajni; Tutaj, Marek; De Pons, Jeff; Dwinell, Melinda R; Shimoyama, Mary

    2016-01-01

    The Rat Genome Database (RGD; http://rgd.mcw.edu/) provides critical datasets and software tools to a diverse community of rat and non-rat researchers worldwide. To meet the needs of the many users whose research is disease oriented, RGD has created a series of Disease Portals and has prioritized its curation efforts on the datasets important to understanding the mechanisms of various diseases. Gene-disease relationships for three species, rat, human and mouse, are annotated to capture biomarkers, genetic associations, molecular mechanisms and therapeutic targets. To generate gene-disease annotations more effectively and in greater detail, RGD initially adopted the MEDIC disease vocabulary from the Comparative Toxicogenomics Database and adapted it for use by expanding this framework with the addition of over 1000 terms to create the RGD Disease Ontology (RDO). The RDO provides the foundation for, at present, 10 comprehensive disease area-related dataset and analysis platforms at RGD, the Disease Portals. Two major disease areas are the focus of data acquisition and curation efforts each year, leading to the release of the related Disease Portals. Collaborative efforts to realize a more robust disease ontology are underway. Database URL: http://rgd.mcw.edu

  20. An authoritative global database for active submarine hydrothermal vent fields

    NASA Astrophysics Data System (ADS)

    Beaulieu, Stace E.; Baker, Edward T.; German, Christopher R.; Maffei, Andrew

    2013-11-01

    The InterRidge Vents Database is available online as the authoritative reference for locations of active submarine hydrothermal vent fields. Here we describe the revision of the database to an open source content management system and conduct a meta-analysis of the global distribution of known active vent fields. The number of known active vent fields has almost doubled in the past decade (521 as of year 2009), with about half visually confirmed and others inferred active from physical and chemical clues. Although previously known mainly from mid-ocean ridges (MORs), active vent fields at MORs now comprise only half of the total known, with about a quarter each now known at volcanic arcs and back-arc spreading centers. Discoveries in arc and back-arc settings resulted in an increase in known vent fields within exclusive economic zones, consequently reducing the proportion known in high seas to one third. The increase in known vent fields reflects a number of factors, including increased national and commercial interests in seafloor hydrothermal deposits as mineral resources. The purpose of the database now extends beyond academic research and education and into marine policy and management, with at least 18% of known vent fields in areas granted or pending applications for mineral prospecting and 8% in marine protected areas.

  1. Geologic and aeromagnetic maps of the Fossil Ridge area and vicinity, Gunnison County, Colorado

    USGS Publications Warehouse

    DeWitt, Ed; Zech, R.S.; Chase, C.G.; Zartman, R.E.; Kucks, R.P.; Bartelson, Bruce; Rosenlund, G.C.; Earley, Drummond

    2002-01-01

    This data set includes a GIS geologic map database of an Early Proterozoic metavolcanic and metasedimentary terrane extensively intruded by Early and Middle Proterozoic granitic plutons. Laramide to Tertiary deformation and intrusion of felsic plutons have created numerous small mineral deposits that are described in the tables and are shown on the figures in the accompanying text pamphlet. Also included in the pamphlet are numerous chemical analyses of igneous and meta-igneous bodies of all ages in tables and in summary geochemical diagrams. The text pamphlet also contains a detailed description of map units and discussions of the aeromagnetic survey, igneous and metamorphic rocks, and mineral deposits. The printed map sheet and browse graphic PDF file include the aeromagnetic map of the study area, as well as figures and photographs. Purpose: This GIS geologic map database is provided to facilitate the presentation and analysis of earth-science data for this region of Colorado. This digital map database may be displayed at any scale or projection. However, the geologic data in this coverage are not intended for use at a scale other than 1:30,000. Supplemental useful data accompanying the database are extensive geochemical and mineral deposits data, as well as an aeromagnetic map.

  2. Transportation Services in Rural Areas. January 1979-December 1988. Quick Bibliography Series.

    ERIC Educational Resources Information Center

    La Caille John, Patricia, Comp.

    This bibliography contains 137 entries of English-language materials available from the National Agricultural Library's (NAL) AGRICOLA database, each pertaining to some aspect of transportation services in rural areas. Each entry, including books, reports, studies, and so forth, offers bibliographical information…

  3. Transit profiles : agencies in urbanized areas exceeding 200,000 population for the 1993 National Transit Database section 15 report year

    DOT National Transportation Integrated Search

    1994-12-01

    This publication consists of individual profiles for each reporting transit agency located in an urbanized area with a population exceeding 200,000. The data contained in each profile consists of general and summary reports, as well as modal, perform...

  4. Transit profiles : agencies in urbanized areas exceeding 200,000 population for the 1994 National Transit Database report year

    DOT National Transportation Integrated Search

    1995-12-01

    This publication consists of individual profiles for each reporting transit agency located in an urbanized area with a population exceeding 200,000. The data contained in each profile consists of general and summary reports, as well as modal, perform...

  5. Search extension transforms Wiki into a relational system: a case for flavonoid metabolite database.

    PubMed

    Arita, Masanori; Suwa, Kazuhiro

    2008-09-17

    In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair reported both to interact and not to interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations. The system was written in the PHP language as an extension of MediaWiki. All modifications are open source and publicly available. This scheme benefits from both the free-formatted Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated.
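
    The key idea, realizing relational operations through string searches over half-formatted wiki pages, can be mimicked in a few lines: pull key-value pairs out of a page's structured block with a regular expression, then filter pages the way a SQL SELECT ... WHERE would. The page markup and field names below are invented for illustration and do not reproduce the actual extension's command syntax:

      import re

      # Hypothetical half-formatted wiki pages: a structured block plus free text.
      pages = {
          "Quercetin":  "| formula = C15H10O7 | species = Allium cepa |\nFree-text notes ...",
          "Naringenin": "| formula = C15H12O5 | species = Citrus paradisi |\nMore notes ...",
      }

      FIELD = re.compile(r"\|\s*(\w+)\s*=\s*([^|\n]+)")

      def select(pages, where):
          """Relational-style selection: return page titles whose fields satisfy `where`."""
          hits = []
          for title, text in pages.items():
              fields = {k: v.strip() for k, v in FIELD.findall(text)}
              if all(fields.get(k) == v for k, v in where.items()):
                  hits.append(title)
          return hits

      print(select(pages, {"species": "Allium cepa"}))   # -> ['Quercetin']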

  6. Search extension transforms Wiki into a relational system: A case for flavonoid metabolite database

    PubMed Central

    Arita, Masanori; Suwa, Kazuhiro

    2008-01-01

    Background: In computer science, database systems are based on the relational model founded by Edgar Codd in 1970. On the other hand, in the area of biology the word 'database' often refers to loosely formatted, very large text files. Although such bio-databases may describe conflicts or ambiguities (e.g. a protein pair reported both to interact and not to interact, or unknown parameters) in a positive sense, the flexibility of the data format sacrifices a systematic query mechanism equivalent to the widely used SQL. Results: To overcome this disadvantage, we propose embeddable string-search commands on a Wiki-based system and designed a half-formatted database. As proof of principle, a database of flavonoids with 6902 molecular structures from over 1687 plant species was implemented on MediaWiki, the background system of Wikipedia. Registered users can describe any information in an arbitrary format. The structured part is subject to text-string searches to realize relational operations. The system was written in the PHP language as an extension of MediaWiki. All modifications are open source and publicly available. Conclusion: This scheme benefits from both the free-formatted Wiki style and the concise, structured relational-database style. MediaWiki supports multi-user environments for document management, and the cost of database maintenance is alleviated. PMID:18822113

  7. Searching fee and non-fee toxicology information resources: an overview of selected databases.

    PubMed

    Wright, L L

    2001-01-12

    Toxicology profiles organize information by broad subjects, the first of which affirms the identity of the agent studied. Studies here show that two non-fee databases (ChemFinder and ChemIDplus) verify the identity of compounds with high efficiency (63% and 73%, respectively), with the fee-based Chemical Abstracts Registry file serving well to fill data gaps (100%). Continued searching proceeds using knowledge of structure, scope, and content to select databases. Valuable sources of information are factual databases that collect data and facts in special subject areas, organized in formats available for analysis or use. Some sources representative of factual files are RTECS, CCRIS, HSDB, GENE-TOX and IRIS. Numerous factual databases offer a wealth of reliable information; however, exhaustive searches probe information published in journal articles and/or technical reports, with records residing in bibliographic databases such as BIOSIS, EMBASE, MEDLINE, TOXLINE and Web of Science. Listed with descriptions are numerous factual and bibliographic databases supplied by 11 producers. Given the multitude of options and resources, it is often necessary to seek service desk assistance. Questions were posed by telephone and e-mail to service desks at DIALOG, ISI, MEDLARS, Micromedex and STN International. Results of the survey are reported.

  8. Comparison of Ethnic-specific Databases in Heidelberg Retina Tomography-3 to Discriminate Between Early Glaucoma and Normal Chinese Eyes.

    PubMed

    Tan, Xiu Ling; Yap, Sae Cheong; Li, Xiang; Yip, Leonard W

    2017-01-01

    To compare the diagnostic accuracy of the 3 race-specific normative databases in Heidelberg Retina Tomography (HRT)-3, in differentiating between early glaucomatous and healthy normal Chinese eyes. 52 healthy volunteers and 25 glaucoma patients were recruited for this prospective cross-sectional study. All underwent standardized interviews, ophthalmic examination, perimetry and HRT optic disc imaging. Area under the curve (AUC) receiver operating characteristics, sensitivity and specificity were derived to assess the discriminating abilities of the 3 normative databases, for both Moorfields Regression Analysis (MRA) and Glaucoma Probability Score (GPS). A significantly higher percentage (65%) of patients were classified as "within normal limits" using the MRA-Indian database, as compared to the MRA-Caucasian and MRA-African-American databases. However, for GPS, this was observed using the African-American database. For MRA, the highest sensitivity was obtained with both Caucasian and African-American databases (68%), while the highest specificity was from the Indian database (94%). The AUC for discrimination between glaucomatous and normal eyes by MRA-Caucasian, MRA-African-American and MRA-Indian databases were 0.77 (95% CI, 0.67-0.88), 0.79 (0.69-0.89) and 0.73 (0.63-0.84) respectively. For GPS, the highest sensitivity was obtained using either Caucasian or Indian databases (68%). The highest specificity was seen with the African-American database (98%). The AUC for GPS-Caucasian, GPS-African-American and GPS-Indian databases were 0.76 (95% CI, 0.66-0.87), 0.77 (0.67-0.87) and 0.76 (0.66-0.87) respectively. Comparison of the 3 ethnic databases did not reveal significant differences to differentiate early glaucomatous from normal Chinese eyes.

  9. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the time the year 2000 has ended. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. The common and virtual machinability databases will cover the four distinct areas of machine tools, available tooling, common machine tool loads, and materials. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, a virtual manufacturing paradigm for NASA-JSC, an implementation timeline, a VNC model of one bridge mill, and the troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position to become a clearing house for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  10. Quaternary Geology and Liquefaction Susceptibility, San Francisco, California 1:100,000 Quadrangle: A Digital Database

    USGS Publications Warehouse

    Knudsen, Keith L.; Noller, Jay S.; Sowers, Janet M.; Lettis, William R.

    1997-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There are no paper maps included in the Open-File report. The report does include, however, PostScript plot files containing the images of the geologic map sheets with explanations, as well as the accompanying text describing the geology of the area. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled 'For Those Who Aren't Familiar With Digital Geologic Map Databases' below. This digital map database, compiled from previously unpublished data, and new mapping by the authors, represents the general distribution of surficial deposits in the San Francisco bay region. Together with the accompanying text file (sf_geo.txt or sf_geo.pdf), it provides current information on Quaternary geology and liquefaction susceptibility of the San Francisco, California, 1:100,000 quadrangle. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller. The content and character of the database, as well as three methods of obtaining the database, are described below.

  11. Depth-area-duration characteristics of storm rainfall in Texas using Multi-Sensor Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    McEnery, J. A.; Jitkajornwanich, K.

    2012-12-01

    This presentation will describe the methodology and overall system development by which a benchmark dataset of precipitation information has been used to characterize depth-area-duration relations in heavy rain storms occurring over regions of Texas. Over the past two years, project investigators along with the National Weather Service (NWS) West Gulf River Forecast Center (WGRFC) have developed and operated a gateway data system to ingest, store, and disseminate NWS multi-sensor precipitation estimates (MPE). As a pilot project of the Integrated Water Resources Science and Services (IWRSS) initiative, this testbed uses a Structured Query Language (SQL) server to maintain a full archive of current and historic MPE values within the WGRFC service area. These time-series values are made available for public access as web services in the standard WaterML format. Having this volume of information maintained in a comprehensive database now allows the use of relational analysis capabilities within SQL to leverage these multi-sensor precipitation values and produce a valuable derivative product. The area of focus for this study is North Texas, using values that originated from the WGRFC, one of three River Forecast Centers currently represented in the holdings of this data system. Over the past two decades, NEXRAD radar has dramatically improved the ability to record rainfall. The resulting hourly MPE values, distributed over an approximately 4 km by 4 km grid, are considered by the NWS to be the "best estimate" of rainfall. The data server provides an accepted standard interface for internet access to the largest time-series dataset of NEXRAD-based MPE values ever assembled. An automated script has been written to search and extract storms over the 18-year period of record from the contents of this massive historical precipitation database. It can extract not only site-specific storms, but also duration-specific storms and storms separated by user-defined inter-event periods. A separate storm database has been created to store the selected output. By storing output within tables in a separate database, we can make use of powerful SQL capabilities to perform flexible pattern analysis. Previous efforts have made use of historic data from limited clusters of irregularly spaced physical gauges, for which the spatial extent of the observational network has been a limiting factor. The relatively dense distribution of MPE provides a virtual mesh of observations stretched over the landscape. This work combines a unique hydrologic data resource with programming and database analysis to characterize storm depth-area-duration relationships.
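
    The storm-extraction step, splitting an hourly precipitation series into independent events separated by a user-defined dry inter-event period, is easy to prototype before committing it to SQL. A pandas sketch with a hypothetical hourly series for one grid cell; the 6-hour inter-event threshold is an illustrative choice, not the project's setting:

      import pandas as pd

      # Hypothetical hourly MPE series (mm) for one 4 km x 4 km cell.
      precip = pd.Series(
          [0, 0, 2, 5, 1, 0, 0, 0, 0, 0, 0, 3, 8, 0, 0],
          index=pd.date_range("2007-09-06 00:00", periods=15, freq="h"),
      )

      MIN_DRY_HOURS = 6   # inter-event period that separates two storms

      wet = precip > 0
      # Consecutive dry hours since the last wet hour; a new storm starts at a wet hour
      # preceded by at least MIN_DRY_HOURS of no rain.
      dry_run = (~wet).groupby(wet.cumsum()).cumsum()
      new_storm = wet & (dry_run.shift(fill_value=MIN_DRY_HOURS) >= MIN_DRY_HOURS)
      storm_id = new_storm.cumsum().where(wet)

      # Per-storm depth and duration, the ingredients of a depth-area-duration analysis.
      storms = precip.groupby(storm_id).agg(depth_mm="sum", duration_h="count")
      print(storms)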

  12. Coverage and overlaps in bibliographic databases relevant to forensic medicine: a comparative analysis of MEDLINE.

    PubMed Central

    Yonker, V A; Young, K P; Beecham, S K; Horwitz, S; Cousin, K

    1990-01-01

    This study was designed to make a comparative evaluation of the performance of MEDLINE in covering serial literature. Forensic medicine was chosen because it is an interdisciplinary subject area that would test MEDLARS at the periphery of the system. The evaluation of database coverage was based upon articles included in the bibliographies of scholars in the field of forensic medicine. This method was considered appropriate for characterizing work used by researchers in this field. The results of comparing MEDLINE to other databases evoked some concerns about the selective indexing policy of MEDLINE in serving the interests of those working in forensic medicine. PMID:2403829

  13. A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6).

    PubMed

    Darwish, Omar; Li, Shuxian; May, Zane; Matthews, Benjamin; Alkharouf, Nadim W

    2016-01-01

    Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed composition, and increasing the frequency of moldy and/or split beans. To facilitate investigation of the genetic basis of fungal virulence factors and to understand the mechanism of disease development, we designed and developed a database for P. longicolla isolate MSPL 10-6 that contains information about the genome assemblies (contigs), gene models, gene descriptions and GO functional ontologies. A web-based front end to the database was built using ASP.NET, which allows researchers to search and mine the genome of this important fungus. This database represents the first reported genome database for a seed-borne fungal pathogen in the Diaporthe-Phomopsis complex. The database will also be a valuable resource for the research and agricultural communities. It will aid in the development of new control strategies for this pathogen. http://bioinformatics.towson.edu/Phomopsis_longicolla/HomePage.aspx.

  14. A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6)

    PubMed Central

    May, Zane; Matthews, Benjamin; Alkharouf, Nadim W.

    2016-01-01

    Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed composition, and increasing the frequency of moldy and/or split beans. To facilitate investigation of the genetic basis of fungal virulence factors and to understand the mechanism of disease development, we designed and developed a database for P. longicolla isolate MSPL 10-6 that contains information about the genome assemblies (contigs), gene models, gene descriptions and GO functional ontologies. A web-based front end to the database was built using ASP.NET, which allows researchers to search and mine the genome of this important fungus. This database represents the first reported genome database for a seed-borne fungal pathogen in the Diaporthe-Phomopsis complex. The database will also be a valuable resource for the research and agricultural communities. It will aid in the development of new control strategies for this pathogen. Availability: http://bioinformatics.towson.edu/Phomopsis_longicolla/HomePage.aspx PMID:28197060

  15. A unique database for gathering data from a mobile app and medical prescription software: a useful data source to collect and analyse patient-reported outcomes of depression and anxiety symptoms.

    PubMed

    Watanabe, Yoshinori; Hirano, Yoko; Asami, Yuko; Okada, Maki; Fujita, Kazuya

    2017-11-01

    A unique database named 'AN-SAPO' was developed by Iwato Corp. and Japan Brain Corp. in collaboration with the psychiatric clinics run by the Himorogi Group in Japan. The AN-SAPO database includes patients' depression/anxiety score data from a mobile app named AN-SAPO and medical records from medical prescription software named 'ORCA'. In the mobile app, depression/anxiety severity can be evaluated by answering 20 brief questions, and the scores are transferred to the AN-SAPO database together with the patients' medical records from ORCA. Currently, this database is used at the Himorogi Group's psychiatric clinics and has accumulated over 2000 patients' records since November 2013. Since the database covers patients' demographic data, prescribed drugs, and efficacy and safety information, it could be a useful supporting tool for decision-making in clinical practice. We expect it to be utilised more widely in medicine and in future pharmacovigilance and pharmacoepidemiological studies.

  16. Utilisation of a thoracic oncology database to capture radiological and pathological images for evaluation of response to chemotherapy in patients with malignant pleural mesothelioma

    PubMed Central

    Carey, George B; Kazantsev, Stephanie; Surati, Mosmi; Rolle, Cleo E; Kanteti, Archana; Sadiq, Ahad; Bahroos, Neil; Raumann, Brigitte; Madduri, Ravi; Dave, Paul; Starkey, Adam; Hensing, Thomas; Husain, Aliya N; Vokes, Everett E; Vigneswaran, Wickii; Armato, Samuel G; Kindler, Hedy L; Salgia, Ravi

    2012-01-01

    Objective: An area of need in cancer informatics is the ability to store images in a comprehensive database as part of translational cancer research. To meet this need, we have implemented a novel tandem database infrastructure that facilitates image storage and utilisation. Background: We had previously implemented the Thoracic Oncology Program Database Project (TOPDP) database for our translational cancer research needs. While useful for many research endeavours, it is unable to store images, hence our need to implement an imaging database that could communicate easily with the TOPDP database. Methods: The Thoracic Oncology Research Program (TORP) imaging database was designed using the Research Electronic Data Capture (REDCap) platform, which was developed by Vanderbilt University. To demonstrate proof of principle and evaluate utility, we performed a retrospective investigation into tumour response for malignant pleural mesothelioma (MPM) patients treated at the University of Chicago Medical Center with either of two analogous chemotherapy regimens and who consented to at least one of two UCMC IRB protocols, 9571 and 13473A. Results: A cohort of 22 MPM patients was identified using clinical data in the TOPDP database. After measurements were acquired, two representative CT images and 0–35 histological images per patient were successfully stored in the TORP database, along with clinical and demographic data. Discussion: We implemented the TORP imaging database to be used in conjunction with our comprehensive TOPDP database. While it requires additional effort to use two databases, our database infrastructure facilitates more comprehensive translational research. Conclusions: The investigation described herein demonstrates the successful implementation of this novel tandem imaging database infrastructure, as well as the potential utility of investigations enabled by it. The data model presented here can be utilised as the basis for further development of other larger, more streamlined databases in the future. PMID:23103606

  17. A Remote Sensing and GIS Approach for State to Aquifer Scale Mapping of Groundwater Dependent Ecosystems

    NASA Astrophysics Data System (ADS)

    Gonzalez, S.; Gou, S.; Miller, G. R.

    2012-12-01

    Ecosystems that rely on either the surface expression or subsurface presence of groundwater are known as groundwater dependent ecosystems (GDEs). A comprehensive inventory of GDE locations at a management scale is a necessary first step for sustainable management of affected aquifers; however, this information is unavailable for most areas of concern. To address this gap, this study derives algorithms to identify the spatial distribution of GDEs at the state and aquifer scales and to generate an example geospatial database of potential GDEs located throughout Texas. We first constructed a geographic information system (GIS) database with current climate, topography, hydrology, and ecology data, synthesized from both existing feature sets and sets created with information from published documents. The created features included potential groundwater dependent vegetation types in Texas and gaining and losing streams produced with data from flow measuring stations. The resulting state-scale GIS database was used to delineate the areas where conditions are favorable for GDEs. Next, an aquifer-scale remote sensing-based algorithm was created to identify the ecosystems that exhibit the physiological hallmarks of groundwater dependence. This algorithm used Landsat 7 and MODIS images to calculate the seasonal and inter-annual changes of NDVI for each vegetation pixel. The NDVI dynamics were used to identify the vegetation with high potential to use groundwater—such plants remain mostly green and physiologically active during extended dry periods of the year and also exhibit low inter-annual leaf area changes between dry and wet years. Combining the results of the GIS and remote sensing methods, we grouped the vegetated areas into five levels from "very high" to "very low" potential to use groundwater. The product of this research, a state-level GIS database of potential GDEs in Texas, indicates that the vegetation with the highest potential for groundwater use is found around springs, along gaining streams, and within shallow water table areas. It also reveals that the Edwards aquifer region has the highest density of potential GDEs. Out of a total area of 105 km2 in this region, 24% was found to have a high or very high probability of having GDEs. In addition, we highlight the significance of GDE identification to sustainable groundwater management and demonstrate the necessity of unconfined groundwater table monitoring.
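
    A minimal sketch of the NDVI-based screening described above, assuming a NumPy stack of per-date NDVI values; the thresholds for dry-season greenness and inter-annual stability are illustrative assumptions, not the study's values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, computed per date from NIR and red bands."""
    return (nir - red) / (nir + red + 1e-9)

def gde_potential(ndvi_series, dry_season_idx, years,
                  dry_green_thresh=0.4, interannual_thresh=0.1):
    """Flag pixels that stay green in dry periods and vary little between years.

    ndvi_series    : array (n_dates, rows, cols) of per-date NDVI
    dry_season_idx : indices of dry-season acquisitions
    years          : array of year labels, one per acquisition
    Thresholds are illustrative only.
    """
    dry_greenness = ndvi_series[dry_season_idx].mean(axis=0)
    annual_means = np.stack([ndvi_series[years == y].mean(axis=0)
                             for y in np.unique(years)])
    interannual_range = annual_means.max(axis=0) - annual_means.min(axis=0)
    return (dry_greenness > dry_green_thresh) & (interannual_range < interannual_thresh)
```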

  18. Global cost analysis on adaptation to sea level rise based on RCP/SSP scenarios

    NASA Astrophysics Data System (ADS)

    Kumano, N.; Tamura, M.; Yotsukuri, M.; Kuwahara, Y.; Yokoki, H.

    2017-12-01

    Low-lying areas are the most vulnerable to sea level rise (SLR) due to climate change. In order to adapt to SLR, it is necessary to decide whether to retreat from vulnerable areas or to install dykes to protect them from inundation. Cost analysis of adaptation using coastal dykes is therefore one of the most essential issues in the context of climate change and its countermeasures. However, few studies have evaluated the future costs of adaptation in coastal areas at the global scale. This study attempts a global cost analysis of adaptation in coastal areas. First, global distributions of projected inundation impacts induced by SLR, including astronomical high tide, were assessed. Economic damage was estimated on the basis of the econometric relationship between past hydrological disasters, affected population, and per capita GDP using CRED's EM-DAT database. Second, the cost of adaptation was determined using a cost database and future scenarios. The authors built a cost database of coastal dykes installed worldwide and applied it to estimating the future cost of adaptation. The unit cost of dyke construction increases with socio-economic scenario (SSP) variables such as per capita GDP. The length of vulnerable coastline was calculated by identifying inundation areas using ETOPO1. Future cost was obtained by multiplying the length of vulnerable coastline by the unit cost of dyke construction. Third, the effectiveness of dyke construction was estimated by comparing cases with and without adaptation. As a result, it was found that the incremental adaptation cost is lower than the economic damage in the cases of SSP1 and SSP3 under the RCP scenarios, while the cost of adaptation depends on the durability of the coastal dykes.
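
    The core cost calculation described above reduces to simple arithmetic; a brief illustrative sketch (all numbers hypothetical, not drawn from the study's database or scenarios):

```python
def adaptation_cost(vulnerable_coastline_km, unit_cost_per_km):
    """Future dyke cost = length of vulnerable coastline x unit construction cost."""
    return vulnerable_coastline_km * unit_cost_per_km

def net_benefit(damage_without_adaptation, damage_with_adaptation, cost):
    """Adaptation pays off when the avoided damage exceeds the incremental cost."""
    avoided_damage = damage_without_adaptation - damage_with_adaptation
    return avoided_damage - cost

# Hypothetical example: 1,200 km of vulnerable coastline at 5 million USD per km.
cost = adaptation_cost(vulnerable_coastline_km=1200, unit_cost_per_km=5.0e6)
print(net_benefit(2.0e10, 4.0e9, cost) > 0)   # True -> protection is worthwhile here
```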

  19. Post flood damage data collection and assessment in Albania based on DesInventar methodology

    NASA Astrophysics Data System (ADS)

    Toto, Emanuela; Massabo, Marco; Deda, Miranda; Rossello, Laura

    2015-04-01

    In 2013, a collection of disaster loss data based on DesInventar was implemented in Albania. The DesInventar system consists of a methodology and software tool that support the systematic collection, documentation and analysis of loss data on disasters. The main sources of information about disasters used for the Albanian database were the Albanian Ministry of Internal Affairs, the National Library and the State Archive. For floods specifically, the database contains nearly 900 datasets covering a period of 148 years (from 1865 to 2013). The data are georeferenced to the administrative units of Albania: regions, provinces and municipalities. The datasets describe the events by reporting the date of occurrence, the duration, the localization in administrative units and the cause. Additional information covers the effects and damage that each event caused to people (deaths, injured, missing, affected, relocated, evacuated, victims) and to houses (houses damaged or destroyed). Other quantitative indicators are the losses in local currency or US dollars, the damage to roads, the crops affected, the lost cattle and the involvement of social elements over the territory such as education and health centers. Qualitative indicators simply register the sectors (e.g. transportation, communications, relief, agriculture, water supply, sewerage, power and energy, industries, education, health sector, other sectors) that were affected. Through queries and analysis of the collected data it was possible to identify the most affected areas, the economic loss, the damage in agriculture, the houses and people affected and many other variables. The regions most vulnerable to past floods in Albania were identified, as well as the rivers that cause the most damage in the country. Other analyses help to estimate the damage and losses during the main flood events of recent years, which occurred in 2010 and 2011, and to recognize the most affected sectors. The database was used to find the most frequent drivers of floods and to identify the areas with a higher priority for intervention and the areas with a higher economic loss. In the future, the loss and damage database could guide interventions for risk mitigation and decision-making processes. Using the database it is also possible to build empirical loss exceedance curves, which give the average number of times per year that a certain level of loss has been exceeded. The users of the database information can be researchers, students, citizens and policy makers. The operators of the National Operative Center for Civil Emergencies (Albanian Ministry of Internal Affairs) use the database daily to insert new data. At present there is no entity in Albania in charge of registering the damage and consequences of floods in a systematic and organized way. In this sense, the DesInventar database provides a basis for the future and helps to identify priorities for creating a national database.
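
    A minimal sketch of how an empirical loss exceedance curve can be derived from such a loss database, assuming only a list of per-event losses and the length of the record; the values below are hypothetical:

```python
import numpy as np

def empirical_loss_exceedance(losses, record_years):
    """Empirical loss exceedance curve from a disaster-loss database.

    losses       : per-event economic losses (e.g. from DesInventar records)
    record_years : length of the observation period in years
    Returns loss thresholds (descending) and the average number of
    exceedances per year for each threshold.
    """
    losses = np.sort(np.asarray(losses, dtype=float))[::-1]   # largest loss first
    ranks = np.arange(1, losses.size + 1)                     # rank 1 = largest loss
    exceedance_per_year = ranks / record_years
    return losses, exceedance_per_year

# Hypothetical flood losses (million USD) over a 30-year record
loss_levels, frequency = empirical_loss_exceedance([120, 45, 30, 18, 9, 5, 2],
                                                   record_years=30)
```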

  20. Evaluation of low wind modeling approaches for two tall-stack databases.

    PubMed

    Paine, Robert; Samani, Olga; Kaplan, Mary; Knipping, Eladio; Kumar, Naresh

    2015-11-01

    The performance of the AERMOD air dispersion model under low wind speed conditions, especially for applications with only one level of meteorological data and no direct turbulence measurements or vertical temperature gradient observations, is the focus of this study. The analysis documented in this paper addresses evaluations for low wind conditions involving tall stack releases for which multiple years of concurrent emissions, meteorological data, and monitoring data are available. AERMOD was tested on two field-study databases involving several SO2 monitors and hourly emissions data that had sub-hourly meteorological data (e.g., 10-min averages) available using several technical options: default mode, with various low wind speed beta options, and using the available sub-hourly meteorological data. These field study databases included (1) Mercer County, a North Dakota database featuring five SO2 monitors within 10 km of the Dakota Gasification Company's plant and the Antelope Valley Station power plant in an area of both flat and elevated terrain, and (2) a flat-terrain setting database with four SO2 monitors within 6 km of the Gibson Generating Station in southwest Indiana. Both sites featured regionally representative 10-m meteorological databases, with no significant terrain obstacles between the meteorological site and the emission sources. The low wind beta options show improvement in model performance helping to reduce some of the over-prediction biases currently present in AERMOD when run with regulatory default options. The overall findings with the low wind speed testing on these tall stack field-study databases indicate that AERMOD low wind speed options have a minor effect for flat terrain locations, but can have a significant effect for elevated terrain locations. The performance of AERMOD using low wind speed options leads to improved consistency of meteorological conditions associated with the highest observed and predicted concentration events. The available sub-hourly modeling results using the Sub-Hourly AERMOD Run Procedure (SHARP) are relatively unbiased and show that this alternative approach should be seriously considered to address situations dominated by low-wind meander conditions. AERMOD was evaluated with two tall stack databases (in North Dakota and Indiana) in areas of both flat and elevated terrain. AERMOD cases included the regulatory default mode, low wind speed beta options, and use of the Sub-Hourly AERMOD Run Procedure (SHARP). The low wind beta options show improvement in model performance (especially in higher terrain areas), helping to reduce some of the over-prediction biases currently present in regulatory default AERMOD. The SHARP results are relatively unbiased and show that this approach should be seriously considered to address situations dominated by low-wind meander conditions.

  1. Documentation of a spatial data-base management system for monitoring pesticide application in Washington

    USGS Publications Warehouse

    Schurr, K.M.; Cox, S.E.

    1994-01-01

    The Pesticide-Application Data-Base Management System was created as a demonstration project and was tested with data submitted to the Washington State Department of Agriculture by pesticide applicators from a small geographic area. These data were entered into the Department's relational data-base system and uploaded into the system's ARC/INFO files. Locations for pesticide applications are assigned within the Public Land Survey System grids, and ARC/INFO programs in the Pesticide-Application Data-Base Management System can subdivide each survey section into sixteen idealized quarter-quarter sections for display map grids. The system provides data retrieval and geographic information system plotting capabilities from a menu of seven basic retrieval options. Additionally, ARC/INFO coverages can be created from the retrieved data when required for particular applications. The Pesticide-Application Data-Base Management System, or the general principles used in the system, could be adapted to other applications or to other states.

  2. Supplier Management System

    NASA Technical Reports Server (NTRS)

    Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann

    2008-01-01

    Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, is now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.

  3. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

    The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.

  4. BGDB: a database of bivalent genes.

    PubMed

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    A bivalent gene is a gene marked with both H3K4me3 and H3K27me3 epigenetic modifications in the same area, and such genes are proposed to play a pivotal role in pluripotency in embryonic stem (ES) cells. Identification of these bivalent genes and understanding their functions are important for further research on lineage specification and embryo development. So far, a large amount of genome-wide histone modification data has been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repositories or analysis tools are currently available for them. In this work, we developed BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which were manually collected from the scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/

  5. Footprint Database and web services for the Herschel space observatory

    NASA Astrophysics Data System (ADS)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions which makes it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.

  6. GIS-Based Crop Support System for Common Oat and Naked Oat in China

    NASA Astrophysics Data System (ADS)

    Wan, Fan; Wang, Zhen; Li, Fengmin; Cao, Huhua; Sun, Guojun

    The identification of the suitable areas for common oat (Avena sativa L.) and naked oat (Avena nuda L.) in China using a Multi-Criteria Evaluation (MCE) approach based on GIS is presented in the current article. Climate, topography, soil, land use and oat variety databases were created. Relevant criteria, suitability levels and their weights for each factor were defined. The criteria maps were then obtained and entered into the MCE process, and suitability maps for common oat and naked oat were created. The land use and suitability maps were crossed to identify the suitable areas for each crop. The results identified 397,720 km2 of suitable areas for common oat for forage purposes, distributed in 744 counties in 17 provinces, and 556,232 km2 of suitable areas for naked oat for grain purposes, distributed in 779 counties in 19 provinces. This result is in accordance with the distribution of farming-pastoral ecozones located in the semi-arid regions of northern China. The mapped areas can help define the working limits and serve as indicative zones for oat in China. The created databases, mapped results, expert system interface and relevant hardware facilities together constitute a complete crop support system for oat.

  7. A New Global Open Source Marine Hydrocarbon Emission Site Database

    NASA Astrophysics Data System (ADS)

    Onyia, E., Jr.; Wood, W. T.; Barnard, A.; Dada, T.; Qazzaz, M.; Lee, T. R.; Herrera, E.; Sager, W.

    2017-12-01

    Hydrocarbon emission sites (e.g. seeps) discharge large volumes of fluids and gases into the oceans that are not only important for biogeochemical budgets, but also support abundant chemosynthetic communities. Documenting the locations of modern emissions is a first step towards understanding and monitoring how they affect the global state of the seafloor and oceans. Currently, no global open source (i.e. non-proprietary) detailed maps of emission sites are available. As a solution, we have created a database that is housed within an Excel spreadsheet and uses the latest versions of Earthpoint and Google Earth for position coordinate conversions and data mapping, respectively. To date, approximately 1,000 data points have been collected from referenceable sources across the globe, and we are continually expanding the dataset. Due to the variety of spatial extents encountered, we used two different methods to identify each site: 1) point (x, y, z) locations for individual sites and 2) delineation of areas where sites are clustered. Certain well-known areas, such as the Gulf of Mexico and the Mediterranean Sea, have a greater abundance of information, whereas significantly less information is available in other regions due to the absence of emission sites, lack of data, or because the existing data are proprietary. Although the geographical extent of the data is currently restricted to regions where the most data are publicly available, we expect more complete coverage of the world's oceans as the database matures. This database is an information resource that consolidates and organizes the existing literature on hydrocarbons released into the marine environment, thereby providing a comprehensive reference for future work. We expect that the availability of seafloor hydrocarbon emission maps will benefit scientific understanding of hydrocarbon-rich areas as well as potentially aiding hydrocarbon exploration and environmental impact assessments.

  8. Physiographic rim of the Grand Canyon, Arizona: a digital database

    USGS Publications Warehouse

    Billingsley, George H.; Hampton, Haydee M.

    1999-01-01

    This Open-File report is a digital physiographic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. The report does include, however, PostScript and PDF format plot files, each containing an image of the map. For those interested in a paper plot of information contained in the database or in obtaining the PostScript plot files, please see the section entitled "For Those Who Don't Use Digital Geologic Map Databases" below. This physiographic map of the Grand Canyon is modified from previous versions by Billingsley and Hendricks (1989) and Billingsley and others (1997). The boundary is drawn approximately along the topographic rim of the Grand Canyon and its tributary canyons between Lees Ferry and Lake Mead (shown in red). Several isolated small mesas, buttes, and plateaus are within this area, which overall encompasses about 2,600 square miles. The Grand Canyon lies within the southwestern part of the Colorado Plateaus of northern Arizona between Lees Ferry, Colorado River Mile 0, and Lake Mead, Colorado River Mile 277. The Colorado River is the corridor for raft trips through the Grand Canyon. Limestones of the Kaibab Formation form most of the north and south rims of the Grand Canyon, and a few volcanic rocks form the north rim of parts of the Uinkaret and Shivwits Plateaus. Limestones of the Redwall Limestone and lower Supai Group form the rim of the Hualapai Plateau area, and limestones of Devonian and Cambrian age form the boundary rim near the mouth of the Grand Canyon at Lake Mead. The natural physiographic boundary of the Grand Canyon is roughly the area from which a visitor would first view any part of the Grand Canyon and its tributaries.

  9. Deriving leaf mass per area (LMA) from foliar reflectance across a variety of plant species using continuous wavelet analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Tao; Rivard, Benoit; Sánchez-Azofeifa, Arturo G.; Féret, Jean-Baptiste; Jacquemoud, Stéphane; Ustin, Susan L.

    2014-01-01

    Leaf mass per area (LMA), the ratio of leaf dry mass to leaf area, is a trait of central importance to the understanding of plant light capture and carbon gain. It can be estimated from leaf reflectance spectroscopy in the infrared region, by making use of information about the absorption features of dry matter. This study reports on the application of continuous wavelet analysis (CWA) to the estimation of LMA across a wide range of plant species. We compiled a large database of leaf reflectance spectra acquired within the framework of three independent measurement campaigns (ANGERS, LOPEX and PANAMA) and generated a simulated database using the PROSPECT leaf optical properties model. CWA was applied to the measured and simulated databases to extract wavelet features that correlate with LMA. These features were assessed in terms of predictive capability and robustness while transferring predictive models from the simulated database to the measured database. The assessment was also conducted with two existing spectral indices, namely the Normalized Dry Matter Index (NDMI) and the Normalized Difference index for LMA (NDLMA). Five common wavelet features were determined from the two databases, which showed significant correlations with LMA (R2: 0.51-0.82, p < 0.0001). The best robustness (R2 = 0.74, RMSE = 18.97 g/m2 and Bias = 0.12 g/m2) was obtained using a combination of two low-scale features (1639 nm, scale 4) and (2133 nm, scale 5), the first being predominantly important. The transferability of the wavelet-based predictive model to the whole measured database was either better than or comparable to those based on spectral indices. Additionally, only the wavelet-based model showed consistent predictive capabilities among the three measured data sets. In comparison, the models based on spectral indices were sensitive to site-specific data sets. Integrating the NDLMA spectral index and the two robust wavelet features improved the LMA prediction. One of the bands used by this spectral index, 1368 nm, was located in a strong atmospheric water absorption region and replacing it with the next available band (1340 nm) led to lower predictive accuracies. However, the two wavelet features were not affected by data quality in the atmospheric absorption regions and therefore showed potential for canopy-level investigations. The wavelet approach provides a different perspective into spectral responses to LMA variation than the traditional spectral indices and holds greater promise for implementation with airborne or spaceborne imaging spectroscopy data for mapping canopy foliar dry biomass.
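
    A minimal sketch of extracting wavelet features from a reflectance spectrum with PyWavelets, assuming a Mexican-hat mother wavelet and dyadic scales; the (1639 nm, scale 4) and (2133 nm, scale 5) positions are the features reported above, but the exact wavelet and preprocessing used in the study are not reproduced here:

```python
import numpy as np
import pywt

def wavelet_features(reflectance, wavelengths, feature_specs, wavelet="mexh"):
    """Continuous wavelet transform of a leaf reflectance spectrum, returning
    coefficients at selected (wavelength, scale) positions.

    reflectance   : 1-D reflectance spectrum
    wavelengths   : matching wavelength axis in nm
    feature_specs : list of (wavelength_nm, dyadic_scale) pairs, e.g.
                    [(1639, 4), (2133, 5)] as reported in the abstract
    """
    scales = sorted({2 ** s for _, s in feature_specs})         # dyadic scales (assumption)
    coeffs, _ = pywt.cwt(reflectance, scales, wavelet)          # shape (n_scales, n_bands)
    feats = []
    for wl, s in feature_specs:
        band = int(np.argmin(np.abs(wavelengths - wl)))         # nearest spectral band
        feats.append(coeffs[scales.index(2 ** s), band])
    return np.array(feats)

# An LMA predictor would then be a regression (e.g. least squares) of measured LMA
# against these wavelet features over a calibration data set.
```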

  10. User’s guide to the North Pacific Pelagic Seabird Database 2.0

    USGS Publications Warehouse

    Drew, Gary S.; Piatt, John F.; Renner, Martin

    2015-07-13

    The North Pacific Pelagic Seabird Database (NPPSD) was created in 2005 to consolidate data on the oceanic distribution of marine bird species in the North Pacific. Most of these data were collected on surveys by counting species within defined areas and at known locations (that is, on strip transects). The NPPSD also contains observations of other bird species and marine mammals. The original NPPSD combined data from 465 surveys conducted between 1973 and 2002, primarily in waters adjacent to Alaska. These surveys included 61,195 sample transects with location, environment, and metadata information, and the data were organized in a flat-file format. In developing NPPSD 2.0, our goals were to add new datasets, to make significant improvements to database functionality and to provide the database online. NPPSD 2.0 includes data from a broader geographic range within the North Pacific, including new observations made offshore of the Russian Federation, Japan, Korea, British Columbia (Canada), Oregon, and California. These data were imported into a relational database, proofed, and structured in a common format. NPPSD 2.0 contains 351,674 samples (transects) collected between 1973 and 2012, representing a total sampled area of 270,259 square kilometers, and extends the time series of samples in some areas—notably the Bering Sea—to four decades. It contains observations of 16,988,138 birds and 235,545 marine mammals and is available on the NPPSD Web site. Supplementary materials include an updated set of standardized taxonomic codes, reference maps that show the spatial and temporal distribution of the survey efforts and a downloadable query tool.

  11. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  12. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  13. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  14. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  15. 32 CFR 989.32 - Noise.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... database entry. Utilize the current NOISEMAP computer program for air installations and the Assessment System for Aircraft Noise for military training routes and military operating areas. Guidance on...

  16. Population Migration in Rural Areas, January 1979-December 1988. Quick Bibliography Series.

    ERIC Educational Resources Information Center

    La Caille John, Patricia, Comp.

    This bibliography consists of 87 entries of materials related to population trends in rural and nonmetropolitan areas. This collection is the result of a computerized search of the AGRICOLA database. The bibliography covers topics of rural population change, migration and migrants, farm labor supplies and social conditions, and different patterns…

  17. Integrating Health Information Systems into a Database Course: A Case Study

    ERIC Educational Resources Information Center

    Anderson, Nicole; Zhang, Mingrui; McMaster, Kirby

    2011-01-01

    Computer Science is a rich field with many growing application areas, such as Health Information Systems. What we suggest here is that multi-disciplinary threads can be introduced to supplement, enhance, and strengthen the primary area of study in a course. We call these supplementary materials "threads," because they are executed…

  18. 77 FR 65253 - Amendment of Area Navigation Route T-240; AK

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-26

    ... contains regulatory documents having general applicability and legal effect, most of which are keyed ... the legal description of area navigation (RNAV) route T-240 in Alaska by removing one waypoint that is no longer required and has been deleted from the National Airspace System Resources (NASR) database...

  19. Tuberous Sclerosis Complex National Database

    DTIC Science & Technology

    2006-10-01

    about conditions in the following areas (as applicable to the participant): academic, cardiac, cognitive, dental, dermatological, liver, neurological...

  20. Transit profiles : agencies in urbanized areas with a population of less than 200,000 for the 1994 National Transit Database report year

    DOT National Transportation Integrated Search

    1995-12-01

    This publication consists of individual profiles for each reporting transit agency located in an urbanized area with a population less than 200,000. The data contained in each profile consists of general and summary reports, as well as modal, perform...

  1. Trends in Doctoral Research on English Language Teaching in Turkey

    ERIC Educational Resources Information Center

    Özmen, Kemal Sinan; Cephe, Pasa Tevfik; Kinik, Betül

    2016-01-01

    This review examines the doctoral research in Turkey completed between 2010 and 2014 in the area of English language teaching and learning. All of the dissertations (N = 137) indexed in the National Theses Database have been included in order to analyze dissertations' subject areas, research paradigms/techniques, and research contexts as well as…

  2. Study on resources and environmental data integration towards data warehouse construction covering trans-boundary area of China, Russia and Mongolia

    NASA Astrophysics Data System (ADS)

    Wang, J.; Song, J.; Gao, M.; Zhu, L.

    2014-02-01

    The trans-boundary area between northern China, Mongolia and eastern Siberia of Russia is a continuous geographical region located in north-eastern Asia. Many common issues in this region need to be addressed based on a uniform resources and environmental data warehouse. Based on the practice of a joint scientific expedition, the paper presents a data integration solution comprising three steps: drawing up data collection standards and specifications, data reorganization and processing, and data warehouse design and development. A series of data collection standards and specifications covering more than 10 domains was drawn up first. According to this uniform standard, 20 regional-scale resources and environmental survey databases and 11 in-situ observation databases were reorganized and integrated. The North East Asia Resources and Environmental Data Warehouse was designed with four layers: a resources layer, a core business logic layer, an internet interoperation layer, and a web portal layer. A data warehouse prototype was developed and initially deployed. All the integrated data for this area can be accessed online.

  3. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.

    PubMed

    Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui

    2017-03-29

    In the pattern recognition domain, deep architectures are currently widely used and have achieved good results. However, these deep architectures make particular demands, especially in terms of large datasets and GPUs. Aiming to obtain good results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces; the proposed algorithm has achieved better results than some deep architectures. To extract more effective features, this paper first defines the salient areas on the faces. Salient areas at the same location on different faces are normalized to the same size, so that more comparable features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fusion features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient-area definition method that compares peak expression frames with neutral faces. It also proposes and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions, so that the salient areas found in different subjects are the same size. In addition, the gamma correction method is applied to LBP features for the first time in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has achieved state-of-the-art performance on the CK+ and JAFFE databases.
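
    A minimal sketch of the LBP + HOG fusion pipeline described above, using scikit-image and scikit-learn; the patch sizes, LBP parameters and PCA dimensionality are illustrative assumptions, and the gamma-correction step is omitted:

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def salient_area_features(patches):
    """Fuse LBP histograms and HOG descriptors from pre-cropped, size-normalized
    salient facial areas (e.g. eyes, mouth) of one face."""
    feats = []
    for patch in patches:                          # each patch: 2-D grayscale array
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)                         # texture (LBP) part
        feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))  # shape/gradient (HOG) part
    return np.concatenate(feats)

# Classification stage: dimensionality reduction followed by an SVM.
clf = make_pipeline(PCA(n_components=100), SVC(kernel="rbf"))
# Hypothetical usage: rows of X are fused feature vectors, y the six basic expressions.
# clf.fit(np.vstack([salient_area_features(p) for p in train_patches]), y)
```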

  4. Subsurface structure around Omi basin using borehole database

    NASA Astrophysics Data System (ADS)

    Kitada, N.; Ito, H.; Takemura, K.; Mitamura, M.

    2015-12-01

    Kansai Geo-informatics Network (KG-NET) was organized in 2005 as a new system for managing GI-base. The organization has collected geotechnical and geological information from more than 60,000 boreholes. GI-base is the database system of KG-NET and the platform for using these borehole data. The Kansai Geo-informatics Research Committee (KG-R) has worked to explain the geotechnical properties and geological environment of the Kansai area using the borehole database. In 2014, KG-R established the 'Shin-Kansai Jiban Omi plain' and explained the subsurface geology and the characteristics of the geotechnical properties. In this study we introduce this result and consider the sedimentary environment and characteristics of this area. The Omi Basin is located in the central part of Shiga Prefecture, which includes Lake Biwa, the largest lake in Japan. About 15,000 borehole data were collected to characterize the subsurface properties. Topographically and geologically, the basin is divided into a west side and an east side. The west side is bounded by a typical reverse fault, the Biwako-Seigan fault zone, along the lakefront; from this fault, the Omi basin tilts down from east to west. In contrast, the eastern areas comprise comparatively low-lying and hilly terrain. The sedimentary facies are also complicated and difficult to evaluate in general, so the discussion has focused mainly on the eastern and western parts of Lake Biwa. The widely dispersed volcanic ash named Aira-Tn (AT), deposited 26,000-29,000 years ago (Machida and Arai, 2003), is sometimes interbedded with the humic layers in the low-lying areas. However, because most of the sediments are composed of thick sand and gravel whose depositional age could not be determined, it is difficult to identify the boundaries of strata over wide areas. Three main types of basement rock are distributed in the area (granite, sedimentary rock, and rhyolite), and the characteristics of the deposits differ according to the basement rock of each hinterland. Therefore, we considered the depositional characteristics of each river system. In addition, the lakeside area contains many humic layers and sandy beach ridges. These distinctive trends are useful for estimating seismic properties and for zonation.

  5. A mathematical model of neuro-fuzzy approximation in image classification

    NASA Astrophysics Data System (ADS)

    Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.

    2016-06-01

    Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from large databases. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all of these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. This paper proposes a CBIR system based on grassland image properties that uses a feed-forward back-propagation neural network for effective image retrieval. Fuzzy memberships play an important role in the input space of the proposed system, which leads to a combined neuro-fuzzy approximation in image classification. The mathematical model presented in this work gives more clarity about the neuro-fuzzy approximation and the convergence of the image features in a grassland image.

  6. Predictive landslide susceptibility mapping using spatial information in the Pechabun area of Thailand

    NASA Astrophysics Data System (ADS)

    Oh, Hyun-Joo; Lee, Saro; Chotikasathien, Wisut; Kim, Chang Hwan; Kwon, Ju Hyoung

    2009-04-01

    For predictive landslide susceptibility mapping, this study applied and verified a probability model (frequency ratio) and a statistical model (logistic regression) at Pechabun, Thailand, using a geographic information system (GIS) and remote sensing. Landslide locations in the study area were identified from interpretation of aerial photographs and field surveys, and maps of topography, geology and land cover were compiled into a spatial database. The factors that influence landslide occurrence, such as slope gradient, slope aspect, curvature of topography and distance from drainage, were calculated from the topographic database. Lithology and distance from faults were extracted and calculated from the geology database. Land cover was classified from a Landsat TM satellite image. The frequency ratios and logistic regression coefficients were overlaid as each factor's ratings for landslide susceptibility mapping. The landslide susceptibility map was then verified and compared using the existing landslide locations. In the verification, the frequency ratio model showed 76.39% prediction accuracy and the logistic regression model 70.42%. The method can be used to reduce hazards associated with landslides and to support land-cover planning.
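
    A minimal sketch of the frequency ratio computation for one causative factor, assuming classified factor rasters and a landslide inventory mask held as NumPy arrays; raster preparation and the logistic regression alternative are not shown:

```python
import numpy as np

def frequency_ratio(factor_class, landslide_mask):
    """Frequency ratio for each class of one causative factor.

    FR(class) = (landslide pixels in class / all landslide pixels)
                / (pixels in class / all pixels)

    factor_class   : 2-D integer raster of factor classes (e.g. slope-gradient bins)
    landslide_mask : boolean raster of known landslide locations
    """
    fr = {}
    total_pixels = factor_class.size
    total_slides = landslide_mask.sum()
    for c in np.unique(factor_class):
        in_class = factor_class == c
        class_share = in_class.sum() / total_pixels
        slide_share = (landslide_mask & in_class).sum() / total_slides
        fr[int(c)] = slide_share / class_share
    return fr

# A susceptibility index is then the per-pixel sum of FR ratings over all factors, e.g.
# LSI = FR_slope[slope_cls] + FR_aspect[aspect_cls] + FR_lithology[lith_cls] + ...
```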

  7. Development of educational image databases and e-books for medical physics training.

    PubMed

    Tabakov, S; Roberts, V C; Jonsson, B-A; Ljungberg, M; Lewis, C A; Wirestam, R; Strand, S-E; Lamm, I-L; Milano, F; Simmons, A; Deane, C; Goss, D; Aitken, V; Noel, A; Giraud, J-Y; Sherriff, S; Smith, P; Clarke, G; Almqvist, M; Jansson, T

    2005-09-01

    Medical physics education and training requires the use of extensive imaging material and specific explanations. These requirements provide an excellent background for the application of e-Learning. The EU project consortia EMERALD and EMIT developed five volumes of such materials, now used in 65 countries. EMERALD developed e-Learning materials in three areas of medical physics (X-ray diagnostic radiology, nuclear medicine and radiotherapy). EMIT developed e-Learning materials in two further areas: ultrasound and magnetic resonance imaging. This paper describes the development of these e-Learning materials (consisting of e-books and educational image databases). The e-books include tasks that support the study of various equipment and methods. The text of these PDF e-books is hyperlinked to the respective images. The e-books are used through the readers' own Internet browser. Each Image Database (IDB) includes a browser, which displays hundreds of images of equipment, block diagrams and graphs, image quality examples, artefacts, etc. Both the e-books and the IDBs are engraved on five separate CD-ROMs. A demo of these materials is available at www.emerald2.net.

  8. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    NASA Astrophysics Data System (ADS)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.

  9. Evolution of a Patient Information Management System in a Local Area Network Environment at Loyola University of Chicago Medical Center

    PubMed Central

    Price, Ronald N; Chandrasekhar, Arcot J; Tamirisa, Balaji

    1990-01-01

    The Department of Medicine at Loyola University Medical Center (LUMC) of Chicago has implemented a local area network (LAN) based Patient Information Management System (PIMS) as part of its integrated departmental database management system. PIMS consists of related database applications encompassing demographic information, current medications, problem lists, clinical data, prior events, and on-line procedure results. Integration into the existing departmental database system permits PIMS to capture and manipulate data in other departmental applications. Standardization of clinical data is accomplished through three data tables that verify diagnosis codes, procedure codes and a standardized set of clinical data elements. The modularity of the system, coupled with standardized data formats, allowed the development of a Patient Information Protocol System (PIPS). PIPS, a user-definable protocol processor, provides physicians with individualized data entry or review screens customized for their specific research protocols or practice habits. Physician feedback indicates that the PIMS/PIPS combination enhances their ability to collect and review specific patient information by filtering large amounts of clinical data.

  10. Region 9 Census Block 2010

    EPA Pesticide Factsheets

    Geography: The TIGER Line Files are feature classes and related database files (.) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census Blocks are statistical areas bounded on all sides by visible features, such as streets, roads, streams, and railroad tracks, and/or by non-visible boundaries such as city, town, township, and county limits, and short line-of-sight extensions of streets and roads. Census blocks are relatively small in area; for example, a block in a city bounded by streets. However, census blocks in remote areas are often large and irregular and may even be many square miles in area. A common misunderstanding is that data users think census blocks are used geographically to build all other census geographic areas; rather, all other census geographic areas are updated and then used as the primary constraints, along with roads and water features, to delineate the tabulation blocks. As a result, all 2010 Census blocks nest within every other 2010 Census geographic area, so that Census Bureau statistical data can be tabulated at the block level and aggregated up to larger geographic areas.

  11. Monitoring the ecology and environment using remote sensing in the Jinta area/Middle Reaches of Heihe River Basin

    NASA Astrophysics Data System (ADS)

    Lu, Anxin; Wang, Lihong; Chen, Xianzhang

    2003-07-01

    A major monitoring area, part of the middle reaches of the Heihe basin, was selected. Landsat TM data from the summers of 1990 and 2000 were used, with on-screen interpretation, classification and construction of an environmental investigation database (1:100,000) combined with DEM, land cover/land use, land type and other data, according to the environmental classification system. Then, focused on the main environmental problems, spatial statistical analysis and dynamic comparisons were carried out using the database. The dynamic monitoring results for 1999 and 2000 show the following percentage changes in the areas of six ground-object classes: land use and agricultural land use increased by 34.17% and 19.47% respectively; wetland and water bodies also increased, by 6.29% and 8.03% respectively; unused land increased by 1.73%; and the largest change was in natural/semi-natural vegetation, which decreased by 42.78%. These results met the precision and practicality requirements, as confirmed by detailed examination and spot checks. By combining TM remote sensing data with abundant non-remote-sensing data, investigations of ecology and environment and dynamic monitoring can be carried out efficiently in this arid area. A continuous and marked reduction in the area of natural/semi-natural vegetation would be a warning signal of large-area desertification.

  12. Dynamic Agricultural Land Unit Profile Database Generation using Landsat Time Series Images

    NASA Astrophysics Data System (ADS)

    Torres-Rua, A. F.; McKee, M.

    2012-12-01

    Agriculture requires a continuous supply of inputs to production, while providing final or intermediate outputs or products (food, forage, industrial uses, etc.). Governments and other economic agents are interested in the continuity of this process and make decisions based on the available information about current conditions within an agricultural area. From a government point of view, it is important that the input-output chain in agriculture for a given area be enhanced over time, while any abrupt disruption be minimized or constrained within the variation tolerance of the input-output chain. The stability of the exchange of inputs and outputs becomes even more important in disaster-affected zones, where government programs seek to restore the area to social and economic conditions equal to or better than those before the disaster. From an economic perspective, potential and existing input providers require up-to-date, precise information about the agricultural area to determine present and future inputs and stock amounts. On the other side, buyers of agricultural outputs may want to apply their own criteria to sort present and future providers (farmers or irrigators) based on the management practiced during the irrigation season. In the last 20 years geospatial information has become available for large areas of the globe, providing accurate, unbiased historical records of actual agricultural conditions at the level of individual land units for small and large agricultural areas. These data, adequately processed and stored in a database, can provide invaluable information for government and economic interests. Despite the availability of geospatial imagery records, limited or no geospatial-based information about past and current farming conditions at the level of individual land units exists for many agricultural areas in the world. The absence of this information challenges the work of policy makers in evaluating previous or current government efforts at the land unit level, and affects the potential economic trade-offs in the area. In this study a framework is proposed to create and continuously update a land unit profile database using historical Landsat satellite imagery. An experimental test is implemented for agricultural lands in central Utah, selected because of the area's success in increasing the efficiency of water use and control along the entire irrigation system. A set of crop health metrics from the literature (NDVI, LAI, NDWI) is calculated and evaluated to measure crop response to farm management over time. The resulting land unit profile database is then used to determine land unit profile groups based on land unit management characteristics. A comparison with essential inputs (water availability and climate conditions) and crop type (outputs) on a yearly basis is provided.
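
    A minimal sketch of building per-land-unit profiles of vegetation indices from an image time series and grouping units by management signature; the band ordering, index choice and number of groups are illustrative assumptions rather than the study's actual workflow:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def ndwi(nir, swir):
    """Normalized Difference Water Index (vegetation water content form)."""
    return (nir - swir) / (nir + swir + 1e-9)

def land_unit_profiles(band_stack, unit_labels):
    """Per-land-unit mean NDVI and NDWI for each image date.

    band_stack  : array (n_dates, 3, rows, cols); bands assumed ordered NIR, red, SWIR
    unit_labels : integer raster assigning each pixel to a land (field) unit id
    Returns the unit ids and a (n_units, 2 * n_dates) profile matrix.
    """
    units = np.unique(unit_labels)
    n_dates = band_stack.shape[0]
    profiles = np.zeros((units.size, 2 * n_dates))
    for i, u in enumerate(units):
        mask = unit_labels == u
        for t in range(n_dates):
            nir, red, swir = band_stack[t, 0], band_stack[t, 1], band_stack[t, 2]
            profiles[i, 2 * t] = ndvi(nir, red)[mask].mean()
            profiles[i, 2 * t + 1] = ndwi(nir, swir)[mask].mean()
    return units, profiles

# Grouping land units with similar management signatures (number of groups is a choice):
# from sklearn.cluster import KMeans
# units, profiles = land_unit_profiles(stack, field_ids)
# groups = KMeans(n_clusters=5, n_init=10).fit_predict(profiles)
```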

  13. Human population, urban settlement patterns and their impact on Plasmodium falciparum malaria endemicity.

    PubMed

    Tatem, Andrew J; Guerra, Carlos A; Kabaria, Caroline W; Noor, Abdisalan M; Hay, Simon I

    2008-10-27

    The efficient allocation of financial resources for malaria control and the optimal distribution of appropriate interventions require accurate information on the geographic distribution of malaria risk and of the human populations it affects. Low population densities in rural areas and high population densities in urban areas can influence malaria transmission substantially. Here, the Malaria Atlas Project (MAP) global database of Plasmodium falciparum parasite rate (PfPR) surveys, medical intelligence and contemporary population surfaces are utilized to explore these relationships and other issues involved in combining malaria risk maps with those of human population distribution in order to define populations at risk more accurately. First, an existing population surface was examined to determine if it was sufficiently detailed to be used reliably as a mask to identify areas of very low and very high population density as malaria free regions. Second, the potential of international travel and health guidelines (ITHGs) for identifying malaria free cities was examined. Third, the differences in PfPR values between surveys conducted in author-defined rural and urban areas were examined. Fourth, the ability of various global urban extent maps to reliably discriminate these author-based classifications of urban and rural in the PfPR database was investigated. Finally, the urban map that most accurately replicated the author-based classifications was analysed to examine the effects of urban classifications on PfPR values across the entire MAP database. Masks of zero population density excluded many non-zero PfPR surveys, indicating that the population surface was not detailed enough to define areas of zero transmission resulting from low population densities. In contrast, the ITHGs enabled the identification and mapping of 53 malaria free urban areas within endemic countries. Comparison of PfPR survey results showed significant differences between author-defined 'urban' and 'rural' designations in Africa, but not for the remainder of the malaria endemic world. The Global Rural Urban Mapping Project (GRUMP) urban extent mask proved most accurate for mapping these author-defined rural and urban locations, and further sub-divisions of urban extents into urban and peri-urban classes enabled the effects of high population densities on malaria transmission to be mapped and quantified. The availability of detailed, contemporary census and urban extent data for the construction of coherent and accurate global spatial population databases is often poor. These known sources of uncertainty in population surfaces and urban maps have the potential to be incorporated into future malaria burden estimates. Currently, insufficient spatial information exists globally to identify areas accurately where population density is low enough to impact upon transmission. Medical intelligence does however exist to reliably identify malaria free cities. Moreover, in Africa, urban areas that have a significant effect on malaria transmission can be mapped.

  14. International contributions to IAEA-NEA heat transfer databases for supercritical fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, L. K. H.; Yamada, K.

    2012-07-01

    An IAEA Coordinated Research Project on 'Heat Transfer Behaviour and Thermohydraulics Code Testing for SCWRs' is being conducted to facilitate collaboration and interaction among participants from 15 organizations. While the project covers several key technology areas relevant to the development of SCWR concepts, it focuses mainly on the heat transfer aspect, which has been identified as the most challenging. Through this collaborative effort, large heat-transfer databases have been compiled for supercritical water and surrogate fluids in tubes, annuli, and bundle subassemblies of various orientations over a wide range of flow conditions. Assessments of several supercritical heat-transfer correlations were performed using the compiled databases. The assessment results are presented. (authors)

  15. The Coral Triangle Atlas: An Integrated Online Spatial Database System for Improving Coral Reef Management

    PubMed Central

    Cros, Annick; Ahamad Fatan, Nurulhuda; White, Alan; Teoh, Shwu Jiau; Tan, Stanley; Handayani, Christian; Huang, Charles; Peterson, Nate; Venegas Li, Ruben; Siry, Hendra Yusran; Fitriana, Ria; Gove, Jamison; Acoba, Tomoko; Knight, Maurice; Acosta, Renerio; Andrew, Neil; Beare, Doug

    2014-01-01

    In this paper we describe the construction of an online GIS database system, hosted by WorldFish, which stores bio-physical, ecological and socio-economic data for the ‘Coral Triangle Area’ in South-east Asia and the Pacific. The database has been built in partnership with all six (Timor-Leste, Malaysia, Indonesia, The Philippines, Solomon Islands and Papua New Guinea) of the Coral Triangle countries, and represents a valuable source of information for natural resource managers at the regional scale. Its utility is demonstrated using biophysical data, data summarising marine habitats, and data describing the extent of marine protected areas in the region. PMID:24941442

  16. Using CLIPS in a distributed system: The Network Control Center (NCC) expert system

    NASA Technical Reports Server (NTRS)

    Wannemacher, Tom

    1990-01-01

    This paper describes an intelligent troubleshooting system for the Help Desk domain. It was developed on an IBM-compatible 80286 PC using Microsoft C and CLIPS, and on an AT&T 3B2 minicomputer using the UNIFY database and a combination of shell scripts, C programs, and SQL queries. The two computers are linked by a LAN. The functions of this system are to help non-technical NCC personnel handle trouble calls, to keep a log of problem calls with complete, concise information, and to keep a historical database of problems. The database helps identify hardware and software problem areas and provides a source of new rules for the troubleshooting knowledge base.

  17. Big data and ophthalmic research.

    PubMed

    Clark, Antony; Ng, Jonathon Q; Morlet, Nigel; Semmens, James B

    2016-01-01

    Large population-based health administrative databases, clinical registries, and data linkage systems are a rapidly expanding resource for health research. Ophthalmic research has benefited from the use of these databases in expanding the breadth of knowledge in areas such as disease surveillance, disease etiology, health services utilization, and health outcomes. Furthermore, the quantity of data available for research has increased exponentially in recent times, particularly as e-health initiatives come online in health systems across the globe. We review some big data concepts, the databases and data linkage systems used in eye research, including their advantages and limitations, the types of studies previously undertaken, and the future direction for big data in eye research. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Data-based discharge extrapolation: estimating annual discharge for a partially gauged large river basin from its small sub-basins

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2013-12-01

    Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, and model uncertainties. A new purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods with uncertainty estimation.
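
    A minimal sketch of the scale-extrapolation idea, under the simplifying assumption that the selected sub-basins share the large basin's specific discharge, is shown below; the sub-basin numbers and the basin area are illustrative placeholders.

```python
# Sketch: scale the area-weighted specific discharge of a few gauged sub-basins
# up to the full basin area. All numbers are illustrative placeholders.
sub_basins = [
    # (annual mean discharge, m^3/s ; drainage area, km^2)
    (12.0, 950.0),
    (8.5, 720.0),
    (21.0, 1600.0),
]

total_q = sum(q for q, _ in sub_basins)
total_area = sum(a for _, a in sub_basins)
specific_q = total_q / total_area          # m^3/s per km^2

large_basin_area = 1.6e6                   # km^2, placeholder for the whole drainage basin
estimated_q = specific_q * large_basin_area
print(f"specific discharge: {specific_q:.5f} m3/s per km2")
print(f"estimated large-basin discharge: {estimated_q:.0f} m3/s")
```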

  19. [Profile of a systematic search. Search areas, databases and reports].

    PubMed

    Korsbek, Lisa; Bendix, Ane Friis; Kidholm, Kristian

    2006-04-03

    Systematic literature searching is fundamental to evidence-based medicine, but it is not yet a widely used way of retrieving evidence-based information. This article profiles a systematic literature search for evidence-based literature. It goes through the most central databases and gives an example of how to document the literature search. The article also sums up the literature searches in all reviews published in Ugeskrift for Laeger in 2004.

  20. The Era of the Large Databases: Outcomes After Gastroesophageal Surgery According to NSQIP, NIS, and NCDB Databases. Systematic Literature Review.

    PubMed

    Batista Rodríguez, Gabriela; Balla, Andrea; Fernández-Ananín, Sonia; Balagué, Carmen; Targarona, Eduard M

    2018-05-01

    The term big data refers to databases that include large amounts of information used in various areas of knowledge. Currently, there are large databases that allow the evaluation of postoperative evolution, such as the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP), the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS), and the National Cancer Database (NCDB). The aim of this review was to evaluate the clinical impact of information obtained from these registries regarding gastroesophageal surgery. A systematic review following the Meta-analysis of Observational Studies in Epidemiology guidelines was performed. The search was carried out in the PubMed database and identified 251 articles. All outcomes related to gastroesophageal surgery were analyzed. A total of 34 articles published between January 2007 and July 2017 were included, for a total of 345 697 patients. Studies were analyzed and divided according to the type of surgery and main theme into (1) esophageal surgery and (2) gastric surgery. The information provided by these databases is an effective way to obtain levels of evidence not obtainable by conventional methods. Furthermore, this information is useful for the external validation of previous studies, to establish benchmarks that allow comparisons between centers, and to have a positive impact on the quality of care.

  1. The construction and periodicity analysis of natural disaster database of Alxa area based on Chinese local records

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Mingzhong, Tian; Hengli, Wang

    2010-05-01

    Chinese hand-written local records originated in the first century. Generally, these local records cover the geography, evolution, customs, education, products, people, historical sites, and writings of an area. Through such endeavors, the record of natural events in China has had almost no "dark ages" over its 5000-year civilization. A compilation of all meaningful historical data on natural disasters that took place in Alxa, Inner Mongolia, the second-largest desert area in China, is used here to construct a 500-year high-resolution database. The database is divided into subsets according to the type of natural disaster, such as sand-dust storms, drought events, and cold waves. By applying trend, correlation, wavelet, and spectral analysis to these data, we can estimate the statistical periodicity of different natural disasters, detect and quantify similarities and patterns among the periodicities of these records, and finally take these results in aggregate to find a strong and coherent cyclicity through the last 500 years that serves as the driving mechanism of these geological hazards. Based on the periodicity obtained from the above analysis, the paper discusses the probability of forecasting natural disasters and suitable measures to reduce disaster losses using historical records. Keywords: Chinese local records; Alxa; natural disasters; database; periodicity analysis
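
    One step of such a periodicity analysis can be sketched as below: a periodogram of an annual disaster-count series identifies the dominant period. The synthetic series is only a placeholder for the 500-year Alxa record.

```python
# Hedged sketch: estimate the dominant periodicity in an annual disaster-count
# series with a simple periodogram. The synthetic series is a placeholder.
import numpy as np
from scipy.signal import detrend, periodogram

years = np.arange(1500, 2000)
rng = np.random.default_rng(0)
counts = 3 + 1.5 * np.sin(2 * np.pi * years / 11.0) + rng.poisson(1.0, years.size)

freqs, power = periodogram(detrend(counts), fs=1.0)   # fs = 1 sample per year
dominant = freqs[np.argmax(power[1:]) + 1]            # skip the zero-frequency bin
print(f"dominant period: {1.0 / dominant:.1f} years")
```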

  2. Diabetic retinopathy screening using deep neural network.

    PubMed

    Ramachandran, Nishanthan; Hong, Sheng Chiong; Sime, Mary J; Wilson, Graham A

    2017-09-07

    There is a burgeoning interest in the use of deep neural networks in diabetic retinal screening. To determine whether a deep neural network could satisfactorily detect diabetic retinopathy that requires referral to an ophthalmologist from a local diabetic retinal screening programme and an international database. Retrospective audit. Diabetic retinal photos from the Otago database photographed during October 2016 (485 photos), and 1200 photos from the Messidor international database. Receiver operating characteristic curve to illustrate the ability of a deep neural network to identify referable diabetic retinopathy (moderate or worse diabetic retinopathy or exudates within one disc diameter of the fovea). Area under the receiver operating characteristic curve, sensitivity and specificity. For detecting referable diabetic retinopathy, the deep neural network had an area under the receiver operating characteristic curve of 0.901 (95% confidence interval 0.807-0.995), with 84.6% sensitivity and 79.7% specificity for Otago, and 0.980 (95% confidence interval 0.973-0.986), with 96.0% sensitivity and 90.0% specificity for Messidor. This study has shown that a deep neural network can detect referable diabetic retinopathy with sensitivities and specificities close to or better than 80% from both an international and a domestic (New Zealand) database. We believe that deep neural networks can be integrated into community screening once they can successfully detect both diabetic retinopathy and diabetic macular oedema. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
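
    The reported metrics can be reproduced from per-image labels and network scores as sketched below; the labels and scores are placeholders, not data from the Otago or Messidor sets.

```python
# Sketch of the evaluation metrics: AUC plus sensitivity/specificity at a chosen
# operating point. Labels (1 = referable retinopathy) and scores are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7, 0.55, 0.15])

auc = roc_auc_score(y_true, y_score)

threshold = 0.5                                       # example operating point
pred = (y_score >= threshold).astype(int)
sensitivity = (pred[y_true == 1] == 1).mean()
specificity = (pred[y_true == 0] == 0).mean()
print(f"AUC = {auc:.3f}, sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```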

  3. Monitoring Earth's reservoir and lake dynamics from space

    NASA Astrophysics Data System (ADS)

    Donchyts, G.; Eilander, D.; Schellekens, J.; Winsemius, H.; Gorelick, N.; Erickson, T.; Van De Giesen, N.

    2016-12-01

    Reservoirs and lakes constitute about 90% of the Earth's fresh surface water. They play a major role in the water cycle and are critical for the ever-increasing demands of the world's growing population. Water from reservoirs is used for agricultural, industrial, domestic, and other purposes. Current digital databases of lakes and reservoirs are scarce, mainly providing only descriptive and static properties of the reservoirs. The Global Reservoir and Dam (GRanD) database contains almost 7000 entries, while OpenStreetMap counts more than 500 000 entries tagged as reservoirs. In the last decade several research efforts have focused on accurate estimates of surface water dynamics, mainly using satellite altimetry; however, they are currently limited to fewer than 1000 (mostly large) water bodies. Our approach is based on three main components. First, a novel method allowing automated and accurate estimation of surface area from (partially) cloud-free optical multispectral or radar satellite imagery; the algorithm uses satellite imagery measured by the Landsat, Sentinel and MODIS missions. Second, a database to store static and dynamic reservoir parameters. Third, a web-based tool built on top of the Google Earth Engine infrastructure. The tool allows estimation of surface area for lakes and reservoirs at planetary scale with high spatial and temporal resolution. A prototype version of the method, database, and tool will be presented, as well as validation using in-situ measurements.
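
    A minimal sketch of water-surface estimation from optical imagery is given below, using a normalized difference water index (NDWI) threshold; the published method is more elaborate (cloud handling, dynamic thresholding, Earth Engine processing), and the reflectance arrays here are placeholders.

```python
# Sketch: estimate water surface area from green and NIR reflectance via
# NDWI = (green - NIR) / (green + NIR); placeholder arrays stand in for a scene.
import numpy as np

def water_area_km2(green, nir, pixel_size_m=30.0, threshold=0.0):
    """Count NDWI > threshold pixels and convert to km^2 (Landsat-like 30 m pixels)."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return np.count_nonzero(ndwi > threshold) * (pixel_size_m ** 2) / 1e6

rng = np.random.default_rng(0)
green = rng.random((500, 500), dtype=np.float32)      # placeholder green-band reflectance
nir = rng.random((500, 500), dtype=np.float32)        # placeholder NIR-band reflectance
print(f"estimated water area: {water_area_km2(green, nir):.2f} km2")
```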

  4. MMpI: A Wide Range of Available Compounds of Matrix Metalloproteinase Inhibitors

    PubMed Central

    Muvva, Charuvaka; Patra, Sanjukta; Venkatesan, Subramanian

    2016-01-01

    Matrix metalloproteinases (MMPs) are a family of zinc-dependent proteinases involved in the regulation of the extracellular signaling and structural matrix environment of cells and tissues. MMPs are considered promising targets for the treatment of many diseases. Therefore, the creation of a database on MMP inhibitors would accelerate research in this area, given the implication of MMPs in the above-mentioned diseases and the limitations of first- and second-generation inhibitors. In this communication, we report the development of a new MMpI database which provides resourceful information for all researchers working in this field. It is a web-accessible, unique resource that contains detailed information on MMP inhibitors including small molecules, peptides and MMP Drug Leads. The database contains entries for ~3000 inhibitors including ~72 MMP Drug Leads and ~73 peptide-based inhibitors, and provides the molecular and structural details necessary for drug discovery and development. The MMpI database contains physical properties and 2D and 3D structures (mol2 and pdb format files) of MMP inhibitors. Other data fields are hyperlinked to PubChem, ChEMBL, BindingDB, DrugBank, PDB, MEROPS and PubMed. The database has an extensive searching facility by MMpI ID, IUPAC name, chemical structure and the title of the research article. The MMP inhibitors provided in the MMpI database are optimized using the Python-based Hierarchical Environment for Integrated Xtallography (Phenix) software. The MMpI database is unique in that it is the only public database that provides complete information on MMP inhibitors. Database URL: http://clri.res.in/subramanian/databases/mmpi/index.php. PMID:27509041

  5. Five hydrologic and landscape databases for selected National Wildlife Refuges in the Southeastern United States

    USGS Publications Warehouse

    Buell, Gary R.; Gurley, Laura N.; Calhoun, Daniel L.; Hunt, Alexandria M.

    2017-06-12

    This report serves as metadata and a user guide for five out of six hydrologic and landscape databases developed by the U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, to describe data-collection, data-reduction, and data-analysis methods used to construct the databases and provides statistical and graphical descriptions of the databases. Six hydrologic and landscape databases were developed: (1) the Cache River and White River National Wildlife Refuges (NWRs) and contributing watersheds in Arkansas, Missouri, and Oklahoma, (2) the Cahaba River NWR and contributing watersheds in Alabama, (3) the Caloosahatchee and J.N. “Ding” Darling NWRs and contributing watersheds in Florida, (4) the Clarks River NWR and contributing watersheds in Kentucky, Tennessee, and Mississippi, (5) the Lower Suwannee NWR and contributing watersheds in Georgia and Florida, and (6) the Okefenokee NWR and contributing watersheds in Georgia and Florida. Each database is composed of a set of ASCII files, Microsoft Access files, and Microsoft Excel files. The databases were developed as an assessment and evaluation tool for use in examining NWR-specific hydrologic patterns and trends as related to water availability and water quality for NWR ecosystems, habitats, and target species. The databases include hydrologic time-series data, summary statistics on landscape and hydrologic time-series data, and hydroecological metrics that can be used to assess NWR hydrologic conditions and the availability of aquatic and riparian habitat. Landscape data that describe the NWR physiographic setting and the locations of hydrologic data-collection stations were compiled and mapped. Categories of landscape data include land cover, soil hydrologic characteristics, physiographic features, geographic and hydrographic boundaries, hydrographic features, and regional runoff estimates. The geographic extent of each database covers an area within which human activities, climatic variation, and hydrologic processes can potentially affect the hydrologic regime of the NWRs and adjacent areas. The hydrologic and landscape database for the Cache and White River NWRs and contributing watersheds in Arkansas, Missouri, and Oklahoma has been described and documented in detail (Buell and others, 2012). This report serves as a companion to the Buell and others (2012) report to describe and document the five subsequent hydrologic and landscape databases that were developed: Chapter A—the Cahaba River NWR and contributing watersheds in Alabama, Chapter B—the Caloosahatchee and J.N. “Ding” Darling NWRs and contributing watersheds in Florida, Chapter C—the Clarks River NWR and contributing watersheds in Kentucky, Tennessee, and Mississippi, Chapter D—the Lower Suwannee NWR and contributing watersheds in Georgia and Florida, and Chapter E—the Okefenokee NWR and contributing watersheds in Georgia and Florida.

  6. LICA AstroCalc, a software to analyze the impact of artificial light: Extracting parameters from the spectra of street and indoor lamps

    NASA Astrophysics Data System (ADS)

    Ayuga, Carlos Eugenio Tapia; Zamorano, Jaime

    2018-07-01

    The night-sky spectrum of light-polluted areas is the result of artificial light scattered back from the atmosphere and of light re-emitted after reflection from painted surfaces. This emission comes mainly from street and decorative lamps. We have built an extensive database of lamp spectra covering the UV to the near IR, together with the software needed to analyze them. We describe LICA-AstroCalc, free software with a user-friendly GUI that extracts information from our database spectra or any other user-provided spectrum. The software also includes the complete NCS color database of paints, comprising 1950 types. This helps to evaluate how different colors modify the spectra reflected from different lamps. All spectroscopic measurements have been validated following CIELAB and ISO recommendations and against the NCS database.
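
    As a hedged example of the kind of parameter such a tool can extract from a lamp spectrum, the sketch below computes the fraction of emitted power in a blue band from a spectral power distribution; the wavelength grid and spectrum are placeholders, not AstroCalc output.

```python
# Sketch: fraction of a lamp's emission between 400 and 500 nm. The spectral
# power distribution below is a synthetic placeholder on a uniform grid.
import numpy as np

wavelength = np.linspace(380.0, 780.0, 401)                      # nm, uniform grid
spectrum = np.exp(-0.5 * ((wavelength - 600.0) / 80.0) ** 2)     # placeholder SPD

blue = (wavelength >= 400.0) & (wavelength <= 500.0)
# With a uniform wavelength grid, a ratio of sums approximates the ratio of integrals.
blue_fraction = spectrum[blue].sum() / spectrum.sum()
print(f"fraction of power between 400 and 500 nm: {blue_fraction:.1%}")
```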

  7. Eglin virtual range database for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Talele, Sunjay E.; Pickard, J. W., Jr.; Owens, Monte A.; Foster, Joseph; Watson, John S.; Amick, Mary Amenda; Anthony, Kenneth

    1998-07-01

    Realistic backgrounds are necessary to support high-fidelity hardware-in-the-loop testing. Advanced avionics and weapon system sensors are driving the requirement for higher resolution imagery. The model-test-model philosophy being promoted by the T&E community is resulting in the need for backgrounds that are realistic or virtual representations of actual test areas. Combined, these requirements led to a major upgrade of the terrain database used for hardware-in-the-loop testing at the Guided Weapons Evaluation Facility (GWEF) at Eglin Air Force Base, Florida. This paper describes the process used to generate the high-resolution (1-foot) database of ten sites totaling over 20 square kilometers of the Eglin range. This process involved generating digital elevation maps from stereo aerial imagery and classifying ground cover material using its spectral content. These databases were then optimized for real-time operation at 90 Hz.

  8. Why do nursing homes close? An analysis of newspaper articles.

    PubMed

    Fisher, Andrew; Castle, Nicholas

    2012-01-01

    Using Non-numerical Unstructured Data Indexing Searching and Theorizing (NUD'IST) software to extract and examine keywords from text, the authors explored the phenomenon of nursing home closure through an analysis of 30 major-market newspapers over a period of 66 months (January 1, 1999 to June 1, 2005). Newspaper articles typically represent a careful analysis of staff impressions via interviews, managerial perspectives, and financial records review. There is a current reliance on the synthesis of information from large regulatory databases such as the Online Survey Certification And Reporting database, the California Office of Statewide Healthcare Planning and Development database, and Area Resource Files. Although such databases permit the construction of studies capable of revealing some reasons for nursing home closure, they are hampered by the confines of the data entered. Using our analysis of newspaper articles, the authors are able to add further to their understanding of nursing home closures.

  9. GEOTHERM Data Set

    DOE Data Explorer

    DeAngelo, Jacob

    1983-01-01

    GEOTHERM is a comprehensive system of public databases and software used to store, locate, and evaluate information on the geology, geochemistry, and hydrology of geothermal systems. Three main databases address the general characteristics of geothermal wells and fields, and the chemical properties of geothermal fluids; the last database is currently the most active. System tasks are divided into four areas: (1) data acquisition and entry, involving data entry via word processors and magnetic tape; (2) quality assurance, including the criteria and standards handbook and front-end data-screening programs; (3) operation, involving database backups and information extraction; and (4) user assistance, preparation of such items as application programs, and a quarterly newsletter. The principal task of GEOTHERM is to provide information and research support for the conduct of national geothermal-resource assessments. The principal users of GEOTHERM are those involved with the Geothermal Research Program of the U.S. Geological Survey.

  10. Karst database development in Minnesota: Design and data assembly

    USGS Publications Warehouse

    Gao, Y.; Alexander, E.C.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to represent comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.

  11. A lake-centric geospatial database to guide research and inform management decisions in an Arctic watershed in northern Alaska experiencing climate and land-use changes

    USGS Publications Warehouse

    Jones, Benjamin M.; Arp, Christopher D.; Whitman, Matthew S.; Nigro, Debora A.; Nitze, Ingmar; Beaver, John; Gadeke, Anne; Zuck, Callie; Liljedahl, Anna K.; Daanen, Ronald; Torvinen, Eric; Fritz, Stacey; Grosse, Guido

    2017-01-01

    Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.

  12. Database for the geologic map of the Mount Baker 30- by 60-minute quadrangle, Washington (I-2660)

    USGS Publications Warehouse

    Tabor, R.W.; Haugerud, R.A.; Hildreth, Wes; Brown, E.H.

    2006-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Mount Baker 30- by 60-Minute Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the geology at 1:100,000. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  13. Database for the geologic map of the Chelan 30-minute by 60-minute quadrangle, Washington (I-1661)

    USGS Publications Warehouse

    Tabor, R.W.; Frizzell, V.A.; Whetten, J.T.; Waitt, R.B.; Swanson, D.A.; Byerly, G.R.; Booth, D.B.; Hetherington, M.J.; Zartman, R.E.

    2006-01-01

    This digital map database has been prepared by R. W. Tabor from the published Geologic map of the Chelan 30-Minute Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  14. Database for the geologic map of the Snoqualmie Pass 30-minute by 60-minute quadrangle, Washington (I-2538)

    USGS Publications Warehouse

    Tabor, R.W.; Frizzell, V.A.; Booth, D.B.; Waitt, R.B.

    2006-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Snoqualmie Pass 30' X 60' Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  15. Geologic Map of the Wenatchee 1:100,000 Quadrangle, Central Washington: A Digital Database

    USGS Publications Warehouse

    Tabor, R.W.; Waitt, R.B.; Frizzell, V.A.; Swanson, D.A.; Byerly, G.R.; Bentley, R.D.

    2005-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Wenatchee 1:100,000 Quadrangle, Central Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  16. GMDD: a database of GMO detection methods.

    PubMed

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-06-04

    Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is needed. The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included in this database. Furthermore, registered users can submit new detection methods and sequences to this database, and newly submitted information will be released soon after being checked. GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier.

  17. A GIS-based DRASTIC model for assessing intrinsic groundwater vulnerability in northeastern Missan governorate, southern Iraq

    NASA Astrophysics Data System (ADS)

    Al-Abadi, Alaa M.; Al-Shamma'a, Ayser M.; Aljabbari, Mukdad H.

    2017-03-01

    In this study, intrinsic groundwater vulnerability for the shallow aquifer in the northeastern Missan governorate, southern Iraq, is evaluated using the widely used DRASTIC model within a GIS environment. The DRASTIC parameters were prepared by gathering data from different sources, including field surveys, geological and meteorological data, a digital elevation model (DEM) of the study area, archival databases, and published research. The different data used to build the DRASTIC model were arranged in a geospatial database using the spatial analyst extension of ArcGIS 10.2 software. The results for vulnerability to general contaminants show that the study area is characterized by two vulnerability zones: low and moderate. Ninety-four percent (94%) of the study area has a low class of groundwater vulnerability to contamination, whereas 6% has moderate vulnerability. The pesticide DRASTIC index map shows that the study area is likewise characterized by two vulnerability zones: low and moderate. This version of the DRASTIC map shows that a small percentage (13%) of the study area has low vulnerability to contamination, and most of the area has moderate vulnerability (about 87%). The final results indicate that the aquifer system in the area of interest is relatively well protected from contamination from the ground surface. To mitigate contamination risks in the moderate-vulnerability zones, protective measures must be put in place before the aquifer is exploited and before comprehensive agricultural activities begin in the area.
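
    The DRASTIC index itself is a weighted sum of seven rated parameters; the sketch below uses the standard generic weights (the pesticide version uses a different weight set), with per-cell ratings as placeholders for the raster layers prepared in the GIS.

```python
# Sketch of the DRASTIC index: sum of rating * weight over the seven parameters.
GENERIC_WEIGHTS = {
    "D": 5,  # depth to water
    "R": 4,  # net recharge
    "A": 3,  # aquifer media
    "S": 2,  # soil media
    "T": 1,  # topography (slope)
    "I": 5,  # impact of the vadose zone
    "C": 3,  # hydraulic conductivity
}

def drastic_index(ratings, weights=GENERIC_WEIGHTS):
    """ratings: dict mapping parameter -> rating (typically 1-10) for one grid cell."""
    return sum(weights[p] * ratings[p] for p in weights)

cell = {"D": 7, "R": 6, "A": 5, "S": 4, "T": 9, "I": 6, "C": 2}   # placeholder ratings
print(f"DRASTIC index: {drastic_index(cell)}")   # higher index = higher vulnerability
```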

  18. Application of data mining approaches to drug delivery.

    PubMed

    Ekins, Sean; Shimada, Jun; Chang, Cheng

    2006-11-30

    Computational approaches play a key role in all areas of the pharmaceutical industry, from data mining, experimental and clinical data capture to pharmacoeconomics and adverse-event monitoring. They will likely continue to be indispensable assets along with a growing library of software applications. This is primarily due to the increasingly massive amount of biology, chemistry and clinical data now entering the public domain, mainly as a result of NIH- and commercially funded projects. We are therefore in need of new methods for mining this mountain of data in order to enable new hypothesis generation. The computational approaches include, but are not limited to, database compilation, quantitative structure-activity relationships (QSAR), pharmacophores, network visualization models, decision trees, machine learning algorithms and multidimensional data visualization software that could be used to improve drug delivery after mining public and/or proprietary data. We discuss some areas of unmet need in data mining for drug delivery that can be addressed with new software tools or databases of relevance to future pharmaceutical projects.

  19. Computer-aided auditing of prescription drug claims.

    PubMed

    Iyengar, Vijay S; Hermiz, Keith B; Natarajan, Ramesh

    2014-09-01

    We describe a methodology for identifying and ranking candidate audit targets from a database of prescription drug claims. The relevant audit targets may include various entities such as prescribers, patients and pharmacies, who exhibit certain statistical behavior indicative of potential fraud and abuse over the prescription claims during a specified period of interest. Our overall approach is consistent with related work in statistical methods for detection of fraud and abuse, but has a relative emphasis on three specific aspects: first, based on the assessment of domain experts, certain focus areas are selected and data elements pertinent to the audit analysis in each focus area are identified; second, specialized statistical models are developed to characterize the normalized baseline behavior in each focus area; and third, statistical hypothesis testing is used to identify entities that diverge significantly from their expected behavior according to the relevant baseline model. The application of this overall methodology to a prescription claims database from a large health plan is considered in detail.
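
    The third step described above can be sketched as follows: flag entities whose observed activity diverges significantly from the baseline model. A simple one-sided Poisson test per prescriber is shown; the paper's baseline models are more specialized, and the counts are placeholders.

```python
# Hedged sketch: flag prescribers whose observed claim counts are improbably high
# under a baseline expectation. Entities and counts are hypothetical.
from scipy.stats import poisson

entities = [          # (prescriber id, observed claims, expected claims under baseline)
    ("A-101", 240, 180.0),
    ("A-102", 95, 110.0),
    ("A-103", 410, 200.0),
]

alpha = 0.01
for prescriber, observed, expected in entities:
    # P(X >= observed) under a Poisson baseline with the given expectation
    p_value = poisson.sf(observed - 1, expected)
    flag = "AUDIT CANDIDATE" if p_value < alpha else "ok"
    print(f"{prescriber}: observed={observed}, expected={expected:.0f}, p={p_value:.2e} -> {flag}")
```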

  20. Wide-area-distributed storage system for a multimedia database

    NASA Astrophysics Data System (ADS)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device that includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices at each node are connected to a computer using fiber optic cables and communicate using Fibre Channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.
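
    A minimal sketch of the RAID-style redundancy such a system relies on is shown below: a parity block computed as the XOR of the data blocks allows any single lost block (for example, one held by a node destroyed in a disaster) to be rebuilt from the survivors.

```python
# Sketch: XOR parity over equally sized data blocks, and recovery of a lost block.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data_blocks = [b"node-A-chunk", b"node-B-chunk", b"node-C-chunk"]   # placeholder chunks
parity = xor_blocks(data_blocks)

# Simulate losing node B and rebuilding its block from the remaining blocks plus parity.
recovered = xor_blocks([data_blocks[0], data_blocks[2], parity])
assert recovered == data_blocks[1]
print("lost block reconstructed:", recovered)
```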

  1. VizieR Online Data Catalog: 86 new variables in Andromeda (Dimitrov+, 2007)

    NASA Astrophysics Data System (ADS)

    Dimitrov, D.; Popov, V.

    2016-05-01

    One of the most extensive sky surveys in the recent years is the Northern Sky Variability Survey (NSVS, Wozniak et al., 2004AJ....127.2436W). Light curves of about 14000000 objects with instrumental magnitudes between 8 and 15.5 are included in the database of that survey, for the period April 1999 - March 2000, covering all of the Northern hemisphere and reaching DE=-38° in the South. To look for different types of variables, we rely only upon internal NSVS data. We select an area on the sky and check for variability in the NSVS database. Our test area covers 46 deg in Andromeda, its coordinates are: 23:00<=RA<=23:45 and 43:30<=DE<=29:30 (2000.0). The galactic latitude is in the -10° - -20° range. The total number of NSVS light curves in this area is and every star has between 1 and 4 light curves, the mean value being 1.875 light curves per star. (2 data files).

  2. Reading in the Content Areas. Learning Package No. 4.

    ERIC Educational Resources Information Center

    Collins, Norma; Smith, Carl, Comp.

    Originally developed for the Department of Defense Dependents Schools (DoDDS) system, this learning package on reading in the content areas is designed for teachers who wish to upgrade or expand their teaching skills on their own. The package includes a comprehensive search of the ERIC database; a lecture giving an overview on the topic; the full text of…

  3. Transit profiles : agencies in urbanized areas with a population of less than 200,000 for the 1993 National Transit Database section 15 report year

    DOT National Transportation Integrated Search

    1994-12-01

    This publication consists of individual profiles for each reporting transit agency located in an urbanized area with a population less than 200,000. The data contained in each profile consists of general and summary reports, as well as modal, perform...

  4. Urban forest restoration cost modeling: a Seattle natural areas case study

    Treesearch

    Jean M. Daniels; Weston Brinkley; Michael D. Paruszkiewicz

    2016-01-01

    Cities have become more committed to ecological restoration and management activities in urban natural areas. Data about costs are needed for better planning and reporting. The objective of this study is to estimate the costs for restoration activities in urban parks and green space in Seattle, Washington. Stewardship activity data were generated from a new database...

  5. Metropolitan Growth and Economic Opportunity for the Poor: If You're Poor Does Place Matter?

    ERIC Educational Resources Information Center

    Foster-Bey, John A.

    This paper focuses on why metropolitan areas vary in their capacity to translate generally high employment rates into economic opportunity for the disadvantaged. Data come from the Urban Institute's Urban Underclass Database, which includes poverty and employment data for 1980 and 1990 for the 100 largest metropolitan areas down to the Census…

  6. Spatial databases of the Humboldt Basin mineral resource assessment, northern Nevada

    USGS Publications Warehouse

    Mihalasky, Mark J.; Moyer, Lorre A.

    2004-01-01

    This report describes the origin, generation, and format of tract map databases for deposit types that accompany the metallic mineral resource assessment for the Humboldt River Basin, northern Nevada, (Wallace and others, 2004, Chapter 2). The deposit types include pluton-related polymetallic, sedimentary rock-hosted Au-Ag, and epithermal Au-Ag. The tract maps constitute only part of the assessment, which also includes new research and data for northern Nevada, discussions on land classification, and interpretation of the assessment maps. The purpose of the assessment was to identify areas that may have a greater favorability for undiscovered metallic mineral deposits, provide analysis of the mineral-resource favorability, and present the assessment of the Humboldt River basin and adjacent areas in a digital format using a Geographic Information System (GIS).

  7. An ethnobotanical survey of medicinal plants used in the East Sepik province of Papua New Guinea.

    PubMed

    Koch, Michael; Kehop, Dickson Andrew; Kinminja, Boniface; Sabak, Malcolm; Wavimbukie, Graham; Barrows, Katherine M; Matainaho, Teatulohi K; Barrows, Louis R; Rai, Prem P

    2015-11-14

    Rapid modernization in the East Sepik (ES) Province of Papua New Guinea (PNG) is resulting in a decrease in the number of individuals knowledgeable in medicinal plant use. Here we report a synthesis and comparison of traditional medicinal plant use from four ethnically distinct locations in the ES Province and furthermore compare them to two previous reports of traditional plant use from different provinces of PNG. This manuscript is based on an annotated combination of four Traditional Medicines (TM) survey reports generated by University of Papua New Guinea (UPNG) trainees. The surveys utilized a questionnaire titled "Information sheet on traditional herbal preparations and medicinal plants of PNG", administered in the context of the TM survey project, which is supported by WHO, US NIH and PNG governmental health care initiatives and funding. Regional and transregional comparison of medicinal plant utilization was facilitated by using existing plant databases, the UPNG TM Database and the PNG Plant Database (PNG Plants), together with Bayesian statistical analysis. Comparison of medicinal plant use across the four distinct dialect study areas in the ES Province of PNG showed that only a small fraction of plants were used in more than one area, and even these shared plants were usually used with different plant parts, prepared differently, and applied to different medical conditions. Several previously unreported medicinal plants could be identified. Medicinally under- and over-utilized plants were found both in the regional reports and in the transregional analysis, showing that these utilization frequencies differ between provinces. Documentation of consistent plant use argues for efficacy and is particularly important since established and effective herbal medicinal interventions are sorely needed in the rural areas of PNG, where clinical validation is unfortunately often lacking. Despite the existence of a large corpus of medical annotation of plants for PNG, previously unknown medical uses of plants can still be uncovered. Furthermore, comparisons of medicinal plant utilization are possible if databases are reformatted for consistency in ways that allow comparison. A concerted effort in building easily comparable databases could dramatically facilitate ethnopharmacological analysis of the existing plant diversity.

  8. Extending GIS Technology to Study Karst Features of Southeastern Minnesota

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.

    2001-12-01

    This paper summarizes ongoing research on the distribution of karst features in southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical and hydrogeologic modules. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce satisfying 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All nearest-neighbor analyses to date indicate that sinkholes in southeastern Minnesota are not evenly distributed in this area (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis and nearest-neighbor analysis. A series of karst features for Winona County, including sinkholes, springs, seeps, stream sinks and outcrops, has been mapped and entered into the Karst Feature Database of Southeastern Minnesota, which is being expanded to include all the mapped karst features of southeastern Minnesota. Air photos from the 1930s to the 1990s of the Spring Valley Cavern Area in Fillmore County were scanned and geo-referenced into our GIS system. This technology has proved very useful for identifying sinkholes and studying the rate of sinkhole development.
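
    A hedged sketch of a nearest-neighbor clustering test of the kind applied to the sinkhole data is given below (the Clark-Evans ratio, where R < 1 indicates clustering and R near 1 a random pattern); the coordinates are random placeholders for mapped sinkhole locations.

```python
# Sketch: Clark-Evans nearest-neighbor ratio for a point pattern. Placeholder data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = rng.uniform(0, 10_000, size=(500, 2))       # sinkhole coordinates (m), placeholder
study_area = 10_000.0 * 10_000.0                      # m^2

tree = cKDTree(points)
dist, _ = tree.query(points, k=2)                     # k=2: nearest neighbour other than self
d_obs = dist[:, 1].mean()
density = len(points) / study_area
d_exp = 0.5 / np.sqrt(density)                        # expectation under complete spatial randomness

print(f"Clark-Evans R = {d_obs / d_exp:.2f}")         # R < 1 suggests clustering
```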

  9. A database of 10 min average measurements of solar radiation and meteorological variables in Ostrava, Czech Republic

    NASA Astrophysics Data System (ADS)

    Opálková, Marie; Navrátil, Martin; Špunda, Vladimír; Blanc, Philippe; Wald, Lucien

    2018-04-01

    A database containing 10 min means of solar irradiance measured on a horizontal plane in several ultraviolet and visible bands from July 2014 to December 2016 at three stations in the area of the city of Ostrava (Czech Republic) is presented. The database contains time series of 10 min average irradiances or photosynthetic photon flux densities measured in the following spectral bands: 280-315 nm (UVB); 315-380 nm (UVA); 400-700 nm (photosynthetically active radiation, PAR); 510-700 nm; 600-700 nm; 610-680 nm; 690-780 nm; and 400-1100 nm. A series of meteorological variables, including relative air humidity and air temperature at the surface, is also provided at the same 10 min time step at all three stations, and precipitation is provided for two stations. Air pressure, wind speed, wind direction, and concentrations of the air pollutants PM10, SO2, NOx, NO and NO2 were measured at the 1 h time step at a fourth station owned by the Public Health Institute of Ostrava. The details of the experimental sites and instruments used for the measurements are given. Special attention is given to data quality, and the approach to quality control that was established is described in detail. About 130 000 records for each of the three stations are available in the database. This database offers a unique ensemble of variables with high temporal resolution and is a reliable source of radiation data in relation to the environment and vegetation in highly polluted areas of industrial cities in the northern mid-latitudes. The database has been placed on the PANGAEA repository (https://doi.org/10.1594/PANGAEA.879722) and contains individual data files for each station.

  10. The Footprint Database and Web Services of the Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba

    2016-10-01

    Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data in various formats including Virtual Observatory standards.

  11. Rapid Tsunami Inundation Forecast from Near-field or Far-field Earthquakes using Pre-computed Tsunami Database: Pelabuhan Ratu, Indonesia

    NASA Astrophysics Data System (ADS)

    Gusman, A. R.; Setiyono, U.; Satake, K.; Fujii, Y.

    2017-12-01

    We built a pre-computed tsunami inundation database for Pelabuhan Ratu, one of the tsunami-prone areas on the southern coast of Java, Indonesia. The tsunami database can be employed for rapid estimation of tsunami inundation during an event. The pre-computed tsunami waveforms and inundations come from a total of 340 scenarios ranging from 7.5 to 9.2 in moment magnitude (Mw), including simple fault models of 208 thrust faults and 44 tsunami earthquakes on the plate interface, as well as 44 normal faults and 44 reverse faults in the outer-rise region. Using our tsunami inundation forecasting algorithm (NearTIF), we could rapidly estimate the tsunami inundation in Pelabuhan Ratu for three different hypothetical earthquakes. The first hypothetical earthquake is a megathrust type (Mw 9.0) offshore Sumatra, about 600 km from Pelabuhan Ratu, representing a worst-case event in the far field. The second hypothetical earthquake (Mw 8.5) is based on a slip-deficit rate estimated from geodetic measurements and represents the most likely large event near Pelabuhan Ratu. The third hypothetical earthquake is a tsunami-earthquake type (Mw 8.1), which often occurs south off Java. We compared the tsunami inundation maps produced by the NearTIF algorithm with the results of direct forward inundation modeling for the hypothetical earthquakes. The tsunami inundation maps produced by both methods are similar for the three cases. However, the tsunami inundation map from the inundation database can be obtained in a much shorter time (1 min) than one from forward inundation modeling (40 min). These results indicate that the NearTIF algorithm based on a pre-computed inundation database is reliable and useful for tsunami warning purposes. This study also demonstrates that the NearTIF algorithm can work well even when the earthquake source is located outside the area of the fault-model database, because it uses a time-shifting procedure to search for the best-fit scenario.
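
    The scenario-matching idea behind such a database can be sketched as follows: for each pre-computed scenario waveform, find the time shift that minimizes the misfit to the forecast waveform at an offshore point, then select the best-fitting scenario; the waveforms below are synthetic placeholders, not NearTIF output.

```python
# Hedged sketch: pick the best-fit pre-computed scenario by minimizing the RMS
# misfit over a range of time shifts. Waveforms are synthetic placeholders.
import numpy as np

def best_shift_misfit(target, scenario, max_shift=60):
    """Minimum RMS misfit over integer (circular) shifts of +/- max_shift samples."""
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        best = min(best, np.sqrt(np.mean((target - np.roll(scenario, s)) ** 2)))
    return best

t = np.linspace(0, 3600, 721)                         # 5 s sampling over one hour
target = np.sin(2 * np.pi * t / 1200.0)               # placeholder forecast waveform
scenarios = {f"scn-{i}": (0.5 + 0.25 * i) * np.sin(2 * np.pi * (t - 100 * i) / 1200.0)
             for i in range(5)}

best = min(scenarios, key=lambda name: best_shift_misfit(target, scenarios[name]))
print("best-fit pre-computed scenario:", best)
```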

  12. Optimizing literature search in systematic reviews - are MEDLINE, EMBASE and CENTRAL enough for identifying effect studies within the area of musculoskeletal disorders?

    PubMed

    Aagaard, Thomas; Lund, Hans; Juhl, Carsten

    2016-11-22

    When conducting systematic reviews, it is essential to perform a comprehensive literature search to identify all published studies relevant to the specific research question. The Cochrane Collaboration's Methodological Expectations of Cochrane Intervention Reviews (MECIR) guidelines state that searching MEDLINE, EMBASE and CENTRAL should be considered mandatory. The aim of this study was to evaluate the MECIR recommendation to use MEDLINE, EMBASE and CENTRAL combined, and to examine the yield of using these databases to find randomized controlled trials (RCTs) within the area of musculoskeletal disorders. Data sources were systematic reviews published by the Cochrane Musculoskeletal Review Group that included at least five RCTs, reported a search history, searched MEDLINE, EMBASE and CENTRAL, and added reference checking and hand-searching. Additional databases were deemed eligible if they indexed RCTs, were in English and were used in more than three of the systematic reviews. Relative recall was calculated as the number of studies identified by the literature search divided by the number of eligible studies, i.e. the studies included in the individual systematic reviews. Finally, cumulative median recall was calculated for MEDLINE, EMBASE and CENTRAL combined, followed by the databases yielding additional studies. Twenty-three systematic reviews were deemed eligible, and the databases included other than MEDLINE, EMBASE and CENTRAL were AMED, CINAHL, HealthSTAR, MANTIS, OT-Seeker, PEDro, PsychINFO, SCOPUS, SportDISCUS and Web of Science. The cumulative median recall for combined searching in MEDLINE, EMBASE and CENTRAL was 88.9% and increased to 90.9% when the 10 additional databases were added. Searching MEDLINE, EMBASE and CENTRAL was not sufficient for identifying all effect studies on musculoskeletal disorders, but the ten additional databases increased the median recall by only 2%. It is possible that searching databases alone is not sufficient to identify all relevant references, and that reviewers must rely upon additional sources in their literature search. However, further research is needed.
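
    The recall calculation defined above can be written out as a small sketch: relative recall is the number of included studies found by a given search divided by the number of studies included in the review, and the cumulative median is taken over reviews. The counts below are placeholders.

```python
# Sketch of relative recall and its median over reviews. Counts are placeholders.
import statistics

def relative_recall(found, eligible):
    return found / eligible

reviews = [
    # (found by MEDLINE+EMBASE+CENTRAL, found after adding 10 databases, eligible studies)
    (8, 9, 10),
    (12, 12, 13),
    (5, 6, 7),
]

core = [relative_recall(f, e) for f, _, e in reviews]
extended = [relative_recall(g, e) for _, g, e in reviews]
print(f"median recall, MEDLINE+EMBASE+CENTRAL: {statistics.median(core):.1%}")
print(f"median recall, plus 10 extra databases: {statistics.median(extended):.1%}")
```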

  13. Development of a medical module for disaster information systems.

    PubMed

    Calik, Elif; Atilla, Rıdvan; Kaya, Hilal; Aribaş, Alirıza; Cengiz, Hakan; Dicle, Oğuz

    2014-01-01

    This study aims to develop a medical module that provides real-time medical information flow for the pre-hospital processes that deliver health care in disasters, transferring, storing and processing records in electronic media and over the internet as part of a disaster information system. To support information flow among professionals in a disaster, coordination of the healthcare team, and the transfer of complete information to designated people in real time, a Microsoft Access database and the SQL query language were used for the database applications. The system was built on the Microsoft .NET platform using the C# language. The disaster information system medical module was designed to be used in the disaster area, field hospitals, nearby hospitals, temporary settlement areas such as tent cities, and dispatch vehicles, and to provide information flow between medical officials and data centres. To allow fast recording of disaster victim data, database access for health care professionals was provided after the process steps were analysed and minimal datasets were created. Database fields were designed to allow entry of new data and searching of data recorded before the disaster. The web application provides data entry to the database and searching through the designed interfaces, according to the access level of the login credentials. The homepage and user interfaces built on the database as a result of the system analysis were made available to users through the www.afmedinfo.com web site. With this study, a recommendation was made about how to use disaster information systems in the field of health. Awareness was raised that a disaster information system should not be perceived only as an early warning system, and the content of disaster information systems and the differences in their health care practices were described. A web application was developed that links the user to the database for data entry and data querying through the developed interfaces.

  14. Building the Ferretome

    PubMed Central

    Sukhinin, Dmitrii I.; Engel, Andreas K.; Manger, Paul; Hilgetag, Claus C.

    2016-01-01

    Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity. PMID:27242503
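
    To illustrate the kind of connectivity matrix the abstract says the interface can produce, here is a minimal sketch (hypothetical area names and injection reports, not data from the Ferretome) that aggregates tract-tracing observations into a source-by-target matrix:

```python
# Hypothetical tract-tracing reports: (injection/source area, labeled target area, strength).
# Area names and values are illustrative, not taken from the Ferretome database.
reports = [
    ("A17", "A18", 2), ("A17", "A19", 1), ("A18", "A17", 3),
    ("PPC", "A17", 1), ("A18", "PPC", 2),
]

areas = sorted({a for r in reports for a in r[:2]})
index = {a: i for i, a in enumerate(areas)}

# Build a dense source-by-target matrix; repeated observations are summed.
matrix = [[0] * len(areas) for _ in areas]
for src, tgt, strength in reports:
    matrix[index[src]][index[tgt]] += strength

print("\t" + "\t".join(areas))
for area, row in zip(areas, matrix):
    print(area + "\t" + "\t".join(str(v) for v in row))
```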

  15. Correcting ligands, metabolites, and pathways

    PubMed Central

    Ott, Martin A; Vriend, Gert

    2006-01-01

    Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as a data field in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and visualization. It is freely available at provided that the copyright notice of all original data is cited. The database will be useful for querying and browsing biochemical pathways, and to obtain reference information for identifying compounds. However, these applications require that the underlying data be correct, and that is the focus of BioMeta. PMID:17132165
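
    The reaction-balance check mentioned above amounts to verifying that every element occurs equally often on both sides of a reaction. A minimal sketch (simplified formula parser and an illustrative reaction only, not BioMeta's actual procedure):

```python
import re
from collections import Counter

def formula_atoms(formula, count=1):
    """Count atoms in a simple molecular formula such as 'C6H12O6' (no brackets or charges)."""
    atoms = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[element] += (int(num) if num else 1) * count
    return atoms

def side_atoms(side):
    """Sum atoms over a list of (stoichiometric coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in side:
        total += formula_atoms(formula, coeff)
    return total

# Illustrative reaction: glucose + ATP -> glucose-6-phosphate + ADP.
substrates = [(1, "C6H12O6"), (1, "C10H16N5O13P3")]
products   = [(1, "C6H13O9P"), (1, "C10H15N5O10P2")]

balanced = side_atoms(substrates) == side_atoms(products)
print("balanced" if balanced else "unbalanced")
```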

  16. Building the Ferretome.

    PubMed

    Sukhinin, Dmitrii I; Engel, Andreas K; Manger, Paul; Hilgetag, Claus C

    2016-01-01

    Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity.

  17. Development of a One-Stop Data Search and Discovery Engine using Ontologies for Semantic Mappings (HydroSeek)

    NASA Astrophysics Data System (ADS)

    Piasecki, M.; Beran, B.

    2007-12-01

    Search engines have changed the way we see the Internet. The ability to find information by just typing in keywords was a big contribution to the overall web experience. While the conventional search engine methodology works well for textual documents, locating scientific data remains a problem, since the data are stored in databases not readily accessible to search engine bots. Considering the different temporal, spatial and thematic coverage of different databases, it is typically necessary, especially for interdisciplinary research, to work with multiple data sources. These sources can be federal agencies, which generally offer national coverage, or regional sources, which cover a smaller area in higher detail. However, for a given geographic area of interest there often exists more than one database with relevant data. Thus being able to query multiple databases simultaneously is a desirable feature that would be tremendously useful for scientists. Development of such a search engine requires dealing with various heterogeneity issues. In scientific databases, systems often impose controlled vocabularies, which ensure that they are generally homogeneous within themselves but semantically heterogeneous when moving between different databases. This bounds the possible semantics-related problems, making them easier to solve than for conventional search engines that deal with free text. We have developed a search engine that enables querying multiple data sources simultaneously and returns data in a standardized output despite the aforementioned heterogeneity issues between the underlying systems. This application relies mainly on metadata catalogs or indexing databases, ontologies and web services, with virtual globe and AJAX technologies for the graphical user interface. Users can trigger a search of dozens of different parameters over hundreds of thousands of stations from multiple agencies by providing a keyword, a spatial extent (i.e., a bounding box), and a temporal bracket. As part of this development we have also added an environment that allows users to do some of the semantic tagging, i.e. the linkage of a variable name (which can be anything they desire) to defined concepts in the ontology structure, which in turn provides the backbone of the search engine.
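
    A minimal sketch of the keyword-to-concept mapping and fan-out query pattern described above (the ontology entries, provider names and result fields are hypothetical, not HydroSeek's actual catalog):

```python
# Hypothetical ontology: maps local variable names from different providers to shared concepts.
ONTOLOGY = {
    "streamflow":       "Discharge",
    "discharge, cfs":   "Discharge",
    "water temp":       "WaterTemperature",
    "temperature, H2O": "WaterTemperature",
}

# Hypothetical per-source catalogs: (station, variable name, lat, lon).
SOURCES = {
    "agency_A":   [("ST-1", "streamflow", 40.1, -75.2), ("ST-2", "water temp", 40.3, -75.1)],
    "regional_B": [("R-9", "discharge, cfs", 40.2, -75.3)],
}

def search(concept, bbox):
    """Return records from all sources whose variable maps to `concept` inside `bbox`."""
    min_lat, min_lon, max_lat, max_lon = bbox
    results = []
    for source, catalog in SOURCES.items():
        for station, variable, lat, lon in catalog:
            if ONTOLOGY.get(variable) != concept:
                continue
            if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
                # Standardized output regardless of the underlying provider.
                results.append({"source": source, "station": station, "concept": concept})
    return results

print(search("Discharge", bbox=(40.0, -75.5, 40.5, -75.0)))
```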

  18. Bedrock geologic map of the Worcester South quadrangle, Worcester County, Massachusetts

    USGS Publications Warehouse

    Walsh, Gregory J.; Merschat, Arthur J.

    2015-09-29

    The bedrock geology was mapped to study the tectonic history of the area and to provide a framework for ongoing hydrogeologic characterization of the fractured bedrock of Massachusetts. This report presents mapping by Gregory J. Walsh and Arthur J. Merschat from 2008 to 2010. The report consists of a map and GIS database, both of which are available for download at http://dx.doi.org/10.3133/sim3345. The database includes contacts of bedrock geologic units, faults, outcrop locations, structural information, and photographs.

  19. Efficiently Distributing Component-based Applications Across Wide-Area Environments

    DTIC Science & Technology

    2002-01-01

    a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...product database Customer Serves as a façade to Order and Account Stateful Session Beans ShoppingCart Maintains list of items to be bought by customer...Pet Store tests; and JBoss 3.0.3 with Jetty 4.1.0, for the RUBiS tests) and a single database server (Oracle 8.1.7 Enterprise Edition), each running

  20. Letter Addressing EPA's Requirements for Nonattainment Areas

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  1. RACT Requirements in Ozone Nonattainment Areas

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  2. Offsets in Nonclassifiable Areas

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  3. Users’ guide to the surgical literature: how to perform a high-quality literature search

    PubMed Central

    Waltho, Daniel; Kaur, Manraj Nirmal; Haynes, R. Brian; Farrokhyar, Forough; Thoma, Achilleas

    2015-01-01

    Summary The article “Users’ guide to the surgical literature: how to perform a literature search” was published in 2003, but the continuing technological developments in databases and search filters have rendered that guide out of date. The present guide fills an existing gap in this area; it provides the reader with strategies for developing a searchable clinical question, creating an efficient search strategy, accessing appropriate databases, and skillfully retrieving the best evidence to address the research question. PMID:26384150

  4. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web service for distributing NASA technical publications, is modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.

  5. Clarification of PSD Guidance For Modeling Class 1 Area Impacts

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  6. Digital version of "Open-File Report 92-179: Geologic map of the Cow Cove Quadrangle, San Bernardino County, California"

    USGS Publications Warehouse

    Wilshire, Howard G.; Bedford, David R.; Coleman, Teresa

    2002-01-01

    3. Plottable map representations of the database at 1:24,000 scale in PostScript and Adobe PDF formats. The plottable files consist of a color geologic map derived from the spatial database, composited with a topographic base map in the form of the USGS Digital Raster Graphic for the map area. Color symbology from each of these datasets is maintained, which can cause plot file sizes to be large.

  7. Class I Area Significant Impact Levels

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  8. Evaluation of "shotgun" proteomics for identification of biological threat agents in complex environmental matrixes: experimental simulations.

    PubMed

    Verberkmoes, Nathan C; Hervey, W Judson; Shah, Manesh; Land, Miriam; Hauser, Loren; Larimer, Frank W; Van Berkel, Gary J; Goeringer, Douglas E

    2005-02-01

    There is currently a great need for rapid detection and positive identification of biological threat agents, as well as microbial species in general, directly from complex environmental samples. This need is most urgent in the area of homeland security, but also extends into the medical, environmental, and agricultural sciences. Mass-spectrometry-based analysis is one of the leading technologies in the field, with a diversity of different methodologies for biothreat detection. Over the past few years, "shotgun" proteomics has become one method of choice for the rapid analysis of complex protein mixtures by mass spectrometry. Recently, it was demonstrated that this methodology is capable of distinguishing a target species against a large database of background species from a single-component sample or from dual-component mixtures at roughly the same concentration. Here, we examine the potential of shotgun proteomics to analyze a target species in a background of four contaminant species. We tested the capability of a common commercial mass-spectrometry-based shotgun proteomics platform for the detection of the target species (Escherichia coli) at four different concentrations and four different time points of analysis. We also tested the effect of database size on positive identification of the four microbes used in this study by testing a small (13-species) database and a large (261-species) database. The results clearly indicated that this technology could easily identify the target species at 20% in the background mixture at a 60, 120, 180, or 240 min analysis time with the small database. The results also indicated that the target species could easily be identified at 20% or 6% but could not be identified at 0.6% or 0.06% in either a 240 min analysis or a 30 h analysis with the small database. The effect of the large database on the target species was severe: detection above the background was impossible at any concentration used in this study, though the three other microbes used in this study were clearly identified above the background when analyzed with the large database. This study points to the potential application of this technology for biological threat agent detection but highlights many areas of needed research before the technology will be useful in real-world samples.

  9. Estimation of vulnerability functions based on a global earthquake damage database

    NASA Astrophysics Data System (ADS)

    Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.

    2009-04-01

    Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses for insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases and comparison with consistent estimates of ground motion. This paper will present a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. It assembles, in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels, and the location of each survey. It is mounted on a GIS mapping system and links to the USGS ShakeMaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include:
    · Basic earthquake data and its sources
    · Details of the survey location and intensity and other ground motion observations or assignments at that location
    · Building and damage level classification, and tabulated damage survey results
    · Photos showing typical examples of damage.
    In future planned extensions of the database, information on human casualties will also be assembled. The database also contains analytical tools enabling data from similar locations, building classes or ground motion levels to be assembled, and thus vulnerability relationships to be derived for any chosen ground motion parameter, for a given class of building, and for particular countries or regions. The paper presents examples of vulnerability relationships for particular classes of buildings and regions of the world, together with the estimated uncertainty ranges. It will discuss the applicability of such vulnerability functions in earthquake loss assessment for insurance purposes or for earthquake risk mitigation.
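
    As a sketch of how empirical vulnerability points can be derived from such survey data (hypothetical survey records and damage scale, not CUEDD's actual schema or data), one can group buildings of a class by ground-motion level and compute the fraction reaching a damage grade:

```python
from collections import defaultdict

# Hypothetical damage-survey records: (building_class, macroseismic_intensity, damage_grade 0-5).
surveys = [
    ("masonry", 7, 2), ("masonry", 7, 3), ("masonry", 7, 0),
    ("masonry", 8, 4), ("masonry", 8, 3), ("masonry", 8, 5),
    ("rc_frame", 8, 1), ("rc_frame", 8, 0),
]

def vulnerability_points(building_class, damage_threshold):
    """Fraction of surveyed buildings at or above `damage_threshold`, per intensity level."""
    totals, exceed = defaultdict(int), defaultdict(int)
    for cls, intensity, grade in surveys:
        if cls != building_class:
            continue
        totals[intensity] += 1
        if grade >= damage_threshold:
            exceed[intensity] += 1
    return {i: exceed[i] / totals[i] for i in sorted(totals)}

# Probability of reaching damage grade >= 3 for masonry buildings, by intensity.
print(vulnerability_points("masonry", damage_threshold=3))
```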

  10. Monitoring of services with non-relational databases and map-reduce framework

    NASA Astrophysics Data System (ADS)

    Babik, M.; Souto, F.

    2012-12-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
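
    A minimal map-reduce style sketch of the kind of aggregation described (computing per-service availability from raw test results; service names, statuses and the in-memory shuffle are illustrative assumptions, not the SAM/SWAT implementation):

```python
from collections import defaultdict

# Hypothetical raw probe results: (service, status) where status is "OK" or an error state.
results = [
    ("CE-siteA", "OK"), ("CE-siteA", "OK"), ("CE-siteA", "CRITICAL"),
    ("SRM-siteB", "OK"), ("SRM-siteB", "OK"),
]

# Map phase: emit (service, (ok_count, total_count)) pairs.
mapped = [(service, (1 if status == "OK" else 0, 1)) for service, status in results]

# Shuffle: group values by key (done by the framework in a real map-reduce system).
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: sum partial counts and compute availability per service.
def reduce_availability(values):
    ok = sum(v[0] for v in values)
    total = sum(v[1] for v in values)
    return ok / total

availability = {service: reduce_availability(vals) for service, vals in grouped.items()}
print(availability)
```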

  11. Global Distribution of Outbreaks of Water-Associated Infectious Diseases

    PubMed Central

    Yang, Kun; LeJeune, Jeffrey; Alsdorf, Doug; Lu, Bo; Shum, C. K.; Liang, Song

    2012-01-01

    Background Water plays an important role in the transmission of many infectious diseases, which pose a great burden on global public health. However, the global distribution of these water-associated infectious diseases and the underlying factors remain largely unexplored. Methods and Findings Based on the Global Infectious Disease and Epidemiology Network (GIDEON), a global database including water-associated pathogens and diseases was developed. In this study, reported outbreak events associated with water-associated infectious diseases from 1991 to 2008 were extracted from the database. The location of each reported outbreak event was identified and geocoded into a GIS database. The GIS database also included geo-referenced socio-environmental information such as population density (2000), annual accumulated temperature, surface water area, and average annual precipitation. Poisson models with Bayesian inference were developed to explore the association between these socio-environmental factors and the distribution of the reported outbreak events. Based on the model predictions, a global relative risk map was generated. A total of 1,428 reported outbreak events were retrieved from the database. The analysis suggested that outbreaks of water-associated diseases are significantly correlated with socio-environmental factors. Population density is a significant risk factor for all categories of reported outbreaks of water-associated diseases; water-related diseases (e.g., vector-borne diseases) are associated with accumulated temperature; water-washed diseases (e.g., conjunctivitis) are inversely related to surface water area; both water-borne and water-related diseases are inversely related to average annual rainfall. Based on the model predictions, “hotspots” of risk for all categories of water-associated diseases were explored. Conclusions At the global scale, water-associated infectious diseases are significantly correlated with socio-environmental factors; all regions are affected, though disproportionately, by the different categories of water-associated infectious diseases. PMID:22348158
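
    A minimal sketch of the type of Poisson count model described, using a plain maximum-likelihood GLM from statsmodels rather than the authors' Bayesian inference; the covariate names, values and counts are made up for illustration:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical grid cells: outbreak counts and geo-referenced covariates (illustrative values).
counts      = np.array([0, 2, 5, 1, 8, 3])
pop_density = np.array([10, 120, 400, 50, 900, 200])       # persons / km^2
precip      = np.array([600, 900, 1200, 700, 1500, 1000])  # mm / year

# Design matrix with an intercept; the Poisson family uses a log link by default.
X = sm.add_constant(np.column_stack([np.log(pop_density), precip]))

model = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(model.params)       # intercept and covariate coefficients
print(model.predict(X))   # expected outbreak counts per cell
```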

  12. Geologic map and map database of the Spreckels 7.5-minute Quadrangle, Monterey County, California

    USGS Publications Warehouse

    Clark, Joseph C.; Brabb, Earl E.; Rosenberg, Lewis I.; Goss, Heather V.; Watkins, Sarah E.

    2001-01-01

    Introduction The Spreckels quadrangle lies at the north end of the Sierra de Salinas and extends from the Salinas Valley on the northeast across Los Laurelles Ridge south to Carmel Valley, an intermontane valley that separates the Santa Lucia Range from the Sierra de Salinas (fig. 1). The Toro Regional Park occupies the east-central part of the quadrangle, whereas the former Fort Ord Military Reservation covers the northwestern part of the area and is the probable locus of future development. Subdivisions largely occupy the older floodplain of Toro Creek and the adjacent foothills, with less dense development along the narrower canyons of Corral de Tierra and San Benancio Gulch to the south. The foothills southwest of the Salinas River are the site of active residential development. Geologically, the study area has a crystalline basement of Upper Cretaceous granitic rocks of the Salinian block and older metasedimentary rocks of the schist of the Sierra de Salinas of probable Cretaceous age. Resting nonconformably upon these basement rocks is a sedimentary section that ranges in age from middle Miocene to Holocene and has a composite thickness of as much as 1,200 m. One of the purposes of the present study was to investigate the apparent lateral variation of the middle to upper Miocene sections from the typical porcelaneous and diatomaceous Monterey Formation of the Monterey and Seaside quadrangles to the west (Clark and others, 1997) to a thick marine sandstone section in the eastern part of the Spreckels quadrangle. Liquefaction, which seriously affected the Spreckels area in the 1906 San Francisco earthquake (Lawson, 1908), and landsliding are the two major geological hazards of the area. The landslides consist mainly of older large slides in the southern and younger debris flows in the northern part of the quadrangle. This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (skmf.txt, skmf.pdf, or skmf.ps), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:24,000 or smaller.

  13. The development of a new database of gas emissions: MAGA, a collaborative web environment for collecting data

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Chiodini, G.; Frigeri, A.; Bagnato, E.; Aiuppa, A.; McCormick, B.

    2013-12-01

    The data on volcanic and non-volcanic gas emissions available online are, as of today, incomplete and, most importantly, fragmentary. Hence, there is a need for common frameworks to aggregate the available data in order to characterize and quantify the phenomena at various spatial and temporal scales. Building on the Googas experience, we are now extending its capability, particularly on the user side, by developing a new web environment for collecting and publishing data. We have started to create a new and detailed web database (MAGA: MApping GAs emissions) for deep carbon degassing in the Mediterranean area. This project is part of the Deep Earth Carbon Degassing (DECADE) research initiative, launched in 2012 by the Deep Carbon Observatory (DCO) to improve the global budget of endogenous carbon from volcanoes. The MAGA database is planned to complement and integrate with the work in progress within DECADE on developing the CARD (Carbon Degassing) database. The MAGA database will allow researchers to insert data interactively and dynamically into a spatially referenced relational database management system, as well as to extract data. MAGA kicked off with the database set-up and a complete literature survey of publications on volcanic gas fluxes, including data on active crater degassing, diffuse soil degassing and fumaroles both from dormant closed-conduit volcanoes (e.g., Vulcano, Phlegrean Fields, Santorini, Nisyros, Teide, etc.) and open-vent volcanoes (e.g., Etna, Stromboli, etc.) in the Mediterranean area and the Azores. For each geo-located gas emission site, the database holds images and a description of the site and of the emission type (e.g., diffuse emission, plume, fumarole, etc.), gas chemical-isotopic composition (when available), gas temperature and gas flux magnitudes. Gas sampling, analysis and flux measurement methods are also reported, together with references and contacts for researchers with expertise on the site. Data can be accessed over the network from a web interface or as a data-driven web service, where software clients can request data directly from the database. This way Geographical Information Systems (GIS) and Virtual Globes (e.g., Google Earth) can easily access the database, and data can be exchanged with other databases. In detail, the database now includes: i) more than 1000 flux measurements of volcanic plume degassing from the Etna (4 summit craters and bulk degassing) and Stromboli volcanoes, with time-averaged CO2 fluxes of ~18,000 and 766 t/d, respectively; ii) data from ~30 sites of diffuse soil degassing at the Neapolitan volcanoes, Azores, Canary Islands, Etna, Stromboli, and Vulcano Island, with a wide range of CO2 fluxes (from less than 1 to 1500 t/d); and iii) several data on fumarolic emissions (~7 sites) with CO2 fluxes up to 1340 t/day (i.e., Stromboli). When available, time series of compositional data have been archived in the database (e.g., for Campi Flegrei fumaroles). We believe the MAGA database is an important starting point for developing a large-scale, expandable database aimed to excite, inspire, and encourage participation among researchers. In addition, the possibility to archive location and qualitative information for gas emission sites not yet investigated could stimulate the scientific community toward future research and will provide an indication of the current uncertainty in global estimates of deep carbon fluxes.

  14. A database of the coseismic effects following the 30 October 2016 Norcia earthquake in Central Italy

    PubMed Central

    Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; De Martini, Paolo Marco; Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; De Martini, Paolo Marco; Agosta, F.; Alessio, G.; Alfonsi, L.; Amanti, M.; Amoroso, S.; Aringoli, D.; Auciello, E.; Azzaro, R.; Baize, S.; Bello, S.; Benedetti, L.; Bertagnini, A.; Binda, G.; Bisson, M.; Blumetti, A.M.; Bonadeo, L.; Boncio, P.; Bornemann, P.; Branca, S.; Braun, T.; Brozzetti, F.; Brunori, C.A.; Burrato, P.; Caciagli, M.; Campobasso, C.; Carafa, M.; Cinti, F.R.; Cirillo, D.; Comerci, V.; Cucci, L.; De Ritis, R.; Deiana, G.; Del Carlo, P.; Del Rio, L.; Delorme, A.; Di Manna, P.; Di Naccio, D.; Falconi, L.; Falcucci, E.; Farabollini, P.; Faure Walker, J.P.; Ferrarini, F.; Ferrario, M.F.; Ferry, M.; Feuillet, N.; Fleury, J.; Fracassi, U.; Frigerio, C.; Galluzzo, F.; Gambillara, R.; Gaudiosi, G.; Goodall, H.; Gori, S.; Gregory, L.C.; Guerrieri, L.; Hailemikael, S.; Hollingsworth, J.; Iezzi, F.; Invernizzi, C.; Jablonská, D.; Jacques, E.; Jomard, H.; Kastelic, V.; Klinger, Y.; Lavecchia, G.; Leclerc, F.; Liberi, F.; Lisi, A.; Livio, F.; Lo Sardo, L.; Malet, J.P.; Mariucci, M.T.; Materazzi, M.; Maubant, L.; Mazzarini, F.; McCaffrey, K.J.W.; Michetti, A.M.; Mildon, Z.K.; Montone, P.; Moro, M.; Nave, R.; Odin, M.; Pace, B.; Paggi, S.; Pagliuca, N.; Pambianchi, G.; Pantosti, D.; Patera, A.; Pérouse, E.; Pezzo, G.; Piccardi, L.; Pierantoni, P.P.; Pignone, M.; Pinzi, S.; Pistolesi, E.; Point, J.; Pousse, L.; Pozzi, A.; Proposito, M.; Puglisi, C.; Puliti, I.; Ricci, T.; Ripamonti, L.; Rizza, M.; Roberts, G.P.; Roncoroni, M.; Sapia, V.; Saroli, M.; Sciarra, A.; Scotti, O.; Skupinski, G.; Smedile, A.; Soquet, A.; Tarabusi, G.; Tarquini, S.; Terrana, S.; Tesson, J.; Tondi, E.; Valentini, A.; Vallone, R.; Van der Woerd, J.; Vannoli, P.; Venuti, A.; Vittori, E.; Volatili, T.; Wedmore, L.N.J.; Wilkinson, M.; Zambrano, M.

    2018-01-01

    We provide a database of the coseismic geological surface effects following the Mw 6.5 Norcia earthquake that hit central Italy on 30 October 2016. This was one of the strongest seismic events to occur in Europe in the past thirty years, causing complex surface ruptures over an area of >400 km2. The database originated from the collaboration of several European teams (Open EMERGEO Working Group; about 130 researchers) coordinated by the Istituto Nazionale di Geofisica e Vulcanologia. The observations were collected by performing detailed field surveys in the epicentral region in order to describe the geometry and kinematics of surface faulting, and subsequently of landslides and other secondary coseismic effects. The resulting database consists of homogeneous georeferenced records identifying 7323 observation points, each of which contains 18 numeric and string fields of relevant information. This database will impact future earthquake studies focused on modelling of the seismic processes in active extensional settings, updating probabilistic estimates of slip distribution, and assessing the hazard of surface faulting. PMID:29583143

  15. C&RE-SLC: Database for conservation and renewable energy activities

    NASA Astrophysics Data System (ADS)

    Cavallo, J. D.; Tompkins, M. M.; Fisher, A. G.

    1992-08-01

    The Western Area Power Administration (Western) requires all its long-term power customers to implement programs that promote the conservation of electric energy or facilitate the use of renewable energy resources. The hope is that these measures could significantly reduce the amount of environmental damage associated with electricity production. As part of preparing the environmental impact statement for Western's Electric Power Marketing Program, Argonne National Laboratory constructed a database of the conservation and renewable energy activities in which Western's Salt Lake City customers are involved. The database provides information on types of conservation and renewable energy activities and allows for comparisons of activities being conducted at different utilities in the Salt Lake City region. Sorting the database allows Western's Salt Lake City customers to be classified so the various activities offered by different classes of utilities can be identified; for example, comparisons can be made between municipal utilities and cooperatives or between large and small customers. The information included in the database was collected from customer planning documents in the files of Western's Salt Lake City office.

  16. OrChem - An open source chemistry search engine for Oracle(R).

    PubMed

    Rijnbeek, Mark; Steinbeck, Christoph

    2009-10-22

    Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
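
    Fingerprint-based similarity searching of the sort OrChem provides is typically built on a Tanimoto comparison of bit sets; the sketch below (pure Python, toy fingerprints, not OrChem's CDK-based implementation) illustrates filtering a compound table by a similarity cut-off:

```python
# Toy fingerprints: each compound is represented by the set of its "on" bit positions.
# Real systems derive these from the chemical structure (e.g. path- or circular fingerprints).
database = {
    "cmpd_1": {1, 4, 9, 12, 20},
    "cmpd_2": {1, 4, 9, 13, 21, 30},
    "cmpd_3": {2, 5, 17},
}

def tanimoto(a, b):
    """Tanimoto coefficient of two bit sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def similarity_search(query_fp, cutoff=0.5):
    """Return (id, score) pairs for database entries at or above the similarity cut-off."""
    hits = [(cid, tanimoto(query_fp, fp)) for cid, fp in database.items()]
    return sorted([h for h in hits if h[1] >= cutoff], key=lambda h: -h[1])

print(similarity_search({1, 4, 9, 12, 22}, cutoff=0.4))
```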

  17. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    NASA Astrophysics Data System (ADS)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata is presented. To develop the multi-source geospatial database we have used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server, which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database, where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.

  18. The ability of older adults to use customized online medical databases to improve their health-related knowledge.

    PubMed

    Freund, Ophir; Reychav, Iris; McHaney, Roger; Goland, Ella; Azuri, Joseph

    2017-06-01

    Patient compliance with medical advice and recommended treatment depends on perception of health condition, medical knowledge, attitude, and self-efficacy. This study investigated how use of customized online medical databases, intended to improve knowledge in a variety of relevant medical topics, influenced senior adults' perceptions. Seventy-nine older adults in residence homes completed a computerized, tablet-based questionnaire, with medical scenarios and related questions. Following an intervention, control group participants answered questions without online help while an experimental group received internet links that directed them to customized, online medical databases. Medical knowledge and test scores among the experimental group significantly improved from pre- to post-intervention (p<0.0001) and was higher in comparison with the control group (p<0.0001). No significant change occurred in the control group. Older adults improved their knowledge in desired medical topic areas using customized online medical databases. The study demonstrated how such databases help solve health-related questions among older adult population members, and that older patients appear willing to consider technology usage in information acquisition. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Expert searching in public health

    PubMed Central

    Alpi, Kristine M.

    2005-01-01

    Objective: The article explores the characteristics of public health information needs and the resources available to address those needs that distinguish it as an area of searching requiring particular expertise. Methods: Public health searching activities from reference questions and literature search requests at a large, urban health department library were reviewed to identify the challenges in finding relevant public health information. Results: The terminology of the information request frequently differed from the vocabularies available in the databases. Searches required the use of multiple databases and/or Web resources with diverse interfaces. Issues of the scope and features of the databases relevant to the search questions were considered. Conclusion: Expert searching in public health differs from other types of expert searching in the subject breadth and technical demands of the databases to be searched, the fluidity and lack of standardization of the vocabulary, and the relative scarcity of high-quality investigations at the appropriate level of geographic specificity. Health sciences librarians require a broad exposure to databases, gray literature, and public health terminology to perform as expert searchers in public health. PMID:15685281

  20. A database of the coseismic effects following the 30 October 2016 Norcia earthquake in Central Italy.

    PubMed

    Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; De Martini, Paolo Marco

    2018-03-27

    We provide a database of the coseismic geological surface effects following the Mw 6.5 Norcia earthquake that hit central Italy on 30 October 2016. This was one of the strongest seismic events to occur in Europe in the past thirty years, causing complex surface ruptures over an area of >400 km2. The database originated from the collaboration of several European teams (Open EMERGEO Working Group; about 130 researchers) coordinated by the Istituto Nazionale di Geofisica e Vulcanologia. The observations were collected by performing detailed field surveys in the epicentral region in order to describe the geometry and kinematics of surface faulting, and subsequently of landslides and other secondary coseismic effects. The resulting database consists of homogeneous georeferenced records identifying 7323 observation points, each of which contains 18 numeric and string fields of relevant information. This database will impact future earthquake studies focused on modelling of the seismic processes in active extensional settings, updating probabilistic estimates of slip distribution, and assessing the hazard of surface faulting.

  1. Windshear database for forward-looking systems certification

    NASA Technical Reports Server (NTRS)

    Switzer, G. F.; Proctor, F. H.; Hinton, D. A.; Aanstoos, J. V.

    1993-01-01

    This document contains a description of a comprehensive database that is to be used for certification testing of airborne forward-look windshear detection systems. The database was developed by NASA Langley Research Center, at the request of the Federal Aviation Administration (FAA), to support the industry initiative to certify and produce forward-look windshear detection equipment. The database contains high resolution, three dimensional fields for meteorological variables that may be sensed by forward-looking systems. The database is made up of seven case studies which have been generated by the Terminal Area Simulation System, a state-of-the-art numerical system for the realistic modeling of windshear phenomena. The selected cases represent a wide spectrum of windshear events. General descriptions and figures from each of the case studies are included, as well as equations for F-factor, radar-reflectivity factor, and rainfall rate. The document also describes scenarios and paths through the data sets, jointly developed by NASA and the FAA, to meet FAA certification testing objectives. Instructions for reading and verifying the data from tape are included.
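
    For background, the F-factor mentioned above is commonly written in the aviation windshear literature in the form below; this is the widely quoted definition, included here only as context, and the exact formulation used in the certification database may differ:

```latex
% Windshear hazard index (F-factor), common published form:
%   \dot{W}_x : rate of change of the horizontal wind along the flight path
%   w         : vertical wind component (positive upward)
%   V         : airspeed,  g : gravitational acceleration
F \;=\; \frac{\dot{W}_x}{g} \;-\; \frac{w}{V}
```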

  2. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    PubMed

    Powell, Kimberly R; Peterson, Shenita R

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in their journal coverage and quality, so a comparative analysis of the reported impact of nursing publications was conducted. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated by each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus, while Web of Science covered 82%. No significant difference was found in the quality of the reported journals. Author h-index was found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
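
    The h-index used here as the author-impact measure is simple to compute from a citation list; a minimal sketch with illustrative citation counts only:

```python
def h_index(citations):
    """Largest h such that the author has at least h papers with at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Illustrative: the same author can score differently if two databases index different citations.
print(h_index([10, 8, 5, 4, 3]))   # 4
print(h_index([12, 9, 6, 5, 5]))   # 5 -> a database with broader coverage may report a higher h
```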

  3. BGDB: a database of bivalent genes

    PubMed Central

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    A bivalent gene is a gene marked with both the H3K4me3 and H3K27me3 epigenetic modifications in the same area, and is proposed to play a pivotal role related to pluripotency in embryonic stem (ES) cells. Identification of these bivalent genes and understanding their functions are important for further research on lineage specification and embryo development. So far, a large amount of genome-wide histone modification data has been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repositories or analysis tools are currently available for bivalent genes. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which were manually collected from the scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/ PMID:23894186
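
    Identifying a bivalent gene amounts to checking whether a gene's region overlaps peaks of both marks; a minimal interval-overlap sketch (toy coordinates, not BGDB's curation pipeline):

```python
# Toy peak calls per histone mark: (chromosome, start, end). Coordinates are illustrative.
h3k4me3_peaks  = [("chr1", 1000, 2000), ("chr1", 50000, 51000)]
h3k27me3_peaks = [("chr1", 1500, 2600), ("chr2", 7000, 8000)]

# Toy gene promoter regions: gene -> (chromosome, start, end).
promoters = {
    "geneA": ("chr1", 900, 2200),     # overlaps both marks -> bivalent
    "geneB": ("chr1", 49000, 52000),  # overlaps H3K4me3 only
    "geneC": ("chr2", 6500, 7500),    # overlaps H3K27me3 only
}

def overlaps(region, peaks):
    chrom, start, end = region
    return any(c == chrom and start < pe and ps < end for c, ps, pe in peaks)

bivalent = [g for g, region in promoters.items()
            if overlaps(region, h3k4me3_peaks) and overlaps(region, h3k27me3_peaks)]
print(bivalent)   # ['geneA']
```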

  4. A database of the coseismic effects following the 30 October 2016 Norcia earthquake in Central Italy

    NASA Astrophysics Data System (ADS)

    Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; de Martini, Paolo Marco; Villani, Fabio; Civico, Riccardo; Pucci, Stefano; Pizzimenti, Luca; Nappi, Rosa; de Martini, Paolo Marco; Agosta, F.; Alessio, G.; Alfonsi, L.; Amanti, M.; Amoroso, S.; Aringoli, D.; Auciello, E.; Azzaro, R.; Baize, S.; Bello, S.; Benedetti, L.; Bertagnini, A.; Binda, G.; Bisson, M.; Blumetti, A. M.; Bonadeo, L.; Boncio, P.; Bornemann, P.; Branca, S.; Braun, T.; Brozzetti, F.; Brunori, C. A.; Burrato, P.; Caciagli, M.; Campobasso, C.; Carafa, M.; Cinti, F. R.; Cirillo, D.; Comerci, V.; Cucci, L.; de Ritis, R.; Deiana, G.; Del Carlo, P.; Del Rio, L.; Delorme, A.; di Manna, P.; di Naccio, D.; Falconi, L.; Falcucci, E.; Farabollini, P.; Faure Walker, J. P.; Ferrarini, F.; Ferrario, M. F.; Ferry, M.; Feuillet, N.; Fleury, J.; Fracassi, U.; Frigerio, C.; Galluzzo, F.; Gambillara, R.; Gaudiosi, G.; Goodall, H.; Gori, S.; Gregory, L. C.; Guerrieri, L.; Hailemikael, S.; Hollingsworth, J.; Iezzi, F.; Invernizzi, C.; Jablonská, D.; Jacques, E.; Jomard, H.; Kastelic, V.; Klinger, Y.; Lavecchia, G.; Leclerc, F.; Liberi, F.; Lisi, A.; Livio, F.; Lo Sardo, L.; Malet, J. P.; Mariucci, M. T.; Materazzi, M.; Maubant, L.; Mazzarini, F.; McCaffrey, K. J. W.; Michetti, A. M.; Mildon, Z. K.; Montone, P.; Moro, M.; Nave, R.; Odin, M.; Pace, B.; Paggi, S.; Pagliuca, N.; Pambianchi, G.; Pantosti, D.; Patera, A.; Pérouse, E.; Pezzo, G.; Piccardi, L.; Pierantoni, P. P.; Pignone, M.; Pinzi, S.; Pistolesi, E.; Point, J.; Pousse, L.; Pozzi, A.; Proposito, M.; Puglisi, C.; Puliti, I.; Ricci, T.; Ripamonti, L.; Rizza, M.; Roberts, G. P.; Roncoroni, M.; Sapia, V.; Saroli, M.; Sciarra, A.; Scotti, O.; Skupinski, G.; Smedile, A.; Soquet, A.; Tarabusi, G.; Tarquini, S.; Terrana, S.; Tesson, J.; Tondi, E.; Valentini, A.; Vallone, R.; van der Woerd, J.; Vannoli, P.; Venuti, A.; Vittori, E.; Volatili, T.; Wedmore, L. N. J.; Wilkinson, M.; Zambrano, M.

    2018-03-01

    We provide a database of the coseismic geological surface effects following the Mw 6.5 Norcia earthquake that hit central Italy on 30 October 2016. This was one of the strongest seismic events to occur in Europe in the past thirty years, causing complex surface ruptures over an area of >400 km2. The database originated from the collaboration of several European teams (Open EMERGEO Working Group; about 130 researchers) coordinated by the Istituto Nazionale di Geofisica e Vulcanologia. The observations were collected by performing detailed field surveys in the epicentral region in order to describe the geometry and kinematics of surface faulting, and subsequently of landslides and other secondary coseismic effects. The resulting database consists of homogeneous georeferenced records identifying 7323 observation points, each of which contains 18 numeric and string fields of relevant information. This database will impact future earthquake studies focused on modelling of the seismic processes in active extensional settings, updating probabilistic estimates of slip distribution, and assessing the hazard of surface faulting.

  5. “Gone are the days of mass-media marketing plans and short term customer relationships”: tobacco industry direct mail and database marketing strategies

    PubMed Central

    Lewis, M Jane; Ling, Pamela M

    2015-01-01

    Background As limitations on traditional marketing tactics and scrutiny by tobacco control have increased, the tobacco industry has benefited from direct mail marketing which transmits marketing messages directly to carefully targeted consumers utilising extensive custom consumer databases. However, research in these areas has been limited. This is the first study to examine the development, purposes and extent of direct mail and customer databases. Methods We examined direct mail and database marketing by RJ Reynolds and Philip Morris utilising internal tobacco industry documents from the Legacy Tobacco Document Library employing standard document research techniques. Results Direct mail marketing utilising industry databases began in the 1970s and grew from the need for a promotional strategy to deal with declining smoking rates, growing numbers of products and a cluttered media landscape. Both RJ Reynolds and Philip Morris started with existing commercial consumer mailing lists, but subsequently decided to build their own databases of smokers’ names, addresses, brand preferences, purchase patterns, interests and activities. By the mid-1990s both RJ Reynolds and Philip Morris databases contained at least 30 million smokers’ names each. These companies valued direct mail/database marketing’s flexibility, efficiency and unique ability to deliver specific messages to particular groups as well as direct mail’s limited visibility to tobacco control, public health and regulators. Conclusions Database marketing is an important and increasingly sophisticated tobacco marketing strategy. Additional research is needed on the prevalence of receipt and exposure to direct mail items and their influence on receivers’ perceptions and smoking behaviours. PMID:26243810

  6. Development Of New Databases For Tsunami Hazard Analysis In California

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Borrero, J. C.; Bryant, W. A.; Dengler, L. A.; Goltz, J. D.; Legg, M.; McGuire, T.; Miller, K. M.; Real, C. R.; Synolakis, C.; Uslu, B.

    2009-12-01

    The California Geological Survey (CGS) has partnered with other tsunami specialists to produce two statewide databases to facilitate the evaluation of tsunami hazard products for both emergency response and land-use planning and development. A robust, State-run tsunami deposit database is being developed that complements and expands on existing databases from the National Geophysical Data Center (global) and the USGS (Cascadia). Whereas these existing databases focus on references or individual tsunami layers, the new State-maintained database concentrates on the location and contents of individual borings/trenches that sample tsunami deposits. These data provide an important observational benchmark for evaluating the results of tsunami inundation modeling. CGS is collaborating with and sharing the database entry form with other states to encourage its continued development beyond California’s coastline so that historic tsunami deposits can be evaluated on a regional basis. CGS is also developing an internet-based tsunami source scenario database and forum where tsunami source experts and hydrodynamic modelers can discuss the validity of tsunami sources and their contribution to hazard assessments for California and other coastal areas bordering the Pacific Ocean. The database includes all distant and local tsunami sources relevant to California, starting with the forty scenarios evaluated during the creation of the recently completed statewide series of tsunami inundation maps for emergency response planning. Factors germane to probabilistic tsunami hazard analyses (PTHA), such as event histories and recurrence intervals, are also addressed in the database and discussed in the forum. Discussions with other tsunami source experts will help CGS determine what additional scenarios should be considered in PTHA for assessing the feasibility of generating products of value to local land-use planning and development.

  7. Monitoring urban land cover change by updating the national land cover database impervious surface products

    USGS Publications Warehouse

    Xian, George Z.; Homer, Collin G.

    2009-01-01

    The U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 is widely used as a baseline for national land cover and impervious conditions. To ensure timely and relevant data, it is important to update this base to a more recent time period. A prototype method was developed to update the land cover and impervious surface by individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season from both 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, impervious surface was estimated for areas of change by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain a variety of metropolitan areas. Results from the five study areas show that the vast majority of impervious surface changes associated with urban developments were accurately captured and updated. The approach optimizes mapping efficiency and can provide users a flexible method to generate updated impervious surface at national and regional scales.
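
    A minimal sketch of the change-vector step described above (two co-registered band stacks compared per pixel, with a conservative threshold marking change; array shapes, band count and threshold value are illustrative, not the NLCD production parameters):

```python
import numpy as np

# Two hypothetical co-registered, normalized image stacks: (bands, rows, cols).
rng = np.random.default_rng(0)
img_2001 = rng.random((6, 100, 100))
img_2006 = img_2001.copy()
img_2006[:, 40:60, 40:60] += 0.4          # simulate spectral change in a developed patch

# Change-vector magnitude per pixel: Euclidean distance across bands between the two dates.
change_magnitude = np.sqrt(((img_2006 - img_2001) ** 2).sum(axis=0))

# A conservative threshold separates change from no-change pixels.
THRESHOLD = 0.5                            # assumed value for illustration
change_mask = change_magnitude > THRESHOLD

# Update strategy: keep baseline impervious estimates where unchanged,
# re-estimate (here: placeholder NaN) only where change was detected.
impervious_2001 = rng.integers(0, 101, size=(100, 100)).astype(float)
impervious_2006 = np.where(change_mask, np.nan, impervious_2001)

print(int(change_mask.sum()), "pixels flagged for re-estimation")
```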

  8. Integrated database for rapid mass movements in Norway

    NASA Astrophysics Data System (ADS)

    Jaedicke, C.; Lied, K.; Kronholm, K.

    2009-03-01

    Rapid gravitational slope mass movements include all kinds of short-term relocation of geological material, snow or ice. Traditionally, information about such events is collected separately in different databases covering selected geographical regions and types of movement. In Norway the terrain is susceptible to all types of rapid gravitational slope mass movements, ranging from single rocks hitting roads and houses to large snow avalanches and rock slides where entire mountainsides collapse into fjords, creating flood waves and endangering large areas. In addition, quick clay slides occur in desalinated marine sediments in South Eastern and Mid Norway. For the authorities and inhabitants of endangered areas, the type of threat is of minor importance and mitigation measures have to consider several types of rapid mass movements simultaneously. An integrated national database for all types of rapid mass movements built around individual events has been established. Only three data entries are mandatory: time, location and type of movement. The remaining optional parameters enable recording of detailed information about the terrain, materials involved and damages caused. Pictures, movies and other documentation can be uploaded into the database. A web-based graphical user interface has been developed allowing new events to be entered, as well as editing and querying for all events. An integration of the database into a GIS system is currently under development. Datasets from various national sources like the road authorities and the Geological Survey of Norway were imported into the database. Today, the database contains 33 000 rapid mass movement events from the last five hundred years covering the entire country. A first analysis of the data shows that the most frequent types of recorded rapid mass movement are rock slides and snow avalanches, followed by debris slides. Most events are recorded in the steep fjord terrain of the Norwegian west coast, but major events are recorded all over the country. Snow avalanches account for most fatalities, while large rock slides causing flood waves and huge quick clay slides are the most damaging individual events in terms of damage to infrastructure and property and for causing multiple fatalities. The quality of the data is strongly influenced by the personal engagement of local observers and varying observation routines. This database is a unique source for statistical analyses, including risk analysis and the relation between rapid mass movements and climate. The database of rapid mass movement events will also facilitate validation of national hazard and risk maps.
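
    To make the data model concrete, the following minimal sketch shows how the three mandatory fields (time, location, type of movement) and a few optional detail columns might be laid out; the table and column names are invented for illustration and are not the schema of the Norwegian database, which is web-based with a GIS back end.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE mass_movement_event (
                event_id      INTEGER PRIMARY KEY,
                event_time    TEXT NOT NULL,     -- mandatory: when the event occurred
                latitude      REAL NOT NULL,     -- mandatory: where (location)
                longitude     REAL NOT NULL,
                movement_type TEXT NOT NULL,     -- mandatory: e.g. 'snow avalanche'
                terrain_notes TEXT,              -- optional details
                material      TEXT,
                damage        TEXT,
                fatalities    INTEGER
            )
        """)
        conn.execute(
            "INSERT INTO mass_movement_event "
            "(event_time, latitude, longitude, movement_type, fatalities) VALUES (?, ?, ?, ?, ?)",
            ("1905-01-15", 62.1, 7.1, "rock slide", 0),
        )
        # Example query: count events per movement type, as in the frequency analysis above.
        for row in conn.execute(
                "SELECT movement_type, COUNT(*) FROM mass_movement_event "
                "GROUP BY movement_type"):
            print(row)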

  9. Digital database of the geologic map of the island of Hawai'i [Hawaii

    USGS Publications Warehouse

    Trusdell, Frank A.; Wolfe, Edward W.; Morris, Jean

    2006-01-01

    This online publication (DS 144) provides the digital database for the printed map by Edward W. Wolfe and Jean Morris (I-2524-A; 1996). This digital database contains all the information used to publish U.S. Geological Survey Geologic Investigations Series I-2524-A (available only in paper form; see http://pubs.er.usgs.gov/pubs/i/i2524A). The database contains the distribution and relationships of volcanic and surficial-sedimentary deposits on the island of Hawai‘i. This dataset represents the geologic history for the five volcanoes that comprise the Island of Hawai'i. The volcanoes are Kohala, Mauna Kea, Hualalai, Mauna Loa and Kīlauea. This database of the geologic map contributes to understanding the geologic history of the Island of Hawai‘i and provides the basis for understanding long-term volcanic processes in an intra-plate ocean island volcanic system. In addition, the database serves as a basis for producing volcanic hazards assessments for the island of Hawai‘i. Furthermore, it serves as a base layer to be used for interdisciplinary research. This online publication consists of a digital database of the geologic map, an explanatory pamphlet, description of map units, correlation of map units diagram, and images for plotting. Geologic mapping was compiled at a scale of 1:100,000 for the entire mapping area. The geologic mapping was compiled as a digital geologic database in ArcInfo GIS format.

  10. GEOGRAPHIC INFORMATION SYSTEM APPROACH FOR PLAY PORTFOLIOS TO IMPROVE OIL PRODUCTION IN THE ILLINOIS BASIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beverly Seyler; John Grube

    2004-12-10

    Oil and gas have been commercially produced in Illinois for over 100 years. Existing commercial production is from more than fifty-two named pay horizons in Paleozoic rocks ranging in age from Middle Ordovician to Pennsylvanian. Over 3.2 billion barrels of oil have been produced. Recent calculations indicate that remaining mobile resources in the Illinois Basin may be on the order of several billion barrels. Thus, large quantities of oil, potentially recoverable using current technology, remain in Illinois oil fields despite a century of development. Many opportunities for increased production may have been missed due to complex development histories, multiple stacked pays, and commingled production, which makes thorough exploitation of pays and the application of secondary or improved/enhanced recovery strategies difficult. Access to data, and the techniques required to evaluate and manage large amounts of diverse data, are major barriers to increased production of critical reserves in the Illinois Basin. These constraints are being alleviated by the development of a database access system using a Geographic Information System (GIS) approach for evaluation and identification of underdeveloped pays. The Illinois State Geological Survey has developed a methodology that is being used by industry to identify underdeveloped areas (UDAs) in and around petroleum reservoirs in Illinois using a GIS approach. This project utilizes a statewide oil and gas Oracle® database to develop a series of Oil and Gas Base Maps with well location symbols that are color-coded by producing horizon. Producing horizons are displayed as layers and can be selected as separate or combined layers that can be turned on and off. Map views can be customized to serve individual needs and page size maps can be printed. A core analysis database with over 168,000 entries has been compiled and assimilated into the ISGS Enterprise Oracle database. Maps of wells with core data have been generated. Data from over 1,700 Illinois waterflood units and waterflood areas have been entered into an Access® database. The waterflood area data have also been assimilated into the ISGS Oracle database for mapping and dissemination on the ArcIMS website. Formation depths for the Beech Creek Limestone, Ste. Genevieve Limestone and New Albany Shale in all of the oil producing region of Illinois have been calculated and entered into a digital database. Digital contoured structure maps have been constructed, edited and added to the ILoil website as map layers. This technology/methodology addresses the long-standing constraints related to information access and data management in Illinois by significantly simplifying the laborious process that industry presently must use to identify underdeveloped pay zones in Illinois.
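
    The layer-per-horizon idea can be illustrated with a small, hypothetical sketch: wells are grouped by producing horizon so that each horizon behaves like a map layer that can be shown or hidden. The real data live in the ISGS Oracle database and are served through ArcIMS; the well records and column names below are invented for illustration only.

        import pandas as pd

        # Hypothetical well records (the real data reside in the ISGS Oracle database).
        wells = pd.DataFrame({
            "api_number": ["121234", "121235", "121236", "121237"],
            "x": [344100.0, 344800.0, 345300.0, 346000.0],
            "y": [4210500.0, 4210900.0, 4211300.0, 4211800.0],
            "producing_horizon": ["Ste. Genevieve", "Aux Vases", "Ste. Genevieve", "Cypress"],
        })

        # One "layer" per producing horizon, analogous to turning map layers on and off.
        layers = {name: grp[["x", "y"]] for name, grp in wells.groupby("producing_horizon")}
        for horizon, points in layers.items():
            print(horizon, len(points), "wells")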

  11. A generic method for improving the spatial interoperability of medical and ecological databases.

    PubMed

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical and ecological databases in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than for most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to tackle the change of support problem.
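
    The two-step idea (build a mapping table from transition matrices, then validate it with a shared covariate) can be sketched with a toy example. The unit codes, birth counts, and the one-to-one assignment below are invented; this is not the French crosswalk itself, only an illustration of the mechanics.

        import pandas as pd

        # Original spatial units of the two databases (hypothetical codes and counts).
        medical = pd.DataFrame({"med_unit": ["M1", "M2"], "births_med": [120, 80]})
        ecological = pd.DataFrame({"eco_unit": ["E1", "E2", "E3"], "births_eco": [70, 55, 75]})

        # Mapping table derived from the transition matrices: each original unit
        # is assigned to exactly one final spatial unit.
        mapping = pd.DataFrame({
            "final_unit": ["F1", "F1", "F2", "F2", "F2"],
            "med_unit":   ["M1", None, "M2", None, None],
            "eco_unit":   [None, "E1", None, "E2", "E3"],
        })

        med_final = (mapping.dropna(subset=["med_unit"]).merge(medical, on="med_unit")
                            .groupby("final_unit")["births_med"].sum())
        eco_final = (mapping.dropna(subset=["eco_unit"]).merge(ecological, on="eco_unit")
                            .groupby("final_unit")["births_eco"].sum())

        # Validation step: relative difference of the shared covariate per final unit.
        rel_diff_pct = ((med_final - eco_final).abs() / med_final * 100).round(1)
        print(rel_diff_pct)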

  12. The methane sink associated to soils of natural and agricultural ecosystems in Italy.

    PubMed

    Castaldi, Simona; Costantini, Massimo; Cenciarelli, Pietro; Ciccioli, Paolo; Valentini, Riccardo

    2007-01-01

    In the present work, the CH4 sink associated with Italian soils was calculated by using a process-based model controlled by gas diffusivity and microbial activity, which was run by using a raster-based geographical information system. Georeferenced data included land cover CLC2000, soil properties from the European Soil Database, climatic data from the MARS-STAT database, plus several derived soil properties based on published algorithms applied to the above mentioned databases. Overall CH4 consumption from natural and agricultural sources accounted for a total of 43.3 Gg CH4 yr(-1), with 28.1 Gg CH4 yr(-1) removed in natural ecosystems and 15.1 Gg CH4 yr(-1) in agricultural ecosystems. The highest CH4 uptake rates were obtained for natural areas of the Southern Apennines and the islands of Sardinia and Sicily, and were mainly associated with areas covered by sclerophyllous vegetation (259.7+/-30.2 mg CH4 m(-2) yr(-1)) and broad-leaved forest (237.5 mg CH4 m(-2) yr(-1)). In terms of total sink strength, broad-leaved forests were the dominant ecosystem. The overall contribution of each ecosystem type to the whole CH4 sink depended on the total area covered by the specific ecosystem and on its exact geographic distribution. The latter determines the type of climate present in the area and the dominant soil type, both factors shown to have a strong influence on CH4 uptake rates. The aggregated CH4 sink, calculated for natural ecosystems present in the Italian region, is significantly higher than previously reported estimates, which were extrapolated from fluxes measured in other temperate ecosystems.
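
    The national total reported above is, in essence, a sum over land-cover classes of an areal uptake rate times the area of that class, converted to Gg per year. The sketch below shows only that unit bookkeeping; the two quoted uptake rates come from the abstract, while the third rate and all areas are made-up placeholder values.

        # Total sink (Gg CH4/yr) = sum over classes of uptake (mg CH4 m-2 yr-1) * area (m2),
        # converted from mg to Gg (1 Gg = 1e12 mg).
        uptake_mg_per_m2_yr = {
            "sclerophyllous vegetation": 259.7,   # from the abstract
            "broad-leaved forest": 237.5,         # from the abstract
            "arable land": 90.0,                  # hypothetical value
        }
        area_m2 = {                               # purely illustrative areas
            "sclerophyllous vegetation": 1.0e10,
            "broad-leaved forest": 4.0e10,
            "arable land": 8.0e10,
        }
        total_gg = sum(uptake_mg_per_m2_yr[c] * area_m2[c] for c in uptake_mg_per_m2_yr) / 1e12
        print(f"total CH4 sink: {total_gg:.1f} Gg/yr")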

  13. The use of karst geomorphology for planning, hazard avoidance and development in Great Britain

    NASA Astrophysics Data System (ADS)

    Cooper, Anthony H.; Farrant, Andrew R.; Price, Simon J.

    2011-11-01

    Within Great Britain five main types of karstic rocks - dolomite, limestone, chalk, gypsum and salt - are present. Each presents a different type and severity of karstic geohazard which are related to the rock solubility and geological setting. Typical karstic features associated with these rocks have been databased by the British Geological Survey (BGS) with records of sinkholes, cave entrances, stream sinks, resurgences and building damage; data for more than half of the country has been gathered. BGS has manipulated digital map data, for bedrock and superficial deposits, with digital elevation slope models, superficial deposit thickness models, the karst data and expertly interpreted areas, to generate a derived dataset assessing the likelihood of subsidence due to karst collapse. This dataset is informed and verified by the karst database and marketed as part of the BGS GeoSure suite. It is currently used by environmental regulators, the insurance and construction industries, and the BGS semi-automated enquiry system. The database and derived datasets can be further combined and manipulated using GIS to provide other datasets that deal with specific problems. Sustainable drainage systems, some of which use soak-aways into the ground, are being encouraged in Great Britain, but in karst areas they can cause ground stability problems. Similarly, open loop ground source heat or cooling pump systems may induce subsidence if installed in certain types of karstic environments such as in chalk with overlying sand deposits. Groundwater abstraction also has the potential to trigger subsidence in karst areas. GIS manipulation of the karst information is allowing Great Britain to be zoned into areas suitable, or unsuitable, for such uses; it has the potential to become part of a suite of planning management tools for local and National Government to assess the long term sustainable use of the ground.

  14. Digital geologic map of the Thirsty Canyon NW quadrangle, Nye County, Nevada

    USGS Publications Warehouse

    Minor, S.A.; Orkild, P.P.; Sargent, K.A.; Warren, R.G.; Sawyer, D.A.; Workman, J.B.

    1998-01-01

    This digital geologic map compilation presents new polygon (i.e., geologic map unit contacts), line (i.e., fault, fold axis, dike, and caldera wall), and point (i.e., structural attitude) vector data for the Thirsty Canyon NW 7 1/2' quadrangle in southern Nevada. The map database, which is at 1:24,000-scale resolution, provides geologic coverage of an area of current hydrogeologic and tectonic interest. The Thirsty Canyon NW quadrangle is located in southern Nye County about 20 km west of the Nevada Test Site (NTS) and 30 km north of the town of Beatty. The map area is underlain by extensive layers of Neogene (about 14 to 4.5 million years old [Ma]) mafic and silicic volcanic rocks that are temporally and spatially associated with transtensional tectonic deformation. Mapped volcanic features include part of a late Miocene (about 9.2 Ma) collapse caldera, a Pliocene (about 4.5 Ma) shield volcano, and two Pleistocene (about 0.3 Ma) cinder cones. Also documented are numerous normal, oblique-slip, and strike-slip faults that reflect regional transtensional deformation along the southern part of the Walker Lane belt. The Thirsty Canyon NW map provides new geologic information for modeling groundwater flow paths that may enter the map area from underground nuclear testing areas located in the NTS about 25 km to the east. The geologic map database comprises six component ArcINFO map coverages that can be accessed after decompressing and unbundling the data archive file (tcnw.tar.gz). These six coverages (tcnwpoly, tcnwflt, tcnwfold, tcnwdike, tcnwcald, and tcnwatt) are formatted here in ArcINFO EXPORT format. Bundled with this database are two PDF files for readily viewing and printing the map, accessory graphics, and a description of map units and compilation methods.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, P.K.

    A total of 48 papers were presented at the Engineering Geology and Geotechnical Engineering 30th Symposium. These papers are presented in this proceedings under the following headings: site characterization--Pocatello area; site characterization--Boise Area; site assessment; Idaho National Engineering Laboratory; geophysical methods; remediation; geotechnical engineering; and hydrogeology, northern and western Idaho. Individual papers have been processed separately for inclusion in the Energy Science and Technology Database.

  16. Young People's Use and Views of a School-Based Sexual Health Drop-In Service in Areas of High Deprivation

    ERIC Educational Resources Information Center

    Ingram, Jenny; Salmon, Debra

    2010-01-01

    Objective: To describe patterns and reasons of attendance and young people's views of the drop-in service. Design: Analysis of a prospective database, questionnaire survey and qualitative interviews and discussions. Setting: Sexual health drop-in clinics in 16 secondary schools (including three pupil-referral units) in deprived areas of a city in…

  17. COLE: A Web-based Tool for Interfacing with Forest Inventory Data

    Treesearch

    Patrick Proctor; Linda S. Heath; Paul C. Van Deusen; Jeffery H. Gove; James E. Smith

    2005-01-01

    We are developing an online computer program to provide forest carbon related estimates for the conterminous United States (COLE). Version 1.0 of the program features carbon estimates based on data from the USDA Forest Service Eastwide Forest Inventory database. The program allows the user to designate an area of interest, and currently provides area, growing-stock...

  18. Temporal variation in synchrony among chinook salmon (Oncorhynchus tshawytscha) redd counts from a wilderness area in central Idaho

    Treesearch

    D. J. Isaak; R. F. Thurow; B. E. Rieman; J. B. Dunham

    2003-01-01

    Metapopulation dynamics have emerged as a key consideration in conservation planning for salmonid fishes. Implicit to many models of spatially structured populations is a degree of synchrony, or correlation, among populations. We used a spatially and temporally extensive database of chinook salmon (Oncorhynchus tshawytscha) redd counts from a wilderness area in central...

  19. Ensemble of ground subsidence hazard maps using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Park, Inhye; Lee, Jiyeong; Saro, Lee

    2014-06-01

    Hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok, Korea, were constructed using fuzzy ensemble techniques and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, groundwater, and ground subsidence maps. Spatial data, topography, geology, and various ground-engineering data for the subsidence area were collected and compiled in a database for mapping ground-subsidence hazard (GSH). The subsidence area was randomly split 70/30 for training and validation of the models. The relationships between the detected ground-subsidence area and the factors were identified and quantified by frequency ratio (FR), logistic regression (LR) and artificial neural network (ANN) models. The relationships were used as factor ratings in the overlay analysis to create ground-subsidence hazard indexes and maps. The three GSH maps were then used as new input factors and integrated using fuzzy-ensemble methods to make better hazard maps. All of the hazard maps were validated by comparison with known subsidence areas that were not used directly in the analysis. As a result, the ensemble model was found to be more effective in terms of prediction accuracy than the individual models.
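
    One common way to fuse several normalized hazard-index layers, consistent with the fuzzy-ensemble idea described above, is the fuzzy gamma operator. The sketch below applies it to three stand-in rasters representing the FR, LR, and ANN indexes; the gamma value and the random inputs are illustrative assumptions, and the paper's actual fuzzy operators may differ.

        import numpy as np

        def fuzzy_gamma(layers, gamma=0.9):
            """Combine index layers scaled to [0, 1] using the fuzzy gamma operator:
            (fuzzy algebraic sum)**gamma * (fuzzy algebraic product)**(1 - gamma)."""
            stack = np.stack(layers)
            fuzzy_sum = 1.0 - np.prod(1.0 - stack, axis=0)
            fuzzy_prod = np.prod(stack, axis=0)
            return fuzzy_sum ** gamma * fuzzy_prod ** (1.0 - gamma)

        rng = np.random.default_rng(1)
        fr_map, lr_map, ann_map = (rng.random((50, 50)) for _ in range(3))  # stand-ins
        ensemble = fuzzy_gamma([fr_map, lr_map, ann_map], gamma=0.9)
        print(ensemble.shape, float(ensemble.max()))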

  20. Carbon Storages in Plantation Ecosystems in Sand Source Areas of North Beijing, China

    PubMed Central

    Liu, Xiuping; Zhang, Wanjun; Cao, Jiansheng; Shen, Huitao; Zeng, Xinhua; Yu, Zhiqiang; Zhao, Xin

    2013-01-01

    Afforestation is a mitigation option to reduce increased atmospheric carbon dioxide levels and the associated likelihood of climate change. In this paper, vegetation survey data, a statistical database, the National Forest Resource Inventory database, and allometric equations were used to estimate carbon density (carbon mass per hectare) and carbon storage, and to identify the size and spatial distribution of forest carbon sinks in plantation ecosystems in the sand source areas of north Beijing, China. From 2001 to the end of 2010, the forest area increased by more than 2.3 million ha, and total carbon storage in forest ecosystems was 173.02 Tg C, of which 82.80 percent was contained in soil in the top 0–100 cm layer. Younger forests have a larger potential for enhancing carbon sequestration in terrestrial ecosystems than older ones. Regarding future afforestation efforts, it will be more effective to increase forest area and vegetation carbon density through selection of appropriate tree species and stand structure according to local climate and soil conditions, and application of proper forest management including land-shaping, artificial tending and fencing of plantations. It would also be important to protect the organic carbon in surface soils during forest management. PMID:24349223

  1. TIGER/Line Shapefile, 2010, 2010 Census Block State-based

    EPA Pesticide Factsheets

    The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census Blocks are statistical areas bounded on all sides by visible features, such as streets, roads, streams, and railroad tracks, and/or by nonvisible boundaries such as city, town, township, and county limits, and short line-of-sight extensions of streets and roads. Census blocks are relatively small in area; for example, a block in a city bounded by streets. However, census blocks in remote areas are often large and irregular and may even be many square miles in area. A common misunderstanding is that data users think census blocks are used geographically to build all other census geographic areas; rather, all other census geographic areas are updated and then used as the primary constraints, along with roads and water features, to delineate the tabulation blocks. As a result, all 2010 Census blocks nest within every other 2010 Census geographic area, so that Census Bureau statistical data can be tabulated at the block level and aggregated up to the appropr

  2. Lightning fatalities and injuries in Turkey

    NASA Astrophysics Data System (ADS)

    Tilev-Tanriover, Ş.; Kahraman, A.; Kadioğlu, M.; Schultz, D. M.

    2015-08-01

    A database of lightning-related fatalities and injuries in Turkey was constructed by collecting data from the Turkish State Meteorological Service, newspaper archives, European Severe Weather Database, and the internet. The database covers January 1930 to June 2014. In total, 742 lightning incidents causing human fatalities and injuries were found. Within these 742 incidents, there were 895 fatalities, 149 serious injuries, and 535 other injuries. Most of the incidents (89 %) occurred during April through September, with a peak in May and June (26 and 28 %) followed by July (14 %). Lightning-related fatalities and injuries were most frequent in the afternoon. Most of the incidents (86 %) occurred in rural areas, with only 14 % in the urban areas. Approximately two-thirds of the victims with known gender were male. Because of the unrepresentativeness of the historical data, determining an average mortality rate over a long period is not possible. Nevertheless, there were 31 fatalities (0.42 per million) in 2012, 26 fatalities (0.35 per million) in 2013, and 25 fatalities (0.34 per million) in 2014 (as of June). There were 36 injuries (0.49 per million) in each of 2012 and 2013, and 62 injuries (0.84 per million) in 2014 (as of June).

  3. Lightning fatalities and injuries in Turkey

    NASA Astrophysics Data System (ADS)

    Tilev-Tanriover, Ş.; Kahraman, A.; Kadioğlu, M.; Schultz, D. M.

    2015-03-01

    A database of lightning-related fatalities and injuries in Turkey was constructed by collecting data from the Turkish State Meteorological Service, newspaper archives, European Severe Weather Database, and the internet. The database covers January 1930 to June 2014. In total, 742 lightning incidents causing human fatalities and injuries were found. Within these 742 incidents, there were 895 fatalities, 149 serious injuries, and 535 other injuries. Most of the incidents (89%) occurred during April through September, with a peak in May and June (26 and 28 %) followed by July (14%). Lightning-related fatalities and injuries were most frequent in the afternoon. Most of the incidents (86%) occurred in the rural areas, with only 14% in the urban areas. Approximately two-thirds of the victims with known gender were male. Because of the unrepresentativeness of the historical data, determining an average mortality rate over a long period is not possible. Nevertheless, there were 31 fatalities (0.42 per million) in 2012, 26 fatalities (0.35 per million) in 2013, and 25 fatalities (0.34 per million) in 2014 (as of June). There were 36 injuries (0.49 per million) in each of 2012 and 2013, and 62 injuries (0.84 per million) in 2014 (as of June).

  4. Design research about coastal zone planning and management information system based on GIS and database technologies

    NASA Astrophysics Data System (ADS)

    Huang, Pei; Wu, Sangyun; Feng, Aiping; Guo, Yacheng

    2008-10-01

    With their concentrated populations, abundant resources, developed industry, and active economies, littoral areas are bound to become the forward positions and supporting regions for marine exploitation. In the 21st century, coastal zones face pressures that include population growth and urbanization, sea-level rise and coastal erosion, freshwater shortage and deterioration of water quality, and degradation of fishery resources. The resources of coastal zones should therefore be planned and used rationally to support the sustainable development of the economy and the environment. This paper presents a design study for a coastal zone planning and management information system based on GIS and database technologies. With this system, coastal zone planning results can be queried and displayed conveniently through the system interface. It is concluded that the integrated application of GIS and database technologies provides a modern method for managing coastal zone resources and makes it possible to ensure their rational development and utilization, along with the sustainable development of the economy and the environment.

  5. Developing consistent Landsat data sets for large area applications: the MRLC 2001 protocol

    USGS Publications Warehouse

    Chander, G.; Huang, Chengquan; Yang, Limin; Homer, Collin G.; Larson, C.

    2009-01-01

    One of the major efforts in large area land cover mapping over the last two decades was the completion of two U.S. National Land Cover Data sets (NLCD), developed with nominal 1992 and 2001 Landsat imagery under the auspices of the MultiResolution Land Characteristics (MRLC) Consortium. Following the successful generation of NLCD 1992, a second generation MRLC initiative was launched with two primary goals: (1) to develop a consistent Landsat imagery data set for the U.S. and (2) to develop a second generation National Land Cover Database (NLCD 2001). One of the key enhancements was the formulation of an image preprocessing protocol and implementation of a consistent image processing method. The core data set of the NLCD 2001 database consists of Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images. This letter details the procedures for processing the original ETM+ images and more recent scenes added to the database. NLCD 2001 products include Anderson Level II land cover classes, percent tree canopy, and percent urban imperviousness at 30-m resolution derived from Landsat imagery. The products are freely available for download to the general public from the MRLC Consortium Web site at http://www.mrlc.gov.

  6. [Bibliometric and thematic analysis of the scientific literature about omega-3 fatty acids indexed in international databases on health sciences].

    PubMed

    Sanz-Valero, J; Gil, Á; Wanden-Berghe, C; Martínez de Victoria, E

    2012-11-01

    To evaluate by bibliometric and thematic analysis the scientific literature on omega-3 fatty acids indexed in international databases on health sciences and to establish a comparative base for future analysis. Searches were conducted with the descriptor (MeSH, as Major Topic) "Fatty Acids, Omega-3" from the first date available until December 31, 2010. Databases consulted: MEDLINE (via PubMed), EMBASE, ISI Web of Knowledge, CINAHL and LILACS. The most common type of document was the original article. Obsolescence was set at 5 years. The most frequent country of affiliation of first authors was the USA, and the articles were written predominantly in English. The study population was 90.98% (95% CI 89.25 to 92.71) adult humans. The documents were classified into 59 subject areas, and the most studied topic associated with omega-3, at 16.24% (95% CI 14.4 to 18.04), was cardiovascular disease. This study indicates that the scientific literature on omega-3 fatty acids is a vigorous, fully active area of knowledge. Anglo-Saxon institutions dominate the scientific production, which is mainly oriented toward the study of cardiovascular disease.

  7. Chemical analyses of coal, coal-associated rocks and coal combustion products collected for the National Coal Quality Inventory

    USGS Publications Warehouse

    Hatch, Joseph R.; Bullock, John H.; Finkelman, Robert B.

    2006-01-01

    In 1999, the USGS initiated the National Coal Quality Inventory (NaCQI) project to address a need for quality information on coals that will be mined during the next 20-30 years. At the time this project was initiated, the publicly available USGS coal quality data was based on samples primarily collected and analyzed between 1973 and 1985. The primary objective of NaCQI was to create a database containing comprehensive, accurate and accessible chemical information on the quality of mined and prepared United States coals and their combustion byproducts. This objective was to be accomplished through maintaining the existing publicly available coal quality database, expanding the database through the acquisition of new samples from priority areas, and analysis of the samples using updated coal analytical chemistry procedures. Priorities for sampling include those areas where future sources of compliance coal are federally owned. This project was a cooperative effort between the U.S. Geological Survey (USGS), State geological surveys, universities, coal burning utilities, and the coal mining industry. Funding support came from the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE).

  8. Predicting Language Outcome and Recovery After Stroke (PLORAS)

    PubMed Central

    Price, CJ; Seghier, ML; Leff, AP

    2013-01-01

    The ability to comprehend and produce speech after stroke depends on whether the areas of the brain that support language have been damaged. Here we review two different ways to predict language outcome after stroke. The first depends on understanding the neural circuits that support language. This model-based approach is a challenging endeavor because language is a complex cognitive function that involves the interaction of many different brain areas. The second approach does not require an understanding of why a lesion impairs language; instead, predictions are made on the basis of how previous patients with the same lesion recovered. This requires a database storing the speech and language abilities of a large population of patients who have, between them, incurred a comprehensive range of focal brain damage. In addition, it requires a system that converts an MRI scan from a new patient into a 3D description of the lesion and then compares this lesion to all others in the database. The outputs of this system are the longitudinal language outcomes of corresponding patients in the database. This will provide a new patient, their carers, and the clinical team managing them with the range of likely recovery patterns over a variety of language measures. PMID:20212513

  9. Analyzing legacy U.S. Geological Survey geochemical databases using GIS: applications for a national mineral resource assessment

    USGS Publications Warehouse

    Yager, Douglas B.; Hofstra, Albert H.; Granitto, Matthew

    2012-01-01

    This report emphasizes geographic information system analysis and the display of data stored in the legacy U.S. Geological Survey National Geochemical Database for use in mineral resource investigations. Geochemical analyses of soils, stream sediments, and rocks that are archived in the National Geochemical Database provide an extensive data source for investigating geochemical anomalies. A study area in the Egan Range of east-central Nevada was used to develop a geographic information system analysis methodology for two different geochemical datasets involving detailed (Bureau of Land Management Wilderness) and reconnaissance-scale (National Uranium Resource Evaluation) investigations. ArcGIS was used to analyze and thematically map geochemical information at point locations. Watershed-boundary datasets served as a geographic reference to relate potentially anomalous sample sites with hydrologic unit codes at varying scales. The National Hydrography Dataset was analyzed with Hydrography Event Management and ArcGIS Utility Network Analyst tools to delineate potential sediment-sample provenance along a stream network. These tools can be used to track potential upstream-sediment-contributing areas to a sample site. This methodology identifies geochemically anomalous sample sites, watersheds, and streams that could help focus mineral resource investigations in the field.
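
    The watershed-based screening described above boils down to a spatial join of sample points to watershed polygons followed by an aggregate test. The sketch below assumes the geopandas and shapely packages are available (the predicate keyword requires geopandas 0.10 or later); the sample values, HUC codes, geometries, and the anomaly threshold are all invented for illustration.

        import geopandas as gpd
        from shapely.geometry import Point, box

        # Hypothetical stand-ins for NGDB sample points and watershed (HUC) polygons.
        samples = gpd.GeoDataFrame(
            {"site_id": [1, 2, 3], "As_ppm": [4.0, 35.0, 6.0]},
            geometry=[Point(0.2, 0.3), Point(0.7, 0.8), Point(1.4, 0.2)],
        )
        watersheds = gpd.GeoDataFrame(
            {"huc12": ["A", "B"]},
            geometry=[box(0, 0, 1, 1), box(1, 0, 2, 1)],
        )

        # Attach the containing watershed to each sample, then flag watersheds whose
        # maximum arsenic value exceeds an illustrative threshold.
        joined = gpd.sjoin(samples, watersheds, how="inner", predicate="within")
        max_by_watershed = joined.groupby("huc12")["As_ppm"].max()
        print(max_by_watershed[max_by_watershed > 20.0])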

  10. Experimental study on interfacial area transport in downward two-phase flow

    NASA Astrophysics Data System (ADS)

    Wang, Guanyi

    In view of the importance of two-group interfacial area transport equations and the lack of a corresponding accurate downward-flow database that can reveal two-group interfacial area transport, a systematic database for adiabatic, air-water, vertically downward two-phase flow in a round pipe with an inner diameter of 25.4 mm was collected to gain insight into the interfacial structure and to provide benchmarking data for two-group interfacial area transport models. A four-sensor conductivity probe was used to measure the local two-phase flow parameters, and data were collected at a sampling frequency much higher than is conventional to ensure accuracy. The axial development of local flow parameter profiles, including void fraction, interfacial area concentration, and Sauter mean diameter, is presented. Drastic inter-group transfer of void fraction and interfacial area was observed in bubbly-to-slug transition flow, and wall-peaked interfacial area concentration profiles were observed in churn-turbulent flow. The importance of such local data for flow-structure prediction and for benchmarking the interfacial area transport equation was analyzed. Besides, to investigate the effect of inlet conditions, all experiments were repeated after installing a flow-straightening facility, and the results were briefly analyzed. To check the accuracy of the current data, the experimental results were cross-checked against rotameter measurements and drift-flux model predictions; the average error is less than 15%. Current models for the two-group interfacial area transport equation were evaluated using these data. The results show that two-group interfacial area transport equations with current models can predict most flow conditions with error less than 20%, except for some bubbly-to-slug transition flow conditions and some churn-turbulent flow conditions. The disagreement between models and experiments could result from an underestimation of inter-group void transfer.
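
    The three local parameters reported above are tied together by the standard relation D_sm = 6*alpha/a_i, where alpha is the void fraction and a_i the interfacial area concentration; a short numerical check follows, with illustrative values rather than measurements from the study.

        def sauter_mean_diameter(void_fraction, interfacial_area_concentration):
            """D_sm = 6 * alpha / a_i, in metres when a_i is given in 1/m."""
            return 6.0 * void_fraction / interfacial_area_concentration

        alpha = 0.10      # local void fraction (-), illustrative
        a_i = 120.0       # interfacial area concentration (1/m), illustrative
        print(f"D_sm = {sauter_mean_diameter(alpha, a_i) * 1000:.1f} mm")  # -> 5.0 mm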

  11. Application of logistic regression for landslide susceptibility zoning of Cekmece Area, Istanbul, Turkey

    NASA Astrophysics Data System (ADS)

    Duman, T. Y.; Can, T.; Gokceoglu, C.; Nefeslioglu, H. A.; Sonmez, H.

    2006-11-01

    As a result of industrialization, cities throughout the world have been growing rapidly for the last century. One typical example of these growing cities is Istanbul, the population of which is over 10 million. Due to rapid urbanization, new areas suitable for settlement and engineering structures are necessary. The Cekmece area located west of the Istanbul metropolitan area is studied, because landslide activity is extensive in this area. The purpose of this study is to develop a model that can be used to characterize landslide susceptibility in map form using logistic regression analysis of an extensive landslide database. A database of landslide activity was constructed using both aerial photography and field studies. About 19.2% of the selected study area is covered by deep-seated landslides. The landslides that occur in the area are primarily located in sandstones with interbedded permeable and impermeable layers such as claystone, siltstone and mudstone. About 31.95% of the total landslide area is located in this unit. To apply logistic regression analyses, a data matrix including 37 variables was constructed. The variables used in the forward stepwise analyses are different measures of slope, aspect, elevation, stream power index (SPI), plan curvature, profile curvature, geology, geomorphology and relative permeability of lithological units. A total of 25 variables were identified as exerting a strong influence on landslide occurrence and were included in the logistic regression equation. Wald statistics values indicate that lithology, SPI and slope are more important than the other parameters in the equation. Beta coefficients of the 25 variables included in the logistic regression equation provide a model for landslide susceptibility in the Cekmece area. This model is used to generate a landslide susceptibility map that correctly classified 83.8% of the landslide-prone areas.
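
    The susceptibility-mapping step amounts to fitting a logistic regression on landslide presence/absence against the predictor variables and mapping the predicted probability per cell. The sketch below uses scikit-learn with synthetic stand-ins for a few of the 25 predictors; the variable choices, coefficients, and data are illustrative and do not reproduce the Cekmece model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 1000
        # Stand-ins for a few of the predictor variables (slope, SPI, lithology code).
        X = np.column_stack([
            rng.uniform(0, 45, n),     # slope (degrees)
            rng.uniform(0, 10, n),     # stream power index
            rng.integers(0, 5, n),     # lithology class code
        ])
        # Synthetic landslide presence/absence, for demonstration only.
        y = (0.05 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 1, n) > 3.0).astype(int)

        model = LogisticRegression(max_iter=1000).fit(X, y)
        susceptibility = model.predict_proba(X)[:, 1]   # probability in [0, 1] per cell
        print("mean susceptibility:", round(float(susceptibility.mean()), 3))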

  12. Publication trend, resource utilization, and impact of the US National Cancer Database: A systematic review.

    PubMed

    Su, Chang; Peng, Cuiying; Agbodza, Ena; Bai, Harrison X; Huang, Yuqian; Karakousis, Giorgos; Zhang, Paul J; Zhang, Zishu

    2018-03-01

    The utilization and impact of the studies published using the National Cancer Database (NCDB) are currently unclear. In this study, we aim to characterize the published studies and identify relatively unexplored areas for future investigations. A literature search was performed using PubMed in January 2017 to identify all papers published using NCDB data. Characteristics of the publications were extracted. Citation frequencies were obtained through the Web of Science. Three hundred two articles written by 230 first authors met the inclusion criteria. The number of publications has grown exponentially since 2013, with 108 articles published in 2016. Articles were published in 86 journals. The majority of the published papers focused on digestive system cancer, while bone and joints, eye and orbit, myeloma, mesothelioma, and Kaposi sarcoma were never studied. Thirteen institutions in the United States were associated with more than 5 publications. The papers have been cited a total of 9858 times since the publication of the first paper in 1992. Frequently appearing keywords congregated into 3 clusters: "demographics," "treatments and survival," and "statistical analysis method." Even though the main focuses of the articles captured an extremely wide range, they can be classified into 2 main categories: survival analysis and characterization. Other focuses include database(s) analysis and/or comparison, and hospital reporting. The surging interest in the use of NCDB is accompanied by unequal utilization of resources by individuals and institutions. Certain areas were relatively understudied and should be further explored.

  13. Nursing leadership succession planning in Veterans Health Administration: creating a useful database.

    PubMed

    Weiss, Lizabeth M; Drake, Audrey

    2007-01-01

    An electronic database was developed for succession planning and placement of nursing leaders who are interested in, and ready, willing, and able to accept, an assignment in a nursing leadership position. The tool is a 1-page form used to identify candidates for nursing leadership assignments. This tool has been deployed nationally, with access to the database restricted to nurse executives at every Veterans Health Administration facility for the purpose of entering the names of developed nurse leaders ready for a leadership assignment. The tool is easily accessed through the Veterans Health Administration Office of Nursing Service and, by limiting access to the nurse executive group, ensures that the candidates identified are qualified. Information included on the survey tool covers the candidate's demographics and other certifications/credentials. This completed information form is entered into a database from which a report can be generated, resulting in a listing of potential candidates to contact to supplement a local or Veterans Integrated Service Network-wide position announcement. The data forms can be sorted by positions, areas of clinical or functional experience, training programs completed, and geographic preference. The forms can be edited, updated, added, or deleted in the system as the need is identified. This tool allows facilities with limited internal candidates to have a resource of Department of Veterans Affairs-prepared staff from which to seek additional candidates. It also provides a way for interested candidates to be considered for positions outside of their local geographic area.

  14. An integrated biomedical telemetry system for sleep monitoring employing a portable body area network of sensors (SENSATION).

    PubMed

    Astaras, Alexander; Arvanitidou, Marina; Chouvarda, Ioanna; Kilintzis, Vassilis; Koutkias, Vassilis; Sanchez, Eduardo Monton; Stalidis, George; Triantafyllidis, Andreas; Maglaveras, Nicos

    2008-01-01

    A flexible, scaleable and cost-effective medical telemetry system is described for monitoring sleep-related disorders in the home environment. The system was designed and built for real-time data acquisition and processing, allowing for additional use in intensive care unit scenarios where rapid medical response is required in case of emergency. It comprises a wearable body area network of Zigbee-compatible wireless sensors worn by the subject, a central database repository residing in the medical centre and thin client workstations located at the subject's home and in the clinician's office. The system supports heterogeneous setup configurations, involving a variety of data acquisition sensors to suit several medical applications. All telemetry data is securely transferred and stored in the central database under the clinicians' ownership and control.

  15. Decision Analysis for Remediation Technologies (DART) user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sebo, D.

    1997-09-01

    This user's manual is an introduction to the use of the Decision Analysis for Remediation Technology (DART) Report Generator. DART provides a user interface to a database containing site data (e.g., contaminants, waste depth, area) for sites within the Subsurface Contaminant Focus Area (SCFA). The database also contains SCFA requirements, needs, and technology information. The manual is arranged in two major sections. The first section describes loading DART onto a user system. The second section describes DART operation. DART operation is organized into sections by the user interface forms. For each form, user input (both optional and required), DART capabilities, and the result of user selections are covered in sufficient detail to enable the user to understand DART capabilities and determine how to use DART to meet specific needs.

  16. Geologic map of outcrop areas of sedimentary units in the eastern part of the Hailey 1 degree x 2 degrees quadrangle and part of the southern part of the Challis 1 degree x 2 degrees quadrangle, south-central Idaho

    USGS Publications Warehouse

    Link, P.K.; Mahoney, J.B.; Bruner, D.J.; Batatian, L.D.; Wilson, Eric; Williams, F.J.C.

    1995-01-01

    The paper version of the Geologic map of outcrop areas of sedimentary units in the eastern part of the Hailey 1x2 Quadrangle and part of the southern part of the Challis 1x2 Quadrangle, south-central Idaho was compiled by Paul Link and others in 1995. The plate was compiled on a 1:100,000-scale topographic base map. TechniGraphic System, Inc. of Fort Collins, Colorado, digitized this map under contract for N. Shock. G. Green edited and prepared the digital version for publication as a GIS database. The digital geologic map database can be queried in many ways to produce a variety of geologic maps.

  17. Evaluation of scientific periodicals and the Brazilian production of nursing articles.

    PubMed

    Erdmann, Alacoque Lorenzini; Marziale, Maria Helena Palucci; Pedreira, Mavilde da Luz Gonçalves; Lana, Francisco Carlos Félix; Pagliuca, Lorita Marlena Freitag; Padilha, Maria Itayra; Fernandes, Josicelia Dumêt

    2009-01-01

    This study aimed to identify nursing journals edited in Brazil that are indexed in the main bibliographic databases in the areas of health and nursing. It also aimed to classify the production of nursing graduate programs in 2007 according to the QUALIS/CAPES criteria used to classify scientific periodicals that disseminate the intellectual production of graduate programs in Brazil. This exploratory study mapped scientific production using data from reports and documents available from CAPES and from searches of the main international and national indexing databases. The findings from this research can help students, professors and coordinators of graduate programs in several ways: to understand the criteria for classifying periodicals; to be aware of the current production of graduate programs in the area of nursing; and to provide information that authors can use to select periodicals in which to publish their articles.

  18. Wide-area continuous offender monitoring

    NASA Astrophysics Data System (ADS)

    Hoshen, Joseph; Drake, George; Spencer, Debra D.

    1997-02-01

    The corrections system in the U.S. is supervising over five million offenders. This number is rising fast and so are the direct and indirect costs to society. To improve supervision and reduce the cost of parole and probation, first generation home arrest systems were introduced in 1987. While these systems proved to be helpful to the corrections system, their scope is rather limited because they only cover an offender at a single location and provide only a partial time coverage. To correct the limitations of first- generation systems, second-generation wide area continuous electronic offender monitoring systems, designed to monitor the offender at all times and locations, are now on the drawing board. These systems use radio frequency location technology to track the position of offenders. The challenge for this technology is the development of reliable personal locator devices that are small, lightweight, with long operational battery life, and indoors/outdoors accuracy of 100 meters or less. At the center of a second-generation system is a database that specifies the offender's home, workplace, commute, and time the offender should be found in each. The database could also define areas from which the offender is excluded. To test compliance, the system would compare the observed coordinates of the offender with the stored location for a given time interval. Database logfiles will also enable law enforcement to determine if a monitored offender was present at a crime scene and thus include or exclude the offender as a potential suspect.
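
    The compliance test sketched above (compare an observed fix against the scheduled place and time window, within the locator's positional tolerance) can be written down compactly. The class layout, coordinates, and tolerance below are illustrative assumptions, not the design of any deployed system.

        from dataclasses import dataclass
        from datetime import datetime
        import math

        @dataclass
        class ScheduleEntry:
            start: datetime
            end: datetime
            lat: float
            lon: float
            radius_m: float   # allowed positional tolerance (e.g. ~100 m locator accuracy)

        def distance_m(lat1, lon1, lat2, lon2):
            """Approximate ground distance for small separations (equirectangular)."""
            k = 111320.0  # metres per degree of latitude
            dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
            dy = (lat2 - lat1) * k
            return math.hypot(dx, dy)

        def compliant(entry, observed_time, observed_lat, observed_lon):
            """True if an observed fix falls inside the scheduled place and time window."""
            in_window = entry.start <= observed_time <= entry.end
            in_zone = distance_m(entry.lat, entry.lon, observed_lat, observed_lon) <= entry.radius_m
            return in_window and in_zone

        home = ScheduleEntry(datetime(1997, 2, 1, 19), datetime(1997, 2, 2, 7),
                             35.0800, -106.6500, 100.0)
        print(compliant(home, datetime(1997, 2, 1, 22), 35.0805, -106.6505))  # True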

  19. Wide area continuous offender monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoshen, J.; Drake, G.; Spencer, D.

    The corrections system in the U.S. is supervising over five million offenders. This number is rising fast and so are the direct and indirect costs to society. To improve supervision and reduce the cost of parole and probation, first generation home arrest systems were introduced in 1987. While these systems proved to be helpful to the corrections system, their scope is rather limited because they only cover an offender at a single location and provide only a partial time coverage. To correct the limitations of first-generation systems, second-generation wide area continuous electronic offender monitoring systems, designed to monitor the offender at all times and locations, are now on the drawing board. These systems use radio frequency location technology to track the position of offenders. The challenge for this technology is the development of reliable personal locator devices that are small, lightweight, with long operational battery life, and indoors/outdoors accuracy of 100 meters or less. At the center of a second-generation system is a database that specifies the offender's home, workplace, commute, and time the offender should be found in each. The database could also define areas from which the offender is excluded. To test compliance, the system would compare the observed coordinates of the offender with the stored location for a given time interval. Database logfiles will also enable law enforcement to determine if a monitored offender was present at a crime scene and thus include or exclude the offender as a potential suspect.

  20. Karst mapping in the United States: Past, present and future

    USGS Publications Warehouse

    Weary, David J.; Doctor, Daniel H.

    2015-01-01

    The earliest known comprehensive karst map of the entire USA was published by Stringfield and LeGrand (1969), based on compilations of William E. Davies of the U.S. Geological Survey (USGS). Various versions of essentially the same map have been published since. The USGS recently published new digital maps and databases depicting the extent of known karst, potential karst, and pseudokarst areas of the United States of America including Puerto Rico and the U.S. Virgin Islands (Weary and Doctor, 2014). These maps are based primarily on the extent of potentially karstic soluble rock types, and rocks with physical properties conducive to the formation of pseudokarst features. These data were compiled and refined from multiple sources at various spatial resolutions, mostly as digital data supplied by state geological surveys. The database includes polygons delineating areas with potential for karst and that are tagged with attributes intended to facilitate classification of karst regions. Approximately 18% of the surface of the fifty United States is underlain by significantly soluble bedrock. In the eastern United States the extent of outcrop of soluble rocks provides a good first-approximation of the distribution of karst and potential karst areas. In the arid western states, the extent of soluble rock outcrop tends to overestimate the extent of regions that might be considered as karst under current climatic conditions, but the new dataset encompasses those regions nonetheless. This database will be revised as needed, and the present map will be updated as new information is incorporated.

  1. VizieR Online Data Catalog: Scheiner drawing sunspot areas and tilt angles (Arlt+, 2016)

    NASA Astrophysics Data System (ADS)

    Arlt, R.; Senthamizh Pavai, V.; Schmiel, C.; Spada, F.

    2016-09-01

    Christoph Scheiner and his collaborators observed sunspots from 1611 to 1631 at five different locations: Rome in Italy; Ingolstadt in Germany; Douai (Duacum in Latin) in France; Freiburg im Breisgau in Germany; and Vienna in Austria. However, most of his published drawings were made in Rome. These sunspot drawings are important because they can tell us how solar activity declined into a very low-activity phase that lasted for nearly five decades. The three sources used for the sunspot data extraction are Scheiner (1630rour.book.....S, Rosa Ursina sive solis), Scheiner (1651ppsm.book.....S, Prodromus pro sole mobili et terra stabili contra Academicum Florentinum Galilaeum a Galilaeis), and Reeves & Van Helden (2010, On sunspots. Galileo Galilei and Christoph Scheiner (University of Chicago Press)). The sunspot drawings show the sunspot groups traversing the solar disk in a single full-disk drawing. The positions and areas of the sunspots were measured using 13 circular cursor shapes with different diameters. Umbral areas for 8167 sunspots and tilt angles for 697 manually selected, supposedly bipolar groups were obtained from Scheiner's sunspot drawings. The database does not contain spotless days. There is, of course, no polarity information in the sunspot drawings, so the tilt angles are actually pseudo-tilt angles. Both an updated sunspot database and a tilt angle database may be available at http://www.aip.de/Members/rarlt/sunspots for further study. (2 data files).

  2. Computer-aided diagnosis of malignant mammograms using Zernike moments and SVM.

    PubMed

    Sharma, Shubhi; Khanna, Pritee

    2015-02-01

    This work is directed toward the development of a computer-aided diagnosis (CAD) system to detect abnormalities or suspicious areas in digital mammograms and classify them as malignant or nonmalignant. The original mammogram is preprocessed to separate the breast region from its background. To work on the suspicious area of the breast, region of interest (ROI) patches of a fixed size of 128×128 are extracted from the original large-sized digital mammograms. For training, patches are extracted manually from a preprocessed mammogram. For testing, patches are extracted from a highly dense area identified by a clustering technique. For all extracted patches corresponding to a mammogram, Zernike moments of different orders are computed and stored as a feature vector. A support vector machine (SVM) is used to classify the extracted ROI patches. The experimental study shows that the use of Zernike moments of order 20 with an SVM classifier gives better results than the other approaches studied. The proposed system is tested on the Image Retrieval In Medical Application (IRMA) reference dataset and the Digital Database for Screening Mammography (DDSM). On the IRMA reference dataset, it attains 99% sensitivity and 99% specificity, and on the DDSM database, it obtained 97% sensitivity and 96% specificity. To verify the applicability of Zernike moments as a suitable texture descriptor, the performance of the proposed CAD system is compared with other well-known texture descriptors, namely the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT).
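
    A minimal version of the feature-extraction and classification pipeline might look like the sketch below, assuming the mahotas package for Zernike moments and scikit-learn for the SVM; the synthetic patches, labels, radius, and kernel choice are placeholders, not the settings reported by the study.

        import numpy as np
        import mahotas
        from sklearn.svm import SVC

        def zernike_features(patch, radius=64, degree=20):
            """Zernike moments of a 128x128 grayscale patch up to the given order."""
            return mahotas.features.zernike_moments(patch, radius, degree=degree)

        # Stand-in patches and labels; in practice these come from DDSM/IRMA ROI extraction.
        rng = np.random.default_rng(3)
        patches = rng.random((40, 128, 128))
        labels = rng.integers(0, 2, 40)   # 1 = malignant, 0 = non-malignant (synthetic)

        X = np.array([zernike_features(p) for p in patches])
        clf = SVC(kernel="rbf").fit(X, labels)
        print("training accuracy:", clf.score(X, labels))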

  3. Geology of the Cape Mendocino, Eureka, Garberville, and Southwestern Part of the Hayfork 30 x 60 Minute Quadrangles and Adjacent Offshore Area, Northern California

    USGS Publications Warehouse

    McLaughlin, Robert J.; Ellen, S.D.; Blake, M.C.; Jayko, Angela S.; Irwin, W.P.; Aalto, K.R.; Carver, G.A.; Clarke, S.H.; Barnes, J.B.; Cecil, J.D.; Cyr, K.A.

    2000-01-01

    These geologic maps and accompanying structure sections depict the geology and structure of much of northwestern California and the adjacent continental margin. The map area includes the Mendocino triple junction, which is the juncture of the North American continental plate with two plates of the Pacific ocean basin. The map area also encompasses major geographic and geologic provinces of northwestern California. The maps incorporate much previously unpublished geologic mapping done between 1980 and 1995, as well as published mapping done between about 1950 and 1978. To construct structure sections to mid-crustal depths, we integrate the surface geology with interpretations of crustal structure based on seismicity, gravity and aeromagnetic data, offshore structure, and seismic reflection and refraction data. In addition to describing major geologic and structural features of northwestern California, the geologic maps have the potential to address a number of societally relevant issues, including hazards from earthquakes, landslides, and floods and problems related to timber harvest, wildlife habitat, and changing land use. All of these topics will continue to be of interest in the region, as changing land uses and population density interact with natural conditions. In these interactions, it is critical that the policies and practices affecting man and the environment integrate an adequate understanding of the geology. This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (ceghmf.ps, ceghmf.pdf, ceghmf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:100,000 or smaller.

  4. Environmental applications based on GIS and GRID technologies

    NASA Astrophysics Data System (ADS)

    Demontis, R.; Lorrai, E.; Marrone, V. A.; Muscas, L.; Spanu, V.; Vacca, A.; Valera, P.

    2009-04-01

    In the last decades, the collection and use of environmental data have enormously increased in a wide range of applications. Simultaneously, the explosive development of information technology and its ever wider data accessibility have made it possible to store and manipulate huge quantities of data. In this context, the GRID approach is emerging worldwide as a tool for provisioning computational tasks with administratively distant resources. The aim of this paper is to present three environmental applications (Land Suitability, Desertification Risk Assessment, Georesources and Environmental Geochemistry) foreseen within the AGISGRID (Access and query of a distributed GIS/Database within the GRID infrastructure, http://grida3.crs4.it/enginframe/agisgrid/index.xml) activities of the GRIDA3 (Administrator of sharing resources for data analysis and environmental applications, http://grida3.crs4.it) project. This project, co-funded by the Italian Ministry of Research, is based on the use of shared environmental data through GRID technologies, accessible by a WEB interface and aimed at public and private users in the field of environmental management and land use planning. The technologies used for AGISGRID include: - the client-server middleware iRODS™ (Integrated Rule-Oriented Data System) (https://irods.org); - the EnginFrame system (http://www.nice-italy.com/main/index.php?id=32), the grid portal that supplies a frame to make the developed GRID applications available via Intranet/Internet; - the GIS software GRASS (Geographic Resources Analysis Support System) (http://grass.itc.it); - the relational database PostgreSQL (http://www.postgresql.org) and the spatial database extension PostGIS; - the open source multiplatform Mapserver (http://mapserver.gis.umn.edu), used to represent the geospatial data through typical WEB GIS functionalities. Three GRID nodes are directly involved in the applications: the application workflow is implemented at CRS4 (Pula, southern Sardinia, Italy), the soil database is managed at the DISTER node (Cagliari, southern Sardinia, Italy), and the geochemical database is managed at the DIGITA node (Cagliari, southern Sardinia, Italy). The input data are files (ASCII raster format) and database tables. The raster files have been zipped and stored in iRODS. The tables are imported into a PostgreSQL database and accessed by the Rule-oriented Database Access (RDA) system available for PostgreSQL in iRODS 1.1. From the EnginFrame portal it is possible to view and use the applications through three services: "Upload Data", "View Data and Metadata", and "Execute Application". The Land Suitability application, based on the FAO framework for land evaluation, produces suitability maps (at the scale 1:10,000) for 11 different possible alternative uses. The maps, in ASCII raster format, are downloadable by the user and viewable by Mapserver. This application has been implemented in an area of southern Sardinia (Monastir) and may be useful to direct municipal urban planning towards rational land use. The Desertification Risk Assessment application produces, by means of biophysical and socioeconomic key indicators, a final combined map showing critical, fragile, and potential Environmentally Sensitive Areas (ESAs) to desertification. This application has been implemented in an area of south-west Sardinia (Muravera).
The final sensitivity index is obtained as the geometric mean of four parameters: SQI (Soil Quality Index), CQI (Climate Quality Index), VQI (Vegetation Quality Index) and MQI (Management Quality Index). The final result, ESAs = (SQI * CQI * VQI * MQI)^(1/4), is a map at the scale 1:50,000, in ASCII raster format, downloadable by the user and viewable by Mapserver. This type of map may be useful to direct land planning at the catchment basin level. The Georesources and Environmental Geochemistry application, whose test is in progress in the area of Muravera (south-west Sardinia) through stream sediment sampling, aims at producing maps defining, with high precision, areas (hydrographic basins) where the values of a given element exceed the lithological background (i.e. are geochemically anomalous). Such a product has a double purpose. First, it identifies releasing sources and may be useful for the necessary remediation actions if those sources lie within areas historically subject to more or less intense anthropogenic activity. On the other hand, if these sources are of natural origin, they could also be interpreted as ore mineral occurrences. In the latter case, the study of these occurrences could lead to the discovery of economic ore bodies of small-to-medium size (at least in the present target area) and consequently to the revival of a local mining industry.
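
    The sensitivity index above is simply a cell-wise geometric mean of the four quality indices. A minimal numerical sketch, assuming the indices are available as co-registered raster arrays (array names and sample values are illustrative):

      # Minimal sketch of the Environmentally Sensitive Area index described above:
      # the geometric mean of four quality indices computed cell-by-cell on raster
      # grids. Array names and sample values are illustrative assumptions.
      import numpy as np

      def esa_index(sqi, cqi, vqi, mqi):
          """ESAs = (SQI * CQI * VQI * MQI) ** (1/4), computed element-wise."""
          stack = np.stack([sqi, cqi, vqi, mqi])
          return np.prod(stack, axis=0) ** 0.25

      # Example with small synthetic 2x2 rasters:
      sqi = np.array([[1.2, 1.4], [1.3, 1.6]])
      cqi = np.array([[1.1, 1.5], [1.2, 1.4]])
      vqi = np.array([[1.0, 1.3], [1.1, 1.5]])
      mqi = np.array([[1.2, 1.6], [1.0, 1.4]])
      print(esa_index(sqi, cqi, vqi, mqi))   # higher values = more sensitive areas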

  5. The Amma-Sat Database

    NASA Astrophysics Data System (ADS)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    The African Monsoon Multidisciplinary Analysis project is a French initiative, which aims at identifying and analysing in detail the multidisciplinary and multi-scale processes that lead to a better understanding of the physical mechanisms linked to the African Monsoon. The main components of the African Monsoon are: Atmospheric Dynamics, the Continental Water Cycle, Atmospheric Chemistry, and Oceanic and Continental Surface Conditions. Satellites contribute to various objectives of the project, both for process analysis and for large-scale, long-term studies: some series of satellites (METEOSAT, NOAA, ...) have been flown for more than 20 years, ensuring good-quality monitoring of some of the West African atmosphere and surface characteristics. Moreover, several recent missions and several planned projects will strongly improve and complement this survey. The AMMA project offers an opportunity to develop the exploitation of satellite data and to foster collaboration between specialist and non-specialist users. For this purpose, databases are being developed to collect all past and future satellite data related to the African Monsoon. It will then be possible to compare different types of data at different resolutions, and to validate satellite data with in situ measurements or numerical simulations. The AMMA-SAT database's main goal is to offer easy access to satellite data to the AMMA scientific community. The database contains geophysical products estimated from operational or research algorithms and covering the different components of the AMMA project. Nevertheless, the choice has been made to group data by pertinent scale rather than by theme. For this purpose, five regions of interest were defined to extract the data: an area covering the Tropical Atlantic and Africa for large-scale studies, an area covering West Africa for mesoscale studies, and three local areas surrounding sites of in situ observations. Within each of these regions satellite data are projected on a regular grid with a spatial resolution compatible with the spatial variability of the geophysical parameter. Data are stored in NetCDF files to facilitate their use. Satellite products can be selected using several spatial and temporal criteria and ordered through a web interface developed in PHP-MySQL. More common means of access are also available, such as direct FTP or NFS access for identified users. A Live Access Server allows quick visualization of the data. A metadata catalogue based on the Directory Interchange Format manages the documentation of each satellite product. The database is currently under development, but some products are already available. The database will be complete by the end of 2005.
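
    As a small illustration of how a gridded product stored in NetCDF might be read and subset in space, here is a hedged sketch; the file name, variable names and coordinate names are assumptions, not the actual AMMA-SAT product layout:

      # Minimal sketch of reading a gridded NetCDF product and subsetting it by
      # latitude/longitude. File, variable and coordinate names are illustrative
      # assumptions; the variable is assumed to have dimensions (time, lat, lon).
      import netCDF4
      import numpy as np

      def extract_region(path, var_name, lat_range, lon_range):
          """Return the subset of `var_name` falling inside the given lat/lon box."""
          with netCDF4.Dataset(path) as ds:
              lat = ds.variables["lat"][:]
              lon = ds.variables["lon"][:]
              data = ds.variables[var_name][:]
              ilat = np.where((lat >= lat_range[0]) & (lat <= lat_range[1]))[0]
              ilon = np.where((lon >= lon_range[0]) & (lon <= lon_range[1]))[0]
              return data[..., ilat.min():ilat.max() + 1, ilon.min():ilon.max() + 1]

      # Hypothetical usage for a West African mesoscale box:
      # sst = extract_region("amma_product.nc", "sst", (0, 20), (-20, 10))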

  6. Corruption of genomic databases with anomalous sequence.

    PubMed

    Lamperti, E D; Kittelberger, J M; Smith, T F; Villa-Komaroff, L

    1992-06-11

    We describe evidence that DNA sequences from vectors used for cloning and sequencing have been incorporated accidentally into eukaryotic entries in the GenBank database. These incorporations were not restricted to one type of vector or to a single mechanism. Many minor instances may have been the result of simple editing errors, but some entries contained large blocks of vector sequence that had been incorporated by contamination or other accidents during cloning. Some cases involved unusual rearrangements and areas of vector distant from the normal insertion sites. Matches to vector were found in 0.23% of 20,000 sequences analyzed in GenBank Release 63. Although the possibility of anomalous sequence incorporation has been recognized since the inception of GenBank and should be easy to avoid, recent evidence suggests that this problem is increasing more quickly than the database itself. The presence of anomalous sequence may have serious consequences for the interpretation and use of database entries, and will have an impact on issues of database management. The incorporated vector fragments described here may also be useful for a crude estimate of the fidelity of sequence information in the database. In alignments with well-defined ends, the matching sequences showed 96.8% identity to vector; when poorer matches with arbitrary limits were included, the aggregate identity to vector sequence was 94.8%.
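
    As an illustration only (the study itself relied on sequence alignment rather than this shortcut), a crude exact k-mer screen can flag database entries that share long substrings with known cloning-vector sequences; the k-mer length and vector set below are illustrative assumptions:

      # Minimal sketch (not the authors' method): flag entries containing exact
      # k-mers drawn from known vector sequences. Parameters are illustrative.
      def build_vector_index(vector_seqs, k=25):
          """Collect every k-mer occurring in the vector sequences."""
          kmers = set()
          for seq in vector_seqs:
              for i in range(len(seq) - k + 1):
                  kmers.add(seq[i:i + k])
          return kmers

      def flag_contaminated(entries, vector_kmers, k=25):
          """Return IDs of entries containing at least one vector k-mer."""
          flagged = []
          for entry_id, seq in entries.items():
              if any(seq[i:i + k] in vector_kmers for i in range(len(seq) - k + 1)):
                  flagged.append(entry_id)
          return flagged

      # Hypothetical usage:
      # kmers = build_vector_index([puc19_sequence])
      # suspects = flag_contaminated(genbank_entries, kmers)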

  7. Database for the geologic map of the Sauk River 30-minute by 60-minute quadrangle, Washington (I-2592)

    USGS Publications Warehouse

    Tabor, R.W.; Booth, D.B.; Vance, J.A.; Ford, A.B.

    2006-01-01

    This digital map database has been prepared by R.W. Tabor from the published Geologic map of the Sauk River 30- by 60 Minute Quadrangle, Washington. Together with the accompanying text files as PDF, it provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The authors mapped most of the bedrock geology at 1:100,000 scale, but compiled most Quaternary units at 1:24,000 scale. The Quaternary contacts and structural data have been much simplified for the 1:100,000-scale map and database. The spatial resolution (scale) of the database is 1:100,000 or smaller. This database depicts the distribution of geologic materials and structures at a regional (1:100,000) scale. The report is intended to provide geologic information for the regional study of materials properties, earthquake shaking, landslide potential, mineral hazards, seismic velocity, and earthquake faults. In addition, the report contains information and interpretations about the regional geologic history and framework. However, the regional scale of this report does not provide sufficient detail for site development purposes.

  8. Development of a standardized Intranet database of formulation records for nonsterile compounding, Part 2.

    PubMed

    Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela

    2012-01-01

    In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.

  9. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information to harmonize and standardize GMO analysis methods at the global level is needed. Results The GMO Detection method Database (GMDD) collects almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, or protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which facilitates the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of the developed methods is also included in this database. Furthermore, registered users can submit new detection methods and sequences to this database, and newly submitted information will be released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier. PMID:18522755

  10. East-China Geochemistry Database (ECGD):A New Networking Database for North China Craton

    NASA Astrophysics Data System (ADS)

    Wang, X.; Ma, W.

    2010-12-01

    The North China Craton is one of the best natural laboratories for investigating questions of Earth dynamics [1]. Scientists have made much progress in research on this area and have accumulated a large volume of geochemical data, which are essential for answering many fundamental questions about the age, composition, structure, and evolution of the East China area. But the geochemical data have long been accessible only through the scientific literature and theses, where they are widely dispersed, making it difficult for the broad geosciences community to find, access and efficiently use the full range of available data [2]. How can the existing geochemical data in the North China Craton area be stored, managed, shared and reused effectively? The East-China Geochemistry Database (ECGD) is a networked geochemical database system, designed on WebGIS and relational database technology, for the structured storage and retrieval of geochemical data and geological map information. It integrates the functions of data retrieval, spatial visualization and online analysis. ECGD focuses on three areas: 1. Storage and retrieval of geochemical data and geological map information. Based on the characteristics of geochemical data, including how they are composed and how they relate to each other, we designed a relational database, built on a geochemical relational data model, to store a variety of geological sample information such as sampling locality, age, sample characteristics, reference, major elements, rare earth elements, trace elements and isotope systems. A user-friendly web interface is provided for constructing queries. 2. Data viewing. ECGD is committed to online data visualization in several ways, especially dynamic viewing of data on a digital map. Because ECGD integrates WebGIS technology, query results can be mapped onto a digital map that supports zooming, panning and point selection. Besides viewing and exporting query results in HTML, TXT or XLS formats, researchers can also generate classified thematic maps from query results according to different parameters. 3. Online data analysis. We designed a number of online geochemical analysis tools, including geochemical diagrams, CIPW norm computation, and so on, which allow researchers to analyze query data without downloading the query results. All of these analysis tools are easy to operate, requiring only one or two mouse clicks. In summary, ECGD provides a geochemical platform on which researchers can find where various data are, view those data in a synthetic and dynamic way, and analyze data of interest online. REFERENCES [1] S. Gao, R.L. Rudnick, and W.L. Xu, "Recycling deep cratonic lithosphere and generation of intraplate magmatism in the North China Craton," Earth and Planetary Science Letters, 270, 41-53, 2008. [2] K.A. Lehnert, U. Harms, and E. Ito, "Promises, Achievements, and Challenges of Networking Global Geoinformatics Resources - Experiences of GeosciNET and EarthChem," Geophysical Research Abstracts, Vol. 10, EGU2008-A-05242, 2008.
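
    To illustrate the kind of structured query such a relational geochemical store supports, here is a hedged sketch; the table layout, column names and SQLite back end are hypothetical and are not the actual ECGD schema:

      # Minimal sketch of a structured query against a relational store of
      # geochemical samples. Table/column names and the SQLite back end are
      # illustrative assumptions, not the ECGD implementation.
      import sqlite3

      def samples_in_box(db_path, element, lat_range, lon_range):
          """Return (sample_id, lat, lon, value) rows for one element in a lat/lon box."""
          query = """
              SELECT s.sample_id, s.latitude, s.longitude, m.value
              FROM samples AS s
              JOIN measurements AS m ON m.sample_id = s.sample_id
              WHERE m.element = ?
                AND s.latitude  BETWEEN ? AND ?
                AND s.longitude BETWEEN ? AND ?
          """
          with sqlite3.connect(db_path) as conn:
              return conn.execute(query, (element, *lat_range, *lon_range)).fetchall()

      # Hypothetical usage for Sr concentrations in part of the North China Craton:
      # rows = samples_in_box("ecgd.sqlite", "Sr", (34.0, 42.0), (110.0, 120.0))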

  11. Hydrologic and landscape database for the Cache and White River National Wildlife Refuges and contributing watersheds in Arkansas, Missouri, and Oklahoma

    USGS Publications Warehouse

    Buell, Gary R.; Wehmeyer, Loren L.; Calhoun, Daniel L.

    2012-01-01

    A hydrologic and landscape database was developed by the U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, for the Cache River and White River National Wildlife Refuges and their contributing watersheds in Arkansas, Missouri, and Oklahoma. The database is composed of a set of ASCII files, Microsoft Access® files, Microsoft Excel® files, an Environmental Systems Research Institute (ESRI) ArcGIS® geodatabase, ESRI ArcGRID® raster datasets, and an ESRI ArcReader® published map. The database was developed as an assessment and evaluation tool to use in examining refuge-specific hydrologic patterns and trends as related to water availability for refuge ecosystems, habitats, and target species; and includes hydrologic time-series data, statistics, and hydroecological metrics that can be used to assess refuge hydrologic conditions and the availability of aquatic and riparian habitat. Landscape data that describe the refuge physiographic setting and the locations of hydrologic-data collection stations are also included in the database. Categories of landscape data include land cover, soil hydrologic characteristics, physiographic features, geographic and hydrographic boundaries, hydrographic features, regional runoff estimates, and gaging-station locations. The database geographic extent covers three hydrologic subregions—the Lower Mississippi–St Francis (0802), the Upper White (1101), and the Lower Arkansas (1111)—within which human activities, climatic variation, and hydrologic processes can potentially affect the hydrologic regime of the refuges and adjacent areas. Database construction has been automated to facilitate periodic updates with new data. The database report (1) serves as a user guide for the database, (2) describes the data-collection, data-reduction, and data-analysis methods used to construct the database, (3) provides a statistical and graphical description of the database, and (4) provides detailed information on the development of analytical techniques designed to assess water availability for ecological needs.

  12. A Database of Historical Information on Landslides and Floods in Italy

    NASA Astrophysics Data System (ADS)

    Guzzetti, F.; Tonelli, G.

    2003-04-01

    For the past 12 years we have maintained and updated a database of historical information on landslides and floods in Italy, known as the National Research Council's AVI (Damaged Urban Areas) Project archive. The database was originally designed to respond to a specific request of the Minister of Civil Protection, and was aimed at helping the regional assessment of landslide and flood risk in Italy. The database was first constructed in 1991-92 to cover the period 1917 to 1990. Information on damaging landslide and flood events was collected by searching archives, by screening thousands of newspaper issues, by reviewing the existing technical and scientific literature on landslides and floods in Italy, and by interviewing landslide and flood experts. The database was then updated chiefly through the analysis of hundreds of newspaper articles, and it now covers systematically the period 1900 to 1998, and non-systematically the periods 1900 to 1916 and 1999 to 2002. Non-systematic information on landslide and flood events predating the 20th century is also present in the database. The database currently contains information on more than 32,000 landslide events that occurred at more than 25,700 sites, and on more than 28,800 flood events that occurred at more than 15,600 sites. After a brief outline of the history and evolution of the AVI Project archive, we present and discuss: (a) the present structure of the database, including the hardware and software solutions adopted to maintain, manage, use and disseminate the information stored in the database, (b) the type and amount of information stored in the database, including an estimate of its completeness, and (c) examples of recent applications of the database, including a web-based GIS system to show the location of sites historically affected by landslides and floods, and an estimate of geo-hydrological (i.e., landslide and flood) risk in Italy based on the available historical information.

  13. Bibliography of global change, 1992

    NASA Technical Reports Server (NTRS)

    1993-01-01

    This bibliography lists 585 reports, articles, and other documents introduced in the NASA Scientific and Technical Information Database in 1992. The areas covered include global change, decision making, earth observation (from space), forecasting, global warming, policies, and trends.

  14. 32 CFR 240.4 - Policy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...

  15. 32 CFR 240.4 - Policy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...

  16. 32 CFR 240.4 - Policy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... enterprise information infrastructure requirements. (c) The academic disciplines, with concentrations in IA..., computer systems analysis, cyber operations, cybersecurity, database administration, data management... infrastructure development and academic research to support the DoD IA/IT critical areas of interest. ...

  17. Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California

    USGS Publications Warehouse

    Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.

    2006-01-01

    The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.

  18. "Gone are the days of mass-media marketing plans and short term customer relationships": tobacco industry direct mail and database marketing strategies.

    PubMed

    Lewis, M Jane; Ling, Pamela M

    2016-07-01

    As limitations on traditional marketing tactics and scrutiny by tobacco control have increased, the tobacco industry has benefited from direct mail marketing which transmits marketing messages directly to carefully targeted consumers utilising extensive custom consumer databases. However, research in these areas has been limited. This is the first study to examine the development, purposes and extent of direct mail and customer databases. We examined direct mail and database marketing by RJ Reynolds and Philip Morris utilising internal tobacco industry documents from the Legacy Tobacco Document Library employing standard document research techniques. Direct mail marketing utilising industry databases began in the 1970s and grew from the need for a promotional strategy to deal with declining smoking rates, growing numbers of products and a cluttered media landscape. Both RJ Reynolds and Philip Morris started with existing commercial consumer mailing lists, but subsequently decided to build their own databases of smokers' names, addresses, brand preferences, purchase patterns, interests and activities. By the mid-1990s both RJ Reynolds and Philip Morris databases contained at least 30 million smokers' names each. These companies valued direct mail/database marketing's flexibility, efficiency and unique ability to deliver specific messages to particular groups as well as direct mail's limited visibility to tobacco control, public health and regulators. Database marketing is an important and increasingly sophisticated tobacco marketing strategy. Additional research is needed on the prevalence of receipt and exposure to direct mail items and their influence on receivers' perceptions and smoking behaviours. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  19. The National Landslide Database and GIS for Great Britain: construction, development, data acquisition, application and communication

    NASA Astrophysics Data System (ADS)

    Pennington, Catherine; Dashwood, Claire; Freeborough, Katy

    2014-05-01

    The National Landslide Database has been developed by the British Geological Survey (BGS) and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 16,500 records of landslide events, each documented as fully as possible. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and crowd-sourcing information through social media and other online resources. This information is invaluable for the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domain map currently under development rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures and an understanding of causative factors and their spatial distribution, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. Communication of landslide data through the Natural Hazard Partnership (NHP) contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP and data collected for the National Landslide Database is used widely for the creation of these assessments. The National Landslide Database is freely available via an online GIS and is used by a variety of stakeholders for research purposes.

  20. Medical Decision Algorithm for Pre-Hospital Trauma Care. Phase I.

    DTIC Science & Technology

    1996-09-01

    Algorithm for Pre-Hospital Trauma Care. Principal Investigator: Donald K. Wedding, P.E., Ph.D. Contracting Organization: Photonics Systems, Incorporated, Northwood, Ohio 43619. The research was conducted in three areas: 1) data acquisition, 2) neural network design, and 3) system architecture design. In the first area of this research, a triage database

  1. Out-of-School Time Programs in Rural Areas. Highlights from the Out-of-School Time Database. Research Update, No. 6

    ERIC Educational Resources Information Center

    Harris, Erin; Malone, Helen; Sunnanon, Tai

    2011-01-01

    Out-of-school time (OST) programming can be a crucial asset to families in rural areas where resources to support children's learning and development are often insufficient to meet the community's needs. OST programs that offer youth in rural communities a safe and supportive adult-supervised environment--along with various growth-enhancing…

  2. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  3. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

    Introduction The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields, or 2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as a series of ArcReader projects which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented in ArcMap projects. The ArcMap projects allow fuller use of the data, but require ESRI ArcMap software. Multiple projects were required to accommodate the large number of images needed. A fuller description of the projects can be found in the Dunes_ReadMe file and the ReadMe_GIS file in the Documentation folder. For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geographic Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document. The map is also included as a JPEG file. ReadMe files are available in PDF and ASCII (.txt) formats. Tables are available in both Excel (.xls) and ASCII formats.

  4. Validation of temporal and spatial consistency of facility- and speed-specific vehicle-specific power distributions for emission estimation: A case study in Beijing, China.

    PubMed

    Zhai, Zhiqiang; Song, Guohua; Lu, Hongyu; He, Weinan; Yu, Lei

    2017-09-01

    Vehicle-specific power (VSP) has been found to be highly correlated with vehicle emissions. It is used in many studies on emission modeling, such as the MOVES (Motor Vehicle Emissions Simulator) model. Existing studies develop specific VSP distributions (or OpMode distributions in MOVES) for different road types and various average speeds to represent vehicle operating modes on the road. However, it is still not clear whether the facility- and speed-specific VSP distributions are consistent temporally and spatially. For instance, is it necessary to update the database of VSP distributions in the emission model periodically? Are the VSP distributions developed in the city central business district (CBD) area applicable to its suburban area? In this context, this study examined the temporal and spatial consistency of the facility- and speed-specific VSP distributions in Beijing. VSP distributions in different years and in different areas were developed based on real-world vehicle activity data. The root mean square error (RMSE) is employed to quantify the difference between VSP distributions. The maximum differences of the VSP distributions between different years and between different areas are approximately 20% of the difference between different road types. The analysis of the carbon dioxide (CO2) emission factor indicates that the temporal and spatial differences of the VSP distributions have no significant impact on vehicle emission estimation, with a relative error of less than 3%. The temporal and spatial differences thus have no significant impact on the development of facility- and speed-specific VSP distributions for vehicle emission estimation. The database of specific VSP distributions in VSP-based emission models can therefore be maintained over time: it is unnecessary to update the database regularly, and it is reliable to use historical vehicle activity data to forecast future emissions. Within one city, areas with less data can still develop accurate VSP distributions based on better data from other areas.
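
    The bin-by-bin RMSE comparison used in the study can be sketched as follows; the bin definitions and example fractions are illustrative assumptions, not data from the Beijing study:

      # Minimal sketch of the RMSE comparison described above: two VSP (or OpMode)
      # bin distributions, each expressed as fractions of operating time, compared
      # bin by bin. Bin definitions and example numbers are illustrative assumptions.
      import numpy as np

      def vsp_rmse(dist_a, dist_b):
          """Root mean square error between two VSP bin-fraction distributions."""
          a = np.asarray(dist_a, dtype=float)
          b = np.asarray(dist_b, dtype=float)
          return np.sqrt(np.mean((a - b) ** 2))

      # Example: distributions from two different years on the same facility type
      year_1 = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])
      year_2 = np.array([0.06, 0.18, 0.36, 0.24, 0.11, 0.05])
      print(vsp_rmse(year_1, year_2))   # small RMSE -> temporally consistent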

  5. GenomeHubs: simple containerized setup of a custom Ensembl database and web server for any species

    PubMed Central

    Kumar, Sujai; Stevens, Lewis; Blaxter, Mark

    2017-01-01

    Abstract As the generation and use of genomic datasets is becoming increasingly common in all areas of biology, the need for resources to collate, analyse and present data from one or more genome projects is becoming more pressing. The Ensembl platform is a powerful tool to make genome data and cross-species analyses easily accessible through a web interface and a comprehensive application programming interface. Here we introduce GenomeHubs, which provide a containerized environment to facilitate the setup and hosting of custom Ensembl genome browsers. This simplifies mirroring of existing content and import of new genomic data into the Ensembl database schema. GenomeHubs also provide a set of analysis containers to decorate imported genomes with results of standard analyses and functional annotations and support export to flat files, including EMBL format for submission of assemblies and annotations to International Nucleotide Sequence Database Collaboration. Database URL: http://GenomeHubs.org PMID:28605774

  6. Geotherm: the U.S. geological survey geothermal information system

    USGS Publications Warehouse

    Bliss, J.D.; Rapport, A.

    1983-01-01

    GEOTHERM is a comprehensive system of public databases and software used to store, locate, and evaluate information on the geology, geochemistry, and hydrology of geothermal systems. Three main databases address the general characteristics of geothermal wells and fields, and the chemical properties of geothermal fluids; the last database is currently the most active. System tasks are divided into four areas: (1) data acquisition and entry, involving data entry via word processors and magnetic tape; (2) quality assurance, including the criteria and standards handbook and front-end data-screening programs; (3) operation, involving database backups and information extraction; and (4) user assistance, preparation of such items as application programs, and a quarterly newsletter. The principal task of GEOTHERM is to provide information and research support for the conduct of national geothermal-resource assessments. The principal users of GEOTHERM are those involved with the Geothermal Research Program of the U.S. Geological Survey. Information in the system is available to the public on request. ?? 1983.

  7. Mining databases for protein aggregation: a review.

    PubMed

    Tsiolaki, Paraskevi L; Nastou, Katerina C; Hamodrakas, Stavros J; Iconomidou, Vassiliki A

    2017-09-01

    Protein aggregation is an active area of research in recent decades, since it is the most common and troubling indication of protein instability. Understanding the mechanisms governing protein aggregation and amyloidogenesis is a key component to the aetiology and pathogenesis of many devastating disorders, including Alzheimer's disease or type 2 diabetes. Protein aggregation data are currently found "scattered" in an increasing number of repositories, since advances in computational biology greatly influence this field of research. This review exploits the various resources of aggregation data and attempts to distinguish and analyze the biological knowledge they contain, by introducing protein-based, fragment-based and disease-based repositories, related to aggregation. In order to gain a broad overview of the available repositories, a novel comprehensive network maps and visualizes the current association between aggregation databases and other important databases and/or tools and discusses the beneficial role of community annotation. The need for unification of aggregation databases in a common platform is also addressed.

  8. The database of the Nikolaev Astronomical Observatory as a unit of an international virtual observatory

    NASA Astrophysics Data System (ADS)

    Protsyuk, Yu.; Pinigin, G.; Shulga, A.

    2005-06-01

    Results of the development and organization of the digital database of the Nikolaev Astronomical Observatory (NAO) are presented. At present, three telescopes are connected to the local area network of NAO. All the data obtained and the results of data processing are entered into the common database of NAO. The daily average volume of new astronomical information obtained from the CCD instruments ranges from 300 MB up to 2 GB, depending on the purposes and conditions of observations. The overwhelming majority of the data are stored in the FITS format. Development and further improvement of storage standards and of procedures for data handling and data processing are being carried out. It is planned to create an astronomical web portal with the possibility of interactive access to databases and telescopes. In the future, this resource may become a part of an international virtual observatory. Prototype search tools based on PHP and MySQL already exist. Efforts to obtain better Internet connectivity are under way.

  9. An X-Ray Analysis Database of Photoionization Cross Sections Including Variable Ionization

    NASA Technical Reports Server (NTRS)

    Wang, Ping; Cohen, David H.; MacFarlane, Joseph J.; Cassinelli, Joseph P.

    1997-01-01

    Results of research efforts in the following areas are discussed: review of the major theoretical and experimental data on subshell photoionization cross sections and ionization edges of atomic ions, to assess the accuracy of the data and to compile the most reliable of these data in our own database; detailed atomic physics calculations to complement the database for all ions of 17 cosmically abundant elements; reconciling the data from various sources and our own calculations; and fitting cross sections with functional approximations and incorporating these functions into a compact computer code. Also, efforts included adapting an ionization equilibrium code, tabulating results, incorporating them into the overall program, and testing the code (both the ionization equilibrium and opacity codes) with existing observational data. The background and scientific applications of this work are discussed. Atomic physics cross-section models and calculations are described. Calculation results are compared with available experimental data and other theoretical data. The functional approximations used for fitting cross sections are outlined and applications of the database are discussed.
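
    As an illustration of fitting a cross section with a functional approximation, here is a hedged sketch using a simple power law; this is not the parameterization used in the database described above, and the sample values are invented for the example:

      # Minimal sketch of fitting a photoionization cross section above an edge with
      # a simple functional approximation, sigma(E) = s0 * (E/E0) ** -p. The functional
      # form and the sample data are illustrative assumptions.
      import numpy as np
      from scipy.optimize import curve_fit

      def power_law(E, s0, p, E0=1.0):
          return s0 * (E / E0) ** (-p)

      # Hypothetical cross-section samples (energy in keV, sigma in arbitrary units)
      E = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
      sigma = np.array([10.0, 3.3, 1.5, 0.47, 0.21, 0.065])

      (s0_fit, p_fit), _ = curve_fit(power_law, E, sigma, p0=[10.0, 3.0])
      print(f"sigma(E) ~ {s0_fit:.2f} * E^-{p_fit:.2f}")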

  10. Record linkage for pharmacoepidemiological studies in cancer patients.

    PubMed

    Herk-Sukel, Myrthe P P van; Lemmens, Valery E P P; Poll-Franse, Lonneke V van de; Herings, Ron M C; Coebergh, Jan Willem W

    2012-01-01

    An increasing need has developed for the post-approval surveillance of (new) anti-cancer drugs by means of pharmacoepidemiology and outcomes research in the area of oncology. To create an overview that makes researchers aware of the available database linkages in Northern America and Europe which facilitate pharmacoepidemiology and outcomes research in cancer patients. In addition to our own database, i.e. the Eindhoven Cancer Registry (ECR) linked to the PHARMO Record Linkage System, we considered database linkages between a population-based cancer registry and an administrative healthcare database that at least contains information on drug use and offers a longitudinal perspective on healthcare utilization. Eligible database linkages were limited to those that had been used in multiple published articles in English language included in Pubmed. The HMO Cancer Research Network (CRN) in the US was excluded from this review, as an overview of the linked databases participating in the CRN is already provided elsewhere. Researchers who had worked with the data resources included in our review were contacted for additional information and verification of the data presented in the overview. The following database linkages were included: the Surveillance, Epidemiology, and End-Results-Medicare; cancer registry data linked to Medicaid; Canadian cancer registries linked to population-based drug databases; the Scottish cancer registry linked to the Tayside drug dispensing data; linked databases in the Nordic Countries of Europe: Norway, Sweden, Finland and Denmark; and the ECR-PHARMO linkage in the Netherlands. Descriptives of the included database linkages comprise population size, generalizability of the population, year of first data availability, contents of the cancer registry, contents of the administrative healthcare database, the possibility to select a cancer-free control cohort, and linkage to other healthcare databases. The linked databases offer a longitudinal perspective, allowing for observations of health care utilization before, during, and after cancer diagnosis. They create new powerful data resources for the monitoring of post-approval drug utilization, as well as a framework to explore the (cost-)effectiveness of new, often expensive, anti-cancer drugs as used in everyday practice. Copyright © 2011 John Wiley & Sons, Ltd.

  11. Letter to Bay Area on Periodic Monitoring

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  12. Issuance of PSD Permit to Sources Impacting Dirty and Clean Air Areas

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  13. Issuance of PSD Permits in Attainment Areas Where Violations have been Modeled

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  14. Title V Deferrals and Exemptions for Area Sources

    EPA Pesticide Factsheets

    This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  15. EPA, FWS, NPS Coordination Procedures for Determining Air Quality Impact in Region IV Class I Areas

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  16. Response to Action Memorandum - Policy for New Sources in Non-Attainment Areas, Hampton Roads Oil Refinery

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  17. Advanced telemedicine development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; George, J.E.; Gavrilov, E.M.

    1998-12-31

    This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop a Java-based, electronic, medical-record system that can handle multimedia data and work over a wide-area network based on open standards, and that can utilize an existing database back end. The physician is to be totally unaware that there is a database behind the scenes and is only aware that he/she can access and manage the relevant information to treat the patient.

  18. Bathymetry of Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina, 2008

    USGS Publications Warehouse

    Nagle, D.D.; Campbell, B.G.; Lowery, M.A.

    2009-01-01

    The increasing use and importance of lakes for water supply to communities enhance the need for an accurate methodology to determine lake bathymetry and storage capacity. A global positioning receiver and a fathometer were used to collect position data and water depth in February 2008 at Lake William C. Bowen and Municipal Reservoir #1, Spartanburg County, South Carolina. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and stage-area and -volume relations were created from the geographic information database.
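
    Deriving stage-area and stage-volume relations from a gridded bathymetric surface amounts to counting and integrating wet cells at each stage. A minimal sketch, with a made-up grid and cell size rather than the survey data:

      # Minimal sketch of deriving stage-area and stage-volume relations from a
      # gridded bathymetric surface. The grid, cell size and stage values are
      # illustrative assumptions, not the data collected in the 2008 survey.
      import numpy as np

      def stage_area_volume(bed_elev, stage, cell_area):
          """Surface area (m^2) and stored volume (m^3) at a given water-surface stage."""
          depth = stage - bed_elev            # water depth in each cell
          wet = depth > 0.0
          area = wet.sum() * cell_area
          volume = depth[wet].sum() * cell_area
          return area, volume

      # Example: a tiny 3x3 grid of bed elevations (m) with 10 m x 10 m cells
      bed = np.array([[100.0, 99.5, 99.0],
                      [ 99.0, 98.0, 98.5],
                      [ 99.5, 98.5, 99.0]])
      for stage in (99.0, 100.0, 101.0):
          a, v = stage_area_volume(bed, stage, cell_area=100.0)
          print(stage, a, v)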

  19. VizieR Online Data Catalog: Mira stars discovered in LAMOST DR4 (Yao+, 2017)

    NASA Astrophysics Data System (ADS)

    Yao, Y.; Liu, C.; Deng, L.; de Grijs, R.; Matsunaga, N.

    2017-10-01

    By the end of 2016 March, the wide-field Large sky Area Multi-Object fiber Spectroscopic Telescope (LAMOST) DR4 catalog had accumulated 7681185 spectra (R=1800), of which 6898298 were of stars. We compiled a photometrically confirmed sample of Mira variables from the Kiso Wide-Field Camera (KWFC) Intensive Survey of the Galactic Plane (KISOGP; Matsunaga 2017, arXiv:1705.08567), the American Association of Variable Star Observers (AAVSO) International Database Variable Star Index (VSX; Watson 2006, B/vsx, version 2017-05-02; we selected stars of variability type "M"), and the SIMBAD Astronomical Database. We first cross-matched the KISOGP and VSX Miras with the LAMOST DR4 catalog. Finally, we cross-matched the DR4 catalog with the SIMBAD database. See section 2. (1 data file).
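
    The positional cross-matching described above can be sketched with astropy's nearest-neighbour matching; the 3-arcsecond tolerance and the input arrays are illustrative assumptions:

      # Minimal sketch of a positional cross-match between two catalogs using
      # astropy's nearest-neighbour matching. The separation tolerance and the
      # input coordinate arrays are illustrative assumptions.
      import astropy.units as u
      from astropy.coordinates import SkyCoord

      def crossmatch(ra1, dec1, ra2, dec2, max_sep_arcsec=3.0):
          """Return index pairs (i, j) matching catalog 1 sources to catalog 2."""
          cat1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)
          cat2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)
          idx, sep, _ = cat1.match_to_catalog_sky(cat2)
          good = sep < max_sep_arcsec * u.arcsec
          return [(i, int(j)) for i, (j, ok) in enumerate(zip(idx, good)) if ok]

      # Hypothetical usage with numpy arrays of coordinates in degrees:
      # pairs = crossmatch(kisogp_ra, kisogp_dec, lamost_ra, lamost_dec)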

  20. Spatially detailed water footprint assessment using the U.S. National Water-Economy Database

    NASA Astrophysics Data System (ADS)

    Ruddell, B. L.

    2015-12-01

    The new U.S. National Water-Economy Database (NWED) provides a complete picture of water use and trade in water-derived goods and services in the U.S. economy, by economic sector, at the county and metropolitan area scale. This data product provides for the first time a basis for spatially detailed calculations of water footprints and virtual water trade for the entire U.S. This talk reviews the general patterns of U.S. water footprints and virtual water trade at the county scale, and provides an opportunity for the community to discuss applications of this database for water resource policy and economics. The water footprints of irrigated agriculture and energy are specifically addressed, as well as overall patterns of water use in the economy.

  1. Use of the EpiNet database for observational study of status epilepticus in Auckland, New Zealand.

    PubMed

    Bergin, Peter; Jayabal, Jayaganth; Walker, Elizabeth; Davis, Suzanne; Jones, Peter; Dalziel, Stuart; Yates, Kim; Thornton, Vanessa; Bennett, Patricia; Wilson, Kaisa; Roberts, Lynair; Litchfield, Rhonda; Te Ao, Braden; Parmer, Priya; Feigin, Valery; Jost, Jeremy; Beghi, Ettore; Rossetti, Andrea O

    2015-08-01

    The EpiNet project has been established to facilitate investigator-initiated clinical research in epilepsy, to undertake epidemiological studies, and to simultaneously improve the care of patients who have records created within the EpiNet database. The EpiNet database has recently been adapted to collect detailed information regarding status epilepticus. An incidence study is now underway in Auckland, New Zealand in which the incidence of status epilepticus in the greater Auckland area (population: 1.5 million) will be calculated. The form that has been developed for this study can be used in the future to collect information for randomized controlled trials in status epilepticus. This article is part of a Special Issue entitled "Status Epilepticus". Copyright © 2015 Elsevier Inc. All rights reserved.

  2. MitoNuc: a database of nuclear genes coding for mitochondrial proteins. Update 2002.

    PubMed

    Attimonelli, Marcella; Catalano, Domenico; Gissi, Carmela; Grillo, Giorgio; Licciulli, Flavio; Liuni, Sabino; Santamaria, Monica; Pesole, Graziano; Saccone, Cecilia

    2002-01-01

    Mitochondria, besides their central role in energy metabolism, have recently been found to be involved in a number of basic processes of cell life and to contribute to the pathogenesis of many degenerative diseases. All functions of mitochondria depend on the interaction of the nuclear and organelle genomes. Mitochondrial genomes have been extensively sequenced and analysed, and the data have been collected in several specialised databases. In order to collect information on nuclear-coded mitochondrial proteins we developed MitoNuc, a database containing detailed information on sequenced nuclear genes coding for mitochondrial proteins in Metazoa. The MitoNuc database can be retrieved through SRS and is available via the web site http://bighost.area.ba.cnr.it/mitochondriome, where other mitochondrial databases developed by our group, the complete list of the sequenced mitochondrial genomes, links to other mitochondrial sites and related information are available. The MitoAln database, related to MitoNuc in the previous release and reporting the multiple alignments of the relevant homologous protein-coding regions, is no longer supported in the present release. In order to keep the links among MitoNuc entries derived from homologous proteins, a new field has been defined in the database: the cluster identifier, an alphanumeric code used to identify each cluster of homologous proteins. A comment field derived from the corresponding SWISS-PROT entry has been introduced; this reports clinical data related to dysfunction of the protein. The logical schema of the MitoNuc database has been implemented in the ORACLE DBMS. This will allow end-users to retrieve data through a friendly interface that will soon be implemented.

  3. Integrated Geo Hazard Management System in Cloud Computing Technology

    NASA Astrophysics Data System (ADS)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geo-hazards can result in reduced environmental health and huge economic losses, especially in mountainous areas. In order to mitigate geo-hazards effectively, cloud computing technology is introduced for managing the geo-hazard database. Cloud computing technology and its services are capable of providing stakeholders with geo-hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System consists of the network management and operations needed to monitor geo-hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system provides an easily managed, flexible measuring system whose data management operates autonomously and can be controlled remotely by commands that collect data using the “cloud” computing system. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo-hazard database management using a “cloud system”. The system will later be used as part of the development activities and should help minimize the frequency of geo-hazards and the risk in the research area.

  4. Human Factors Considerations for Area Navigation Departure and Arrival Procedures

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Adams, Catherine A.

    2006-01-01

    Area navigation (RNAV) procedures are being implemented in the United States and around the world as part of a transition to a performance-based navigation system. These procedures are providing significant benefits and have also caused some human factors issues to emerge. Under sponsorship from the Federal Aviation Administration (FAA), the National Aeronautics and Space Administration (NASA) has undertaken a project to document RNAV-related human factors issues and propose areas for further consideration. The component focusing on RNAV Departure and Arrival Procedures involved discussions with expert users, a literature review, and a focused review of the NASA Aviation Safety Reporting System (ASRS) database. Issues were found to include aspects of air traffic control and airline procedures, aircraft systems, and procedure design. Major findings suggest the need for specific instrument procedure design guidelines that consider the effects of human performance. Ongoing industry and government activities to address air-ground communication terminology, design improvements, and chart-database commonality are strongly encouraged. A review of factors contributing to RNAV in-service errors would likely lead to improved system design and operational performance.

  5. Innovations in health service organization and delivery in northern rural and remote regions: a review of the literature.

    PubMed

    Mitton, Craig; Dionne, Francois; Masucci, Lisa; Wong, Sabrina; Law, Susan

    2011-01-01

    To identify and review innovations relevant to improving access, quality, efficiency and/or effectiveness in the organization and delivery of health care services in rural and remote areas. Literature review. Key bibliographic databases that index health research were searched: MEDLINE, EMBASE and CINAHL. Other databases relevant to Arctic health were also accessed. Abstracts were assessed for relevancy and full articles were reviewed and categorized according to emergent themes. Many innovations in delivering services to rural and remote areas were identified, particularly in the public health realm. These innovations were grouped into 4 key themes: organizational structure of health services; utilization of telehealth and ehealth; medical transportation; and public health challenges. Despite the challenges facing rural and remote regions, there is a distinctly positive message from this broad literature review. Evidence-based initiatives exist across a range of areas - which include operational efficiency and integration, access to care, organizational structure, public health, continuing education and workforce composition - that have the potential to positively impact health care quality and health-related outcomes.

  6. Efficacy and safety of extracorporeal shock wave therapy for orthopedic conditions: a systematic review on studies listed in the PEDro database

    PubMed Central

    Schmitz, Christoph; Császár, Nikolaus B. M.; Milz, Stefan; Schieker, Matthias; Maffulli, Nicola; Rompe, Jan-Dirk; Furia, John P.

    2015-01-01

    Background: Extracorporeal shock wave therapy (ESWT) is an effective and safe non-invasive treatment option for tendon and other pathologies of the musculoskeletal system. Sources of data: This systematic review used data derived from the Physiotherapy Evidence Database (PEDro; www.pedro.org.au, 23 October 2015, date last accessed). Areas of agreement: ESWT is effective and safe. An optimum treatment protocol for ESWT appears to be three treatment sessions at 1-week intervals, with 2000 impulses per session and the highest energy flux density the patient can tolerate. Areas of controversy: The distinction between radial ESWT as ‘low-energy ESWT’ and focused ESWT as ‘high-energy ESWT’ is not correct and should be abandoned. Growing points: There is no scientific evidence in favour of either radial ESWT or focused ESWT with respect to treatment outcome. Areas timely for developing research: Future randomized controlled trials should primarily address systematic tests of the aforementioned optimum treatment protocol and direct comparisons between radial and focused ESWT. PMID:26585999

  7. Editorial: Reviewer Selection Process and New Areas of Expertise in GEMS

    NASA Technical Reports Server (NTRS)

    Liemohn, Michael W.; Balikhin, Michael; Kepko, Larry; Rodger, Alan; Wang, Yuming

    2016-01-01

    One method of selecting potential reviewers for papers submitted to the Journal of Geophysical Research Space Physics is to filter the user database within the Geophysical Electronic Manuscript System (GEMS) by areas of expertise. The list of these areas in GEMS can be self-selected by users in their profile settings. The Editors have added 18 new entries to this list, an increase of 33% over the previous 55 entries. All space physicists are strongly encouraged to update their profile settings in GEMS, especially their areas of expertise selections, and details of how to do this are provided.

  8. Editorial: Reviewer selection process and new areas of expertise in GEMS

    NASA Astrophysics Data System (ADS)

    Liemohn, Michael W.; Balikhin, Michael; Kepko, Larry; Rodger, Alan; Wang, Yuming

    2016-06-01

    One method of selecting potential reviewers for papers submitted to the Journal of Geophysical Research Space Physics is to filter the user database within the Geophysical Electronic Manuscript System (GEMS) by areas of expertise. The list of these areas in GEMS can be self-selected by users in their profile settings. The Editors have added 18 new entries to this list, an increase of 33% over the previous 55 entries. All space physicists are strongly encouraged to update their profile settings in GEMS, especially their areas of expertise selections, and details of how to do this are provided.

  9. Echocardiographic measurements of cardiac dimensions correlate better with body length than with body weight or body surface area.

    PubMed

    Motz, R; Schumacher, M; Nürnberg, J; Viemann, M; Grafmüller, S; Fiedler, K; Claus, M; Kronberg, K

    2014-12-01

    Looking after children means caring for everyone from very small infants to adult-sized adolescents, with weights ranging from 500 g to more than 100 kg and heights ranging from 25 to more than 200 cm. The available echocardiographic reference data were drawn from a small sample, which did not include preterm infants. Most authors have used body weight or body surface area to predict left ventricular dimensions. The current authors had the impression that body length would be a better surrogate parameter than body weight or body surface area. They analyzed their echocardiographic database retrospectively. The analysis included all available data from 6 June 2001 to 15 December 2011 in their echocardiographic database. The authors included in their analysis 12,086 of 26,325 subjects documented as having normal hearts by the examining pediatric cardiologist. For their analysis, they selected body weight, length, age, aortic and pulmonary valve diameter in two-dimensional echocardiography, and left ventricular dimension in M-mode. They found good correlation between echocardiographic dimensions and body surface area, body weight, and body length. The analysis showed a complex relationship between echocardiographic measurements and body weight and body surface area, whereas body length showed a linear relationship, which makes prediction of echo parameters more reliable. According to this retrospective analysis, body length is a better parameter for evaluating echocardiographic measurements than body weight or body surface area and should therefore be used in daily practice.
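
    A minimal sketch of the kind of linear model the authors advocate is shown below (Python, with invented example values rather than the study's 12,086 records): left ventricular end-diastolic dimension is regressed on body length and the fit is used for prediction.

        import numpy as np

        length_cm = np.array([30, 50, 75, 100, 130, 160, 180])   # illustrative body lengths
        lvedd_mm  = np.array([12, 18, 26, 32, 39, 45, 49])       # illustrative LV dimensions

        slope, intercept = np.polyfit(length_cm, lvedd_mm, 1)    # simple straight-line fit
        predicted = slope * 120 + intercept
        print(f"expected LV dimension at 120 cm body length: {predicted:.1f} mm")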

  10. Geochemical baseline studies of soil in Finland

    NASA Astrophysics Data System (ADS)

    Pihlaja, Jouni

    2017-04-01

    Soil element concentrations vary considerably by region in Finland. This is mostly caused by the different bedrock types, which are reflected in soil properties. The Geological Survey of Finland (GTK) is carrying out geochemical baseline studies in Finland. In the previous phase, the research focused on urban areas and mine environments. The information can, for example, be used to determine the need for soil remediation, to assess environmental impacts, or to measure the natural state of soil in industrial areas or mine districts. The field work is done by taking soil samples, typically at a depth of 0-10 cm. Sampling sites are chosen to represent the areas most vulnerable to human exposure from potentially toxic element contents in soil: playgrounds, day-care centers, schools, parks and residential areas. In the mine districts, samples are taken from areas located outside those affected by airborne dust. Element contents of the soil samples are then analyzed with ICP-AES and ICP-MS, and Hg with CV-AAS. The results of the geochemical baseline studies are published in the Finnish national geochemical baseline database (TAPIR). The geochemical baseline map service is free for all users via an internet browser. Through this map service it is possible to calculate regional soil baseline values using geochemical data stored in the map service database. Baseline data for a total of 17 elements are provided in the map service, which can be viewed on GTK's web pages (http://gtkdata.gtk.fi/Tapir/indexEN.html).

  11. [CIRRNET® - learning from errors, a success story].

    PubMed

    Frank, O; Hochreutener, M; Wiederkehr, P; Staender, S

    2012-06-01

    CIRRNET® is the network of local error-reporting systems of the Swiss Patient Safety Foundation. The network has been running since 2006 together with the Swiss Society for Anaesthesiology and Resuscitation (SGAR), and network participants currently include 39 healthcare institutions from all four language regions of Switzerland. Further institutions can join at any time. Local error reports in CIRRNET® are bundled at a supraregional level, categorised in accordance with the WHO classification, and analysed by medical experts. The CIRRNET® database offers a solid pool of data with error reports from a wide range of medical specialty areas and provides the basis for identifying relevant problem areas in patient safety. These problem areas are then processed in cooperation with specialists from widely varied areas of expertise, and recommendations for avoiding these errors are developed by changing care processes (Quick-Alerts®). Having been approved by medical associations and professional medical societies, Quick-Alerts® are widely supported and well accepted in professional circles. The CIRRNET® database also enables any affiliated CIRRNET® participant to access all error reports in the 'closed user area' of the CIRRNET® homepage and to use these error reports for in-house training. A healthcare institution does not have to make every mistake itself - it can learn from the errors of others, compare notes with other healthcare institutions, and use existing knowledge to advance its own patient safety.

  12. OrChem - An open source chemistry search engine for Oracle®

    PubMed Central

    2009-01-01

    Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521

  13. Inequality of obesity and socioeconomic factors in Iran: a systematic review and meta- analyses

    PubMed Central

    Djalalinia, Shirin; Peykari, Niloofar; Qorbani, Mostafa; Larijani, Bagher; Farzadfar, Farshad

    2015-01-01

    Background: Socioeconomic status and demographic factors, such as education, occupation, place of residence, gender, age, and marital status, have been reported to be associated with obesity. We conducted a systematic review to summarize the evidence on associations between socioeconomic factors and obesity/overweight in the Iranian population. Methods: We systematically searched international databases (ISI, PubMed/Medline, Scopus) and national databases (Iran-medex, Irandoc, and the Scientific Information Database (SID)). We refined data on associations between socioeconomic factors and obesity/overweight by sex, age, province, and year. There were no restrictions on time or language. Results: Based on our search strategy we found 151 records; 139 were from international databases and the remaining 12 were obtained from national databases. After removing duplicates, via the refining steps, only 119 articles were found to be related to our study domains. Extracted results covered data on 146,596 persons from the included studies. Increased age, low educational level, being married, urban residence, and female sex were clearly associated with obesity. Conclusion: These results could be useful for better health policy and more planned studies in this field. They could also be used for future complementary analyses. PMID:26793632

  14. Inequality of obesity and socioeconomic factors in Iran: a systematic review and meta- analyses.

    PubMed

    Djalalinia, Shirin; Peykari, Niloofar; Qorbani, Mostafa; Larijani, Bagher; Farzadfar, Farshad

    2015-01-01

    Socioeconomic status and demographic factors, such as education, occupation, place of residence, gender, age, and marital status, have been reported to be associated with obesity. We conducted a systematic review to summarize the evidence on associations between socioeconomic factors and obesity/overweight in the Iranian population. We systematically searched international databases (ISI, PubMed/Medline, Scopus) and national databases (Iran-medex, Irandoc, and the Scientific Information Database (SID)). We refined data on associations between socioeconomic factors and obesity/overweight by sex, age, province, and year. There were no restrictions on time or language. Based on our search strategy we found 151 records; 139 were from international databases and the remaining 12 were obtained from national databases. After removing duplicates, via the refining steps, only 119 articles were found to be related to our study domains. Extracted results covered data on 146,596 persons from the included studies. Increased age, low educational level, being married, urban residence, and female sex were clearly associated with obesity. These results could be useful for better health policy and more planned studies in this field. They could also be used for future complementary analyses.

  15. Advanced SPARQL querying in small molecule databases.

    PubMed

    Galgonek, Jakub; Hurt, Tomáš; Michlíková, Vendula; Onderka, Petr; Schwarz, Jan; Vondrášek, Jiří

    2016-01-01

    In recent years, the Resource Description Framework (RDF) and the SPARQL query language have become more widely used in the area of cheminformatics and bioinformatics databases. These technologies allow better interoperability of various data sources and powerful searching facilities. However, we identified several deficiencies that make usage of such RDF databases restrictive or challenging for common users. We extended a SPARQL engine to be able to use special procedures inside SPARQL queries. This allows the user to work with data that cannot be simply precomputed and thus cannot be directly stored in the database. We designed an algorithm that checks a query against data ontology to identify possible user errors. This greatly improves query debugging. We also introduced an approach to visualize retrieved data in a user-friendly way, based on templates describing visualizations of resource classes. To integrate all of our approaches, we developed a simple web application. Our system was implemented successfully, and we demonstrated its usability on the ChEBI database transformed into RDF form. To demonstrate procedure call functions, we employed compound similarity searching based on OrChem. The application is publicly available at https://bioinfo.uochb.cas.cz/projects/chemRDF.
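
    For orientation, the sketch below shows a plain SPARQL query run over a local RDF export with Python's rdflib; it does not use the authors' procedure-call extension, and the file name is a placeholder.

        from rdflib import Graph

        g = Graph()
        g.parse("chebi_subset.ttl", format="turtle")   # placeholder path to an RDF export

        query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?compound ?label
        WHERE {
            ?compound rdfs:label ?label .
            FILTER regex(?label, "caffeine", "i")
        }
        LIMIT 10
        """
        for row in g.query(query):
            print(row.compound, row.label)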

  16. An annotated database of Arabidopsis mutants of acyl lipid metabolism

    DOE PAGES

    McGlew, Kathleen; Shaw, Vincent; Zhang, Meng; ...

    2014-12-10

    Mutants have played a fundamental role in gene discovery and in understanding the function of genes involved in plant acyl lipid metabolism. The first mutant in Arabidopsis lipid metabolism (fad4) was described in 1985. Since that time, characterization of mutants in more than 280 genes associated with acyl lipid metabolism has been reported. This review provides a brief background and history on identification of mutants in acyl lipid metabolism, an analysis of the distribution of mutants in different areas of acyl lipid metabolism and presents an annotated database (ARALIPmutantDB) of these mutants. The database provides information on the phenotypes of mutants, pathways and enzymes/proteins associated with the mutants, and allows rapid access via hyperlinks to summaries of information about each mutant and to literature that provides information on the lipid composition of the mutants. Mutants for at least 30% of the genes in the database have multiple names, which have been compiled here to reduce ambiguities in searches for information. Furthermore, the database should also provide a tool for exploring the relationships between mutants in acyl lipid-related genes and their lipid phenotypes and point to opportunities for further research.

  17. Database and Related Activities in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murakami, Izumi; Kato, Daiji; Kato, Masatoshi

    2011-05-11

    We have constructed and made available atomic and molecular (AM) numerical databases on collision processes such as electron-impact excitation and ionization, recombination and charge transfer of atoms and molecules relevant for plasma physics, fusion research, astrophysics, applied-science plasma, and other related areas. The retrievable data is freely accessible via the internet. We also work on atomic data evaluation and constructing collisional-radiative models for spectroscopic plasma diagnostics. Recently we have worked on Fe ions and W ions theoretically and experimentally. The atomic data and collisional-radiative models for these ions are examined and applied to laboratory plasmas. A visible M1 transition of W26+ ion is identified at 389.41 nm by EBIT experiments and theoretical calculations. We have small non-retrievable databases in addition to our main database. Recently we evaluated photo-absorption cross sections for 9 atoms and 23 molecules and we present them as a new database. We established a new association "Forum of Atomic and Molecular Data and Their Applications" to exchange information among AM data producers, data providers and data users in Japan and we hope this will help to encourage AM data activities in Japan.

  18. LocSigDB: a database of protein localization signals

    PubMed Central

    Negi, Simarjeet; Pandey, Sanjit; Srinivasan, Satish M.; Mohammed, Akram; Guda, Chittibabu

    2015-01-01

    LocSigDB (http://genome.unmc.edu/LocSigDB/) is a manually curated database of experimental protein localization signals for eight distinct subcellular locations; primarily in a eukaryotic cell with brief coverage of bacterial proteins. Proteins must be localized at their appropriate subcellular compartment to perform their desired function. Mislocalization of proteins to unintended locations is a causative factor for many human diseases; therefore, collection of known sorting signals will help support many important areas of biomedical research. By performing an extensive literature study, we compiled a collection of 533 experimentally determined localization signals, along with the proteins that harbor such signals. Each signal in the LocSigDB is annotated with its localization, source, PubMed references and is linked to the proteins in UniProt database along with the organism information that contain the same amino acid pattern as the given signal. From LocSigDB webserver, users can download the whole database or browse/search for data using an intuitive query interface. To date, LocSigDB is the most comprehensive compendium of protein localization signals for eight distinct subcellular locations. Database URL: http://genome.unmc.edu/LocSigDB/ PMID:25725059
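
    A minimal sketch of how a stored signal can be matched against a protein sequence is given below (Python); the well-known C-terminal KDEL motif for endoplasmic-reticulum retention is used as the example pattern, and the sequence shown is invented.

        import re

        def has_er_retention_signal(sequence: str) -> bool:
            """True if the sequence ends with the KDEL endoplasmic-reticulum retention motif."""
            return re.search(r"KDEL$", sequence.upper()) is not None

        print(has_er_retention_signal("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSKDEL"))   # True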

  19. The CEBAF Element Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on the fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
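
    The sketch below (Python/SQLite, not the actual CED schema) illustrates the general idea of such an introspective, entity-attribute-value layout: new element types and properties are added as rows rather than as columns, so no table alterations are needed; the element and property names are invented.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE element_type  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE property      (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE element       (id INTEGER PRIMARY KEY, name TEXT,
                                        type_id INTEGER REFERENCES element_type(id));
            CREATE TABLE property_value(element_id  INTEGER REFERENCES element(id),
                                        property_id INTEGER REFERENCES property(id),
                                        value TEXT);
        """)
        con.execute("INSERT INTO element_type(name) VALUES ('Quadrupole')")
        con.execute("INSERT INTO property(name) VALUES ('length_m')")        # a new property is just a row
        con.execute("INSERT INTO element(name, type_id) VALUES ('Q1', 1)")   # invented element name
        con.execute("INSERT INTO property_value VALUES (1, 1, '0.3')")
        print(con.execute("""SELECT e.name, p.name, v.value FROM property_value v
                             JOIN element e ON e.id = v.element_id
                             JOIN property p ON p.id = v.property_id""").fetchall())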

  20. Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.

    PubMed

    Zappia, Luke; Phipson, Belinda; Oshlack, Alicia

    2018-06-25

    As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread, the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. In order to better facilitate selection of appropriate analysis tools, we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time.

  1. [The theme of disaster in health care: profile of technical and scientific production in the specialized database on disasters of the Virtual Health Library - VHL].

    PubMed

    Rocha, Vania; Ximenes, Elisa Francioli; Carvalho, Mauren Lopes de; Alpino, Tais de Moura Ariza; Freitas, Carlos Machado de

    2014-09-01

    In the specialized databases of the Virtual Health Library (VHL), the DISASTER database highlights the importance of the theme for the health sector. The scope of this article is to identify the profile of technical and scientific publications in this specialized database. Based on systematic searches and the analysis of results it is possible to determine: the type of publication; the main topics addressed; the most common types of disasters mentioned in published materials; the countries and regions covered; the historic periods with the most publications; and the current trend of publications. When examining the specialized data in detail, it soon becomes clear that the number of major topics is very high, making a specific search process in this database a challenging exercise. On the other hand, it is encouraging that the disaster topic is discussed and assessed in a broad and diversified manner, associated with different aspects of the natural and social sciences. The disaster issue requires interdisciplinary knowledge production to reduce the impacts of disasters and for risk management. In this way, since the health sector is an interdisciplinary area, it can contribute to knowledge production.

  2. Database and Related Activities in Japan

    NASA Astrophysics Data System (ADS)

    Murakami, Izumi; Kato, Daiji; Kato, Masatoshi; Sakaue, Hiroyuki A.; Kato, Takako; Ding, Xiaobin; Morita, Shigeru; Kitajima, Masashi; Koike, Fumihiro; Nakamura, Nobuyuki; Sakamoto, Naoki; Sasaki, Akira; Skobelev, Igor; Tsuchida, Hidetsugu; Ulantsev, Artemiy; Watanabe, Tetsuya; Yamamoto, Norimasa

    2011-05-01

    We have constructed and made available atomic and molecular (AM) numerical databases on collision processes such as electron-impact excitation and ionization, recombination and charge transfer of atoms and molecules relevant for plasma physics, fusion research, astrophysics, applied-science plasma, and other related areas. The retrievable data is freely accessible via the internet. We also work on atomic data evaluation and constructing collisional-radiative models for spectroscopic plasma diagnostics. Recently we have worked on Fe ions and W ions theoretically and experimentally. The atomic data and collisional-radiative models for these ions are examined and applied to laboratory plasmas. A visible M1 transition of W26+ ion is identified at 389.41 nm by EBIT experiments and theoretical calculations. We have small non-retrievable databases in addition to our main database. Recently we evaluated photo-absorption cross sections for 9 atoms and 23 molecules and we present them as a new database. We established a new association "Forum of Atomic and Molecular Data and Their Applications" to exchange information among AM data producers, data providers and data users in Japan and we hope this will help to encourage AM data activities in Japan.

  3. MetReS, an Efficient Database for Genomic Applications.

    PubMed

    Vilaplana, Jordi; Alves, Rui; Solsona, Francesc; Mateo, Jordi; Teixidó, Ivan; Pifarré, Marc

    2018-02-01

    MetReS (Metabolic Reconstruction Server) is a genomic database that is shared between two software applications that address important biological problems. Biblio-MetReS is a data-mining tool that enables the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the processes of interest and their function. The main goal of this work was to identify the areas where the performance of the MetReS database could be improved and to test whether this improvement would scale to larger datasets and more complex types of analysis. The study was started with a relational database, MySQL, which is the current database server used by the applications. We also tested the performance of an alternative data-handling framework, Apache Hadoop, which is currently used for large-scale data processing. We found that this data-handling framework is likely to greatly improve the efficiency of the MetReS applications as the dataset and the processing needs increase by several orders of magnitude, as is expected to happen in the near future.

  4. Improving retrospective characterization of the food environment for a large region in the United States during a historic time period.

    PubMed

    Auchincloss, Amy H; Moore, Kari A B; Moore, Latetia V; Diez Roux, Ana V

    2012-11-01

    Access to healthy foods has received increasing attention due to the growing prevalence of obesity and diet-related health conditions, yet there are major obstacles in characterizing the local food environment. This study developed a method to retrospectively characterize supermarkets for a single historic year, 2005, in 19 counties in 6 states in the USA using a supermarket chain-name list and two business databases. Data preparation, merging, overlaps, the added value of the various approaches, and differences by census tract area-level socio-demographic characteristics are described. Agreement between the two food store databases was modest: 63%. Only 55% of the final list of supermarkets were identified by a single business database using selection criteria that included industry classification codes and sales revenue ≥$2 million. The added value of using a supermarket chain-name list and a second business database was the identification of an additional 14% and 30% of supermarkets, respectively. These methods are particularly useful to retrospectively characterize access to supermarkets during a historic period, when field observations are not feasible and business databases are used. Copyright © 2012 Elsevier Ltd. All rights reserved.
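
    The sketch below (Python/pandas, with invented store records and illustrative column names rather than the vendors' actual file layouts) shows the general flavour of the merge: combine the two business databases, deduplicate, and keep records that either meet the industry-code and sales criteria or match the supermarket chain-name list.

        import pandas as pd

        db1 = pd.DataFrame({"name": ["FoodMart #12", "Quick Stop"], "naics": ["445110", "445120"],
                            "sales_musd": [4.5, 0.3], "address": ["1 Main St", "9 Elm St"]})
        db2 = pd.DataFrame({"name": ["FoodMart #12", "Super Grocer"], "naics": ["445110", "445110"],
                            "sales_musd": [4.7, 2.1], "address": ["1 Main St", "5 Oak Ave"]})
        chain_names = ["FoodMart", "Super Grocer"]   # supermarket chain-name list (illustrative)

        def is_supermarket(row):
            # NAICS 445110 (supermarkets/grocery stores) with sales >= $2M, or a known chain name
            return (row["naics"] == "445110" and row["sales_musd"] >= 2.0) or \
                   any(c.lower() in row["name"].lower() for c in chain_names)

        merged = pd.concat([db1, db2]).drop_duplicates(subset=["name", "address"])
        supermarkets = merged[merged.apply(is_supermarket, axis=1)]
        print(supermarkets[["name", "address"]])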

  5. DamaGIS: a multisource geodatabase for collection of flood-related damage data

    NASA Astrophysics Data System (ADS)

    Saint-Martin, Clotilde; Javelle, Pierre; Vinet, Freddy

    2018-06-01

    Every year in France, recurring flood events result in several million euros of damage, and reducing the heavy consequences of floods has become a high priority. However, actions to reduce the impact of floods are often hindered by the lack of damage data on past flood events. The present paper introduces a new database for collection and assessment of flood-related damage. The DamaGIS database offers an innovative bottom-up approach to gather and identify damage data from multiple sources, including new media. The study area has been defined as the south of France considering the high frequency of floods over the past years. This paper presents the structure and contents of the database. It also presents operating instructions in order to keep collecting damage data within the database. This paper also describes an easily reproducible method to assess the severity of flood damage regardless of the location or date of occurrence. A first analysis of the damage contents is also provided in order to assess data quality and the relevance of the database. According to this analysis, despite its lack of comprehensiveness, the DamaGIS database presents many advantages. Indeed, DamaGIS provides a high accuracy of data as well as simplicity of use. It also has the additional benefit of being accessible in multiple formats and is open access. The DamaGIS database is available at https://doi.org/10.5281/zenodo.1241089.

  6. What have we learned in minimally invasive colorectal surgery from NSQIP and NIS large databases? A systematic review.

    PubMed

    Batista Rodríguez, Gabriela; Balla, Andrea; Corradetti, Santiago; Martinez, Carmen; Hernández, Pilar; Bollo, Jesús; Targarona, Eduard M

    2018-06-01

    "Big data" refers to very large datasets. Such large databases are useful in many areas, including healthcare. The American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) and the National Inpatient Sample (NIS) are big databases that were developed in the USA in order to record surgical outcomes. The aim of the present systematic review is to evaluate the type and clinical impact of the information retrieved through NSQIP and NIS big database articles focused on laparoscopic colorectal surgery. A systematic review was conducted using the Meta-Analysis Of Observational Studies in Epidemiology (MOOSE) guidelines. The search was carried out on the PubMed database and revealed 350 published papers. Outcomes of articles in which laparoscopic colorectal surgery was the primary aim were analyzed. Fifty-five studies, published between 2007 and February 2017, were included. The included articles were categorized in groups according to their main topic: outcomes related to surgical technique comparisons, morbidity and perioperative results, specific disease-related outcomes, sociodemographic disparities, and the impact of academic training. The NSQIP and NIS databases are just the tip of the iceberg for the potential application of big data technology and analysis in MIS. Information obtained through big data is useful and could be considered as external validation in those situations where significant evidence-based medicine exists; those databases also establish benchmarks to measure the quality of patient care. The data retrieved help to inform decision-making and improve healthcare delivery.

  7. The Canadian Connection: Business Online.

    ERIC Educational Resources Information Center

    Merry, Susan; And Others

    1989-01-01

    Provides an overview of the Canadian business environment and online sources of business information. The databases described cover the following areas: directories, financial information, stock quotes, investment reports, industrial and economic information, magazines, newspapers, wire services, biographical information, and government…

  8. Accident Data Use and Geographic Information System (GIS)

    DOT National Transportation Integrated Search

    1998-09-16

    Project Description: The Cheyenne Area Transportation Planning Process (ChATTP) : has developed a PowerPoint presentation demonstrating how to use an existing : accident database with GIS software. The slides are followed by a hands-on : demonstratio...

  9. Goods Movement: Regional Analysis and Database Final Report

    DOT National Transportation Integrated Search

    1993-03-26

    The project reported here was undertaken to create and test methods for synthesizing truck flow patterns in urban areas from partial and fragmentary observations. More specifically, the project sought to develop a way to estimate origin-destination (...

  10. Geomasking sensitive health data and privacy protection: an evaluation using an E911 database.

    PubMed

    Allshouse, William B; Fitch, Molly K; Hampton, Kristen H; Gesink, Dionne C; Doherty, Irene A; Leone, Peter A; Serre, Marc L; Miller, William C

    2010-10-01

    Geomasking is used to provide privacy protection for individual address information while maintaining spatial resolution for mapping purposes. Donut geomasking and other random perturbation geomasking algorithms rely on the assumption of a homogeneously distributed population to calculate displacement distances, leading to possible under-protection of individuals when this condition is not met. Using household data from 2007, we evaluated the performance of donut geomasking in Orange County, North Carolina. We calculated the estimated k-anonymity for every household based on the assumption of uniform household distribution. We then determined the actual k-anonymity by revealing household locations contained in the county E911 database. Census block groups in mixed-use areas with high population distribution heterogeneity were the most likely to have privacy protection below selected criteria. For heterogeneous populations, we suggest tripling the minimum displacement area in the donut to protect privacy with a less than 1% error rate.
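
    A bare-bones version of the underlying perturbation step is sketched below (Python): each point is displaced by a random distance drawn between an inner and outer radius (the 'donut') in a random direction. In practice the radii are sized from the local population density to reach a target k-anonymity, which is exactly where the uniform-population assumption examined above can break down.

        import math, random

        def donut_geomask(lat, lon, r_min_m, r_max_m):
            """Displace a point by a random distance between r_min_m and r_max_m (metres)."""
            theta = random.uniform(0.0, 2.0 * math.pi)
            r = random.uniform(r_min_m, r_max_m)
            dlat = (r * math.cos(theta)) / 111_320.0                               # metres -> deg latitude
            dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
            return lat + dlat, lon + dlon

        print(donut_geomask(36.06, -79.10, 100, 500))   # illustrative coordinates near Orange County, NC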

  11. Geomasking sensitive health data and privacy protection: an evaluation using an E911 database

    PubMed Central

    Allshouse, William B; Fitch, Molly K; Hampton, Kristen H; Gesink, Dionne C; Doherty, Irene A; Leone, Peter A; Serre, Marc L; Miller, William C

    2010-01-01

    Geomasking is used to provide privacy protection for individual address information while maintaining spatial resolution for mapping purposes. Donut geomasking and other random perturbation geomasking algorithms rely on the assumption of a homogeneously distributed population to calculate displacement distances, leading to possible under-protection of individuals when this condition is not met. Using household data from 2007, we evaluated the performance of donut geomasking in Orange County, North Carolina. We calculated the estimated k-anonymity for every household based on the assumption of uniform household distribution. We then determined the actual k-anonymity by revealing household locations contained in the county E911 database. Census block groups in mixed-use areas with high population distribution heterogeneity were the most likely to have privacy protection below selected criteria. For heterogeneous populations, we suggest tripling the minimum displacement area in the donut to protect privacy with a less than 1% error rate. PMID:20953360

  12. Picophytoplankton biomass distribution in the global ocean

    NASA Astrophysics Data System (ADS)

    Buitenhuis, E. T.; Li, W. K. W.; Vaulot, D.; Lomas, M. W.; Landry, M.; Partensky, F.; Karl, D. M.; Ulloa, O.; Campbell, L.; Jacquet, S.; Lantoine, F.; Chavez, F.; Macias, D.; Gosselin, M.; McManus, G. B.

    2012-04-01

    The smallest marine phytoplankton, collectively termed picophytoplankton, have been routinely enumerated by flow cytometry since the late 1980s, during cruises throughout most of the world ocean. We compiled a database of 40 946 data points, with separate abundance entries for Prochlorococcus, Synechococcus and picoeukaryotes. We use average conversion factors for each of the three groups to convert the abundance data to carbon biomass. After gridding with 1° spacing, the database covers 2.4% of the ocean surface area, with the best data coverage in the North Atlantic, the South Pacific and North Indian basins. The average picophytoplankton biomass is 12 ± 22 μg C l-1 or 1.9 g C m-2. We estimate a total global picophytoplankton biomass of 0.53-0.74 Pg C (17-39% Prochlorococcus, 12-15% Synechococcus and 49-69% picoeukaryotes). Future efforts in this area of research should focus on reporting calibrated cell size, and collecting data in undersampled regions.
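
    A toy version of the abundance-to-biomass conversion is sketched below (Python); the per-cell carbon factors are illustrative placeholders, not the values used in the compiled database.

        # Illustrative per-cell carbon contents (fg C per cell)
        FG_C_PER_CELL = {"Prochlorococcus": 60.0, "Synechococcus": 200.0, "picoeukaryotes": 1500.0}

        def biomass_ug_c_per_l(abundance_cells_per_ml):
            """Convert group abundances (cells per mL) to total carbon biomass (ug C per L)."""
            total_fg_per_ml = sum(FG_C_PER_CELL[g] * n for g, n in abundance_cells_per_ml.items())
            return total_fg_per_ml * 1000.0 / 1e9      # fg/mL -> ug/L

        print(biomass_ug_c_per_l({"Prochlorococcus": 1.5e5, "Synechococcus": 1.0e4, "picoeukaryotes": 2.0e3}))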

  13. Geometric processing of digital images of the planets

    NASA Technical Reports Server (NTRS)

    Edwards, Kathleen

    1987-01-01

    New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
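
    For reference, the forward Sinusoidal Equal-Area transform is simple enough to sketch directly (Python); the radius value is a placeholder and would be the radius of the planet being mapped.

        import math

        R = 6371000.0   # placeholder radius in metres (Earth value; substitute the target body's radius)

        def sinusoidal_forward(lat_deg, lon_deg, lon0_deg=0.0):
            """Forward sinusoidal projection: x = R*(lon - lon0)*cos(lat), y = R*lat (angles in radians)."""
            lat = math.radians(lat_deg)
            dlon = math.radians(lon_deg - lon0_deg)
            return R * dlon * math.cos(lat), R * lat

        print(sinusoidal_forward(30.0, 45.0))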

  14. UTILIZATION OF GEOGRAPHIC INFORMATION SYSTEMS TECHNOLOGY IN THE ASSESSMENT OF REGIONAL GROUND-WATER QUALITY.

    USGS Publications Warehouse

    Nebert, Douglas; Anderson, Dean

    1987-01-01

    The U. S. Geological Survey (USGS) in cooperation with the U. S. Environmental Protection Agency Office of Pesticide Programs and several State agencies in Oregon has prepared a digital spatial database at 1:500,000 scale to be used as a basis for evaluating the potential for ground-water contamination by pesticides and other agricultural chemicals. Geographic information system (GIS) software was used to assemble, analyze, and manage spatial and tabular environmental data in support of this project. Physical processes were interpreted relative to published spatial data and an integrated database to support the appraisal of regional ground-water contamination was constructed. Ground-water sampling results were reviewed relative to the environmental factors present in several agricultural areas to develop an empirical knowledge base which could be used to assist in the selection of future sampling or study areas.

  15. Clutter modeling of the Denver Airport and surrounding areas

    NASA Technical Reports Server (NTRS)

    Harrah, Steven D.; Delmore, Victor E.; Onstott, Robert G.

    1991-01-01

    To accurately simulate and evaluate an airborne Doppler radar as a wind shear detection and avoidance sensor, the ground clutter surrounding a typical airport must be quantified. To do this, an imaging airborne Synthetic Aperture Radar (SAR) was employed to investigate and map the normalized radar cross sections (NRCS) of the ground terrain surrounding the Denver Stapleton Airport during November of 1988. Images of the Stapleton ground clutter scene were obtained at a variety of aspect and elevation angles (extending to near-grazing) at both HH and VV polarizations. Presented here, in viewgraph form with commentary, are the method of data collection, the specific observations obtained of the Denver area, a summary of the quantitative analysis performed on the SAR images to date, and the statistical modeling of several of the more interesting stationary targets in the SAR database. Additionally, the accompanying moving target database, containing NRCS and velocity information, is described.

  16. Contribution of the administrative database and the geographical information system to disaster preparedness and regionalization.

    PubMed

    Kuwabara, Kazuaki; Matsuda, Shinya; Fushimi, Kiyohide; Ishikawa, Koichi B; Horiguchi, Hiromasa; Fujimori, Kenji

    2012-01-01

    Public health emergencies like earthquakes and tsunamis underscore the need for an evidence-based approach to disaster preparedness. Using the Japanese administrative database and a geographical information system (GIS), the interruption of hospital-based mechanical ventilation by a hypothetical disaster in three areas of the southeastern mainland (Tokai, Tonankai, and Nankai) was simulated, and the repercussions on ventilator care in the prefectures adjacent to the damaged prefectures were estimated. Using the 2010 database, which included 3,181,847 hospitalized patients in 952 hospitals, the maximum daily ventilator capacity of each hospital was calculated and the number of patients who were administered ventilation on October xx was counted. Using GIS and patient zip codes, the straight-line distances among the damaged hospitals, the hospitals in the prefectures nearest to the damaged prefectures, and the ventilated patients' zip codes were measured. The authors simulated the transfer of ventilated patients to the closest hospitals outside the damaged prefectures. The increase in ventilator operating rates in the three areas was aggregated. One hundred twenty-four and 236 patients were administered ventilation in the damaged hospitals and in the closest hospitals outside the damaged prefectures of Tokai, 92 and 561 of Tonankai, and 35 and 85 of Nankai, respectively. The increases in the ventilator operating rates among prefectures ranged from 1.04- to 26.33-fold in Tokai, 1.03- to 1.74-fold in Tonankai, and 1.00- to 2.67-fold in Nankai. Administrative databases and GIS can contribute to evidence-based disaster preparedness and the determination of appropriate receiving hospitals with available medical resources.
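
    A simplified version of the reassignment logic is sketched below (Python, with invented hospital locations and capacities): each ventilated patient from a damaged prefecture is sent to the closest receiving hospital by straight-line distance, and the resulting ventilator operating rates are reported.

        import math

        def distance_km(p, q):
            """Great-circle (straight-line) distance in km between (lat, lon) pairs."""
            lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
            a = math.sin((lat2 - lat1) / 2) ** 2 + \
                math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371 * math.asin(math.sqrt(a))

        # Receiving hospitals outside the damaged prefectures: [lat, lon, daily capacity, ventilators in use]
        receiving = {"Hospital A": [34.7, 135.5, 40, 25], "Hospital B": [35.0, 135.8, 30, 20]}

        def transfer(patient_locations):
            for p in patient_locations:
                nearest = min(receiving, key=lambda h: distance_km(p, receiving[h][:2]))
                receiving[nearest][3] += 1
            return {h: round(v[3] / v[2], 2) for h, v in receiving.items()}   # operating rates

        print(transfer([(34.6, 135.4), (34.9, 135.9)]))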

  17. GIS model for identifying urban areas vulnerable to noise pollution: case study

    NASA Astrophysics Data System (ADS)

    Bilaşco, Ştefan; Govor, Corina; Roşca, Sanda; Vescan, Iuliu; Filip, Sorin; Fodorean, Ioan

    2017-04-01

    The unprecedented expansion of national car ownership over the last few years has been driven by economic growth and the need for the population and economic agents to reduce travel time in progressively expanding large urban centres. This has led to an increase in the level of road noise and a stronger impact on the quality of the environment. Noise pollution generated by means of transport represents one of the most important types of pollution, with negative effects on the population's health in large urban areas. As a consequence, tolerable limits of sound intensity for the comfort of inhabitants have been determined worldwide, and the generation of sound maps has been made compulsory in order to identify vulnerable zones and to make recommendations on how to decrease the negative impact on humans. In this context, the present study aims at presenting a methodology based on a GIS spatial analysis model for identifying and mapping zones vulnerable to noise pollution. The developed GIS model is based on the analysis of all the components influencing sound propagation, represented as vector databases (points of sound intensity measurements, buildings, land use, transport infrastructure), raster databases (DEM), and numerical databases (wind direction and speed, sound intensity). Secondly, the hourly changes (for representative hours) were analysed to identify the hotspots characterised by the major traffic flows specific to rush hours. The validated results of the model are GIS databases and maps that the local public administration can use as a source of information and in the decision-making process.

  18. Publication trend, resource utilization, and impact of the US National Cancer Database

    PubMed Central

    Su, Chang; Peng, Cuiying; Agbodza, Ena; Bai, Harrison X.; Huang, Yuqian; Karakousis, Giorgos; Zhang, Paul J.; Zhang, Zishu

    2018-01-01

    Background: The utilization and impact of the studies published using the National Cancer Database (NCDB) is currently unclear. In this study, we aim to characterize the published studies and identify relatively unexplored areas for future investigations. Methods: A literature search was performed using PubMed in January 2017 to identify all papers published using NCDB data. Characteristics of the publications were extracted. Citation frequencies were obtained through the Web of Science. Results: Three hundred and two articles written by 230 first authors met the inclusion criteria. The number of publications has grown exponentially since 2013, with 108 articles published in 2016. Articles were published in 86 journals. The majority of the published papers focused on digestive system cancer, while bone and joints, eye and orbit, myeloma, mesothelioma, and Kaposi sarcoma were never studied. Thirteen institutions in the United States were associated with more than 5 publications. The papers have been cited a total of 9858 times since the publication of the first paper in 1992. Frequently appearing keywords congregated into 3 clusters: "demographics," "treatments and survival," and "statistical analysis method." Even though the main focuses of the articles captured an extremely wide range, they can be classified into 2 main categories: survival analysis and characterization. Other focuses include database(s) analysis and/or comparison, and hospital reporting. Conclusion: The surging interest in the use of the NCDB is accompanied by unequal utilization of resources by individuals and institutions. Certain areas were relatively understudied and should be further explored. PMID:29489679

  19. Forest cover of Champaign County, Illinois in 1993

    Treesearch

    Jesus Danilo Chinea; Louis R. Iverson

    1997-01-01

    The forest cover of Champaign County, in east-central Illinois, was mapped from 1993 aerial photography and entered in a geographical information system database. One hundred and six forest patches cover 3,380 ha. These patches have a mean area of 32 ha, a mean perimeter of 4,851 m, a mean perimeter to area ratio of 237, a fractal dimension of 1.59, and a mean nearest...
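
    One common way to obtain a perimeter-area fractal dimension for a set of patches is sketched below (Python, with invented patch values rather than the Champaign County data): regress the log of patch perimeter on the log of patch area and take twice the slope.

        import numpy as np

        area_m2     = np.array([5.0, 12.0, 40.0, 90.0, 300.0]) * 10_000   # illustrative patches (ha -> m2)
        perimeter_m = np.array([1200.0, 2300.0, 5200.0, 9000.0, 26000.0])

        slope, _ = np.polyfit(np.log(area_m2), np.log(perimeter_m), 1)
        print("perimeter-area fractal dimension ~", round(2 * slope, 2))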

  20. Developing Basic and Higher Level Reading Processing Skills: Exploring an Instructional Framework with the Progress in International Reading Literacy Study (PIRLS) 2006 Database

    ERIC Educational Resources Information Center

    Deasy, Michael Joseph

    2012-01-01

    Concern over worldwide literacy rates prompted the United Nations to establish the UN Literacy Decade (2003-2012) with one area of focus being to provide support to schools to develop effective literacy programs (UNESCO, 2005). This study addressed the area of providing support to schools to develop effective literacy programs by exploring the…
