Sample records for state analysis database

  1. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's life cycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
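    The state-and-model relationship the abstract describes can be sketched as a small data structure. This is an illustrative sketch only; the class and field names below are hypothetical, not the actual tool's schema.

```python
# Hypothetical sketch of State Analysis-style artifacts: each state
# variable may have a model describing how it evolves and which other
# states affect it. Names are illustrative, not the JPL tool's schema.
from dataclasses import dataclass, field

@dataclass
class StateVariable:
    name: str                     # e.g. "battery_charge"
    description: str = ""

@dataclass
class Model:
    state: StateVariable          # the state this model describes
    affected_by: list = field(default_factory=list)  # influencing states
    behavior: str = ""            # textual model description / requirement

# Example: battery charge is affected by solar-array output and load power.
charge = StateVariable("battery_charge", "stored energy in the battery")
solar = StateVariable("solar_array_output")
load = StateVariable("load_power")
charge_model = Model(charge, affected_by=[solar, load],
                     behavior="d(charge)/dt = solar_array_output - load_power")
```

    Storing requirements this way, as explicit states linked to the models that govern them, is what lets a database check that operations plans and software requirements stay consistent with the model descriptions.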

  2. The Forest Inventory and Analysis Database Version 4.0: Database Description and Users Manual for Phase 3

    Treesearch

    Christopher W. Woodall; Barbara L. Conkling; Michael C. Amacher; John W. Coulston; Sarah Jovan; Charles H. Perry; Beth Schulz; Gretchen C. Smith; Susan Will-Wolf

    2010-01-01

    Describes the structure of the Forest Inventory and Analysis Database (FIADB) 4.0 for phase 3 indicators. The FIADB structure provides a consistent framework for storing forest health monitoring data across all ownerships for the entire United States. These data are available to the public.

  3. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    ERIC Educational Resources Information Center

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  4. The Status of Statewide Subscription Databases

    ERIC Educational Resources Information Center

    Krueger, Karla S.

    2012-01-01

    This qualitative content analysis presents subscription databases available to school libraries through statewide purchases. The results may help school librarians evaluate grade and subject-area coverage, make comparisons to recommended databases, and note potential suggestions for their states to include in future contracts or for local…

  5. The Animal Genetic Resource Information Network (AnimalGRIN) Database: A Database Design & Implementation Case

    ERIC Educational Resources Information Center

    Irwin, Gretchen; Wessel, Lark; Blackman, Harvey

    2012-01-01

    This case describes a database redesign project for the United States Department of Agriculture's National Animal Germplasm Program (NAGP). The case provides a valuable context for teaching and practicing database analysis, design, and implementation skills, and can be used as the basis for a semester-long team project. The case demonstrates the…

  6. The forest inventory and analysis database description and users manual version 1.0

    Treesearch

    Patrick D. Miles; Gary J. Brand; Carol L. Alerich; Larry F. Bednar; Sharon W. Woudenberg; Joseph F. Glover; Edward N. Ezell

    2001-01-01

    Describes the structure of the Forest Inventory and Analysis Database (FIADB) and provides information on generating estimates of forest statistics from these data. The FIADB structure provides a consistent framework for storing forest inventory data across all ownerships across the entire United States. These data are available to the public.

  7. Monitoring and tracing of critical software systems: State of the work and project definition

    DTIC Science & Technology

    2008-12-01

    analysis, troubleshooting and debugging. Some of these subsystems already come with ad hoc tracers for events like wireless connections or SCSI disk... SQLite). Additional synthetic events (e.g. states) are added to the database. The database thus consists of contexts (process, CPU, state), event...capability on a [operating] system-by-system basis. Additionally, the mechanics of querying the data in an ad hoc manner outside the boundaries of the

  8. USGS national surveys and analysis projects: Preliminary compilation of integrated geological datasets for the United States

    USGS Publications Warehouse

    Nicholson, Suzanne W.; Stoeser, Douglas B.; Wilson, Frederic H.; Dicken, Connie L.; Ludington, Steve

    2007-01-01

    The growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional and national digital geologic maps attributed with age and rock type information. Such spatial data can be conveniently used to generate derivative maps for purposes that include mineral-resource assessment, metallogenic studies, tectonic studies, human health and environmental research. In 1997, the United States Geological Survey’s Mineral Resources Program initiated an effort to develop national digital databases for use in mineral resource and environmental assessments. One primary activity of this effort was to compile a national digital geologic map database, utilizing state geologic maps, to support mineral resource studies in the range of 1:250,000- to 1:1,000,000-scale. Over the course of the past decade, state databases were prepared using a common standard for the database structure, fields, attributes, and data dictionaries. As of late 2006, standardized geological map databases for all conterminous (CONUS) states have been available on-line as USGS Open-File Reports. For Alaska and Hawaii, new state maps are being prepared, and the preliminary work for Alaska is being released as a series of 1:500,000-scale regional compilations. See below for a list of all published databases.

  9. Data-based Organizational Change: The Use of Administrative Data To Improve Child Welfare Programs and Policy.

    ERIC Educational Resources Information Center

    English, Diana J.; Brandford, Carol C.; Coghlan, Laura

    2000-01-01

    Discusses the strengths and weaknesses of administrative databases, issues with their implementation and data analysis, and effective presentation of their data at different levels in child welfare organizations. Focuses on the development and implementation of Washington state's Children's Administration's administrative database, the Case and…

  10. Data mining and visualization of the Alabama accident database

    DOT National Transportation Integrated Search

    2000-08-01

    The Alabama Department of Public Safety has developed and maintains a centralized database that contains traffic accident data collected from crash reports completed by local police officers and state troopers. The Critical Analysis Reporting Environme...

  11. Preliminary Integrated Geologic Map Databases for the United States: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, Rhode Island and Vermont

    USGS Publications Warehouse

    Nicholson, Suzanne W.; Dicken, Connie L.; Horton, John D.; Foose, Michael P.; Mueller, Julia A.L.; Hon, Rudi

    2006-01-01

    The rapid growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional and national scale digital geologic maps that have standardized information about geologic age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. Although two digital geologic maps (Schruben and others, 1994; Reed and Bush, 2004) of the United States currently exist, their scales (1:2,500,000 and 1:5,000,000) are too general for many regional applications. Most states have digital geologic maps at scales of about 1:500,000, but the databases are not comparably structured and, thus, it is difficult to use the digital database for more than one state at a time. This report describes the results, for a seven-state region, of an effort by the U.S. Geological Survey to produce a series of integrated and standardized state geologic map databases that cover the entire United States. In 1997, the United States Geological Survey's Mineral Resources Program initiated the National Surveys and Analysis (NSA) Project to develop national digital databases. One primary activity of this project was to compile a national digital geologic map database, utilizing state geologic maps, to support studies in the range of 1:250,000- to 1:1,000,000-scale. To accomplish this, state databases were prepared using a common standard for the database structure, fields, attribution, and data dictionaries. For Alaska and Hawaii, new state maps are being prepared, and the preliminary work for Alaska is being released as a series of 1:250,000-scale quadrangle reports. This document provides background information and documentation for the integrated geologic map databases of this report. This report is one of a series of such reports releasing preliminary standardized geologic map databases for the United States.
The data products of the project consist of two main parts, the spatial databases and a set of supplemental tables relating to geologic map units. The datasets serve as a data resource to generate a variety of stratigraphic, age, and lithologic maps. This documentation is divided into four main sections: (1) description of the set of data files provided in this report, (2) specifications of the spatial databases, (3) specifications of the supplemental tables, and (4) an appendix containing the data dictionaries used to populate some fields of the spatial database and supplemental tables.

  12. Towards the Truly Predictive 3D Modeling of Recrystallization and Grain Growth in Advanced Technical Alloys

    DTIC Science & Technology

    2010-06-11

    MODELING WITH IMPLEMENTED GBI AND MD DATA (STEADY STATE GB MIGRATION) PAGE 48 5. FORMATION AND ANALYSIS OF GB PROPERTIES DATABASE PAGE 53 5.1...Relative GB energy for specified GBM averaged on possible GBIs PAGE 53 5.2. Database validation on available experimental data PAGE 56 5.3. Comparison...PAGE 70 Fig. 6.11. MC Potts Rex. and GG software: (a) modeling volume analysis; (b) searching for GB energy value within included database. PAGE

  13. State analysis requirements database for engineering complex embedded systems

    NASA Technical Reports Server (NTRS)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

    It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which provides a tool for capturing system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  14. Examination of Trends and Evidence-Based Elements in State Physical Education Legislation: A Content Analysis

    ERIC Educational Resources Information Center

    Eyler, Amy A.; Brownson, Ross C.; Aytur, Semra A.; Cradock, Angie L.; Doescher, Mark; Evenson, Kelly R.; Kerr, Jacqueline; Maddock, Jay; Pluto, Delores L.; Steinman, Lesley; Tompkins, Nancy O'Hara; Troped, Philip; Schmid, Thomas L.

    2010-01-01

    Objectives: To develop a comprehensive inventory of state physical education (PE) legislation, examine trends in bill introduction, and compare bill factors. Methods: State PE legislation from January 2001 to July 2007 was identified using a legislative database. Analysis included components of evidence-based school PE from the Community Guide and…

  15. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  16. Use of large electronic health record databases for environmental epidemiology studies.

    EPA Science Inventory

    Background: Electronic health records (EHRs) are a ubiquitous component of the United States healthcare system and capture nearly all data collected in a clinic or hospital setting. EHR databases are attractive for secondary data analysis as they may contain detailed clinical rec...

  17. Compilation of Abstracts of Theses Submitted by Candidates for Degrees

    DTIC Science & Technology

    1987-09-30

    Parallel, Multiple Backend Database Systems Feudo, C.V. Modern Hardware Technologies 88 MAJ, USA and Software Techniques for Online Database Storage...and its Application in the Wargaming, Research and Analysis (W.A.R.) Lab Waltensberger, G.M. On Limited War, Escalation 524 CPT, USAF Control, and...TECHNIQUES FOR ONLINE DATABASE STORAGE AND ACCESS Christopher V. Feudo Major, United States Army B.S., United States Military Academy, 1972

  18. Tissue Molecular Anatomy Project (TMAP): an expression database for comparative cancer proteomics.

    PubMed

    Medjahed, Djamel; Luke, Brian T; Tontesh, Tawady S; Smythers, Gary W; Munroe, David J; Lemkin, Peter F

    2003-08-01

    By mining publicly accessible databases, we have developed a collection of tissue-specific predictive protein expression maps as a function of cancer histological state. Data analysis is applied to the differential expression of gene products in pooled libraries from the normal to the altered state(s). We wish to report the initial results of our survey across different tissues and explore the extent to which this comparative approach may help uncover panels of potential biomarkers of tumorigenesis which would warrant further examination in the laboratory.
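    The differential-expression comparison this abstract describes, contrasting gene-product levels in pooled libraries from the normal versus altered state, can be sketched as a log-ratio screen. The function name, pseudocount, and fold-change threshold below are illustrative assumptions, not TMAP's actual method.

```python
# Hedged sketch of a normal-vs-tumor differential-expression screen:
# for each gene product, compare pooled-library counts via a log2 ratio
# and keep genes changed by at least min_fold. Illustrative only.
import math

def differential_expression(normal, tumor, min_fold=2.0):
    """Return gene -> log2(tumor/normal) for genes changed >= min_fold."""
    hits = {}
    for gene in set(normal) | set(tumor):
        n = normal.get(gene, 0) + 1   # pseudocount avoids log(0)
        t = tumor.get(gene, 0) + 1
        ratio = math.log2(t / n)
        if abs(ratio) >= math.log2(min_fold):
            hits[gene] = ratio
    return hits
```

    Genes surviving such a screen across tissues are the kind of candidate biomarker panel the survey aims to flag for laboratory follow-up.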

  19. Dutch Treat for U.S. Database Producers.

    ERIC Educational Resources Information Center

    Boumans, Jak

    1984-01-01

    Reports on investments in the United States (including database activities) by four Dutch publishing companies--Elsevier-NDU, VNU, Kluwer, Wolters Samsom Group. An analysis of the reasons behind these investments, the solidness of the companies, the approach to the U.S. information market, and the knowledge transfer to Europe are highlighted. (EJS)

  20. FishTraits: a database of ecological and life-history traits of freshwater fishes of the United States

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2011-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. We have compiled a database of > 100 traits for 809 (731 native and 78 nonnative) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database, named Fish Traits, contains information on four major categories of traits: (1) trophic ecology; (2) body size, reproductive ecology, and life history; (3) habitat preferences; and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status was also compiled. The database enhances many opportunities for conducting research on fish species traits and constitutes the first step toward establishing a central repository for a continually expanding set of traits of North American fishes.

  1. State Student Aid Policies and Independent Higher Education: Their Potential Relevance for California. Report 96-5.

    ERIC Educational Resources Information Center

    Zumeta, William

    This paper provides recent information concerning policies in other states relevant to California's efforts to direct undergraduate students toward in-state private colleges and universities or out-of-state institutions, thus relieving the enrollment burden on the state's publicly supported institutions. The primary database for the analysis is a…

  2. Second-Tier Database for Ecosystem Focus, 2002-2003 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Holmes, Chris; Muongchanh, Christine; Anderson, James J.

    2003-11-01

    The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities. The Second-Tier Database known as Data Access in Realtime (DART) integrates public data for effective access, consideration and application. DART also provides analysis tools and performance measures helpful in evaluating the condition of Columbia Basin salmonid stocks.

  3. Students' Attitudes toward ABI/INFORM on CD-ROM: A Factor Analysis.

    ERIC Educational Resources Information Center

    Wang, Vicky; Lau, Shuk-fong

    Two years after the introduction of CD-ROM bibliographic database searching in the Memphis State University libraries (Tennessee), a survey was conducted to examine students' attitudes toward the business database, ABI/INFORM. ABI/INFORM contains indexes and abstracts of articles from over 800 journals on management, accounting, banking, human…

  4. Retrieving Online Information on Drugs: An Analysis of Four Databases.

    ERIC Educational Resources Information Center

    Lavengood, Kathryn A.

    This study examines the indexing of drugs in the literature and compares actual drug indexing to stated indexing policies in selected databases. The goal is to aid health science information specialists, end-users, and/or non-subject experts to improve recall and comprehensiveness when searching for drug information by identifying the most useful…

  5. EVALIDatorReports: Reporting beyond the FIADB

    Treesearch

    Patrick D. Miles

    2009-01-01

    Tools for analyzing data collected by the U.S. Forest Service's Forest Inventory and Analysis (FIA) program are available in Microsoft Access© format. Databases have been created for every state, except Hawaii, and are available for downloading. EVALIDatorReports is a Visual Basic Application that is stored within each Microsoft Access© database...

  6. Forest Inventory and Analysis Database of the United States of America (FIA)

    Treesearch

    Andrew N. Gray; Thomas J. Brandeis; John D. Shaw; William H. McWilliams; Patrick Miles

    2012-01-01

    Extensive vegetation inventories established with a probabilistic design are an indispensable tool in describing distributions of species and community types and detecting changes in composition in response to climate or other drivers. The Forest Inventory and Analysis Program measures vegetation in permanent plots on forested lands across the United States of America...

  7. A spatial database of wildfires in the United States, 1992-2011

    NASA Astrophysics Data System (ADS)

    Short, K. C.

    2013-07-01

    The statistical analysis of wildfire activity is a critical component of national wildfire planning, operations, and research in the United States (US). However, there are multiple federal, state, and local entities with wildfire protection and reporting responsibilities in the US, and no single, unified system of wildfire record-keeping exists. To conduct even the most rudimentary interagency analyses of wildfire numbers and area burned from the authoritative systems of record, one must harvest records from dozens of disparate databases with inconsistent information content. The onus is then on the user to check for and purge redundant records of the same fire (i.e. multijurisdictional incidents with responses reported by several agencies or departments) after pooling data from different sources. Here we describe our efforts to acquire, standardize, error-check, compile, scrub, and evaluate the completeness of US federal, state, and local wildfire records from 1992-2011 for the national, interagency Fire Program Analysis (FPA) application. The resulting FPA Fire-occurrence Database (FPA FOD) includes nearly 1.6 million records from the 20 yr period, with values for at least the following core data elements: location at least as precise as a Public Land Survey System section (2.6 km2 grid), discovery date, and final fire size. The FPA FOD is publicly available from the Research Data Archive of the US Department of Agriculture, Forest Service (doi:10.2737/RDS-2013-0009). While necessarily incomplete in some aspects, the database is intended to facilitate fairly high-resolution geospatial analysis of US wildfire activity over the past two decades, based on available information from the authoritative systems of record.
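    The redundant-record purge described above, flagging multijurisdictional incidents reported by several agencies after pooling, can be sketched as a keyed deduplication pass. The field names and thresholds below are illustrative assumptions, not the actual FPA FOD schema or scrubbing rules.

```python
# Hedged sketch of purging redundant wildfire records after pooling
# agency databases: records sharing a discovery date, a coarse grid
# cell, and a similar final size are treated as the same fire.
# Field names and tolerances are illustrative, not the FPA FOD's.
def dedupe_fires(records):
    seen = {}
    unique = []
    for r in records:
        # Coarse spatial key: snap lat/lon to ~0.01-degree grid cells.
        key = (r["discovery_date"],
               round(r["lat"] / 0.01), round(r["lon"] / 0.01))
        prior = seen.get(key)
        if prior and abs(prior["size_acres"] - r["size_acres"]) / max(prior["size_acres"], 1) < 0.1:
            # Likely the same fire reported by another agency: drop it.
            continue
        seen[key] = r
        unique.append(r)
    return unique
```

    A real scrub would be fuzzier (neighboring cells, date windows, cause codes), but the core idea is the same: the burden of cross-agency reconciliation falls on whoever pools the data.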

  8. A spatial database of wildfires in the United States, 1992-2011

    NASA Astrophysics Data System (ADS)

    Short, K. C.

    2014-01-01

    The statistical analysis of wildfire activity is a critical component of national wildfire planning, operations, and research in the United States (US). However, there are multiple federal, state, and local entities with wildfire protection and reporting responsibilities in the US, and no single, unified system of wildfire record keeping exists. To conduct even the most rudimentary interagency analyses of wildfire numbers and area burned from the authoritative systems of record, one must harvest records from dozens of disparate databases with inconsistent information content. The onus is then on the user to check for and purge redundant records of the same fire (i.e., multijurisdictional incidents with responses reported by several agencies or departments) after pooling data from different sources. Here we describe our efforts to acquire, standardize, error-check, compile, scrub, and evaluate the completeness of US federal, state, and local wildfire records from 1992-2011 for the national, interagency Fire Program Analysis (FPA) application. The resulting FPA Fire-Occurrence Database (FPA FOD) includes nearly 1.6 million records from the 20 yr period, with values for at least the following core data elements: location, at least as precise as a Public Land Survey System section (2.6 km2 grid), discovery date, and final fire size. The FPA FOD is publicly available from the Research Data Archive of the US Department of Agriculture, Forest Service (doi:10.2737/RDS-2013-0009). While necessarily incomplete in some aspects, the database is intended to facilitate fairly high-resolution geospatial analysis of US wildfire activity over the past two decades, based on available information from the authoritative systems of record.

  9. Geospatial Analysis of Oil and Gas Wells in California

    NASA Astrophysics Data System (ADS)

    Riqueros, N. S.; Kang, M.; Jackson, R. B.

    2015-12-01

    California currently ranks third in oil production by U.S. state and more than 200,000 wells have been drilled in the state. Oil and gas wells provide a potential pathway for subsurface migration, leading to groundwater contamination and emissions of methane and other fluids to the atmosphere. Here we compile available public databases on oil and gas wells from the California Department of Conservation's Division of Oil, Gas, and Geothermal Resources, the U.S. Geological Survey, and other state and federal sources. We perform geospatial analysis at the county and field levels to characterize depths, producing formations, spud/completion/abandonment dates, land cover, population, and land ownership of active, idle, buried, abandoned, and plugged wells in California. The compiled database is designed to serve as a quantitative platform for developing field-based groundwater and air emission monitoring plans.

  10. Hazards of Extreme Weather: Flood Fatalities in Texas

    NASA Astrophysics Data System (ADS)

    Sharif, H. O.; Jackson, T.; Bin-Shafique, S.

    2009-12-01

    The Federal Emergency Management Agency (FEMA) considers flooding “America’s Number One Natural Hazard”. Despite flood management efforts in many communities, U.S. flood damages remain high, due, in large part, to increasing population and property development in flood-prone areas. Floods are the leading cause of fatalities related to natural disasters in Texas, and Texas leads the nation in flash flood fatalities, with roughly three times as many (840) as the next-ranked state, Pennsylvania (265). This study examined flood fatalities that occurred in Texas between 1960 and 2008. Flood fatality statistics were extracted from three sources: flood fatality databases from the National Climatic Data Center, the Spatial Hazard Event and Loss Database for the United States, and the Texas Department of State Health Services. The data collected for flood fatalities include the date, time, gender, age, location, and weather conditions. Inconsistencies among the three databases were identified and discussed. Analysis reveals that most fatalities result from driving into flood water (about 65%). Spatial analysis indicates that more fatalities occurred in counties containing major urban centers. Hydrologic analysis of a flood event that resulted in five fatalities was performed. A hydrologic model was able to simulate the water level at a location where a vehicle was swept away by flood water resulting in the death of the driver.
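    The circumstance breakdown behind findings like "about 65% result from driving into flood water" is a simple tabulation over the merged fatality records. The sketch below uses illustrative record fields, not the actual database schemas.

```python
# Hedged sketch of tabulating fatality circumstances from pooled
# records; the "activity" field is an illustrative stand-in for
# whatever circumstance coding the three source databases use.
from collections import Counter

def fatality_breakdown(records):
    """Return activity -> percentage of all fatalities."""
    counts = Counter(r["activity"] for r in records)
    total = sum(counts.values())
    return {act: 100.0 * n / total for act, n in counts.items()}
```

    The study's harder problem is upstream of this step: reconciling inconsistent records among the three databases before any percentages are computed.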

  11. RPA tree-level database users guide

    Treesearch

    Patrick D. Miles; Scott A. Pugh; Brad Smith; Sonja N. Oswalt

    2014-01-01

    The Forest and Rangeland Renewable Resources Planning Act (RPA) of 1974 calls for a periodic assessment of the Nation's renewable resources. The Forest Inventory and Analysis (FIA) program of the U.S. Forest Service supports the RPA effort by providing information on the forest resources of the United States. The RPA tree-level database (RPAtreeDB) was generated...

  12. High Resolution Soil Water from Regional Databases and Satellite Images

    NASA Technical Reports Server (NTRS)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Coughlin, Joseph; Dungan, Jennifer; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on the ways in which plant growth can be inferred from satellite data and can then be used to infer soil water. There are several steps in this process, the first of which is the acquisition of data from satellite observations and relevant information databases such as the State Soil Geographic Database (STATSGO). Then probabilistic analysis and inversion with the Bayes' theorem reveals sources of uncertainty. The Markov chain Monte Carlo method is also used.
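    The Markov chain Monte Carlo step the abstract mentions can be sketched as a minimal Metropolis sampler: draw posterior samples of a soil-water parameter given a noisy observation. The prior, likelihood, and parameter names below are toy assumptions, not the presentation's actual model.

```python
# Minimal Metropolis MCMC sketch (toy model): infer a soil-water
# parameter theta in [0, 1] from one noisy observation, with a
# uniform prior and Gaussian likelihood. Illustrative only.
import math, random

def log_posterior(theta, obs, sigma=0.1):
    if not 0.0 <= theta <= 1.0:        # uniform prior on [0, 1]
        return float("-inf")
    return -((obs - theta) ** 2) / (2 * sigma ** 2)  # Gaussian likelihood

def metropolis(obs, n_steps=5000, step=0.05, seed=0):
    rng = random.Random(seed)
    theta = 0.5                        # arbitrary starting point
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0, step)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random() + 1e-300) < log_posterior(prop, obs) - log_posterior(theta, obs):
            theta = prop
        samples.append(theta)
    return samples
```

    The spread of the retained samples, not just their mean, is the point: it is how the analysis "reveals sources of uncertainty" in the inferred soil water.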

  13. Development Of New Databases For Tsunami Hazard Analysis In California

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Borrero, J. C.; Bryant, W. A.; Dengler, L. A.; Goltz, J. D.; Legg, M.; McGuire, T.; Miller, K. M.; Real, C. R.; Synolakis, C.; Uslu, B.

    2009-12-01

    The California Geological Survey (CGS) has partnered with other tsunami specialists to produce two statewide databases to facilitate the evaluation of tsunami hazard products for both emergency response and land-use planning and development. A robust, State-run tsunami deposit database is being developed that complements and expands on existing databases from the National Geophysical Data Center (global) and the USGS (Cascadia). Whereas these existing databases focus on references or individual tsunami layers, the new State-maintained database concentrates on the location and contents of individual borings/trenches that sample tsunami deposits. These data provide an important observational benchmark for evaluating the results of tsunami inundation modeling. CGS is collaborating with and sharing the database entry form with other states to encourage its continued development beyond California’s coastline so that historic tsunami deposits can be evaluated on a regional basis. CGS is also developing an internet-based, tsunami source scenario database and forum where tsunami source experts and hydrodynamic modelers can discuss the validity of tsunami sources and their contribution to hazard assessments for California and other coastal areas bordering the Pacific Ocean. The database includes all distant and local tsunami sources relevant to California starting with the forty scenarios evaluated during the creation of the recently completed statewide series of tsunami inundation maps for emergency response planning. Factors germane to probabilistic tsunami hazard analyses (PTHA), such as event histories and recurrence intervals, are also addressed in the database and discussed in the forum. Discussions with other tsunami source experts will help CGS determine what additional scenarios should be considered in PTHA for assessing the feasibility of generating products of value to local land-use planning and development.

  14. Forensic DNA databases in Western Balkan region: retrospectives, perspectives, and initiatives

    PubMed Central

    Marjanović, Damir; Konjhodžić, Rijad; Butorac, Sara Sanela; Drobnič, Katja; Merkaš, Siniša; Lauc, Gordan; Primorac, Damir; Anđelinović, Šimun; Milosavljević, Mladen; Karan, Željko; Vidović, Stojko; Stojković, Oliver; Panić, Bojana; Vučetić Dragović, Anđelka; Kovačević, Sandra; Jakovski, Zlatko; Asplen, Chris; Primorac, Dragan

    2011-01-01

    The European Network of Forensic Science Institutes (ENFSI) recommended the establishment of forensic DNA databases, and of specific legislation for their implementation and management, for all EU/ENFSI members. Therefore, forensic institutions from Bosnia and Herzegovina, Serbia, Montenegro, and Macedonia launched a wide set of activities to support these recommendations. To assess the current state, a regional expert team completed detailed screening and investigation of the existing forensic DNA data repositories and associated legislation in these countries. The scope also included relevant concurrent projects and a wide spectrum of different activities related to forensic DNA use. The state of forensic DNA analysis was also determined in neighboring Slovenia and Croatia, which already have functional national DNA databases. There is a need for a ‘regional supplement’ to the current documentation and standards pertaining to forensic application of DNA databases, which should include region-specific preliminary aims and recommendations. PMID:21674821

  15. Forensic DNA databases in Western Balkan region: retrospectives, perspectives, and initiatives.

    PubMed

    Marjanović, Damir; Konjhodzić, Rijad; Butorac, Sara Sanela; Drobnic, Katja; Merkas, Sinisa; Lauc, Gordan; Primorac, Damir; Andjelinović, Simun; Milosavljević, Mladen; Karan, Zeljko; Vidović, Stojko; Stojković, Oliver; Panić, Bojana; Vucetić Dragović, Andjelka; Kovacević, Sandra; Jakovski, Zlatko; Asplen, Chris; Primorac, Dragan

    2011-06-01

    The European Network of Forensic Science Institutes (ENFSI) recommended the establishment of forensic DNA databases, and of specific legislation for their implementation and management, for all EU/ENFSI members. Therefore, forensic institutions from Bosnia and Herzegovina, Serbia, Montenegro, and Macedonia launched a wide set of activities to support these recommendations. To assess the current state, a regional expert team completed detailed screening and investigation of the existing forensic DNA data repositories and associated legislation in these countries. The scope also included relevant concurrent projects and a wide spectrum of different activities related to forensic DNA use. The state of forensic DNA analysis was also determined in neighboring Slovenia and Croatia, which already have functional national DNA databases. There is a need for a 'regional supplement' to the current documentation and standards pertaining to forensic application of DNA databases, which should include region-specific preliminary aims and recommendations.

  16. Building a QC Database of Meteorological Data from NASA KSC and the United States Air Force's Eastern Range

    NASA Technical Reports Server (NTRS)

    Brenton, J. C.; Barbre, R. E.; Decker, R. K.; Orcutt, J. M.

    2018-01-01

    The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) provides atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large datasets is ensuring erroneous data are removed from databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases that have inconsistencies in variables, development methodologies, and periods of record. The goal of this activity is to use the previous efforts to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC procedures will be described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
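    The kind of standardized QC procedure described above can be illustrated with a minimal sketch: a climatological range check that flags out-of-bounds tower measurements. The variable names and thresholds below are illustrative assumptions, not EV44's actual limits or record layout:

```python
# Hypothetical sketch of a standardized QC range check for tower records.
# Field names and threshold values are illustrative assumptions only.

RANGE_LIMITS = {
    "wind_speed_ms": (0.0, 75.0),    # assumed plausible near-surface bounds
    "temperature_c": (-20.0, 50.0),  # assumed plausible coastal bounds
}

def qc_range_flags(record):
    """Return a dict mapping each QC'd variable to 'ok' or 'range_fail'.

    A missing value is flagged as 'range_fail' so it is reviewed rather
    than silently passed into design analyses.
    """
    flags = {}
    for var, (lo, hi) in RANGE_LIMITS.items():
        value = record.get(var)
        if value is None or not (lo <= value <= hi):
            flags[var] = "range_fail"
        else:
            flags[var] = "ok"
    return flags

sample = {"wind_speed_ms": 12.3, "temperature_c": 81.0}
print(qc_range_flags(sample))
```

    In practice a range check like this would be only the first pass; consistency, persistence, and spike checks could follow the same flag-and-review pattern.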

  17. A spatial database of wildfires in the United States, 1992-2011

    Treesearch

    K. C. Short

    2014-01-01

    The statistical analysis of wildfire activity is a critical component of national wildfire planning, operations, and research in the United States (US). However, there are multiple federal, state, and local entities with wildfire protection and reporting responsibilities in the US, and no single, unified system of wildfire record keeping exists. To conduct even the...

  18. A spatial database of wildfires in the United States, 1992-2011 [Discussions]

    Treesearch

    K. C. Short

    2013-01-01

    The statistical analysis of wildfire activity is a critical component of national wildfire planning, operations, and research in the United States (US). However, there are multiple federal, state, and local entities with wildfire protection and reporting responsibilities in the US, and no single, unified system of wildfire record-keeping exists. To conduct even the...

  19. The Development of Vocational Vehicle Drive Cycles and Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Adam W.; Phillips, Caleb T.; Konan, Arnaud M.

    Under a collaborative interagency agreement between the U.S. Environmental Protection Agency and the U.S. Department of Energy (DOE), the National Renewable Energy Laboratory (NREL) performed a series of in-depth analyses to characterize the on-road driving behavior, including distributions of vehicle speed, idle time, accelerations and decelerations, and other driving metrics, of medium- and heavy-duty vocational vehicles operating within the United States. As part of this effort, NREL researchers segmented U.S. medium- and heavy-duty vocational vehicle driving characteristics into three distinct operating groups or clusters using real-world drive cycle data collected at 1 Hz and stored in NREL's Fleet DNA database. The Fleet DNA database contains millions of miles of historical real-world drive cycle data captured from medium- and heavy-duty vehicles operating across the United States. The data encompass existing DOE activities as well as contributions from valued industry stakeholder participants. For this project, data captured from 913 unique vehicles comprising 16,250 days of operation were drawn from the Fleet DNA database and examined. The Fleet DNA data used as a source for this analysis were collected from a total of 30 unique fleets/data providers operating across 22 unique geographic locations spread across the United States. This includes locations with topology ranging from the foothills of Denver, Colorado, to the flats of Miami, Florida. The range of fleets, geographic locations, and total number of vehicles analyzed ensures results that include the influence of these factors. While no analysis will be perfect without unlimited resources and data, it is the researchers' understanding that the Fleet DNA database is the largest and most thorough publicly accessible vocational vehicle usage database currently in operation.
    This report includes an introduction to the Fleet DNA database and the data contained within, a presentation of the results of the statistical analysis performed by NREL, a review of the logistic model developed to predict cluster membership, and a discussion and detailed summary of the development of the vocational drive cycle weights and representative transient drive cycles for testing and simulation. Additional discussion of known limitations and potential future work is also included in the report.
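    The clustering step described above can be sketched with a plain k-means loop over two simple drive-cycle metrics. This is not NREL's actual method, and the data are not Fleet DNA data; the metrics (average speed, idle fraction), the synthetic values, and the deterministic seeding are assumptions for illustration:

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means on 2-D points, with deterministic seeding:
    centroids start at evenly spaced points of the sorted data."""
    pts = sorted(points)
    centroids = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return centroids, labels

# Synthetic per-vehicle-day metrics (average speed in mph, idle fraction);
# illustrative values only.
cycles = [(12, 0.55), (14, 0.50), (35, 0.20), (38, 0.18), (60, 0.05), (58, 0.07)]
centroids, labels = kmeans(cycles, k=3)
print(labels)  # three operating groups: low-speed/high-idle, medium, highway
```

    Real segmentation work would use many more metrics per vehicle-day and a principled choice of cluster count; the loop above only shows the mechanics of grouping drive cycles by usage pattern.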

  20. The Design and Analysis of a Network Interface for the Multi-Lingual Database System.

    DTIC Science & Technology

    1985-12-01

    IDENTIFICATION NUMBER; ORGANIZATION (If applicable); ADDRESS (City, State, and ZIP Code); SOURCE OF FUNDING NUMBERS: PROGRAM, PROJECT, TASK, WORK UNIT... APPENDIX - THE KMS PROGRAM SPECIFICATIONS ... 94; LIST OF REFERENCES ... 124; LIST OF FIGURES; Figure 1: The Multi-Lingual Database... backend Database System (MBDS). In this section, we provide an overview of both the MLDS and the MBDS to enhance the reader's understanding of the

  1. A Meta-Analysis of the Efficacy of Behavioral Interventions to Reduce Risky Sexual Behavior and Decrease Sexually Transmitted Infections in Latinas Living in the United States

    ERIC Educational Resources Information Center

    Althoff, Meghan D.; Grayson, Cary T.; Witt, Lucy; Holden, Julie; Reid, Daniel; Kissinger, Patricia

    2015-01-01

    The objective of this meta-analysis was to determine the effect of behavioral interventions in reducing risky sexual behavior and incident sexually transmitted infections (STI) among Latina women living in the United States. Studies were found by systematically searching the MEDLINE, EMBASE, and PsychInfo databases without language restriction.…

  2. Analysis of DIRAC's behavior using model checking with process algebra

    NASA Astrophysics Data System (ADS)

    Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Graciani Diaz, Ricardo; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof

    2012-12-01

    DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple; the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally, or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism present. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike conventional testing, it allows full control over the execution of the parallel processes, and supports exhaustive state-space exploration. We used the mCRL2 language and toolset to model the behavior of two related DIRAC subsystems: the workload and storage management systems. Based on process algebra, mCRL2 allows defining custom data types as well as functions over them. This makes it suitable for modeling the data manipulations made by DIRAC's agents. By visualizing the state space and replaying scenarios with the toolkit's simulator, we have detected race conditions and deadlocks in these systems, which, in several cases, were confirmed to occur in reality. Several properties of interest were formulated and verified with the tool. Our future direction is automating the translation from DIRAC to a formal model.
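    The core idea here, exhaustive exploration of every interleaving of agents that share state, can be sketched in a few lines independently of mCRL2. The toy model below (two agents racing on an unsynchronized read-increment-write of a shared counter) is an illustrative assumption, not the DIRAC model itself:

```python
def explore():
    """Exhaustively explore all interleavings of two agents that each perform
    an unsynchronized read-increment-write on a shared counter.

    A state is the tuple (shared, pc1, local1, pc2, local2), where pc is each
    agent's program counter: 0 = about to read, 1 = about to write, 2 = done.
    Returns the set of shared-counter values reachable at termination.
    """
    init = (0, 0, None, 0, None)
    frontier = [init]
    seen = {init}
    final_counter_values = set()
    while frontier:
        shared, pc1, l1, pc2, l2 = frontier.pop()
        succs = []
        if pc1 == 0:                  # agent 1 reads shared into its local
            succs.append((shared, 1, shared, pc2, l2))
        elif pc1 == 1:                # agent 1 writes local + 1 back
            succs.append((l1 + 1, 2, l1, pc2, l2))
        if pc2 == 0:                  # agent 2 reads
            succs.append((shared, pc1, l1, 1, shared))
        elif pc2 == 1:                # agent 2 writes
            succs.append((l2 + 1, pc1, l1, 2, l2))
        if not succs:                 # both agents finished: record the outcome
            final_counter_values.add(shared)
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return final_counter_values

print(explore())  # {1, 2}: the lost-update race is reachable
```

    Exploring every interleaving shows that both final values are reachable, exactly the kind of inconsistent outcome that testing a few schedules can miss; dedicated tools like mCRL2 apply the same enumeration, with far better state-space reduction, to much larger models.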

  3. Drinking Water Database

    NASA Technical Reports Server (NTRS)

    Murray, ShaTerea R.

    2004-01-01

    This summer I had the opportunity to work in the Environmental Management Office (EMO) under the Chemical Sampling and Analysis Team, or CS&AT. This team's mission is to support Glenn Research Center (GRC) and EMO by providing chemical sampling and analysis services and expert consulting. Services include sampling and chemical analysis of water, soil, fuels, oils, paint, insulation materials, etc. One of this team's major projects is the Drinking Water Project, which covers Glenn's water coolers and ten percent of its sinks every two years. For the past two summers an intern had been putting together a database for this team to record the tests they had performed. She had successfully created a database but hadn't worked out all the quirks. So this summer William Wilder (an intern from Cleveland State University) and I worked together to perfect her database. We began by finding out exactly what every member of the team thought about the database and what, if anything, they would change. After collecting this data we both had to take some courses in Microsoft Access in order to fix the problems. Next we looked at exactly how the database worked from the outside inward. Then we began trying to change the database, but we quickly found out that this would be virtually impossible.

  4. National health care providers' database (NHCPD) of Slovenia--information technology solution for health care planning and management.

    PubMed

    Albreht, T; Paulin, M

    1999-01-01

    The article describes the possibilities for planning of the health care providers' network enabled by the use of information technology. The cornerstone of such planning is the development and establishment of a quality database on health care providers, health care professionals, and their employment statuses. Based on the analysis of information needs, a new database was developed for various users in health care delivery as well as for those in health insurance. The method of information engineering was used in the standard four steps of information system construction, while the whole project was run in accordance with the principles of two internationally approved project management methods. Special attention was dedicated to a careful analysis of the users' requirements, and we believe the latter to have been fulfilled to a very large degree. The new NHCPD is a relational database which is set up in two important state institutions, the National Institute of Public Health and the Health Insurance Institute of Slovenia. The former is responsible for updating the database, while the latter is responsible for the technological side as well as for the implementation of data security and protection. NHCPD will be interlinked with several other existing applications in the area of health care, public health, and health insurance. Several important state institutions and professional chambers are users of the database in question, thus integrating various aspects of the health care system in Slovenia. The setting up of a completely revised health care providers' database in Slovenia is an important step in the development of a uniform and integrated information system that would support top decision-making processes at the national level.

  5. Building a Quality Controlled Database of Meteorological Data from NASA Kennedy Space Center and the United States Air Force's Eastern Range

    NASA Technical Reports Server (NTRS)

    Brenton, James C.; Barbre, Robert E., Jr.; Decker, Ryan K.; Orcutt, John M.

    2018-01-01

    The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large sets of data is ensuring erroneous data are removed from databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC procedures will be described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.

  6. [Bibliometric analysis of Revista Médica del IMSS in the Scopus database for the period between 2005-2013].

    PubMed

    García-Gómez, Francisco; Ramírez-Méndez, Fernando

    2015-01-01

    To analyze the number of articles of Revista Médica del Instituto Mexicano del Seguro Social (Rev Med Inst Mex Seguro Soc) in the Scopus database and describe the principal quantitative bibliometric indicators of its scientific publications from 2005 to 2013. The Scopus database was used, limited to the period 2005 to 2013. The analysis mainly covered articles published under the title Revista Médica del Instituto Mexicano del Seguro Social and its possible variants. Scopus, Excel, and Access were used for the analysis. 864 articles published during the period 2005 to 2013 were found in the Scopus database. We identified the authors with the highest number of contributions, the articles with the highest citation rates, and the forms of documents cited. We also classified articles by subject, document type, and other bibliometric indicators that characterize the publications. The use of Scopus makes it possible to analyze, with an external tool, the visibility of the scientific production published in the Revista Médica del IMSS. The use of this database also contributes to identifying the state of science in México, as well as in other developing countries.

  7. CADDIS Volume 5. Causal Databases: Home page

    EPA Pesticide Factsheets

    The Causal Analysis/Diagnosis Decision Information System, or CADDIS, is a website developed to help scientists and engineers in the Regions, States, and Tribes conduct causal assessments in aquatic systems.

  8. 75 FR 78685 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-16

    ... be effective without further notice on January 18, 2011 unless comments are received which result in... name: Economic and Manpower Analysis (OEMA) Database (October 1, 2008, 73 FR 57082). * * * * * Changes... Economic and Manpower Analysis (OEMA), 607 Cullum Road, United States Military Academy, West Point, NY...

  9. The Monitoring Erosion of Agricultural Land and spatial database of erosion events

    NASA Astrophysics Data System (ADS)

    Kapicka, Jiri; Zizala, Daniel

    2013-04-01

    The Monitoring of Erosion of Agricultural Land programme originated in the Czech Republic in 2011 as a joint project of the State Land Office (SLO) and the Research Institute for Soil and Water Conservation (RISWC). The aim of the project is to collect and keep records of information about erosion events on agricultural land and to evaluate them. The main idea is the creation of a spatial database that will be a source of data and information for evaluating and modeling the erosion process, for proposing preventive measures, and for measures to reduce the negative impacts of erosion events. The subject of monitoring is the manifestations of water erosion, wind erosion, and slope deformation that damage agricultural land. A website, available at http://me.vumop.cz, is used as a tool for keeping and browsing information about monitored events. SLO employees carry out the record keeping. RISWC is the specialist institute in the project that maintains the spatial database, runs the website, manages the record keeping of events, analyzes the causes of events, and performs statistical evaluations of recorded events and proposed measures. Records are inserted into the database using the user interface of the website, which has a map server as a component. The website is based on the PostgreSQL database technology with the PostGIS extension and MapServer UMN. Each record is spatially localized in the database by a drawing and contains descriptive information about the character of the event (date, situation description, etc.), as well as information about land cover and the crops grown. Part of the database is photo documentation taken during field reconnaissance, which is performed within two days of notification of an event. Another part of the database is precipitation information from accessible precipitation gauges. The website allows simple spatial analyses such as area calculation, slope calculation, and percentage representation of GAEC.
    The database structure was designed on the basis of an analysis of the input needs of mathematical models. Mathematical models are used for detailed analysis of chosen erosion events, including soil analysis. By the end of 2012 the database contained 135 events. The content of the database continues to grow, giving rise to an extensive source of data usable for testing mathematical models.
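    The "simple spatial analyses" mentioned above, such as area calculation, reduce to standard computational-geometry formulas. Here is a minimal sketch of planar polygon area via the shoelace formula, assuming projected (metric) coordinates; the sample plot is made up:

```python
def polygon_area(vertices):
    """Planar polygon area via the shoelace formula.

    `vertices` is a list of (x, y) pairs in projected (metric) coordinates,
    given in order around the ring, without repeating the first vertex.
    """
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the ring
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# A hypothetical 100 m x 50 m rectangular field plot -> 5000 m^2.
print(polygon_area([(0, 0), (100, 0), (100, 50), (0, 50)]))
```

    In the PostGIS setup described above, the same computation would normally be done server-side (for example with ST_Area on the event geometry); the pure-Python version only illustrates the arithmetic.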

  10. Statewide health information: a tool for improving hospital accountability.

    PubMed

    Epstein, M H; Kurtzig, B S

    1994-07-01

    By early 1994, 38 states had invested in data collection, analysis, and dissemination on the use, cost, effectiveness, and performance of hospitals. States use these data to control costs, encourage prudent purchasing, monitor effectiveness and outcomes of health care, guide health policy, and promote informed decision making. Experience in several states suggests that public release of hospital-specific data influences hospital performance. The value of state data organizations' databases to address issues of quality and accountability can be strengthened by ensuring the stability and growth of statewide health information systems, supporting research on information dissemination techniques, and promoting comparisons among hospitals. Information to measure provider performance must be placed in the public domain--to help ensure prudent and cost-effective health care purchasing and to give providers comparable information for improvement of care. State-level health databases are an essential component of the information infrastructure needed to support health reform.

  11. Analyzing Current Serials in Virginia: An Application of the Ulrich's Serials Analysis System

    ERIC Educational Resources Information Center

    Metz, Paul; Gasser, Sharon

    2006-01-01

    VIVA (the Virtual Library of Virginia) was one of the first subscribers to R. R. Bowker's Ulrich's Serials Analysis System (USAS). Creating a database that combined a union report of current serial subscriptions within most academic libraries in the state with the data elements present in Ulrich's made possible a comprehensive analysis designed…

  12. CADDIS Volume 5. Causal Databases: Interactive Conceptual Diagrams (ICDs) User Guide

    EPA Pesticide Factsheets

    The Causal Analysis/Diagnosis Decision Information System, or CADDIS, is a website developed to help scientists and engineers in the Regions, States, and Tribes conduct causal assessments in aquatic systems.

  13. The National State Policy Database. Quick Turn Around (QTA).

    ERIC Educational Resources Information Center

    Ahearn, Eileen; Jackson, Terry

    This paper describes the National State Policy Database (NSPD), a full-text searchable database of state and federal education regulations for special education. It summarizes the history of the NSPD and reports on a survey of state directors or their designees as to their use of the database and their suggestions for its future expansion. The…

  14. Developing a Non-Formal Education and Literacy Database in the Asia-Pacific Region. Final Report of the Expert Group Consultation Meeting (Dhaka, Bangladesh, December 15-18, 1997).

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific, and Cultural Organization, Bangkok (Thailand). Regional Office for Education in Asia and the Pacific.

    The objectives of the Expert Group Consultation Meeting for Developing a Non-Formal Education and Literacy Database in the Asia-Pacific Region were: to exchange information and review the state-of-the-art in the field of data collection, analysis and indicators of non-formal education and literacy programs; to examine and review the set of…

  15. Strategic Studies Quarterly. Volume 7, Number 4. Winter 2013

    DTIC Science & Technology

    2013-01-01

    databases to bridge the man-machine interface, thereby making both machines and man more capable of complex thought, independent assessment, and...Edward, “China Steps up Effort to Diversify FX Reserves,” Reuters, 13 January 2013, http://www.reuters.com/article/2013/01/14/us-china-forex ...of attacks in Israel, Russia, and the United States from 1989 to 2008 (see fig. 2). The analysis combines data from the Global Terrorism Database

  16. Analysis of the NMI01 marker for a population database of cannabis seeds.

    PubMed

    Shirley, Nicholas; Allgeier, Lindsay; Lanier, Tommy; Coyle, Heather Miller

    2013-01-01

    We have analyzed the distribution of genotypes at a single hexanucleotide short tandem repeat (STR) locus in a Cannabis sativa seed database along with seed-packaging information. This STR locus is defined by the polymerase chain reaction amplification primers CS1F and CS1R and is referred to as NMI01 (for National Marijuana Initiative) in our study. The population database consists of seed seizures of two categories: seed samples from packages labeled and unlabeled with regard to seed bank source. In a population database of 93 processed seeds, including 12 labeled Cannabis varieties, the observed genotypes generated from single seeds exhibited between one and three peaks (potentially six alleles if in homozygous state). The total number of observed genotypes was 54, making this marker highly specific and highly individualizing even among seeds of common lineage. Cluster analysis associated many, but not all, of the handwritten labeled seed varieties tested to date, as well as the National Park seizure, with our known reference database containing Mr. Nice Seedbank and Sensi Seeds commercially packaged reference samples. © 2012 American Academy of Forensic Sciences.

  17. A methodology and decision support tool for informing state-level bioenergy policymaking: New Jersey biofuels as a case study

    NASA Astrophysics Data System (ADS)

    Brennan-Tonetta, Margaret

    This dissertation seeks to provide key information and a decision support tool that states can use to support long-term goals of fossil fuel displacement and greenhouse gas reductions. The research yields three outcomes: (1) A methodology that allows for a comprehensive and consistent inventory and assessment of bioenergy feedstocks in terms of type, quantity, and energy potential. Development of a standardized methodology for consistent inventorying of biomass resources fosters research and business development of promising technologies that are compatible with the state's biomass resource base. (2) A unique interactive decision support tool that allows for systematic bioenergy analysis and evaluation of policy alternatives through the generation of biomass inventory and energy potential data for a wide variety of feedstocks and applicable technologies, using New Jersey as a case study. Development of a database that can assess the major components of a bioenergy system in one tool allows for easy evaluation of technology, feedstock and policy options. The methodology and decision support tool is applicable to other states and regions (with location specific modifications), thus contributing to the achievement of state and federal goals of renewable energy utilization. (3) Development of policy recommendations based on the results of the decision support tool that will help to guide New Jersey into a sustainable renewable energy future. The database developed in this research represents the first ever assessment of bioenergy potential for New Jersey. It can serve as a foundation for future research and modifications that could increase its power as a more robust policy analysis tool. As such, the current database is not able to perform analysis of tradeoffs across broad policy objectives such as economic development vs. CO2 emissions, or energy independence vs. source reduction of solid waste. 
    Instead, it operates one level below that, with comparisons of kWh or GGE generated by different feedstock/technology combinations at the state and county level. Modification of the model to incorporate factors that will enable the analysis of broader energy policy issues such as those mentioned above is recommended for future research efforts.

  18. Building a QC Database of Meteorological Data From NASA KSC and the United States Air Force's Eastern Range

    NASA Technical Reports Server (NTRS)

    Brenton, James C.; Barbre, Robert E.; Orcutt, John M.; Decker, Ryan K.

    2018-01-01

    The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER is one of the most heavily instrumented sites in the United States, measuring various atmospheric parameters on a continuous basis. An inherent challenge with the large databases that EV44 receives from the ER is ensuring erroneous data are removed from the databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build flags within the meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC checks are described. The flagged data points will be plotted in a graphical user interface (GUI) as part of a manual confirmation that the flagged data do indeed need to be removed from the archive. As the rate of launches increases with additional launch vehicle programs, more emphasis is being placed on continually updating and checking weather databases for data quality before use in launch vehicle design and certification analyses.

  19. CADDIS Volume 5. Causal Databases: Interactive Conceptual Diagrams (ICDs) Quick Start Instructions

    EPA Pesticide Factsheets

    The Causal Analysis/Diagnosis Decision Information System, or CADDIS, is a website developed to help scientists and engineers in the Regions, States, and Tribes conduct causal assessments in aquatic systems.

  20. Optimal Tree Increment Models for the Northeastern United States

    Treesearch

    Don C. Bragg

    2005-01-01

    I used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...

  1. Empowering Accountability for Vocational-Technical Education: The Analysis and Use of Wage Records.

    ERIC Educational Resources Information Center

    Jarosik, Daniel; Phelps, L. Allen

Since 1988, state governments have been required to collect gross earnings quarterly from private sector employers, by Social Security number, industry of employment, and county of employment. A study was conducted of 13 states' efforts to use this wage record database as a tool for improving educational accountability and assessing the impact of…

  2. [Conceptual foundations of creation of branch database of technology and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions and enterprises of healthcare sphere of Ukraine].

    PubMed

    Horban', A Ie

    2013-09-01

This paper considers the implementation of state policy in the field of technology transfer in the medical branch under the Law of Ukraine of 02.10.2012 No. 5407-VI, "On Amendments to the Law of Ukraine 'On State Regulation of Activity in the Field of Technology Transfer,'" namely ensuring the formation of a branch database of technologies and intellectual property rights owned by scientific institutions, organizations, higher medical educational institutions, and enterprises of the healthcare sphere of Ukraine established at budget expense. International and domestic experience in processing information about intellectual property rights and in implementing systems that support the transfer of new technologies is analyzed. The main conceptual principles for creating this branch technology transfer database and the branch technology transfer network are defined.

  3. 9 CFR 55.25 - Animal identification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CWD National Database or in an approved State database. The second animal identification must be... CWD National Database or in an approved State database. The means of animal identification must be...

  4. 9 CFR 55.25 - Animal identification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CWD National Database or in an approved State database. The second animal identification must be... CWD National Database or in an approved State database. The means of animal identification must be...

  5. Second-Tier Database for Ecosystem Focus, 2003-2004 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    University of Washington, Columbia Basin Research, DART Project Staff,

    2004-12-01

The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities essential to sound operational and resource management. The database also assists with juvenile and adult mainstem passage modeling supporting federal decisions affecting the operation of the FCRPS. The Second-Tier Database known as Data Access in Real Time (DART) integrates public data for effective access, consideration and application. DART also provides analysis tools and performance measures for evaluating the condition of Columbia Basin salmonid stocks. These services are critical to BPA's implementation of its fish and wildlife responsibilities under the Endangered Species Act (ESA).

  6. The Cardiac Safety Research Consortium ECG database.

    PubMed

    Kligfield, Paul; Green, Cynthia L

    2012-01-01

    The Cardiac Safety Research Consortium (CSRC) ECG database was initiated to foster research using anonymized, XML-formatted, digitized ECGs with corresponding descriptive variables from placebo- and positive-control arms of thorough QT studies submitted to the US Food and Drug Administration (FDA) by pharmaceutical sponsors. The database can be expanded to other data that are submitted directly to CSRC from other sources, and currently includes digitized ECGs from patients with genotyped varieties of congenital long-QT syndrome; this congenital long-QT database is also linked to ambulatory electrocardiograms stored in the Telemetric and Holter ECG Warehouse (THEW). Thorough QT data sets are available from CSRC for unblinded development of algorithms for analysis of repolarization and for blinded comparative testing of algorithms developed for the identification of moxifloxacin, as used as a positive control in thorough QT studies. Policies and procedures for access to these data sets are available from CSRC, which has developed tools for statistical analysis of blinded new algorithm performance. A recently approved CSRC project will create a data set for blinded analysis of automated ECG interval measurements, whose initial focus will include comparison of four of the major manufacturers of automated electrocardiographs in the United States. CSRC welcomes application for use of the ECG database for clinical investigation. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. The Molecular Signatures Database (MSigDB) hallmark gene set collection.

    PubMed

    Liberzon, Arthur; Birger, Chet; Thorvaldsdóttir, Helga; Ghandi, Mahmoud; Mesirov, Jill P; Tamayo, Pablo

    2015-12-23

    The Molecular Signatures Database (MSigDB) is one of the most widely used and comprehensive databases of gene sets for performing gene set enrichment analysis. Since its creation, MSigDB has grown beyond its roots in metabolic disease and cancer to include >10,000 gene sets. These better represent a wider range of biological processes and diseases, but the utility of the database is reduced by increased redundancy across, and heterogeneity within, gene sets. To address this challenge, here we use a combination of automated approaches and expert curation to develop a collection of "hallmark" gene sets as part of MSigDB. Each hallmark in this collection consists of a "refined" gene set, derived from multiple "founder" sets, that conveys a specific biological state or process and displays coherent expression. The hallmarks effectively summarize most of the relevant information of the original founder sets and, by reducing both variation and redundancy, provide more refined and concise inputs for gene set enrichment analysis.
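In its simplest overrepresentation form, the gene set enrichment analysis that MSigDB supports reduces to a hypergeometric tail probability: how surprising is the overlap between a gene list and a gene set? A minimal sketch with made-up toy counts (the numbers are purely illustrative):

```python
from math import comb

def overrep_p(universe, set_size, drawn, overlap):
    """P(X >= overlap) under the hypergeometric null: chance of seeing at
    least `overlap` members of a gene set of size `set_size` in a list of
    `drawn` genes sampled without replacement from `universe` genes."""
    total = comb(universe, drawn)
    return sum(comb(set_size, k) * comb(universe - set_size, drawn - k)
               for k in range(overlap, min(set_size, drawn) + 1)) / total

# Toy numbers: a 200-gene hallmark set, 500 differentially expressed genes,
# 20 of which fall in the set (expected overlap under the null is ~5).
p = overrep_p(universe=20000, set_size=200, drawn=500, overlap=20)
print(p)
```

Rank-based GSEA proper uses a weighted running-sum statistic rather than a simple overlap count, but the overrepresentation test above conveys the core idea of scoring a gene set against a background.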

  8. Distance Education in Library and Information Science Education: Trends and Issues.

    ERIC Educational Resources Information Center

    Zepp, Diana

    This study measured current trends in distance education in the United States within Library and Information Science programs. The study was conducted, for the period 1989 to 1998, through a content analysis of journal articles from the "Library Literature" database, and through a content analysis of graduate catalogs from American Library…

  9. Analysis of lane change crashes

    DOT National Transportation Integrated Search

    2003-03-01

    This report defines the problem of lane change crashes in the United States (U.S.) based on data from the 1999 National Automotive Sampling System/General Estimates System (GES) crash database of the National Highway Traffic Safety Administration. Th...

  10. Assessment of statewide intersection safety performance.

    DOT National Transportation Integrated Search

    2011-06-01

    This report summarizes the results of an analysis of the safety performance of Oregons intersections. Following a pilot : study, a database of 500 intersections randomly sampled from around the state of Oregon in both urban and rural : environment...

  11. Mining Claim Activity on Federal Land in the United States

    USGS Publications Warehouse

    Causey, J. Douglas

    2007-01-01

Several statistical compilations of mining claim activity on Federal land derived from the Bureau of Land Management's LR2000 database have previously been published by the U.S. Geological Survey (USGS). The work in the 1990s did not include Arkansas or Florida. None of the previous reports included Alaska, because Alaska's records are stored in a separate database (the Alaska Land Information System) in a different format. This report includes data for all states for which there are Federal mining claim records, beginning in 1976 and continuing to the present. The intent is to update the spatial and statistical data associated with this report annually, beginning with 2005 data. The statistics compiled from the databases are counts of the number of active mining claims in a section of land each year from 1976 to the present for all states within the United States. Claim statistics are subset by lode and placer types, and a dataset summarizing all claims, including mill site and tunnel site claims, is also provided. One table presents data by case type, case status, and number of claims in a section. This report includes a spatial database for each state in which mining claims were recorded, except North Dakota, which has had only two claims. A field is present that allows the statistical data to be joined to the spatial databases so that spatial display and analysis can be done using appropriate geographic information system (GIS) software. The data show how mining claim activity has changed in intensity, space, and time. Variations can be examined at the state as well as the national level. The data are tied to a section of land, approximately 640 acres, which allows them to be used at regional as well as local scales. The data pertain only to Federal land and mineral estate that was open to mining claim location at the time the claims were staked.
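The join field the report describes works like any attribute join: statistical rows are matched to spatial features on a shared section identifier. A minimal sketch (the identifiers and field names are hypothetical, not the report's actual schema):

```python
# Hypothetical attribute join of per-section claim counts onto spatial
# features, keyed by a shared section identifier. Names are illustrative.
spatial = [
    {"section_id": "MT-12-S07", "geom": "polygon placeholder"},
    {"section_id": "MT-12-S08", "geom": "polygon placeholder"},
]
stats = {"MT-12-S07": {"lode": 14, "placer": 2}}
default = {"lode": 0, "placer": 0}  # sections with no recorded claims

joined = [dict(feature, **stats.get(feature["section_id"], default))
          for feature in spatial]
print(joined[0]["lode"], joined[1]["lode"])  # 14 0
```

In GIS software the same operation is a table join on the key field, after which claim counts can be symbolized and analyzed spatially.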

  12. International patent analysis of water source heat pump based on orbit database

    NASA Astrophysics Data System (ADS)

    Li, Na

    2018-02-01

Using the Orbit database, this paper analysed international patents in the water source heat pump (WSHP) industry with patent analysis methods such as analysis of publication trends, geographical distribution, technology leaders, and top assignees. It is found that the beginning of the 21st century was a period of rapid growth in WSHP patent applications. Germany and the United States began research and development of WSHP early, but Japan and China have now become important countries for patent applications. China has been developing especially quickly in recent years, but its patents are concentrated in universities and urgently need to be transferred. Through an objective analysis, this paper aims to provide decision references for the development of the domestic WSHP industry.

  13. Geoscience research databases for coastal Alabama ecosystem management

    USGS Publications Warehouse

    Hummell, Richard L.

    1995-01-01

Effective management of complex coastal ecosystems necessitates access to scientific knowledge that can be acquired through a multidisciplinary approach involving Federal and State scientists, taking advantage of agency expertise and resources for the benefit of all participants working toward a set of common research and management goals. Cooperative geoscientific investigations have led toward building databases of fundamental scientific knowledge that can be utilized to manage coastal Alabama's natural resources and future development. These databases have been used to assess the occurrence and economic potential of hard mineral resources in the Alabama EFZ, and to support oil spill contingency planning and environmental analysis for coastal Alabama.

  14. Determinants of Post-fire Water Quality in the Western United States

    NASA Astrophysics Data System (ADS)

    Rust, A.; Saxe, S.; Dolan, F.; Hogue, T. S.; McCray, J. E.

    2015-12-01

Large wildfires are becoming increasingly common in the Western United States. Wildfires that consume greater than twenty percent of a watershed impact river water quality. The surface waters of the arid West are limited and in demand by aquatic ecosystems, irrigated agriculture, and the region's growing human population. A range of studies, typically focused on individual fires, have observed mobilization of contaminants, nutrients (including nitrates), and sediments into receiving streams. Post-fire metal concentrations have also been observed to increase when fires were located close to urban centers. The objective of this work was to assemble an extensive historical water quality database through data mining from federal, state, and local agencies into a fire-database. Data from previous studies on individual fires by the co-authors were also included. The fire-database includes observations of water quality, discharge, and geospatial and land characteristics from over 200 fire-impacted watersheds in the western U.S. since 1985. Water quality data from burn-impacted watersheds were examined for trends in water quality response using statistical analysis. Watersheds where there was no change in water quality after fire were also examined to determine the characteristics that make a watershed more resilient to fire. The ultimate goal is to evaluate trends in post-fire water quality response and identify key drivers of resiliency and post-fire response. The fire-database will eventually be publicly available.
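The simplest version of the trend comparison such a database enables is splitting each watershed's samples at the fire date and contrasting summary statistics. A sketch with hypothetical nitrate samples for one burned watershed (dates, values, and the median-contrast choice are all illustrative):

```python
# Illustrative pre/post-fire comparison for one hypothetical watershed:
# split the record at the fire date and compare medians.
from datetime import date
from statistics import median

fire = date(2012, 6, 9)
samples = [(date(2011, 5, 1), 0.4), (date(2011, 9, 3), 0.5),
           (date(2012, 8, 20), 2.1), (date(2013, 4, 2), 1.6)]  # nitrate, mg/L

pre = median(v for d, v in samples if d < fire)
post = median(v for d, v in samples if d >= fire)
print(pre, post)  # 0.45 1.85
```

A real analysis would control for discharge and seasonality and pool many watersheds, but the split-at-fire-date structure is the common starting point.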

  15. Predictors of Outcomes for African Americans in a Rehabilitation State Agency: Implications for National Policy and Practice

    ERIC Educational Resources Information Center

    Balcazar, Fabricio E.; Oberoi, Ashmeet K.; Suarez-Balcazar, Yolanda; Alvarado, Francisco

    2012-01-01

    A review of vocational rehabilitation (VR) data from a Midwestern state was conducted to identify predictors of rehabilitation outcomes for African American consumers. The database included 37,404 African Americans who were referred or self-referred over a period of five years. Logistic regression analysis indicated that except for age and…

  16. Interactive access to forest inventory data for the South Central United States

    Treesearch

    William H. McWilliams

    1990-01-01

    On-line access to USDA, Forest Service successive forest inventory data for the South Central United States is provided by two computer systems. The Easy Access to Forest Inventory and Analysis Tables program (EZTAB) produces a set of tables for specific geographic areas. The Interactive Graphics and Retrieval System (INGRES) is a database management system that...

  17. Tribology and Friction of Soft Materials: Mississippi State Case Study

    DTIC Science & Technology

    2010-03-18

Materials studied include elastomers, foams, and fabrics: rubbers (natural rubber, Santoprene vulcanized elastomer, styrene butadiene rubber), foams (polypropylene and polyurethane), and fabrics (Kevlar). The project develops an internal state variable (ISV) material model, calibrated against the experimental database and verified with finite element method (FEM) analysis, including an axially symmetric polycarbonate disk model, void nucleation modeling, and characterization by SEM and optical methods.

  18. Listeria, Then and Now: A Call to Reevaluate Patient Teaching Based on Analysis of US Federal Databases, 1998-2016.

    PubMed

    Simon, Katya; Simon, Valentina; Rosenzweig, Rachel; Barroso, Rebeca; Gillmor-Kahn, Mickey

    2018-05-01

Listeria monocytogenes is a foodborne pathogen capable of crossing the placental-fetal barrier; infection with the bacterium causes listeriosis. An exposed fetus may suffer blindness, neurological damage including meningitis, or even death. The adverse consequences of listeriosis place the infection on the federally reportable disease list. Primary prevention relies on women avoiding 6 categories of foods most likely to be contaminated with L monocytogenes, as indicated in guidelines developed by the Centers for Disease Control and Prevention (CDC), adapted by the American College of Obstetricians and Gynecologists (ACOG) in 2014, and reaffirmed without changes by ACOG in 2016. This report contains a critical evaluation of United States listeriosis prevention guidelines. Between 1998 and 2016, there were 876 identified listeriosis events documented in the illness and recall databases maintained by the CDC, the Food and Drug Administration (FDA), and the United States Department of Agriculture - Food Safety and Inspection Service (USDA-FSIS). Each contaminated food was manually compared to the existing listeriosis avoidance guidelines, placing each event within or outside the guidelines, and trends were analyzed over time. Database analysis demonstrates that prior to the year 2000, abiding by the current guidelines would have prevented all reported listeriosis cases. However, in 2015 and 2016, only 5% of confirmed L monocytogenes infections originated from the 6 food groups listed in the CDC and ACOG guidelines. Similar trends emerged for food processing plant recalls (USDA-FSIS database) and grocery store recalls (FDA database). The total number of listeriosis illnesses in the United States doubled from 2007 to 2014. A gradual shift toward detection of L monocytogenes contamination in ready-to-eat meals, frozen foods, and ready-to-eat salads has occurred; pasteurized dairy products are another emerging culprit. Revision of the listeriosis avoidance guidelines by a consensus-seeking, multidisciplinary task force is needed. © 2018 by the American College of Nurse-Midwives.
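The within/outside classification the report describes amounts to checking each contaminated food against the guideline categories and tallying the share covered. A sketch with paraphrased category labels and invented example events (neither is the study's actual coding):

```python
# Illustrative within/outside-guidelines tally. The category labels are
# paraphrased and the events invented; they do not reproduce the CDC text
# or the study's dataset.
GUIDELINE = {"deli meat", "unpasteurized dairy", "smoked seafood",
             "pate", "raw sprouts", "melon"}

events = ["frozen vegetables", "deli meat", "ready-to-eat salad",
          "ice cream", "packaged caramel apples"]

within = sum(e in GUIDELINE for e in events)
print(f"{within}/{len(events)} events within guidelines")  # 1/5
```

Run year by year, this kind of tally is what yields the trend the report highlights: the within-guidelines share falling toward 5% by 2015-2016.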

  19. A Unified Flash Flood Database across the United States

    USGS Publications Warehouse

    Gourley, Jonathan J.; Hong, Yang; Flamig, Zachary L.; Arthur, Ami; Clark, Robert; Calianno, Martin; Ruin, Isabelle; Ortel, Terry W.; Wieczorek, Michael; Kirstetter, Pierre-Emmanuel; Clark, Edward; Krajewski, Witold F.

    2013-01-01

    Despite flash flooding being one of the most deadly and costly weather-related natural hazards worldwide, individual datasets to characterize them in the United States are hampered by limited documentation and can be difficult to access. This study is the first of its kind to assemble, reprocess, describe, and disseminate a georeferenced U.S. database providing a long-term, detailed characterization of flash flooding in terms of spatiotemporal behavior and specificity of impacts. The database is composed of three primary sources: 1) the entire archive of automated discharge observations from the U.S. Geological Survey that has been reprocessed to describe individual flooding events, 2) flash-flooding reports collected by the National Weather Service from 2006 to the present, and 3) witness reports obtained directly from the public in the Severe Hazards Analysis and Verification Experiment during the summers 2008–10. Each observational data source has limitations; a major asset of the unified flash flood database is its collation of relevant information from a variety of sources that is now readily available to the community in common formats. It is anticipated that this database will be used for many diverse purposes, such as evaluating tools to predict flash flooding, characterizing seasonal and regional trends, and improving understanding of dominant flood-producing processes. We envision the initiation of this community database effort will attract and encompass future datasets.
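Collating the three archives requires mapping each source's fields onto one common schema. A sketch of that normalization step (the source field names below are hypothetical stand-ins, not the actual USGS or NWS column names):

```python
# Sketch of collating heterogeneous flash-flood records into one schema.
# Per-source field names are hypothetical stand-ins for the real archives.
def unify(record, source):
    if source == "usgs":   # reprocessed automated discharge event
        return {"source": source, "start": record["event_start"],
                "lat": record["lat"], "lon": record["lon"]}
    if source == "nws":    # flash-flood report
        return {"source": source, "start": record["begin_time"],
                "lat": record["latitude"], "lon": record["longitude"]}
    raise ValueError(f"unknown source: {source}")

row = unify({"begin_time": "2009-06-14T21:05",
             "latitude": 35.2, "longitude": -97.4}, "nws")
print(row["source"], row["lat"])  # nws 35.2
```

Keeping the source tag in each unified row preserves the provenance the paper stresses, since each observational source has its own limitations.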

  20. Spatiotemporal distribution patterns of forest fires in northern Mexico

    Treesearch

    Gustavo Pérez-Verdin; M. A. Márquez-Linares; A. Cortes-Ortiz; M. Salmerón-Macias

    2013-01-01

Using the 2000-2011 CONAFOR databases, a spatiotemporal analysis of the occurrence of forest fires was conducted in Durango, one of the most affected states in Mexico. Moran's index was used to determine the spatial distribution pattern, and an analysis of seasonal and temporal autocorrelation of the collected data was completed. The geographically weighted...
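Moran's index, the spatial statistic named in the abstract, compares deviations from the mean at neighboring sites; values above its expectation indicate clustering, values below it dispersion. A self-contained sketch on a toy transect (the chain weight matrix is an illustrative stand-in for the study's actual spatial weights):

```python
# Global Moran's I for per-site counts with a user-supplied weight matrix.
def morans_i(values, weights):
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)                      # sum of weights
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))             # cross-products
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four sites along a line, each adjacent to the next (binary rook weights).
chain = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0]]
print(morans_i([10, 9, 2, 1], chain))  # clustered counts -> positive I
```

Swapping in dispersed values (for example `[10, 2, 9, 1]`) flips the sign, which is the diagnostic the fire study relies on.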

  1. Ceramic transactions: Fractography of glasses and ceramics III. Volume 64

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varner, J.R.; Frechette, V.D.; Quinn, G.D.

    1996-12-31

    Reports are presented from the Third Annual Conference on the Fractography of Glasses and Ceramics. Topics include ceramics fracture mode, damage analysis, defect origin, deformation, crack evolution, and the use of laser raman spectroscopy for analysis of residual surface strains. Individual projects have been processed separately for the United States Department of Energy databases.

  2. Predictions of Control Inputs, Periodic Responses and Damping Levels of an Isolated Experimental Rotor in Trimmed Flight

    NASA Technical Reports Server (NTRS)

    Gaonkar, G. H.; Subramanian, S.

    1996-01-01

Since the early 1990s the Aeroflightdynamics Directorate at the Ames Research Center has been conducting tests on isolated hingeless rotors in hover and forward flight. The primary objective is to generate a database on aeroelastic stability in trimmed flight for torsionally soft rotors at realistic tip speeds. The rotor test model has four soft inplane blades of NACA 0012 airfoil section with low torsional stiffness. The collective pitch and shaft tilt are set prior to each test run, and then the rotor is trimmed in the following sense: the longitudinal and lateral cyclic pitch controls are adjusted through a swashplate to minimize the 1/rev flapping moment at the 12 percent radial station. In hover, the database comprises lag regressive-mode damping with pitch variations. In forward flight the database comprises cyclic pitch controls, root flap moment, and lag regressive-mode damping with advance ratio, shaft angle, and pitch variations. This report presents the predictions and their correlation with the database. A modal analysis is used, in which nonrotating modes in flap bending, lag bending, and torsion are computed from the measured blade mass and stiffness distributions. The airfoil aerodynamics is represented by the ONERA dynamic stall models of lift, drag, and pitching moment, and the wake dynamics is represented by a state-space wake model. The trim analysis of finding the cyclic controls and the corresponding periodic responses is based on periodic shooting with damped Newton iteration; the Floquet transition matrix (FTM) comes out as a byproduct. The stability analysis of finding the frequencies and damping levels is based on the eigenvalue-eigenvector analysis of the FTM. All the structural and aerodynamic states are included from modeling to trim analysis. A major finding is that dynamic wake dramatically improves the correlation for the lateral cyclic pitch control. Overall, the correlation is fairly good.
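The Floquet machinery the abstract names can be illustrated on a toy system: integrate the fundamental matrix of a linear system with periodic coefficients over one period to get the transition (monodromy) matrix, then check that its eigenvalues lie inside the unit circle. The damped oscillator with periodic stiffness below is an illustrative stand-in, not the report's rotor model:

```python
import cmath
import math

# Floquet stability sketch: Phi' = A(t) Phi integrated over one period
# with RK4; the result is the Floquet transition (monodromy) matrix.
def monodromy(a_of_t, period, steps=2000):
    h = period / steps
    Phi = [[1.0, 0.0], [0.0, 1.0]]                 # Phi(0) = identity
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    def combo(*terms):                              # linear combination
        return [[sum(c * M[i][j] for c, M in terms) for j in range(2)]
                for i in range(2)]
    t = 0.0
    for _ in range(steps):
        k1 = mul(a_of_t(t), Phi)
        k2 = mul(a_of_t(t + h/2), combo((1, Phi), (h/2, k1)))
        k3 = mul(a_of_t(t + h/2), combo((1, Phi), (h/2, k2)))
        k4 = mul(a_of_t(t + h), combo((1, Phi), (h, k3)))
        Phi = combo((1, Phi), (h/6, k1), (h/3, k2), (h/3, k3), (h/6, k4))
        t += h
    return Phi

omega = 2 * math.pi
def A(t):  # x'' + 0.4 x' + (4 + 0.5 cos(omega t)) x = 0, first-order form
    return [[0.0, 1.0], [-(4 + 0.5 * math.cos(omega * t)), -0.4]]

M = monodromy(A, period=1.0)
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
eigs = [(tr + s * cmath.sqrt(tr * tr - 4 * det)) / 2 for s in (1, -1)]
print(max(abs(e) for e in eigs) < 1.0)  # prints True: damped -> stable
```

The eigenvalue magnitudes map directly to the damping levels the report extracts; a multiplier outside the unit circle would signal aeroelastic instability.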

  3. 9 CFR 81.2 - Identification of deer, elk, and moose in interstate commerce.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... is linked to that animal in the CWD National Database or in an approved State database. The second... that animal and herd in the CWD National Database or in an approved State database. (Approved by the...

  4. 9 CFR 81.2 - Identification of deer, elk, and moose in interstate commerce.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... is linked to that animal in the CWD National Database or in an approved State database. The second... that animal and herd in the CWD National Database or in an approved State database. (Approved by the...

  5. Formulating a strategy for securing high-speed rail in the United States.

    DOT National Transportation Integrated Search

    2013-03-01

    This report presents an analysis of information relating to attacks, attempted attacks, and plots against high-speed rail (HSR) : systems. It draws upon empirical data from MTIs Database of Terrorist and Serious Criminal Attacks Against Public Sur...

  6. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

In the present work an effort has been made to simultaneously estimate plasma parameters (electron density, electron temperature, ground-state atom density, ground-state ion density, and metastable-state density) from the observed visible spectra of a Penning plasma discharge (PPD) source using least squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition. Under this condition the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs, and adding diffusion of neutrals and metastable-state species to the CR-model code analysis, improves the electron temperature estimate in the simultaneous measurement.
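The fitting step itself is conceptually simple: choose the parameter values that minimize the squared mismatch between modeled and observed line emission. A toy sketch for a single parameter, where the monotonic "PEC ratio" model is entirely made up and merely stands in for the ADAS-derived coefficients:

```python
# Toy least-squares estimate of electron temperature from a helium line
# ratio. The linear model_ratio below is a made-up stand-in for the
# ADAS CR-model PEC ratio; only the fitting structure is the point.
def model_ratio(te_ev):
    return 0.1 + 0.05 * te_ev  # hypothetical line-intensity ratio vs Te

def fit_te(observed_ratio, grid):
    """Grid-search least squares: Te minimizing the squared residual."""
    return min(grid, key=lambda te: (model_ratio(te) - observed_ratio) ** 2)

grid = [t / 10 for t in range(10, 301)]  # candidate Te: 1.0 ... 30.0 eV
print(fit_te(0.6, grid))                 # estimated Te in eV
```

The actual analysis fits several parameters at once against multiple lines, so the residual is a sum over lines and the search runs over a multi-dimensional parameter space, but the objective has the same least-squares form.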

  7. A blue carbon soil database: Tidal wetland stocks for the US National Greenhouse Gas Inventory

    NASA Astrophysics Data System (ADS)

    Feagin, R. A.; Eriksson, M.; Hinson, A.; Najjar, R. G.; Kroeger, K. D.; Herrmann, M.; Holmquist, J. R.; Windham-Myers, L.; MacDonald, G. M.; Brown, L. N.; Bianchi, T. S.

    2015-12-01

Coastal wetlands contain large reservoirs of carbon, and in 2015 the US National Greenhouse Gas Inventory began the work of placing blue carbon within the national regulatory context. The potential value of a wetland carbon stock, in relation to its location, could soon be influential in determining governmental policy and management activities, or in stimulating market-based CO2 sequestration projects. To meet the national need for high-resolution maps, a blue carbon stock database was developed linking National Wetlands Inventory datasets with the USDA Soil Survey Geographic Database. Users of the database can identify the economic potential for carbon conservation or restoration projects within specific estuarine basins, states, wetland types, physical parameters, and land management activities. The database is geared towards both national-level assessments and local-level inquiries. Spatial analysis of the stocks shows high variance within individual estuarine basins, largely dependent on geomorphic position on the landscape, though there are continental-scale trends in the carbon distribution as well. Future plans include linking this database with a sedimentary accretion database to predict carbon flux in US tidal wetlands.
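The basin-level queries the abstract describes boil down to rolling per-wetland stocks (area times soil carbon density) up to a grouping key. A sketch with hypothetical field names and numbers, standing in for the joined NWI/SSURGO records:

```python
# Sketch of a basin-level blue-carbon roll-up. Field names and values are
# hypothetical stand-ins for the joined NWI/SSURGO records.
from collections import defaultdict

wetlands = [
    {"basin": "Galveston Bay", "area_ha": 120.0, "soc_mg_per_ha": 250.0},
    {"basin": "Galveston Bay", "area_ha": 40.0,  "soc_mg_per_ha": 310.0},
    {"basin": "Mobile Bay",    "area_ha": 75.0,  "soc_mg_per_ha": 180.0},
]

stock = defaultdict(float)
for w in wetlands:
    stock[w["basin"]] += w["area_ha"] * w["soc_mg_per_ha"]  # Mg C per basin

print(stock["Galveston Bay"])  # 42400.0
```

The same grouping key could be swapped for state, wetland type, or management class to serve the other query dimensions the database supports.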

  8. Comparative policy analysis for alcohol and drugs: Current state of the field.

    PubMed

    Ritter, Alison; Livingston, Michael; Chalmers, Jenny; Berends, Lynda; Reuter, Peter

    2016-05-01

A central policy research question concerns the extent to which specific policies produce certain effects, and cross-national (or between-state/province) comparisons appear to be an ideal way to answer such a question. This paper explores the current state of comparative policy analysis (CPA) with respect to alcohol and drug policies. We created a database of journal articles published between 2010 and 2014 as the body of CPA work for analysis. We used this database of 57 articles to clarify, extract, and analyse the ways in which CPA has been defined. Quantitative and qualitative analyses of the CPA methods employed, the policy areas that have been studied, and differences between alcohol CPA and drug CPA are explored. There is a lack of clear definition as to what counts as a CPA. The two criteria for a CPA (explicit study of a policy, and comparison across two or more geographic locations) exclude descriptive epidemiology and single-state comparisons. Under this strict definition, most CPAs concerned alcohol (42%), although the most common single policy analysed was medical cannabis (23%). The vast majority of papers undertook quantitative data analysis with a variety of advanced statistical methods. We identified five approaches to policy specification: classification or categorical coding of policy as present or absent; use of an index; implied policy differences; described policy differences; and data-driven policy coding. Each of these has limitations, but perhaps the most common was the inability of a method to account for differences between policy-as-stated and policy-as-implemented. There is significant diversity in CPA methods for the analysis of alcohol and drug policy, and some substantial challenges with the currently employed methods. The absence of clear boundaries to a definition of what counts as a 'comparative policy analysis' may account for the methodological plurality but also appears to stand in the way of advancing the techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Geologic Map Database of Texas

    USGS Publications Warehouse

    Stoeser, Douglas B.; Shock, Nancy; Green, Gregory N.; Dumonceaux, Gayle M.; Heran, William D.

    2005-01-01

    The purpose of this report is to release a digital geologic map database for the State of Texas. This database was compiled for the U.S. Geological Survey (USGS) Minerals Program, National Surveys and Analysis Project, whose goal is a nationwide assemblage of geologic, geochemical, geophysical, and other data. This release makes the geologic data from the Geologic Map of Texas available in digital format. Original clear film positives provided by the Texas Bureau of Economic Geology were photographically enlarged onto Mylar film. These films were scanned, georeferenced, digitized, and attributed by Geologic Data Systems (GDS), Inc., Denver, Colorado. Project oversight and quality control was the responsibility of the U.S. Geological Survey. ESRI ArcInfo coverages, AMLs, and shapefiles are provided.

  10. The Self-Esteem of Adolescents in American Public High Schools: A Multilevel Analysis of Individual Differences.

    ERIC Educational Resources Information Center

    Bekhuis, Tanja C. H. M.

    1994-01-01

    A total of 219 schools and 13,022 students from a database of secondary schools and students in the United States were sampled. Analysis of the data revealed that school social climate and several student characteristics predicted student self-esteem and that variability of student self-esteem was greater in southern than northern schools. (BC)

  11. An Analysis of Trends for People with MR, Cerebral Palsy, and Epilepsy Receiving Services from State VR Agencies: Ten Years of Progress.

    ERIC Educational Resources Information Center

    Gilmore, Dana Scott; Schuster, Jennifer L.; Timmons, Jaimie Ciulla; Butterworth, John

    2000-01-01

    Presents the results of a secondary analysis of the RSA-911 database from the Rehabilitation Services Administration. All successful vocational rehabilitation (VR) closures for individuals with mental retardation, cerebral palsy, and epilepsy for five data points between 1985 and 1995 were investigated. Trends in the use of competitive employment…

  12. The Molecule Pages database

    PubMed Central

    Saunders, Brian; Lyon, Stephen; Day, Matthew; Riley, Brenda; Chenette, Emily; Subramaniam, Shankar

    2008-01-01

    The UCSD-Nature Signaling Gateway Molecule Pages (http://www.signaling-gateway.org/molecule) provides essential information on more than 3800 mammalian proteins involved in cellular signaling. The Molecule Pages contain expert-authored and peer-reviewed information based on the published literature, complemented by regularly updated information derived from public data source references and sequence analysis. The expert-authored data includes both a full-text review about the molecule, with citations, and highly structured data for bioinformatics interrogation, including information on protein interactions and states, transitions between states and protein function. The expert-authored pages are anonymously peer reviewed by the Nature Publishing Group. The Molecule Pages data is present in an object-relational database format and is freely accessible to the authors, the reviewers and the public from a web browser that serves as a presentation layer. The Molecule Pages are supported by several applications that along with the database and the interfaces form a multi-tier architecture. The Molecule Pages and the Signaling Gateway are routinely accessed by a very large research community. PMID:17965093

  13. The Molecule Pages database.

    PubMed

    Saunders, Brian; Lyon, Stephen; Day, Matthew; Riley, Brenda; Chenette, Emily; Subramaniam, Shankar; Vadivelu, Ilango

    2008-01-01

    The UCSD-Nature Signaling Gateway Molecule Pages (http://www.signaling-gateway.org/molecule) provides essential information on more than 3800 mammalian proteins involved in cellular signaling. The Molecule Pages contain expert-authored and peer-reviewed information based on the published literature, complemented by regularly updated information derived from public data source references and sequence analysis. The expert-authored data includes both a full-text review about the molecule, with citations, and highly structured data for bioinformatics interrogation, including information on protein interactions and states, transitions between states and protein function. The expert-authored pages are anonymously peer reviewed by the Nature Publishing Group. The Molecule Pages data is present in an object-relational database format and is freely accessible to the authors, the reviewers and the public from a web browser that serves as a presentation layer. The Molecule Pages are supported by several applications that along with the database and the interfaces form a multi-tier architecture. The Molecule Pages and the Signaling Gateway are routinely accessed by a very large research community.

  14. A spatial database for landslides in northern Bavaria: A methodological approach

    NASA Astrophysics Data System (ADS)

    Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit

    2018-04-01

    Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), which currently stores detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open source software (PostgreSQL, PostGIS), ensuring good interoperability among its components and enabling further extensions with specific adaptations of self-developed software. Beyond that, WISL was designed for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations at the local server. In its current state, WISL already enables extensive analyses and queries. This paper presents an example analysis of landslides in Oxfordian limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. Future enhancements of the WISL database will focus on data extensions to increase research possibilities, as well as on transferring the system to other regions and countries.

  15. Leveraging Big Data Analysis Techniques for U.S. Vocational Vehicle Drive Cycle Characterization, Segmentation, and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duran, Adam W; Phillips, Caleb T; Perr-Sauer, Jordan

    Under a collaborative interagency agreement between the U.S. Environmental Protection Agency and the U.S. Department of Energy (DOE), the National Renewable Energy Laboratory (NREL) performed a series of in-depth analyses to characterize on-road driving behavior including distributions of vehicle speed, idle time, accelerations and decelerations, and other driving metrics of medium- and heavy-duty vocational vehicles operating within the United States. As part of this effort, NREL researchers segmented U.S. medium- and heavy-duty vocational vehicle driving characteristics into three distinct operating groups or clusters using real-world drive cycle data collected at 1 Hz and stored in NREL's Fleet DNA database. The Fleet DNA database contains millions of miles of historical drive cycle data captured from medium- and heavy-duty vehicles operating across the United States. The data encompass existing DOE activities as well as contributions from valued industry stakeholder participants. For this project, data captured from 913 unique vehicles comprising 16,250 days of operation were drawn from the Fleet DNA database and examined. The Fleet DNA data used as a source for this analysis has been collected from a total of 30 unique fleets/data providers operating across 22 unique geographic locations spread across the United States. This includes locations with topographies ranging from the foothills of Denver, Colorado, to the flats of Miami, Florida. This paper includes the results of the statistical analysis performed by NREL and a discussion and detailed summary of the development of the vocational drive cycle weights and representative transient drive cycles for testing and simulation. Additional discussion of known limitations and potential future work is also included.

  16. Drainage investment and wetland loss: an analysis of the national resources inventory data

    USGS Publications Warehouse

    Douglas, Aaron J.; Johnson, Richard L.

    1994-01-01

    The United States Soil Conservation Service (SCS) conducts a survey for the purpose of establishing an agricultural land use database. This survey is called the National Resources Inventory (NRI) database. The complex NRI land classification system, in conjunction with the quantitative information gathered by the survey, has numerous applications. The current paper uses the wetland area data gathered by the NRI in 1982 and 1987 to examine empirically the factors that generate wetland loss in the United States. The cross-section regression models listed here use the quantity of wetlands, the stock of drainage capital, the realty value of farmland and drainage costs to explain most of the cross-state variation in wetland loss rates. Wetlands preservation efforts by federal agencies assume that pecuniary economic factors play a decisive role in wetland drainage. The empirical models tested in the present paper validate this assumption.

  17. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...

  18. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...

  19. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...

  20. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...

  1. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and identification...

  2. PAD-US: National Inventory of Protected Areas

    USGS Publications Warehouse

    Gergely, Kevin J.; McKerrow, Alexa

    2013-11-12

    The Gap Analysis Program produces data and tools that help meet critical national challenges such as biodiversity conservation, renewable energy development, climate change adaptation, and infrastructure investment. The Protected Areas Database of the United States (PAD-US) is the official inventory of protected open space in the United States. With over 715 million acres in thousands of holdings, the spatial data in PAD-US include public lands held in trust by national, State, and some local governments, and by some nonprofit conservation organizations.

  3. Reservoir Simulations of Low-Temperature Geothermal Reservoirs

    NASA Astrophysics Data System (ADS)

    Bedre, Madhur Ganesh

    The eastern United States generally has lower temperature gradients than the western United States. However, West Virginia in particular has higher temperature gradients than other eastern states. A recent study at Southern Methodist University by Blackwell et al. has shown the presence of a hot spot in the eastern part of West Virginia, with temperatures reaching 150°C at a depth of between 4.5 and 5 km. This thesis work examines similar reservoirs at a depth of around 5 km resembling the geology of West Virginia, USA. The temperature gradients used are in accordance with the SMU study. In order to assess the effects of geothermal reservoir conditions on the lifetime of a low-temperature geothermal system, a sensitivity analysis was performed on the following seven natural and human-controlled parameters within a geothermal reservoir: reservoir temperature, injection fluid temperature, injection flow rate, porosity, rock thermal conductivity, water loss (%), and well spacing. This sensitivity analysis was completed using the 'one factor at a time' (OFAT) and Plackett-Burman design methods. The data used for this study were obtained by carrying out the reservoir simulations using the TOUGH2 simulator. The second part of this work is to create a database of thermal potential and time-dependent reservoir conditions for low-temperature geothermal reservoirs by studying a number of possible scenarios. Variations in the parameters identified in the sensitivity analysis are used to expand the scope of the database. Main results include the thermal potential of the reservoir, the pressure and temperature profiles of the reservoir over its operational life (30 years for this study), the plant capacity, and the required pumping power. The results of this database will help the supply curve calculations for low-temperature geothermal reservoirs in the United States, which is the long-term goal of the work being done by the geothermal research group under Dr. Anderson at West Virginia University.
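
    The 'one factor at a time' screening named above can be sketched in a few lines: hold all parameters at a baseline, perturb each one in turn, and rank parameters by the swing in the response. The reservoir model below is a purely invented surrogate (not TOUGH2), and all parameter names and values are illustrative assumptions.

```python
def thermal_output(params):
    # Hypothetical surrogate for a reservoir simulation run; NOT a real
    # reservoir model, just a smooth function of the screened parameters.
    return (params["reservoir_temp_C"] * params["flow_rate_kg_s"]
            * (1.0 - params["water_loss_frac"]) * 4.2
            - params["injection_temp_C"] * params["flow_rate_kg_s"] * 4.2)

baseline = {
    "reservoir_temp_C": 150.0,   # illustrative values only
    "injection_temp_C": 70.0,
    "flow_rate_kg_s": 50.0,
    "water_loss_frac": 0.05,
}

def ofat_sensitivity(model, base, rel_step=0.10):
    """Perturb each parameter +/-10% around the baseline, one at a time,
    and report the resulting swing in the model response."""
    y0 = model(base)
    swings = {}
    for name in base:
        hi = dict(base); hi[name] = base[name] * (1 + rel_step)
        lo = dict(base); lo[name] = base[name] * (1 - rel_step)
        swings[name] = model(hi) - model(lo)
    return y0, swings

y0, swings = ofat_sensitivity(thermal_output, baseline)
most_influential = max(swings, key=lambda k: abs(swings[k]))
```

    A Plackett-Burman design would instead vary all factors simultaneously according to an orthogonal two-level matrix, estimating main effects with far fewer runs than full OFAT sweeps.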

  4. The need for a juvenile fire setting database.

    PubMed

    Klein, Julianne J; Mondozzi, Mary A; Andrews, David A

    2008-01-01

    A juvenile fire setter can be classified as any youth setting a fire, regardless of the reason. Many communities have programs to deal with this problem, most based on models developed by the United States Fire Administration. We reviewed our program's data to compare it with that published nationally. Currently there is no nationwide database against which to compare fire setter data. A single-institution, retrospective chart review of all fire setters between January 1, 2003 and December 31, 2005 was completed. There were 133 participants, ages 3 to 17. Information obtained included age, location, ignition source, court order, and recidivism. Analysis of our data set found the peak ages for fire involvement to be 12 and 14 (26%). Location, ignition source, and court-ordered participants were divided into two age groups: 3 to 10 (N = 58) and 11 to 17 (N = 75). Bedrooms ranked first for the younger population and schools for the latter. Fifty-four percent of the 133 participants used lighters rather than matches. Twelve percent of the 3- to 10-year-olds were court mandated, compared with 52% of the 11- to 17-year-olds. Recidivism rates were 4 to 10%, with a 33 to 38% survey return rate. Currently there is no state or nationwide, time-honored database from which comparative conclusions can be drawn. Starting small with a statewide database could provide a stimulus for a national database. This could also enhance the information provided by the United States Fire Administration's National Fire Data Center, beginning with one juvenile fire setter program and State Fire Marshal's office at a time.

  5. 76 FR 28795 - Privacy Act of 1974; Department of Homeland Security United States Coast Guard-024 Auxiliary...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... 1974; Department of Homeland Security United States Coast Guard-024 Auxiliary Database System of... Security/United States Coast Guard-024 Auxiliary Database (AUXDATA) System of Records.'' This system of... titled, ``DHS/USCG-024 Auxiliary Database (AUXDATA) System of Records.'' The AUXDATA system is the USCG's...

  6. Appendix A. Borderlands Site Database

    Treesearch

    A.C. MacWilliams

    2006-01-01

    The database includes modified components of the Arizona State Museum Site Recording System (Arizona State Museum 1993) and the New Mexico NMCRIS User's Guide (State of New Mexico 1993). When sites contain more than one recorded component, these instances were entered separately with the result that many sites have multiple entries. Information for this database...

  7. The microcomputer scientific software series 9: user's guide to Geo-CLM: geostatistical interpolation of the historical climatic record in the Lake States.

    Treesearch

    Margaret R. Holdaway

    1994-01-01

    Describes Geo-CLM, a computer application (for Mac or DOS) whose primary aim is to perform multiple kriging runs to interpolate the historic climatic record at research plots in the Lake States. It is an exploration and analysis tool. Additional capabilities include climatic databases, a flexible test mode, cross validation, lat/long conversion, English/metric units,...
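
    Geo-CLM's core task, interpolating climate values at unsampled plot locations, relies on kriging. A full kriging implementation requires fitting a variogram and solving for optimal weights, so the sketch below substitutes inverse-distance weighting, a simpler interpolator that conveys the same idea of distance-decayed station influence. The station coordinates and temperatures are fabricated examples, not Geo-CLM data.

```python
import math

def idw(lat, lon, obs, power=2.0):
    """Inverse-distance-weighted estimate at (lat, lon) from
    observations given as [(lat, lon, value), ...]."""
    num = den = 0.0
    for slat, slon, val in obs:
        d = math.hypot(lat - slat, lon - slon)
        if d < 1e-9:          # query point coincides with a station
            return val
        w = d ** -power       # nearer stations get larger weights
        num += w * val
        den += w
    return num / den

# Hypothetical Lake States stations: (lat, lon, mean annual temp, deg C)
stations = [
    (46.8, -92.1, 4.5),
    (44.9, -93.3, 7.8),
    (47.5, -94.9, 3.9),
]

estimate = idw(46.0, -93.5, stations)
```

    Because the weights are positive and sum to one, the estimate always falls within the range of the observed station values, a property IDW shares with ordinary kriging under common variogram models.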

  8. Soil Carbon Variability and Change Detection in the Forest Inventory Analysis Database of the United States

    NASA Astrophysics Data System (ADS)

    Wu, A. M.; Nater, E. A.; Dalzell, B. J.; Perry, C. H.

    2014-12-01

    The USDA Forest Service's Forest Inventory Analysis (FIA) program is a national effort assessing current forest resources to ensure sustainable management practices, to assist planning activities, and to report critical status and trends. For example, estimates of carbon stocks and stock change in FIA are reported as the official United States submission to the United Nations Framework Convention on Climate Change. While the main effort in FIA has been focused on aboveground biomass, soil is a critical component of this system. FIA sampled forest soils in the early 2000s and has remeasurement now underway. However, soil sampling is repeated on a 10-year interval (or longer), and it is uncertain what magnitude of changes in soil organic carbon (SOC) may be detectable with the current sampling protocol. We aim to identify the sensitivity and variability of SOC in the FIA database, and to determine the amount of SOC change that can be detected with the current sampling scheme. For this analysis, we attempt to answer the following questions: 1) What is the sensitivity (power) of SOC data in the current FIA database? 2) How does the minimum detectable change in forest SOC respond to changes in sampling intervals and/or sample point density? Soil samples in the FIA database represent 0-10 cm and 10-20 cm depth increments with a 10-year sampling interval. We are investigating the variability of SOC and its change over time for composite soil data in each FIA region (Pacific Northwest, Interior West, Northern, and Southern). To guide future sampling efforts, we are employing statistical power analysis to examine the minimum detectable change in SOC storage. We are also investigating the sensitivity of SOC storage changes under various scenarios of sample size and/or sample frequency. 
This research will inform the design of future FIA soil sampling schemes and improve the information available to international policy makers, university and industry partners, and the public.
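
    The minimum-detectable-change question posed above has a standard closed-form sketch under a normal approximation for a paired (remeasured-plot) design: the smallest detectable mean change scales with the standard deviation of plot-level differences and shrinks with the square root of the number of plots. The inputs below are illustrative placeholders, not actual FIA soil statistics.

```python
from statistics import NormalDist

def minimum_detectable_change(sd_diff, n, alpha=0.05, power=0.8):
    """Smallest true mean change in SOC detectable with a two-sided
    paired test, given the SD of plot-level differences and n plots."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z(power)            # ~0.84 for 80% power
    return (z_alpha + z_beta) * sd_diff / n ** 0.5

# Hypothetical inputs: SD of paired SOC differences = 12 Mg C/ha, 500 plots.
mdc = minimum_detectable_change(sd_diff=12.0, n=500)
```

    Quadrupling the number of plots halves the minimum detectable change, which is why the trade-off between sample density and remeasurement interval matters for the FIA design.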

  9. Desktop Analysis Reporting Tool (DART)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enders, Alexander L.; Lousteau, Angela L.

    The Desktop Analysis Reporting Tool (DART) is a software package that allows users to easily view and analyze daily files that span long periods. DART gives users the capability to quickly determine the state of health of a radiation portal monitor (RPM), troubleshoot and diagnose problems, and view data in various time frames to perform trend analysis. In short, it converts the data strings written in the daily files into meaningful tables and plots. The standalone version of DART (“soloDART”) utilizes a database engine that is included with the application; no additional installations are necessary. There is also a networked version of DART (“polyDART”) that is designed to maximize the benefit of a centralized data repository while distributing the workload to individual desktop machines. This networked approach requires a more complex database manager, Structured Query Language (SQL) Server; however, SQL Server is not currently provided with DART. Regardless of which version is used, DART will import daily files from RPMs, store the relevant data in its database, and produce reports for status, trend analysis, and reporting purposes.

  10. Statistical Analysis of the Uncertainty in Pre-Flight Aerodynamic Database of a Hypersonic Vehicle

    NASA Astrophysics Data System (ADS)

    Huh, Lynn

    The objective of the present research was to develop a new method to derive the aerodynamic coefficients and the associated uncertainties for flight vehicles via post-flight inertial navigation analysis using data from the inertial measurement unit. Statistical estimates of vehicle state and aerodynamic coefficients are derived using Monte Carlo simulation. Trajectory reconstruction using the inertial navigation system (INS) is a simple and well-used method. However, deriving realistic uncertainties in the reconstructed state and any associated parameters is not so straightforward. Extended Kalman filters, batch minimum-variance estimation, and other approaches have been used. However, these methods generally depend on assumed physical models or statistical distributions (usually Gaussian), or have convergence issues for non-linear problems. The approach here assumes no physical models, is applicable to any statistical distribution, and does not have any convergence issues. The new approach obtains the statistics directly from a sufficient number of Monte Carlo samples using only the generally well-known gyro and accelerometer specifications, and can be applied to systems of non-linear form and non-Gaussian distribution. When redundant data are available, the set of Monte Carlo simulations is constrained to satisfy the redundant data within the uncertainties specified for the additional data. The proposed method was applied to validate the uncertainty in the pre-flight aerodynamic database of the X-43A Hyper-X research vehicle. In addition to gyro and acceleration data, the actual flight data include redundant measurements of position and velocity from the global positioning system (GPS). Criteria derived from the blend of the GPS and INS accuracy were used to select valid trajectories for statistical analysis.
    The aerodynamic coefficients were derived from the selected trajectories either by a direct extraction method based on the equations of dynamics or by querying the pre-flight aerodynamic database. After applying the proposed method to the case of the X-43A Hyper-X research vehicle, it was found that 1) there were consistent differences between the aerodynamic coefficients from the pre-flight aerodynamic database and those from post-flight analysis, 2) the pre-flight estimation of the pitching moment coefficients was significantly different from the post-flight analysis, 3) the type of distribution of the states from the Monte Carlo simulation was affected by that of the perturbation parameters, 4) the uncertainties in the pre-flight model were overestimated, 5) the range where the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis are in closest agreement is between Mach *.* and *.*, and more data points may be needed between Mach * and ** in the pre-flight aerodynamic database, 6) the selection criterion for valid trajectories from the Monte Carlo simulations was mostly driven by the horizontal velocity error, 7) the selection criterion must be based on a reasonable model to ensure the validity of the statistics from the proposed method, and 8) the results from the proposed method applied to two different flights with identical geometry and similar flight profiles were consistent.
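
    The core Monte Carlo idea described above, perturbing sensor errors according to published gyro/accelerometer specifications, re-integrating each sampled trajectory, and taking statistics over the ensemble without assuming a Gaussian posterior, can be sketched with a 1-D stand-in for the full 6-DOF INS problem. All numbers are illustrative, not X-43A values.

```python
import random
from statistics import mean, stdev

random.seed(0)
DT, STEPS = 0.1, 100            # 10 s of 10 Hz accelerometer data
TRUE_ACC = 2.0                  # m/s^2, true constant acceleration
BIAS_SD, NOISE_SD = 0.02, 0.05  # assumed spec-sheet error magnitudes

def integrate_once():
    """One Monte Carlo sample: draw a bias, add per-step noise,
    then integrate acceleration to velocity and position."""
    bias = random.gauss(0.0, BIAS_SD)
    v = x = 0.0
    for _ in range(STEPS):
        a_meas = TRUE_ACC + bias + random.gauss(0.0, NOISE_SD)
        v += a_meas * DT
        x += v * DT
    return x

# Ensemble statistics come straight from the samples; no Gaussian
# posterior or filter convergence is assumed.
positions = [integrate_once() for _ in range(2000)]
pos_mean, pos_sd = mean(positions), stdev(positions)
```

    In the actual method, samples that violate redundant GPS measurements beyond their stated uncertainties would additionally be discarded before computing the statistics.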

  11. The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States

    USGS Publications Warehouse

    Horton, John D.; San Juan, Carma A.; Stoeser, Douglas B.

    2017-06-30

    The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States (https://doi.org/10.5066/F7WH2N65) represents a seamless, spatial database of 48 State geologic maps that range from 1:50,000 to 1:1,000,000 scale. A national digital geologic map database is essential in interpreting other datasets that support numerous types of national-scale studies and assessments, such as those that provide geochemistry, remote sensing, or geophysical data. The SGMC is a compilation of the individual U.S. Geological Survey releases of the Preliminary Integrated Geologic Map Databases for the United States. The SGMC geodatabase also contains updated data for seven States and seven entirely new State geologic maps that have been added since the preliminary databases were published. Numerous errors have been corrected and enhancements added to the preliminary datasets using thorough quality assurance/quality control procedures. The SGMC is not a truly integrated geologic map database because geologic units have not been reconciled across State boundaries. However, the geologic data contained in each State geologic map have been standardized to allow spatial analyses of lithology, age, and stratigraphy at a national scale.

  12. Vulnerabilities of Local Healthcare Providers in Complex Emergencies: Findings from the Manipur Micro-level Insurgency Database 2008-2009.

    PubMed

    Sinha, Samrat; David, Siddarth; Gerdin, Martin; Roy, Nobhojit

    2013-04-24

    Research on healthcare delivery in zones of conflict requires sustained and systematic attention. In the context of the South Asian region, there has been an absence of research on the vulnerabilities of health care workers and institutions in areas affected by armed conflict. The paper presents a case study of the varied nature of security challenges faced by local healthcare providers in the state of Manipur in the North-eastern region of India, located in the Indo-Myanmar frontier region, which has been experiencing armed violence and civil strife since the late 1960s. The aim of this study was to assess longitudinal and spatial trends in incidents involving health care workers in Manipur during the period 2008 to 2009. We conducted a retrospective analysis of the Manipur Micro-level Insurgency Database 2008-2009, created using local newspaper archives to measure the overall burden of violence experienced in the state over a two-year period. Publicly available press releases of armed groups and local hospitals in the state were used to supplement the quantitative data. Simple linear regression was used to assess longitudinal trends. Data were visualized with GIS software for spatial analysis. The mean proportion of incidents involving health care workers per month was 2.7% and ranged between 0 and 6.1% (table 2). There was a significant (P=0.037) month-to-month variation in the proportion of incidents involving health care workers, as well as an upward trend of about 0.11% per month. Spatial analysis revealed different patterns depending on whether absolute, population-adjusted, or incident-adjusted frequencies served as the basis of the analysis. The paper shows a small but steady rise in violence against health workers and health institutions impeding health services amid Manipur's pervasive violence.
More evidence-building backed by research, along with institutional obligations and commitment, is essential to protect these health systems. Keywords: India, Manipur, insurgency, healthcare, security, ethnic strife.
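
    The simple linear regression used for the longitudinal trend amounts to an ordinary least-squares slope over the monthly proportions. The sketch below fabricates 24 monthly values mimicking the reported pattern (an upward drift of roughly 0.11 percentage points per month with month-to-month variation); it does not use the actual Manipur data.

```python
# Fabricated monthly proportions (%): linear drift plus alternating noise.
months = list(range(24))
proportions = [1.5 + 0.11 * m + ((-1) ** m) * 0.4 for m in months]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

slope = ols_slope(months, proportions)   # close to the 0.11 drift built in
```

    In practice the slope would be paired with a significance test (as the P=0.037 above suggests), since short monthly series can show spurious trends.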

  13. Farmers and woods: a look at woodlands and woodland-owner intentions in the heartland

    Treesearch

    W. Keith Moser; Earl C. Leatherberry; Mark H. Hansen; Brett Butler

    2005-01-01

    This paper reports the results of a pilot study that explores the relationship between farm woodland owners' stated intentions for owning woodland, and their use of the land, with the structure and composition of the woodland. Two databases maintained by the USDA Forest Service, Forest Inventory and Analysis (FIA) program were used in the analysis-- the FIA forest...

  14. VerSeDa: vertebrate secretome database

    PubMed Central

    Cortazar, Ana R.; Oguiza, José A.

    2017-01-01

    Based on the current tools, de novo secretome (full set of proteins secreted by an organism) prediction is a time-consuming bioinformatic task that requires a multifactorial analysis in order to obtain reliable in silico predictions. Hence, to accelerate this process and offer researchers a reliable repository where secretome information can be obtained for vertebrates and model organisms, we have developed VerSeDa (Vertebrate Secretome Database). This freely available database stores information about proteins that are predicted to be secreted through the classical and non-classical mechanisms, for the wide range of vertebrate species deposited at the NCBI, UCSC and ENSEMBL sites. To our knowledge, VerSeDa is the only state-of-the-art database designed to store secretome data from multiple vertebrate genomes, thus saving a significant amount of time otherwise spent predicting protein features, which can instead be retrieved directly from this repository. Database URL: VerSeDa is freely available at http://genomics.cicbiogune.es/VerSeDa/index.php PMID:28365718

  15. Shared Freight Transportation and Energy Commodities Phase One: Coal, Crude Petroleum, & Natural Gas Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, Shih-Miao; Hwang, Ho-Ling; Davidson, Diane

    2016-07-01

    The Freight Analysis Framework (FAF) integrates data from a variety of sources to create a comprehensive picture of nationwide freight movements among states and major metropolitan areas for all modes of transportation. It provides a national picture of current freight flows to, from, and within the United States, assigns selected flows to the transportation network, and projects freight flow patterns into the future. The latest release of FAF is known as FAF4, with a base year of 2012. The FAF4 origin-destination-commodity-mode (ODCM) matrix is provided at the level of the nation, states, major metropolitan areas, and major gateways with significant freight activities (e.g., El Paso, Texas). The U.S. Department of Energy (DOE) is interested in using the FAF4 database for its strategic planning and policy analysis, particularly in association with the transportation of energy commodities. However, the geographic specification that DOE requires is a county-level ODCM matrix. Unfortunately, the geographic regions in the FAF4 database were not available at the detail DOE desired. Due to this limitation, DOE tasked Oak Ridge National Laboratory (ORNL) to assist in generating estimates of county-level flows for selected energy commodities by mode of transportation.
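
    ORNL's actual disaggregation method is not described in this abstract, but a common baseline approach is to split each state-level flow across counties in proportion to an activity weight such as industry employment or production. The sketch below is purely illustrative; the county names, weights, and tonnage are invented.

```python
# Proportional disaggregation of a state-level flow to counties.
# Weights stand in for a county-level activity measure (an assumption
# for illustration; the real FAF4/ORNL weighting may differ).

state_flow_ktons = 1200.0        # hypothetical state-level coal tonnage
county_weights = {
    "County A": 500,
    "County B": 300,
    "County C": 200,
}

total_weight = sum(county_weights.values())
county_flows = {
    county: state_flow_ktons * w / total_weight
    for county, w in county_weights.items()
}
```

    By construction the county flows sum back to the state total, so the disaggregation preserves the published FAF control totals.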

  16. Wildlife strikes to civil aircraft in the United States, 1990-2007

    DOT National Transportation Integrated Search

    2008-06-01

    This report presents a summary analysis of data from the FAA's National Wildlife Strike Database for the 18-year period 1990 through 2007. Unless noted, all totals are for the 17-year period, and percentages are of the total known. Because of t...

  17. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  18. Analysis of workplace compliance measurements of asbestos by the U.S. Occupational Safety and Health Administration (1984-2011).

    PubMed

    Cowan, Dallas M; Cheng, Thales J; Ground, Matthew; Sahmel, Jennifer; Varughese, Allysha; Madl, Amy K

    2015-08-01

    The United States Occupational Safety and Health Administration (OSHA) maintains the Chemical Exposure Health Data (CEHD) and the Integrated Management Information System (IMIS) databases, which contain quantitative and qualitative data resulting from compliance inspections conducted from 1984 to 2011. This analysis aimed to evaluate trends in workplace asbestos concentrations over time and across industries by combining the samples from these two databases. From 1984 to 2011, personal air samples ranged from 0.001 to 175 f/cc. Asbestos compliance sampling data associated with the construction, automotive repair, manufacturing, and chemical/petroleum/rubber industries included measurements in excess of 10 f/cc, and were above the permissible exposure limit from 2001 to 2011. The utility of combining the databases was limited by the completeness and accuracy of the data recorded. In this analysis, 40% of the data overlapped between the two databases. Other limitations included sampling bias associated with compliance sampling and errors occurring from user-entered data. A clear decreasing trend in both airborne fiber concentrations and the numbers of asbestos samples collected parallels historically decreasing trends in the consumption of asbestos, and declining mesothelioma incidence rates. Although air sampling data indicated that airborne fiber exposure potential was high (>10 f/cc for short- and long-term samples) in some industries (e.g., construction, manufacturing), airborne concentrations have significantly declined over the past 30 years. Recommendations for improving the existing OSHA exposure databases are provided. Copyright © 2015. Published by Elsevier Inc.

  19. Potential use of routine databases in health technology assessment.

    PubMed

    Raftery, J; Roderick, P; Stevens, A

    2005-05-01

    To develop criteria for classifying databases in relation to their potential use in health technology (HT) assessment and to apply them to a list of databases of relevance in the UK. To explore the extent to which prioritized databases could pick up those HTs being assessed by the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the extent to which these databases have been used in HT assessment. To explore the validation of the databases and their cost. Electronic databases. Key literature sources. Experienced users of routine databases. A 'first principles' examination of the data necessary for each type of HT assessment was carried out, supplemented by literature searches and a historical review. The principal investigators applied the criteria to the databases. Comments of the 'keepers' of the prioritized databases were incorporated. Details of 161 topics funded by the NHS R&D Health Technology Assessment (HTA) programme were reviewed iteratively by the principal investigators. Uses of databases in HTAs were identified by literature searches, which included the title of each prioritized database as a keyword. Annual reports of databases were examined and 'keepers' queried. The validity of each database was assessed using criteria based on a literature search and involvement by the authors in a national academic network. The costs of databases were established from annual reports, enquiries to 'keepers' of databases and 'guesstimates' based on cost per record. For assessing effectiveness, equity and diffusion, routine databases were classified into three broad groups: (1) group I databases, identifying both HTs and health states, (2) group II databases, identifying the HTs, but not a health state, and (3) group III databases, identifying health states, but not an HT. Group I datasets were disaggregated into clinical registries, clinical administrative databases and population-oriented databases. 
Group III was disaggregated into adverse event reporting, confidential enquiries, disease-only registers and health surveys. Databases in group I can be used not only to assess effectiveness but also to assess diffusion and equity. Databases in group II can only assess diffusion. Group III has restricted scope for assessing HTs, except for analysis of adverse events. For use in costing, databases need to include unit costs or prices. Some databases included unit cost as well as a specific HT. A list of around 270 databases was identified at the level of UK, England and Wales or England (over 1000 including Scotland, Wales and Northern Ireland). Allocation of these to the above groups identified around 60 databases with some potential for HT assessment, roughly half in group I. Eighteen clinical registers were identified as having the greatest potential, although the clinical administrative datasets had potential mainly owing to their inclusion of a wide range of technologies. Only two databases were identified that could directly be used in costing. The review of the potential capture of HTs prioritized by the UK's NHS R&D HTA programme showed that only 10% would be captured in these databases, mainly drugs prescribed in primary care. The review of the use of routine databases in any form of HT assessment indicated that clinical registers were mainly used for national comparative audit. Some databases have only been used in annual reports, usually for time trend analysis. A few peer-reviewed papers used a clinical register to assess the effectiveness of a technology. Accessibility is suggested as a barrier to using most databases. Clinical administrative databases (group Ib) have mainly been used to build population needs indices and performance indicators. A review of the validity of used databases showed that although internal consistency checks were common, relatively few had any form of external audit. 
Some comparative audit databases have data scrutinised by participating units. Issues around coverage and coding have, in general, received little attention. NHS funding of databases has been mainly for 'Central Returns' for management purposes, which excludes those databases with the greatest potential for HT assessment. Funding for databases varied, and some are unfunded, relying on goodwill. The estimated total cost of databases in group I plus selected databases from groups II and III has been estimated at £50 million, or around 0.1% of annual NHS spend. A few databases with limited potential for HT assessment account for the bulk of spending. Suggestions for policy include clarification of responsibility for the strategic development of databases, improved resourcing, and issues around coding, confidentiality, ownership and access, maintenance of clinical support, optimal use of information technology, filling gaps and remedying deficiencies. Recommendations for researchers include closer policy links between routine data and R&D, and selective investment in the more promising databases. Recommended research topics include optimal capture and coding of the range of HTs, international comparisons of the role, funding and use of routine data in healthcare systems, and use of routine databases in trials and in modelling. Independent evaluations are recommended for information strategies (such as those around the National Service Frameworks and various collaborations) and for electronic patient and health records.

  20. Institutional charges and disparities in outpatient brain biopsies in four US States: the State Ambulatory Database (SASD).

    PubMed

    Bekelis, Kimon; Missios, Symeon; Roberts, David W

    2013-11-01

    Several groups have demonstrated the safety of ambulatory brain biopsies, with no patients experiencing complications related to early discharge. Although these procedures appear to be safe, the factors influencing the selection of patients have not been investigated. We performed a cross-sectional study involving 504 patients who underwent outpatient and 10,328 patients who underwent inpatient brain biopsies and were registered in State Ambulatory Surgery Databases and State Inpatient Databases, respectively, for four US States (New York, California, Florida, North Carolina). In a multivariate analysis, private insurance (OR 2.45; 95% CI 1.85-3.24) was significantly associated with outpatient procedures. Higher Charlson Comorbidity Index (OR 0.16; 95% CI 0.08-0.32), high income (OR 0.37; 95% CI 0.26-0.53), and high-volume hospitals (OR 0.30; 95% CI 0.23-0.39) were associated with a decreased chance of outpatient procedures. No sex or racial disparities were observed. Institutional charges were significantly less for outpatient brain biopsies. There was no difference in the rate of 30-day postoperative readmissions between inpatient and outpatient procedures. The median charge for inpatient surgery was $51,316, as compared to $12,266 in the outpatient setting (P < 0.0001, Student's t test). Access to ambulatory brain biopsies appears to be more common for patients with private insurance and fewer comorbidities, in the setting of lower-volume hospitals. Further investigation is needed in the direction of mapping these disparities in resource utilization.

  1. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    NASA Astrophysics Data System (ADS)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

    The data processing model of the LHCb experiment implies handling an evolving set of heterogeneous metadata entities and the relationships between them. The entities range from software and database states to architecture specifications and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time-dependent geometry and conditions data, and the LHCb software, i.e., the data processing applications used for simulation, high-level triggering, reconstruction and analysis of physics data. The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process: relationships between a CondDB state and an LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne, a generic metadata relationship tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, the infrastructure needed for running it, security issues, first experience of usage in LHCb production, and the potential of the system to be applied to a wider set of LHCb tasks.
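    The compatibility-tracking idea described in this record can be illustrated with a small sketch. The code below uses a plain in-memory Python graph rather than Neo4j, and the node names (CondDB tags, application versions) and the COMPATIBLE_WITH relation are hypothetical illustrations, not taken from Ariadne itself.

```python
# Minimal sketch of metadata relationship tracking, assuming hypothetical
# node names and a COMPATIBLE_WITH relation; a real system like Ariadne
# would store these edges in a graph database such as Neo4j.
from collections import defaultdict

class MetadataGraph:
    def __init__(self):
        # (source node, relation name) -> set of target nodes
        self.edges = defaultdict(set)

    def relate(self, src, relation, dst):
        self.edges[(src, relation)].add(dst)

    def related(self, src, relation):
        return self.edges[(src, relation)]

g = MetadataGraph()
# Record which application versions are known to work with which CondDB tags.
g.relate("CondDB:tag-2014-01", "COMPATIBLE_WITH", "Brunel:v44r5")
g.relate("CondDB:tag-2014-01", "COMPATIBLE_WITH", "Brunel:v44r6")

def check(db_tag, app_version):
    """Flag potentially unsafe DB/application pairings before production use."""
    return app_version in g.related(db_tag, "COMPATIBLE_WITH")

ok = check("CondDB:tag-2014-01", "Brunel:v44r5")      # known-good pairing
stale = check("CondDB:tag-2014-01", "Brunel:v40r0")   # unrecorded pairing
```

Querying such a graph before each production run is one way to catch the "unmanaged and potentially harmful" pairings the abstract describes, since an unrecorded edge simply returns no match.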

  2. Design and Implementation of an Environmental Mercury Database for Northeastern North America

    NASA Astrophysics Data System (ADS)

    Clair, T. A.; Evers, D.; Smith, T.; Goodale, W.; Bernier, M.

    2002-12-01

    An important issue faced when attempting to interpret geochemical variability studies across large regions is the accumulation, access, and consistent display of data from a large number of sources. We were given the opportunity to provide a regional assessment of mercury distribution in surface waters, sediments, invertebrates, fish, and birds in a region extending from New York State to the Island of Newfoundland. We received over 20 individual databases from State, Provincial, and Federal governments, as well as from university researchers in both Canada and the United States. These databases came in a variety of formats and sizes. Our challenge was to find a way of accumulating and presenting the large amounts of acquired data in a consistent, easily accessible fashion that could then be more easily interpreted. Moreover, the database had to be portable and easily distributable to the large number of study participants. We developed a static database structure using a web-based approach and mounted it on a server accessible to all project participants. The site also contained all the necessary documentation related to the data, its acquisition, and the methods used in its analysis and interpretation. We then copied the complete web site onto CD-ROMs, which we distributed to all project participants, funding agencies, and other interested parties. The CD-ROM formed a permanent record of the project and was issued ISSN and ISBN numbers so that the information remains accessible to researchers in perpetuity. Here we present an overview of the CD-ROM and data structures, of the information accumulated over the first year of the study, and an initial interpretation of the results.

  3. Bridging international law and rights-based litigation: mapping health-related rights through the development of the Global Health and Human Rights Database.

    PubMed

    Meier, Benjamin Mason; Cabrera, Oscar A; Ayala, Ana; Gostin, Lawrence O

    2012-06-15

    The O'Neill Institute for National and Global Health Law at Georgetown University, the World Health Organization, and the Lawyers Collective have come together to develop a searchable Global Health and Human Rights Database that maps the intersection of health and human rights in judgments, international and regional instruments, and national constitutions. Where states long remained unaccountable for violations of health-related human rights, litigation has arisen as a central mechanism in an expanding movement to create rights-based accountability. Facilitated by the incorporation of international human rights standards in national law, this judicial enforcement has supported the implementation of rights-based claims, giving meaning to states' longstanding obligations to realize the highest attainable standard of health. Yet despite these advancements, there has been insufficient awareness of the international and domestic legal instruments enshrining health-related rights and little understanding of the scope and content of litigation upholding these rights. As this accountability movement evolves, the Global Health and Human Rights Database seeks to chart this burgeoning landscape of international instruments, national constitutions, and judgments for health-related rights. Employing international legal research to document and catalogue these three interconnected aspects of human rights for the public's health, the Database's categorization by human rights, health topics, and regional scope provides a comprehensive means of understanding health and human rights law. Through these categorizations, the Global Health and Human Rights Database serves as a basis for analogous legal reasoning across states to serve as precedents for future cases, for comparative legal analysis of similar health claims in different country contexts, and for empirical research to clarify the impact of human rights judgments on public health outcomes. 
Copyright © 2012 Meier, Nygren-Krug, Cabrera, Ayala, and Gostin.

  4. Compilation, quality control, analysis, and summary of discrete suspended-sediment and ancillary data in the United States, 1901-2010

    USGS Publications Warehouse

    Lee, Casey J.; Glysson, G. Douglas

    2013-01-01

    Human-induced and natural changes to the transport of sediment and sediment-associated constituents can degrade aquatic ecosystems and limit human uses of streams and rivers. The lack of a dedicated, easily accessible, quality-controlled database of sediment and ancillary data has made it difficult to identify sediment-related water-quality impairments and has limited understanding of how human actions affect suspended-sediment concentrations and transport. The purpose of this report is to describe the creation of a quality-controlled U.S. Geological Survey suspended-sediment database, provide guidance for its use, and summarize characteristics of suspended-sediment data through 2010. The database is provided as an online application at http://cida.usgs.gov/sediment to allow users to view, filter, and retrieve available suspended-sediment and ancillary data. A data recovery, filtration, and quality-control process was performed to expand the availability, representativeness, and utility of existing suspended-sediment data collected by the U.S. Geological Survey in the United States before January 1, 2011. Information on streamflow condition, sediment grain size, and upstream landscape condition were matched to sediment data and sediment-sampling sites to place data in context with factors that may influence sediment transport. Suspended-sediment and selected ancillary data are presented from across the United States with respect to time, streamflow, and landscape condition. Examples of potential uses of this database for identifying sediment-related impairments, assessing trends, and designing new data collection activities are provided. This report and database can support local and national-level decision making, project planning, and data mining activities related to the transport of suspended-sediment and sediment-associated constituents.

  5. Multimodal optical imaging database from tumour brain human tissue: endogenous fluorescence from glioma, metastasis and control tissues

    NASA Astrophysics Data System (ADS)

    Poulon, Fanny; Ibrahim, Ali; Zanello, Marc; Pallud, Johan; Varlet, Pascale; Malouki, Fatima; Abi Lahoud, Georges; Devaux, Bertrand; Abi Haidar, Darine

    2017-02-01

    Eliminating the time-consuming process of conventional biopsy is a practical improvement, as is increasing the accuracy of tissue diagnoses and patient comfort. We addressed these needs by developing a multimodal nonlinear endomicroscope that allows real-time optical biopsies during surgical procedures. It will provide immediate information for diagnostic use without removal of tissue and will assist the choice of the optimal surgical strategy. This instrument will combine several means of contrast: non-linear fluorescence, second harmonic generation, reflectance, fluorescence lifetime and spectral analysis. Multimodality is crucial for reliable and comprehensive analysis of tissue. In parallel with the instrumental development, we are currently improving our understanding of the endogenous fluorescence signal with the different modalities that will be implemented in the stated instrument. This endeavor will allow us to create a database on the optical signatures of diseased and control brain tissues. This proceeding presents the preliminary results of this database on three types of tissue: cortex, metastasis and glioblastoma.

  6. Databases as policy instruments. About extending networks as evidence-based policy.

    PubMed

    de Bont, Antoinette; Stoevelaar, Herman; Bal, Roland

    2007-12-07

    This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Our results demonstrate that policy makers hardly used the databases, either for cost control or for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative to centralized control on the basis of monitoring data.

  7. Competitive Employment Outcomes among Racial and Ethnic Groups with Criminal Histories & Mental Impairment

    ERIC Educational Resources Information Center

    Gines, Jason Elliott

    2013-01-01

    This study investigated vocational rehabilitation (VR) outcomes among people with criminal histories with mental impairments who were served in a state-federal VR agency during fiscal year 2010 as extracted from the Rehabilitation Services Administration (RSA) 911 national database. Using hierarchical logistic analysis, this study examined…

  8. Iraq Reconstruction: Lessons Learned from Investigations, 2004-2012

    DTIC Science & Technology

    2012-04-01

    about 10 miles south of Baghdad, where Bloom supplied contracting officers with, in the words of one defendant, “women of the night, Cuban cigars ...Department of State Office of Inspector General (DoS OIG) SPITFIRE investigative teams employed electronic databases and specialized forensic analysis to

  9. Effect of initial conditions of a catchment on seasonal streamflow prediction using ensemble streamflow prediction (ESP) technique for the Rangitata and Waitaki River basins on the South Island of New Zealand

    NASA Astrophysics Data System (ADS)

    Singh, Shailesh Kumar; Zammit, Christian; Hreinsson, Einar; Woods, Ross; Clark, Martyn; Hamlet, Alan

    2013-04-01

    Increased access to water is a key pillar of the New Zealand government's plan for economic growth. Variable climatic conditions, coupled with market drivers and increased demand on water resources, result in critical decisions made by water managers on the basis of climate and streamflow forecasts. Because many of these decisions have serious economic implications, accurate forecasts of climate and streamflow are of paramount importance (e.g., for irrigated agriculture and electricity generation). New Zealand currently does not have a centralized, comprehensive, and state-of-the-art system in place for providing operational seasonal to interannual streamflow forecasts to guide water resources management decisions. As a pilot effort, we implement and evaluate an experimental ensemble streamflow forecasting system for the Waitaki and Rangitata River basins on New Zealand's South Island using a hydrologic simulation model (TopNet) and the familiar ensemble streamflow prediction (ESP) paradigm for estimating forecast uncertainty. To provide a comprehensive database for evaluation of the forecasting system, a set of retrospective model states simulated by the hydrologic model on the first day of each month was first archived for 1972-2009. Then, using the hydrologic simulation model, each of these historical model states was paired with the retrospective temperature and precipitation time series from each historical water year to create a database of retrospective hindcasts. Using the resulting database, the relative importance of initial state variables (such as soil moisture and snowpack) as fundamental drivers of forecast uncertainty was evaluated for different seasons and lead times. The analysis indicates that the sensitivity of flow forecasts to initial-condition uncertainty depends on the hydrological regime and the season of the forecast. However, initial conditions do not have a large impact on seasonal flow uncertainties for snow-dominated catchments. 
Further analysis indicates that this result remains valid when the hindcast database is conditioned by ENSO classification. As a result, hydrological forecasts based on the ESP technique, in which present initial conditions are paired with historical forcing data, may be plausible for New Zealand catchments.
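    The hindcast construction described above (one set of initial conditions paired with forcing traces from many historical years, each pairing yielding one ensemble member) can be sketched in a few lines. The toy water-balance model below is an invented stand-in for TopNet, and all numbers are illustrative; only the pairing logic reflects the ESP paradigm.

```python
# ESP-style ensemble sketch: fixed initial state x historical forcing years.
# The "model" is a hypothetical linear store, not TopNet.
def toy_model(initial_storage, forcings):
    # Each step adds the forcing to storage and releases 10% as streamflow.
    storage, flows = initial_storage, []
    for p in forcings:
        storage += p
        q = 0.1 * storage
        storage -= q
        flows.append(q)
    return flows

# Illustrative seasonal precipitation traces for three historical years.
historical_forcings = {
    1972: [5.0, 3.0, 8.0],
    1973: [1.0, 0.5, 2.0],   # dry year
    1974: [9.0, 7.0, 6.0],   # wet year
}

initial_storage = 50.0  # current catchment state (e.g., soil moisture + snow)

# One ensemble member per historical year, all sharing the same initial state.
ensemble = {yr: toy_model(initial_storage, f)
            for yr, f in historical_forcings.items()}
seasonal_totals = {yr: sum(q) for q_yr, q in ((y, e) for y, e in ensemble.items()) for yr in [q_yr]}
```

The spread of `seasonal_totals` across years is the forcing-driven part of the forecast uncertainty; rerunning the same loop from different archived initial states (as in the 1972-2009 archive above) isolates the initial-condition part.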

  10. Fatigue Crack Growth Database for Damage Tolerance Analysis

    NASA Technical Reports Server (NTRS)

    Forman, R. G.; Shivakumar, V.; Cardinal, J. W.; Williams, L. C.; McKeighan, P. C.

    2005-01-01

    The objective of this project was to begin the process of developing a fatigue crack growth database (FCGD) of metallic materials for use in damage tolerance analysis of aircraft structure. For this initial effort, crack growth rate data in the NASGRO (Registered trademark) database, the United States Air Force Damage Tolerant Design Handbook, and other publicly available sources were examined and used to develop a database that characterizes crack growth behavior for specific applications (materials). The focus of this effort was on materials for general commercial aircraft applications, including large transport airplanes, small transport commuter airplanes, general aviation airplanes, and rotorcraft. The end products of this project are the FCGD software and this report. The specific goal of this effort was to present fatigue crack growth data in three usable formats: (1) NASGRO equation parameters, (2) Walker equation parameters, and (3) tabular data points. The development of this FCGD will begin the process of developing a consistent set of standard fatigue crack growth material properties. It is envisioned that the end product of the process will be a general repository for credible and well-documented fracture properties that may be used as a default standard in damage tolerance analyses.
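    Of the parameterized formats named above, the Walker equation has a compact closed form, da/dN = C·[ΔK/(1-R)^(1-γ)]^n, which reduces to the Paris law at R = 0. A minimal sketch, with illustrative parameter values not taken from the FCGD:

```python
# Walker crack-growth relation: da/dN = C * (dK / (1 - R)**(1 - gamma))**n.
# Parameter values below are illustrative only, not FCGD material data.
def walker_rate(delta_k, r, c, n, gamma):
    """Crack growth rate da/dN for stress-intensity range delta_k at ratio r."""
    delta_k_eff = delta_k / (1.0 - r) ** (1.0 - gamma)
    return c * delta_k_eff ** n

# At R = 0 the effective range equals delta_k (the Paris-law limit).
rate_r0 = walker_rate(delta_k=10.0, r=0.0, c=1e-10, n=3.0, gamma=0.5)
# A positive stress ratio raises the effective range, and hence the rate.
rate_r05 = walker_rate(delta_k=10.0, r=0.5, c=1e-10, n=3.0, gamma=0.5)
```

Tabulating C, n, and γ per material, as the FCGD does for its equation-parameter formats, lets a damage tolerance analysis interpolate growth rates at arbitrary stress ratios instead of storing a full da/dN curve for each one.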

  11. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and it has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an unsolved problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results, based on two public databases (the PolyU Hyper Spectral Palmprint database and the PolyU Multi Spectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] on the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) on the multispectral database.

  12. CellLineNavigator: a workbench for cancer cell line analysis

    PubMed Central

    Krupp, Markus; Itzel, Timo; Maass, Thorsten; Hildebrandt, Andreas; Galle, Peter R.; Teufel, Andreas

    2013-01-01

    The CellLineNavigator database, freely available at http://www.medicalgenomics.org/celllinenavigator, is a web-based workbench for large-scale comparisons of a large collection of diverse cell lines. It aims to support experimental design in the fields of genomics, systems biology and translational biomedical research. Currently, this compendium holds genome-wide expression profiles of 317 different cancer cell lines, categorized into 57 different pathological states and 28 individual tissues. To enlarge the scope of CellLineNavigator, the database was furthermore closely linked to commonly used bioinformatics databases and knowledge repositories. To ensure easy data access and searchability, a simple data interface and an intuitive query interface were implemented. The former allows the user to explore and filter gene expression, focusing on pathological or physiological conditions. For a more complex search, the advanced query interface may be used to query for (i) differentially expressed genes; (ii) pathological or physiological conditions; or (iii) gene names or functional attributes, such as Kyoto Encyclopaedia of Genes and Genomes pathway maps. These queries may also be combined. Finally, CellLineNavigator allows additional advanced analysis of differentially regulated genes through a direct link to the Database for Annotation, Visualization and Integrated Discovery (DAVID) Bioinformatics Resources. PMID:23118487

  13. Does an Otolaryngology-Specific Database Have Added Value? A Comparative Feasibility Analysis.

    PubMed

    Bellmunt, Angela M; Roberts, Rhonda; Lee, Walter T; Schulz, Kris; Pynnonen, Melissa A; Crowson, Matthew G; Witsell, David; Parham, Kourosh; Langman, Alan; Vambutas, Andrea; Ryan, Sheila E; Shin, Jennifer J

    2016-07-01

    There are multiple nationally representative databases that support epidemiologic and outcomes research, and it is unknown whether an otolaryngology-specific resource would prove indispensable or superfluous. Therefore, our objective was to determine the feasibility of analyses in the National Ambulatory Medical Care Survey (NAMCS) and National Hospital Ambulatory Medical Care Survey (NHAMCS) databases as compared with the otolaryngology-specific Creating Healthcare Excellence through Education and Research (CHEER) database. Parallel analyses in 2 data sets. Ambulatory visits in the United States. To test a fixed hypothesis that could be directly compared between data sets, we focused on a condition with expected prevalence high enough to substantiate availability in both. This query also encompassed a broad span of diagnoses to sample the breadth of available information. Specifically, we compared an assessment of suspected risk factors for sensorineural hearing loss in subjects 0 to 21 years of age, according to a predetermined protocol. We also assessed the feasibility of 6 additional diagnostic queries among all age groups. In the NAMCS/NHAMCS data set, the number of measured observations was not sufficient to support reliable numeric conclusions (percentage standard error among risk factors: 38.6-92.1). Analysis of the CHEER database demonstrated that age, sex, meningitis, and cytomegalovirus were statistically significant factors associated with pediatric sensorineural hearing loss (P < .01). Among the 6 additional diagnostic queries assessed, NAMCS/NHAMCS usage was also infeasible; the CHEER database contained 1585 to 212,521 more observations per annum. An otolaryngology-specific database has added utility when compared with already available national ambulatory databases. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  14. The pervasive crisis of diminishing radiation therapy access for vulnerable populations in the United States, part 3: Hispanic-American patients.

    PubMed

    McClelland, Shearwood; Perez, Carmen A

    2018-01-01

    Health disparities have profoundly affected underrepresented minorities throughout the United States, particularly with regard to access to evidence-based interventions such as surgery or medication. The degree of disparity in access to radiation therapy (RT) for Hispanic-American patients with cancer has not been previously examined in an extensive manner. An extensive literature search was performed using the PubMed database to examine studies investigating disparities in RT access for Hispanic-Americans. A total of 34 studies were found, spanning 10 organ systems. Disparities in access to RT for Hispanic-Americans were most prominently studied in cancers of the breast (15 studies), prostate (4 studies), head and neck (4 studies), and gynecologic system (3 studies). Disparities in RT access for Hispanic-Americans were prevalent regardless of the organ system studied and were compounded by limited English proficiency and/or birth outside of the United States. A total of 26 of 34 studies (77%) involved analysis of a population-based database, such as Surveillance, Epidemiology, and End Results (SEER) (15 studies); SEER-Medicare (4 studies); the National Cancer Database (3 studies); or a state tumor registry (4 studies). Hispanic-Americans in the United States have diminished RT access compared with Caucasian patients but are less likely to experience concomitant disparities in mortality than other underrepresented minorities that experience similar disparities (ie, African-Americans). Hispanic-Americans who are born outside of the United States and/or have limited English proficiency may be more likely to experience substandard RT access. These results underscore the importance of finding nationwide solutions to address such inequalities that hinder Hispanic-Americans and other underrepresented minorities throughout the United States.

  15. The LncRNA Connectivity Map: Using LncRNA Signatures to Connect Small Molecules, LncRNAs, and Diseases.

    PubMed

    Yang, Haixiu; Shang, Desi; Xu, Yanjun; Zhang, Chunlong; Feng, Li; Sun, Zeguo; Shi, Xinrui; Zhang, Yunpeng; Han, Junwei; Su, Fei; Li, Chunquan; Li, Xia

    2017-07-27

    Well-characterized connections among diseases, long non-coding RNAs (lncRNAs) and drugs are important for elucidating the key roles of lncRNAs in the biological mechanisms underlying various biological states. In this study, we constructed a database called LNCmap (LncRNA Connectivity Map), available at http://www.bio-bigdata.com/LNCmap/, to establish the correlations among diseases, physiological processes, and the action of small-molecule therapeutics by attempting to describe all biological states in terms of lncRNA signatures. By reannotating the microarray data from the Connectivity Map database, the LNCmap obtained 237 lncRNA signatures of 5916 instances corresponding to 1262 small-molecule drugs. We provide a user-friendly interface for convenient browsing, retrieval and download of the database, including detailed information on drugs and the lncRNAs they affect. Additionally, we developed two enrichment analysis methods by which users can identify candidate drugs for a particular disease: users input the corresponding lncRNA expression profiles or an associated lncRNA list, which are then compared to the lncRNA signatures in our database. Overall, LNCmap could significantly improve our understanding of the biological roles of lncRNAs and provides a unique resource for revealing the connections among drugs, lncRNAs and diseases.
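The signature-matching idea behind such a connectivity map can be sketched with a simple rank-based score. This is an illustrative toy (the scoring rule and the lncRNA names are invented for the example; LNCmap's own enrichment statistics are not described in this record):

```python
def connectivity_score(up, down, ranked):
    """Toy connectivity score: positive when the query's up-regulated lncRNAs
    sit near the top of a drug instance's ranked signature and the
    down-regulated ones near the bottom; negative for the reverse pattern.
    `ranked` lists lncRNA IDs from most up- to most down-regulated."""
    pos = {lnc: i for i, lnc in enumerate(ranked)}
    n = len(ranked)
    # mean normalized rank (0 = top of the list, 1 = bottom) of each query set
    up_rank = sum(pos[l] for l in up) / (len(up) * (n - 1))
    down_rank = sum(pos[l] for l in down) / (len(down) * (n - 1))
    return down_rank - up_rank  # in [-1, 1]

# hypothetical drug-instance signature and query gene sets
signature = ["lnc1", "lnc2", "lnc3", "lnc4", "lnc5", "lnc6"]
print(connectivity_score({"lnc1", "lnc2"}, {"lnc5", "lnc6"}, signature))
```

A score near +1 suggests the drug instance reproduces the query signature; a score near -1 suggests it reverses it, the pattern of interest when searching for candidate therapeutics.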

  16. myPhyloDB: a local web server for the storage and analysis of metagenomic data.

    PubMed

    Manter, Daniel K; Korsa, Matthew; Tebbe, Caleb; Delgado, Jorge A

    2016-01-01

    myPhyloDB v.1.1.2 is a user-friendly personal database with a browser interface designed to facilitate the storage, processing, analysis, and distribution of microbial community data (e.g. 16S metagenomics data). MyPhyloDB archives raw sequencing files and allows for easy selection of any combination of projects/samples from all available data in the database. Its data processing capabilities are also flexible: users can upload and store pre-processed data, or use the built-in Mothur pipeline to automate the processing of raw sequencing data. myPhyloDB provides several analytical tools (e.g. analysis of covariance, t-tests, linear regression, differential abundance (DESeq2), and principal coordinates analysis (PCoA)) and normalization methods (rarefaction, DESeq2, and proportion) for the comparative analysis of taxonomic abundance, species richness, and species diversity across projects of various types (e.g. human-associated, human gut microbiome, air, soil, and water) at any desired taxonomic level(s). Finally, since myPhyloDB is a local web server, users can quickly share data with colleagues and end-users by simply granting others access to their personal myPhyloDB database. myPhyloDB is available at http://www.ars.usda.gov/services/software/download.htm?softwareid=472, and more information along with tutorials can be found on our website, http://www.myphylodb.org. Database URL: http://www.myphylodb.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.
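The normalization and diversity calculations a tool like this automates reduce to a few lines of code. A minimal sketch using generic textbook formulas (this is not myPhyloDB's implementation, and the OTU counts are invented):

```python
import math
import random

def proportion_normalize(counts):
    """Proportion normalization: convert raw OTU counts to relative abundances."""
    total = sum(counts)
    return [c / total for c in counts]

def shannon_diversity(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    return -sum(p * math.log(p) for p in proportion_normalize(counts) if p > 0)

def rarefy(counts, depth, seed=0):
    """Rarefaction: subsample `depth` reads without replacement so that samples
    sequenced to different depths can be compared fairly."""
    pool = [taxon for taxon, c in enumerate(counts) for _ in range(c)]
    random.Random(seed).shuffle(pool)
    sub = pool[:depth]
    return [sub.count(t) for t in range(len(counts))]

sample = [40, 30, 20, 10]          # hypothetical OTU counts for one sample
print(round(shannon_diversity(sample), 3))
print(sum(rarefy(sample, 50)))     # the rarefied sample totals exactly 50 reads
```

Differential-abundance tools such as DESeq2 replace the proportion/rarefaction step with model-based normalization, which is why a tool offering several methods side by side is useful for comparisons.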

  17. Loss and damage affecting the public health sector and society resulting from flooding and flash floods in Brazil between 2010 and 2014 - based on data from national and global information systems.

    PubMed

    Minervino, Aline Costa; Duarte, Elisabeth Carmen

    2016-03-01

    This article outlines the results of a descriptive study that analyses loss and damage caused by hydrometeorological disasters in Brazil between 2010 and 2014 using the EM-DAT (global) and S2iD (national) databases. The analysis shows major differences between the databases in the total number of disaster events recorded (EM-DAT = 36; S2iD = 4,070) and in the estimated costs of loss and damage (EM-DAT = R$9.2 billion; S2iD = R$331.4 billion). The analysis also shows that the five states most affected by these events are Santa Catarina, Rio Grande do Sul, Minas Gerais, São Paulo and Paraná in Brazil's South and Southeast regions and that these results are consistent with the findings of other studies. The costs of disasters were highest for housing, public infrastructure works, collectively used public facilities, other public service facilities, and state health and education facilities. The costs associated with public health facilities were also high. Despite their limitations, both databases demonstrated their usefulness for determining seasonal and long-term trends, patterns, and risk areas, and can thus assist decision makers in identifying the areas most affected by and most vulnerable to natural disasters.

  18. The New Politics of US Health Care Prices: Institutional Reconfiguration and the Emergence of All-Payer Claims Databases.

    PubMed

    Rocco, Philip; Kelly, Andrew S; Béland, Daniel; Kinane, Michael

    2017-02-01

    Prices are a significant driver of health care costs in the United States. Existing research on the politics of health system reform has emphasized the limited nature of policy entrepreneurs' efforts at solving the problem of rising prices through direct regulation at the state level. Yet this literature fails to account for how change agents in the states gradually reconfigured the politics of prices, forging new, transparency-based policy instruments called all-payer claims databases (APCDs), which are designed to empower consumers, purchasers, and states to make informed market and policy choices. Drawing on pragmatist institutional theory, this article shows how APCDs emerged as the dominant model for reforming health care prices. While APCD advocates faced significant institutional barriers to policy change, we show how they reconfigured existing ideas, tactical repertoires, and legal-technical infrastructures to develop a politically and technologically robust reform. Our analysis has important implications for theories of how change agents overcome structural barriers to health reform. Copyright © 2017 by Duke University Press.

  19. Interdisciplinary Investigations in Support of Project DI-MOD

    NASA Technical Reports Server (NTRS)

    Starks, Scott A. (Principal Investigator)

    1996-01-01

    Various concepts from time series analysis are used as the basis for the development of algorithms to assist in the analysis and interpretation of remotely sensed imagery. An approach to trend detection based on fractal analysis of power spectrum estimates is presented. Additionally, research was conducted toward the development of a software architecture to support processing tasks associated with databases housing a variety of data. An algorithmic approach that automates the state monitoring process is also presented.

  20. NORTHWEST ENVIRONMENTAL DATABASE (NED) FOR WA, OR, AND ID

    EPA Science Inventory

    This database results from a massive data gathering program initiated by BPA/NPPC in the mid-1980s. Each state now manages the portion of the database within its borders. Data & evaluations were gathered by wildlife/game/fish biologists, and other state, federal, and tribal res...

  1. Human Connectome Project Informatics: quality control, database services, and data visualization

    PubMed Central

    Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.

    2013-01-01

    The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591
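One common way to "measure head motion" for quality control is a framewise-displacement-style summary of the six rigid-body realignment parameters. The sketch below uses the widely cited Power-style formula as an illustration; the exact HCP metric and the numbers here are assumptions for the example, not taken from the record above:

```python
def framewise_displacement(motion, radius=50.0):
    """Per-frame head motion: sum of absolute frame-to-frame changes in the six
    rigid-body parameters (x, y, z translations in mm; pitch, roll, yaw in
    radians), with rotations converted to millimetres of arc on a sphere of
    `radius` mm."""
    fd = [0.0]  # the first frame has no predecessor
    for prev, cur in zip(motion, motion[1:]):
        d = [abs(c - p) for c, p in zip(cur, prev)]
        fd.append(sum(d[:3]) + radius * sum(d[3:]))
    return fd

# two hypothetical frames: a 0.1 mm shift in x plus a 0.002 rad pitch change
motion = [(0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
          (0.1, 0.0, 0.0, 0.002, 0.0, 0.0)]
print(framewise_displacement(motion))
```

Frames whose displacement exceeds a chosen threshold (often a few tenths of a millimetre) can then be flagged or censored before further analysis.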

  2. Subscale Test Methods for Combustion Devices

    NASA Technical Reports Server (NTRS)

    Anderson, W. E.; Sisco, J. C.; Long, M. R.; Sung, I.-K.

    2005-01-01

    Stated goals for long-life LREs (liquid rocket engines) have been between 100 and 500 cycles. Key challenges include: 1) the inherent technical difficulty of accurately defining the transient and steady-state thermochemical environments and structural response (strain); 2) the limited statistical basis on failure mechanisms and the effects of design and operational variability; and 3) very high test costs and a budget-driven need to protect test hardware (aversion to test-to-failure). Ambitious goals will require the development of new databases covering: a) advanced materials, e.g., tailored composites with virtually unlimited property variations; b) innovative functional designs that exploit the full capabilities of advanced materials; and c) different cycles/operations. Subscale testing is one way to address these technical and budget challenges: 1) prototype subscale combustors are exposed to controlled simulated conditions; 2) the approach is complementary to conventional laboratory specimen database development; 3) the hardware is instrumented with sensors to measure thermostructural response; and 4) testing is coupled with analysis.

  3. Introduction to the DISRUPT postprandial database: subjects, studies and methodologies.

    PubMed

    Jackson, Kim G; Clarke, Dave T; Murray, Peter; Lovegrove, Julie A; O'Malley, Brendan; Minihane, Anne M; Williams, Christine M

    2010-03-01

    Dysregulation of lipid and glucose metabolism in the postprandial state are recognised as important risk factors for the development of cardiovascular disease and type 2 diabetes. Our objective was to create a comprehensive, standardised database of postprandial studies to provide insights into the physiological factors that influence postprandial lipid and glucose responses. Data were collated from subjects (n = 467) taking part in single and sequential meal postprandial studies conducted by researchers at the University of Reading, to form the DISRUPT (DIetary Studies: Reading Unilever Postprandial Trials) database. Subject attributes including age, gender, genotype, menopausal status, body mass index, blood pressure and a fasting biochemical profile, together with postprandial measurements of triacylglycerol (TAG), non-esterified fatty acids, glucose, insulin and TAG-rich lipoprotein composition are recorded. A particular strength of the studies is the frequency of blood sampling, with on average 10-13 blood samples taken during each postprandial assessment, and the fact that identical test meal protocols were used in a number of studies, allowing pooling of data to increase statistical power. The DISRUPT database is the most comprehensive postprandial metabolism database that exists worldwide and preliminary analysis of the pooled sequential meal postprandial dataset has revealed both confirmatory and novel observations with respect to the impact of gender and age on the postprandial TAG response. Further analysis of the dataset using conventional statistical techniques along with integrated mathematical models and clustering analysis will provide a unique opportunity to greatly expand current knowledge of the aetiology of inter-individual variability in postprandial lipid and glucose responses.
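With 10-13 samples per postprandial assessment, responses such as TAG are typically summarized as an incremental area under the curve (iAUC) via the trapezoidal rule. A minimal sketch of that standard calculation (the sampling times and values are invented; this is not code from the DISRUPT studies):

```python
def incremental_auc(times, values):
    """Trapezoidal-rule area of the response above the fasting (time-zero)
    baseline; `times` in hours, `values` e.g. TAG in mmol/L."""
    baseline = values[0]
    excess = [v - baseline for v in values]
    auc = 0.0
    for i in range(1, len(times)):
        auc += (excess[i - 1] + excess[i]) / 2 * (times[i] - times[i - 1])
    return auc

# hypothetical hourly TAG measurements after a test meal
times = [0, 1, 2, 3, 4]                  # hours
tag = [1.0, 1.6, 2.2, 1.8, 1.2]          # mmol/L
print(incremental_auc(times, tag))       # area in mmol/L x h
```

Denser sampling, as in these studies, makes the trapezoidal approximation of the true response curve correspondingly more accurate.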

  4. Identifying Obstacles and Research Gaps of Telemedicine Projects: Approach for a State-of-the-Art Analysis.

    PubMed

    Harst, Lorenz; Timpel, Patrick; Otto, Lena; Wollschlaeger, Bastian; Richter, Peggy; Schlieter, Hannes

    2018-01-01

    This paper presents an approach for evaluating finished telemedicine projects using qualitative methods. Telemedicine applications are said to improve the performance of health care systems, yet while there are countless telemedicine projects, the vast majority never crosses the threshold from testing to implementation and diffusion. Projects were collected from German project databases in the area of telemedicine following systematically developed criteria. In a testing phase, ten projects were subjected to a qualitative content analysis to identify limitations, the need for further research, and lessons learned. Using Mayring's method of inductive category development, six categories of possible future research were derived. The proposed method is thus an important contribution to diffusion and translation research in telemedicine, as it is applicable to systematic searches of databases.

  5. Chemical hazards database and detection system for Microgravity and Materials Processing Facility (MMPF)

    NASA Technical Reports Server (NTRS)

    Steele, Jimmy; Smith, Robert E.

    1991-01-01

    The ability to identify contaminants associated with experiments and facilities is directly related to the safety of the Space Station. A means of identifying these contaminants was developed through this contracting effort. The delivered system provides a listing of the materials and/or chemicals associated with each facility, information on each contaminant's physical state, a list of the quantity and/or volume of each suspected contaminant, a database of the toxicological hazards associated with each contaminant, a recommended means of rapid identification of contaminants under operational conditions, a method for failure modes and effects analysis for each facility, and a fault-tree-type analysis that provides a means of identifying potential hazardous conditions related to future planned missions.

  6. A Statewide Information Databases Program: What Difference Does It Make to Academic Libraries?

    ERIC Educational Resources Information Center

    Lester, June; Wallace, Danny P.

    2004-01-01

    The Oklahoma Department of Libraries (ODL) launched Oklahoma's statewide database program in 1997. For the state's academic libraries, the program extended access to information, increased database use, and fostered positive relationships among ODL, academic libraries, and Oklahoma State Regents for Higher Education (OSRHE), creating a more…

  7. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  8. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  9. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  10. 33 CFR 137.60 - Reviews of Federal, State, tribal, and local government records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (a) Federal, State, tribal, and local government records or databases of government records of the..., and tribal government records or databases of the government records and local government records and databases of the records should include— (1) Records of reported oil discharges present, including site...

  11. Epidemiology of Abusive Abdominal Trauma Hospitalizations in United States Children

    ERIC Educational Resources Information Center

    Lane, Wendy Gwirtzman; Dubowitz, Howard; Langenberg, Patricia; Dischinger, Patricia

    2012-01-01

    Objectives: (1) To estimate the incidence of abusive abdominal trauma (AAT) hospitalizations among US children age 0-9 years. (2) To identify demographic characteristics of children at highest risk for AAT. Design: Secondary data analysis of a cross-sectional, national hospitalization database. Setting: Hospitalization data from the 2003 and 2006…

  12. An Analysis of China’s Information Technology Strategies and their Implication for US National Security

    DTIC Science & Technology

    2006-06-01

    environment of Web-enabled database searches, online shopping, e-business, and daily credit-card use, which are very common in the United States. Cyberspace...establishing credibility for data exchange such as online shopping. Present regulations stipulate that security chips used by the Chinese government and

  13. One approach to design of speech emotion database

    NASA Astrophysics Data System (ADS)

    Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav

    2016-05-01

    This article describes a system for evaluating the credibility of recordings with emotional character. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems, which are designed to detect human emotions from the voice. A speaker's emotional state is useful information for the security forces and for emergency call services. People in action (soldiers, police officers and firefighters) are often exposed to stress, and information about their emotional state (carried in the voice) helps the dispatcher adapt control commands during an intervention. Call agents of an emergency call service must recognize the mental state of the caller to adjust the mood of the conversation; in this case, evaluating the caller's psychological state is the key factor for successful intervention. A quality database of sound recordings is essential for creating such systems. Quality databases exist, such as the Berlin Database of Emotional Speech or Humaine, but actors created these databases in an audio studio, which means the recordings contain simulated emotions, not real ones. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks; another, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article, and the results describe the advantages and applicability of the developed method.

  14. Proteomic analysis of tardigrades: towards a better understanding of molecular mechanisms by anhydrobiotic organisms.

    PubMed

    Schokraie, Elham; Hotz-Wagenblatt, Agnes; Warnken, Uwe; Mali, Brahim; Frohme, Marcus; Förster, Frank; Dandekar, Thomas; Hengherr, Steffen; Schill, Ralph O; Schnölzer, Martina

    2010-03-03

    Tardigrades are small, multicellular invertebrates which are able to survive times of unfavourable environmental conditions using their well-known capability to undergo cryptobiosis at any stage of their life cycle. Milnesium tardigradum has become a powerful model system for the analysis of cryptobiosis. While some genetic information is already available for Milnesium tardigradum, the proteome is still to be discovered. Here we present, to the best of our knowledge, the first comprehensive study of Milnesium tardigradum on the protein level. To establish a proteome reference map we developed optimized protocols for protein extraction from tardigrades in the active state and for separation of proteins by high resolution two-dimensional gel electrophoresis. Since only limited sequence information on M. tardigradum at the genome and gene expression level is available to date in public databases, we initiated in parallel a tardigrade EST sequencing project to allow for protein identification by electrospray ionization tandem mass spectrometry. Of 606 analyzed protein spots, 271 could be identified by searching against the publicly available NCBInr database as well as our newly established tardigrade protein database, corresponding to 144 unique proteins. Another 150 spots could be identified in the tardigrade clustered EST database, corresponding to 36 unique contigs and ESTs. Proteins with annotated function were further categorized in more detail by their molecular function, biological process and cellular component. For the proteins of unknown function, more information could be obtained by performing a protein domain annotation analysis. Our results include proteins such as members of different heat shock protein families and of LEA group 3, which might play important roles in surviving extreme conditions.
The proteome reference map of Milnesium tardigradum provides the basis for further studies in order to identify and characterize the biochemical mechanisms of tolerance to extreme desiccation. The optimized proteomics workflow will enable the application of sensitive quantification techniques to detect differences in protein expression that are characteristic of the active and anhydrobiotic states of tardigrades.

  15. Proteomic Analysis of Tardigrades: Towards a Better Understanding of Molecular Mechanisms by Anhydrobiotic Organisms

    PubMed Central

    Schokraie, Elham; Hotz-Wagenblatt, Agnes; Warnken, Uwe; Mali, Brahim; Frohme, Marcus; Förster, Frank; Dandekar, Thomas; Hengherr, Steffen; Schill, Ralph O.; Schnölzer, Martina

    2010-01-01

    Background Tardigrades are small, multicellular invertebrates which are able to survive times of unfavourable environmental conditions using their well-known capability to undergo cryptobiosis at any stage of their life cycle. Milnesium tardigradum has become a powerful model system for the analysis of cryptobiosis. While some genetic information is already available for Milnesium tardigradum, the proteome is still to be discovered. Principal Findings Here we present, to the best of our knowledge, the first comprehensive study of Milnesium tardigradum on the protein level. To establish a proteome reference map we developed optimized protocols for protein extraction from tardigrades in the active state and for separation of proteins by high resolution two-dimensional gel electrophoresis. Since only limited sequence information on M. tardigradum at the genome and gene expression level is available to date in public databases, we initiated in parallel a tardigrade EST sequencing project to allow for protein identification by electrospray ionization tandem mass spectrometry. Of 606 analyzed protein spots, 271 could be identified by searching against the publicly available NCBInr database as well as our newly established tardigrade protein database, corresponding to 144 unique proteins. Another 150 spots could be identified in the tardigrade clustered EST database, corresponding to 36 unique contigs and ESTs. Proteins with annotated function were further categorized in more detail by their molecular function, biological process and cellular component. For the proteins of unknown function, more information could be obtained by performing a protein domain annotation analysis. Our results include proteins such as members of different heat shock protein families and of LEA group 3, which might play important roles in surviving extreme conditions.
Conclusions The proteome reference map of Milnesium tardigradum provides the basis for further studies in order to identify and characterize the biochemical mechanisms of tolerance to extreme desiccation. The optimized proteomics workflow will enable the application of sensitive quantification techniques to detect differences in protein expression that are characteristic of the active and anhydrobiotic states of tardigrades. PMID:20224743

  16. A possible extension to the RInChI as a means of providing machine readable process data.

    PubMed

    Jacob, Philipp-Maximilian; Lan, Tian; Goodman, Jonathan M; Lapkin, Alexei A

    2017-04-11

    The algorithmic, large-scale use and analysis of reaction databases such as Reaxys is currently hindered by the absence of widely adopted standards for publishing reaction data in machine readable formats. Crucial data such as yields of all products or stoichiometry are frequently not explicitly stated in the published papers and, hence, not reported in the database entry for those reactions, limiting their usefulness for algorithmic analysis. This paper presents a possible extension to the IUPAC RInChI standard via an auxiliary layer, termed ProcAuxInfo, which is a standardised, extensible form in which to report certain key reaction parameters such as declaration of all products and reactants as well as auxiliaries known in the reaction, reaction stoichiometry, amounts of substances used, conversion, yield and operating conditions. The standard is demonstrated via creation of the RInChI including the ProcAuxInfo layer based on three published reactions and demonstrates accurate data recoverability via reverse translation of the created strings. Implementation of this or another method of reporting process data by the publishing community would ensure that databases, such as Reaxys, would be able to abstract crucial data for big data analysis of their contents.

  17. Trends in Outcomes and Hospitalization Charges of Infant Botulism in the United States: A Comparative Analysis Between Kids' Inpatient Database and National Inpatient Sample.

    PubMed

    Opila, Tamara; George, Asha; El-Ghanem, Mohammad; Souayah, Nizar

    2017-02-01

    New therapeutic strategies, including immune globulin intravenous, have emerged in the past two decades for the management of botulism. However, the impact on outcomes and hospitalization charges among infants (aged ≤1 year) with botulism in the United States is unknown. We analyzed the Kids' Inpatient Database (KID) and National Inpatient Sample (NIS) for in-hospital outcomes and charges for infant botulism cases from 1997 to 2009. Demographics, discharge status, mortality, length of stay, and hospitalization charges were reported from the two databases and compared. Between 1997 and 2009, 504 infant hospitalizations were captured in the KID, and 340 hospitalizations in the NIS, for comparable years. A significant decrease was observed in mean length of stay for the KID (P < 0.01); a similar decrease was observed for the NIS. The majority of patients were discharged to home. Despite an initial decrease after 1997, an increasing trend was observed in KID/NIS mean hospital charges from 2000 to 2009 (from $57,659/$56,309 to $143,171/$106,378; P < 0.001/P < 0.001). A linear increasing trend was evident when examining mean daily hospitalization charges in both databases. In a subgroup analysis of the KID database, the youngest patients with infantile botulism (≤1.9 months) displayed the highest average number of procedures during their hospitalization (P < 0.001) and the highest rate of mechanical ventilation (P < 0.001) compared with their older counterparts. Infant botulism cases have shown a significant increase in hospitalization charges over the years despite reduced length of stay. Additionally, there were significantly higher daily adjusted hospital charges and an increased rate of routine discharges for immune globulin intravenous-treated patients. More controlled studies are needed to define the criteria for cost-effective use of intravenous immune globulin in the infant botulism population. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Survival of patients with gastric lymphoma in Germany and in the United States.

    PubMed

    Castro, Felipe A; Jansen, Lina; Krilaviciute, Agne; Katalinic, Alexander; Pulte, Dianne; Sirri, Eunice; Ressing, Meike; Holleczek, Bernd; Luttmann, Sabine; Brenner, Hermann

    2015-10-01

    This study aims to examine survival for gastric lymphomas and its main subtypes, mucosa-associated lymphoid tissue lymphoma (MALT), and diffuse large B-cell lymphoma (DLBCL), in Germany and in the United States. Data for patients diagnosed in 1997-2010 were used from 10 population-based German cancer registries and compared to the data from the US Surveillance, Epidemiology and End Results (SEER) 13 registries database. Patients age 15-74 diagnosed with gastric lymphomas were included in the analysis. Period analysis and modeled period analysis were used to estimate 5-year and 10-year relative survival (RS) in 2002-2010 and survival trends from 2002-2004 to 2008-2010. Overall, the database included 1534 and 2688 patients diagnosed with gastric lymphoma in 1997-2010 in Germany and in the United States, respectively. Survival was substantially higher for MALT (5-year and 10-year RS: 89.0% and 80.9% in Germany, 93.8% and 86.8% in the United States) than for DLBCL (67.5% and 59.2% in Germany, and 65.3% and 54.7% in the United States) in 2002-2010. Survival was slightly higher among female patients and decreased by age for gastric lymphomas combined and its main subtypes. A slight, nonsignificant, increase in the 5-year RS for gastric lymphomas combined was observed in Germany and the United States, with increases in 5-year RS between 2002-2004 and 2008-2010 from 77.1% to 81.0% and from 77.3% to 82.0%, respectively. Five-year RS of MALT exceeded 90% in 2008-2010 in both countries. Five-year RS of MALT meanwhile exceeds 90% in both Germany and the United States, but DLBCL has remained below 70% in both countries. © 2015 Journal of Gastroenterology and Hepatology Foundation and Wiley Publishing Asia Pty Ltd.
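Relative survival, the measure reported above, is in essence the cohort's observed survival divided by the survival expected in a demographically matched general population, accumulated over follow-up intervals. A minimal sketch of that logic (the interval figures below are invented, not the study's data):

```python
def cumulative_relative_survival(obs_intervals, exp_intervals):
    """Cumulative relative survival: the product of interval-specific ratios of
    observed (cohort) to expected (matched general population) survival."""
    rs = 1.0
    for obs, exp in zip(obs_intervals, exp_intervals):
        rs *= obs / exp
    return rs

# hypothetical annual conditional survival probabilities over 5 years
observed = [0.95, 0.96, 0.97, 0.98, 0.98]
expected = [0.99, 0.99, 0.99, 0.99, 0.99]
print(round(cumulative_relative_survival(observed, expected), 3))
```

Period analysis, as used in the study, refines this by restricting the interval estimates to a recent calendar window, yielding more up-to-date survival figures than cohort-based estimation.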

  19. Genomics and Public Health Research: Can the State Allow Access to Genomic Databases?

    PubMed Central

    Cousineau, J; Girard, N; Monardes, C; Leroux, T; Jean, M Stanton

    2012-01-01

    Because many diseases are multifactorial disorders, scientific progress in genomics and genetics should be taken into consideration in public health research. In this context, genomic databases will constitute an important source of information. Consequently, it is important to identify and characterize the State’s role and authority on matters related to public health, in order to verify whether it has access to such databases while engaging in public health genomic research. We first consider the evolution of the concept of public health, as well as its core functions, using a comparative approach (e.g. WHO, PAHO, CDC and the Canadian province of Quebec). Following an analysis of relevant Quebec legislation, the precautionary principle is examined as a possible avenue to justify State access to and use of genomic databases for research purposes. Finally, we consider the influenza pandemic plans developed by WHO, Canada, and Quebec as examples of key tools framing the public health decision-making process. We observed that State powers in public health are not, in Quebec, well adapted to the expansion of genomics research. We propose that the scope of the concept of research in public health should be clear and include the following characteristics: a commitment to the health and well-being of the population and to their determinants; the inclusion of both applied research and basic research; and an appropriate model of governance (authorization, follow-up, consent, etc.). We also suggest that the strategic-approach version of the precautionary principle could guide collective choices in these matters. PMID:23113174

  20. The pervasive crisis of diminishing radiation therapy access for vulnerable populations in the United States, part 1: African-American patients.

    PubMed

    McClelland, Shearwood; Page, Brandi R; Jaboin, Jerry J; Chapman, Christina H; Deville, Curtiland; Thomas, Charles R

    2017-01-01

    African Americans experience the highest burden of cancer incidence and mortality in the United States and have been persistently less likely to receive interventional care, even when such care has been proven superior to conservative management by randomized controlled trials. The presence of disparities in access to radiation therapy (RT) for African American cancer patients has rarely been examined in an expansive fashion. An extensive literature search was performed using the PubMed database to examine studies investigating disparities in RT access for African Americans. A total of 55 studies were found, spanning 11 organ systems. Disparities in access to RT for African Americans were most prominently studied in cancers of the breast (23 studies), prostate (7 studies), gynecologic system (5 studies), and hematologic system (5 studies). Disparities in RT access for African Americans were prevalent regardless of organ system studied and often occurred independently of socioeconomic status. Fifty of 55 studies (91%) involved analysis of a population-based database such as Surveillance, Epidemiology, and End Results (SEER; 26 studies), SEER-Medicare (5 studies), the National Cancer Database (3 studies), or a state tumor registry (13 studies). African Americans in the United States have diminished access to RT compared with Caucasian patients, independent of but often in concert with low socioeconomic status. These findings underscore the importance of finding systemic and systematic solutions to address these inequalities and to reduce the barriers that patient race poses to receipt of optimal cancer care.

  1. The Virtual Observatory Service TheoSSA: Establishing a Database of Synthetic Stellar Flux Standards II. NLTE Spectral Analysis of the OB-Type Subdwarf Feige 110

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Rudkowski, A.; Kampka, D.; Werner, K.; Kruk, J. W.; Moehler, S.

    2014-01-01

    Context. In the framework of the Virtual Observatory (VO), the German Astrophysical VO (GAVO) developed the registered service TheoSSA (Theoretical Stellar Spectra Access). It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code, generally for all effective temperatures, surface gravities, and elemental compositions. We will establish a database of SEDs of flux standards that are easily accessible via TheoSSA's web interface. Aims. The OB-type subdwarf Feige 110 is a standard star for flux calibration. State-of-the-art non-local thermodynamic equilibrium stellar-atmosphere models that consider opacities of species up to trans-iron elements will be used to provide a reliable synthetic spectrum to compare with observations. Methods. In the case of Feige 110, we demonstrate that the model reproduces not only its overall continuum shape from the far-ultraviolet (FUV) to the optical wavelength range but also the numerous metal lines exhibited in its FUV spectrum. Results. We present a state-of-the-art spectral analysis of Feige 110. We determined Teff = 47 250 +/- 2000 K, log g = 6.00 +/- 0.20, and the abundances of He, N, P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, and Ge. Ti, V, Mn, Co, Zn, and Ge were identified for the first time in this star. Upper abundance limits were derived for C, O, Si, Ca, and Sc. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of astronomical data and cross-calibration between different instruments can be based on models and SEDs calculated with state-of-the-art model-atmosphere codes.

  2. A reservoir morphology database for the conterminous United States

    USGS Publications Warehouse

    Rodgers, Kirk D.

    2017-09-13

    The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.
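
    One of the metrics named above, the shoreline development index (SDI), compares a reservoir's shoreline length to the circumference of a circle of equal surface area; a perfectly circular basin scores 1, and more convoluted shorelines score higher. A minimal sketch of the standard formula (this is the textbook morphometric definition, with illustrative variable names, not the USGS implementation):

```python
import math

def shoreline_development_index(shoreline_length_m, surface_area_m2):
    """SDI = L / (2 * sqrt(pi * A)): shoreline length relative to the
    circumference of a circle with the same surface area."""
    return shoreline_length_m / (2.0 * math.sqrt(math.pi * surface_area_m2))

# A circular lake has SDI exactly 1, regardless of its size.
area = 1_000_000.0                             # 1 km^2, in m^2
circle_shore = 2.0 * math.sqrt(math.pi * area)  # circumference of that circle
print(round(shoreline_development_index(circle_shore, area), 3))  # 1.0
```

    Dendritic reservoirs, with many flooded tributary arms, typically have much longer shorelines for their area and thus SDI values well above 1.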

  3. Needs Assessment for Behavioral Health Workforce: a State-Level Analysis.

    PubMed

    Nayar, Preethy; Apenteng, Bettye; Nguyen, Anh T; Shaw-Sutherland, Kelly; Ojha, Diptee; Deras, Marlene

    2017-07-01

    This study describes trends in the supply and the need for behavioral health professionals in Nebraska. A state-level health workforce database was used to estimate the behavioral health workforce supply and need. Compared with national estimates, Nebraska has a lower proportion of all categories of behavioral health professionals. The majority of Nebraska counties have unusually high needs for mental health professionals, with rural areas experiencing a decline in the supply of psychiatrists over the last decade. Availability of robust state-level health workforce data can assist in crafting effective policy for successful systems change, particularly for behavioral health.

  4. Acoustic analysis of normal Saudi adult voices.

    PubMed

    Malki, Khalid H; Al-Habib, Salman F; Hagr, Abulrahman A; Farahat, Mohamed M

    2009-08-01

    To determine the acoustic differences between Saudi adult male and female voices, and to compare the acoustic variables of the Multidimensional Voice Program (MDVP) obtained from North American adults to a group of Saudi males and females. A cross-sectional survey of normal adult male and female voices was conducted at King Abdulaziz University Hospital, Riyadh, Kingdom of Saudi Arabia between March 2007 and December 2008. Ninety-five Saudi subjects sustained the vowel /a/ 6 times, and the steady-state portion of 3 samples was analyzed and compared with the samples of the KayPentax normative voice database. Significant differences were found between the Saudi and North American KayPentax database groups. In the male subjects, 15 of 33 MDVP variables, and in the female subjects 10 of 33 variables, were found to be significantly different from the KayPentax database. We conclude that the acoustic differences may reflect laryngeal anatomical or tissue differences between the Saudi subjects and those in the KayPentax database.
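
    Among the MDVP variables compared are cycle-to-cycle perturbation measures such as jitter. As a hedged sketch, the generic local-jitter formula is shown below (the exact KayPentax/MDVP definitions may differ in detail, and the period values are invented for illustration):

```python
def jitter_percent(periods_ms):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, relative to the mean period. A generic perturbation
    measure in the spirit of MDVP's jitter variables."""
    diffs = [abs(a - b) for a, b in zip(periods_ms, periods_ms[1:])]
    mean_diff = sum(diffs) / len(diffs)
    mean_period = sum(periods_ms) / len(periods_ms)
    return 100.0 * mean_diff / mean_period

# Hypothetical period sequence (ms) from a sustained /a/:
print(round(jitter_percent([5.0, 5.1, 4.9, 5.05, 4.95]), 2))  # 2.75
```

    Such measures are computed from the steady-state portion of the sustained vowel, which is why the protocol above analyzes only that segment of each recording.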

  5. Generation and validation of a universal perinatal database and biospecimen repository: PeriBank.

    PubMed

    Antony, K M; Hemarajata, P; Chen, J; Morris, J; Cook, C; Masalas, D; Gedminas, M; Brown, A; Versalovic, J; Aagaard, K

    2016-11-01

    There is a dearth of biospecimen repositories available to perinatal researchers. In order to address this need, here we describe the methodology used to establish such a resource. With the collaboration of MedSci.net, we generated an online perinatal database with 847 fields of clinical information. Simultaneously, we established a biospecimen repository of the same clinical participants. The demographic and clinical outcomes data are described for the first 10 000 participants enrolled. The demographic characteristics are consistent with the demographics of the delivery hospitals. Quality analysis of the biospecimens reveals variation in very few analytes. Furthermore, since the creation of PeriBank, we have demonstrated validity of the database and tissue integrity of the biospecimen repository. Here we establish that the creation of a universal perinatal database and biospecimen collection is not only possible, but allows for the performance of state-of-the-science translational perinatal research and is a potentially valuable resource to academic perinatal researchers.

  6. VerSeDa: vertebrate secretome database.

    PubMed

    Cortazar, Ana R; Oguiza, José A; Aransay, Ana M; Lavín, José L

    2017-01-01

    Based on current tools, de novo secretome (the full set of proteins secreted by an organism) prediction is a time-consuming bioinformatic task that requires multifactorial analysis in order to obtain reliable in silico predictions. Hence, to accelerate this process and offer researchers a reliable repository where secretome information can be obtained for vertebrates and model organisms, we have developed VerSeDa (Vertebrate Secretome Database). This freely available database stores information about proteins that are predicted to be secreted through classical and non-classical mechanisms, for the wide range of vertebrate species deposited at the NCBI, UCSC and ENSEMBL sites. To our knowledge, VerSeDa is the only state-of-the-art database designed to store secretome data from multiple vertebrate genomes, thus saving researchers considerable time otherwise spent predicting protein features that can instead be retrieved directly from this repository. VerSeDa is freely available at http://genomics.cicbiogune.es/VerSeDa/index.php. © The Author(s) 2017. Published by Oxford University Press.

  7. Online Databases in Physics.

    ERIC Educational Resources Information Center

    Sievert, MaryEllen C.; Verbeck, Alison F.

    1984-01-01

    This overview of 47 online sources for physics information available in the United States--including sub-field databases, transdisciplinary databases, and multidisciplinary databases-- notes content, print source, language, time coverage, and databank. Two discipline-specific databases (SPIN and PHYSICS BRIEFS) are also discussed. (EJS)

  8. 75 FR 65611 - Native American Tribal Insignia Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-26

    ... DEPARTMENT OF COMMERCE Patent and Trademark Office Native American Tribal Insignia Database ACTION... comprehensive database containing the official insignia of all federally- and State- recognized Native American... to create this database. The USPTO database of official tribal insignias assists trademark attorneys...

  9. Wind Energy Conversion System Analysis Model (WECSAM) computer program documentation

    NASA Astrophysics Data System (ADS)

    Downey, W. T.; Hendrick, P. L.

    1982-07-01

    Described is a computer-based wind energy conversion system analysis model (WECSAM) developed to predict the technical and economic performance of wind energy conversion systems (WECS). The model is written in CDC FORTRAN V. The version described accesses a database containing wind resource data, application loads, WECS performance characteristics, utility rates, state taxes, and state subsidies for a six-state region (Minnesota, Michigan, Wisconsin, Illinois, Ohio, and Indiana). The model is designed for analysis at the county level. The computer model includes a technical performance module and an economic evaluation module. The modules can be run separately or together. The model can be run for any single user-selected county within the region or looped automatically through all counties within the region. In addition, the model has a restart capability that allows the user to modify any database value written to a scratch file prior to the technical or economic evaluation.

  10. Historical reconstructions of California wildfires vary by data source

    USGS Publications Warehouse

    Syphard, Alexandra D.; Keeley, Jon E.

    2016-01-01

    Historical data are essential for understanding how fire activity responds to different drivers. It is important that the source of data is commensurate with the spatial and temporal scale of the question addressed, but fire history databases are derived from different sources with different restrictions. In California, a frequently used fire history dataset is the State of California Fire and Resource Assessment Program (FRAP) fire history database, which circumscribes fire perimeters at a relatively fine scale. It includes large fires on both state and federal lands but only covers fires that were mapped or had other spatially explicit data. A different source is the state and federal governments’ annual reports of all fires. These are more complete than the FRAP database but are only spatially explicit to the level of county (California Department of Forestry and Fire Protection – Cal Fire) or forest (United States Forest Service – USFS). We found substantial differences between the FRAP database and the annual summaries, with the largest and most consistent discrepancy being in fire frequency. The FRAP database missed the majority of fires and is thus a poor indicator of fire frequency or ignition sources. The FRAP database is also deficient in area burned, especially before 1950. Even in contemporary records, the huge number of smaller fires not included in the FRAP database accounts for substantial cumulative differences in area burned. Wildfires in California account for nearly half of the western United States fire suppression budget. Therefore, the conclusions about data discrepancies and the implications for fire research are of broad importance.

  11. A UML Profile for State Analysis

    NASA Technical Reports Server (NTRS)

    Murray, Alex; Rasmussen, Robert

    2010-01-01

    State Analysis is a systems engineering methodology for the specification and design of control systems, developed at the Jet Propulsion Laboratory. The methodology emphasizes an analysis of the system under control in terms of States and their properties and behaviors and their effects on each other, a clear separation of the control system from the controlled system, cognizance in the control system of the controlled system's State, goal-based control built on constraining the controlled system's States, and disciplined techniques for State discovery and characterization. State Analysis (SA) introduces two key diagram types: State Effects and Goal Network diagrams. The team at JPL developed a tool for performing State Analysis. The tool includes a drawing capability, backed by a database that supports the diagram types and the organization of the elements of the SA models. But the tool does not support the usual activities of software engineering and design - a disadvantage, since systems to which State Analysis can be applied tend to be very software-intensive. This motivated the work described in this paper: the development of a preliminary Unified Modeling Language (UML) profile for State Analysis. Having this profile would enable systems engineers to specify a system using the methods and graphical language of State Analysis, which is easily linked with a larger system model in SysML (Systems Modeling Language), while also giving software engineers engaged in implementing the specified control system immediate access to and use of the SA model, in the same language, UML, used for other software design. That is, a State Analysis profile would serve as a shared modeling bridge between system and software models for the behavior aspects of the system. This paper begins with an overview of State Analysis and its underpinnings, followed by an overview of the mapping of SA constructs to the UML metamodel. It then delves into the details of these mappings and the constraints associated with them. Finally, we give an example of the use of the profile for expressing an example SA model.

  12. Fast vessel segmentation in retinal images using multi-scale enhancement and second-order local entropy

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.

    2012-03-01

    Retinal vasculature is one of the most important anatomical structures in digital retinal photographs. Accurate segmentation of retinal blood vessels is an essential task in automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. This method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at competitively faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. The efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in automated analysis of retinal images.
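
    The thresholding step can be illustrated with the simpler first-order maximum-entropy (Kapur) criterion, used here as a stand-in for the second-order (co-occurrence-based) local entropy thresholding the authors actually apply; the histogram below is a toy example, not retinal data:

```python
import math

def max_entropy_threshold(hist):
    """Kapur's maximum-entropy threshold on a grayscale histogram: choose
    the split that maximizes the summed entropies of the two classes.
    A first-order stand-in for the paper's second-order local entropy step."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(p)):
        w0 = sum(p[:t])          # class probability of the lower class
        w1 = 1.0 - w0            # class probability of the upper class
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Toy bimodal histogram: dark-background bins, a valley, then bright bins.
hist = [40, 50, 30, 5, 2, 2, 6, 25, 35, 20]
print(max_entropy_threshold(hist))  # threshold index lands near the valley
```

    The second-order variant extends this idea to the gray-level co-occurrence matrix, so that the threshold also accounts for local spatial context rather than intensities alone.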

  13. Optimal tree increment models for the Northeastern United States

    Treesearch

    Don C. Bragg

    2003-01-01

    We used the potential relative increment (PRI) methodology to develop optimal tree diameter growth models for the Northeastern United States. Thirty species from the Eastwide Forest Inventory Database yielded 69,676 individuals, which were then reduced to fast-growing subsets for PRI analysis. For instance, only 14 individuals from the greater than 6,300-tree eastern...

  14. Forest inventory, catastrophic events and historic geospatial assessments in the south

    Treesearch

    Dennis M. Jacobs

    2007-01-01

    Catastrophic events are a regular occurrence of disturbance to forestland in the Southern United States. Each major event affects the integrity of the forest inventory database developed and maintained by the Forest Inventory & Analysis Research Work Unit of the U.S. Department of Agriculture, Forest Service. Some of these major disturbances through the years have...

  15. Nanoscience

    DTIC Science & Technology

    2011-07-22

    From the Technical Reports database: Allara, David L., Pennsylvania State University, Upgrading of Existing X-Ray Photoelectron Spectrometer Capabilities for Development and Analysis of Novel Energetic NanoCluster Materials (DURIP). Of the techniques discussed (scanning probe, X-ray), the most popularly used is the scanning probe, also known as the Dip-Pen Nanolithography (DPN) technique.

  16. Student Research Projects

    NASA Technical Reports Server (NTRS)

    Yeske, Lanny A.

    1998-01-01

    Numerous FY1998 student research projects were sponsored by the Mississippi State University Center for Air Sea Technology. This technical note describes these projects which include research on: (1) Graphical User Interfaces, (2) Master Environmental Library, (3) Database Management Systems, (4) Naval Interactive Data Analysis System, (5) Relocatable Modeling Environment, (6) Tidal Models, (7) Book Inventories, (8) System Analysis, (9) World Wide Web Development, (10) Virtual Data Warehouse, (11) Enterprise Information Explorer, (12) Equipment Inventories, (13) COADS, and (14) JavaScript Technology.

  17. HPMCD: the database of human microbial communities from metagenomic datasets and microbial reference genomes.

    PubMed

    Forster, Samuel C; Browne, Hilary P; Kumar, Nitin; Hunt, Martin; Denise, Hubert; Mitchell, Alex; Finn, Robert D; Lawley, Trevor D

    2016-01-04

    The Human Pan-Microbe Communities (HPMC) database (http://www.hpmcd.org/) provides a manually curated, searchable, metagenomic resource to facilitate investigation of human gastrointestinal microbiota. Over the past decade, the application of metagenome sequencing to elucidate the microbial composition and functional capacity present in the human microbiome has revolutionized many concepts in our basic biology. When sufficient high quality reference genomes are available, whole genome metagenomic sequencing can provide direct biological insights and high-resolution classification. The HPMC database provides species level, standardized phylogenetic classification of over 1800 human gastrointestinal metagenomic samples. This is achieved by combining a manually curated list of bacterial genomes from human faecal samples with over 21000 additional reference genomes representing bacteria, viruses, archaea and fungi with manually curated species classification and enhanced sample metadata annotation. A user-friendly, web-based interface provides the ability to search for (i) microbial groups associated with health or disease state, (ii) health or disease states and community structure associated with a microbial group, (iii) the enrichment of a microbial gene or sequence and (iv) enrichment of a functional annotation. The HPMC database enables detailed analysis of human microbial communities and supports research from basic microbiology and immunology to therapeutic development in human health and disease. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Testing the Storm et al. (2010) meta-analysis using Bayesian and frequentist approaches: reply to Rouder et al. (2013).

    PubMed

    Storm, Lance; Tressoldi, Patrizio E; Utts, Jessica

    2013-01-01

    Rouder, Morey, and Province (2013) stated that (a) the evidence-based case for psi in Storm, Tressoldi, and Di Risio's (2010) meta-analysis is supported only by a number of studies that used manual randomization, and (b) when these studies are excluded so that only investigations using automatic randomization are evaluated (and some additional studies previously omitted by Storm et al., 2010, are included), the evidence for psi is "unpersuasive." Rouder et al. used a Bayesian approach, and we adopted the same methodology, finding that our case is upheld. Because of recent updates and corrections, we reassessed the free-response databases of Storm et al. using a frequentist approach. We discuss and critique the assumptions and findings of Rouder et al. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  19. Publication trend, resource utilization, and impact of the US National Cancer Database: A systematic review.

    PubMed

    Su, Chang; Peng, Cuiying; Agbodza, Ena; Bai, Harrison X; Huang, Yuqian; Karakousis, Giorgos; Zhang, Paul J; Zhang, Zishu

    2018-03-01

    The utilization and impact of the studies published using the National Cancer Database (NCDB) is currently unclear. In this study, we aim to characterize the published studies and identify relatively unexplored areas for future investigations. A literature search was performed using PubMed in January 2017 to identify all papers published using NCDB data. Characteristics of the publications were extracted. Citation frequencies were obtained through the Web of Science. Three hundred two articles written by 230 first authors met the inclusion criteria. The number of publications has grown exponentially since 2013, with 108 articles published in 2016. Articles were published in 86 journals. The majority of the published papers focused on digestive system cancer, while bone and joints, eye and orbit, myeloma, mesothelioma, and Kaposi sarcoma were never studied. Thirteen institutions in the United States were associated with more than 5 publications. The papers have been cited a total of 9858 times since the publication of the first paper in 1992. Frequently appearing keywords congregated into 3 clusters: "demographics," "treatments and survival," and "statistical analysis method." Even though the main focuses of the articles captured an extremely wide range, they can be classified into 2 main categories: survival analysis and characterization. Other focuses include database analysis and/or comparison, and hospital reporting. The surging interest in the use of NCDB is accompanied by unequal utilization of resources by individuals and institutions. Certain areas were relatively understudied and should be further explored.

  20. 12 CFR 1026.35 - Requirements for higher-priced mortgage loans.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...), established by the United States Department of Agriculture's Economic Research Service (USDA-ERS). A creditor.... (iii) National Registry means the database of information about State certified and licensed appraisers... appropriate, for each category. (v) National Registry means the database of information about State certified...

  1. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    National Institute of Standards and Technology Data Gateway

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase). This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  2. Is the clinicopathological pattern of colorectal carcinoma similar in the state and private healthcare systems of South Africa? Analysis of a Durban colorectal cancer database.

    PubMed

    Ntombela, Xolani H; Zulu, Babongile Mw; Masenya, Molikane; Sartorius, Ben; Madiba, Thandinkosi E

    2017-10-01

    Previous state hospital-based local studies suggest varying population-based clinicopathological patterns of colorectal cancer (CRC). Patients diagnosed with CRC in the state and private sector hospitals in Durban, South Africa over a 12-month period (January-December 2009) form the basis of our study. Of 491 patients (172 state and 319 private sector patients), 258 were men. State patients were younger than private patients. Anatomical site distribution was similar in both groups with minor variations. Stage IV disease was more common in state patients. State patients were younger, presented with advanced disease and had a lower resection rate. Black patients were the youngest, presented with advanced disease and had the lowest resection rate.

  3. Progress in developing analytical and label-based dietary supplement databases at the NIH Office of Dietary Supplements

    PubMed Central

    Dwyer, Johanna T.; Picciano, Mary Frances; Betz, Joseph M.; Fisher, Kenneth D.; Saldanha, Leila G.; Yetley, Elizabeth A.; Coates, Paul M.; Milner, John A.; Whitted, Jackie; Burt, Vicki; Radimer, Kathy; Wilger, Jaimie; Sharpless, Katherine E.; Holden, Joanne M.; Andrews, Karen; Roseland, Janet; Zhao, Cuiwei; Schweitzer, Amy; Harnly, James; Wolf, Wayne R.; Perry, Charles R.

    2013-01-01

    Although an estimated 50% of adults in the United States consume dietary supplements, analytically substantiated data on their bioactive constituents are sparse. Several programs funded by the Office of Dietary Supplements (ODS) at the National Institutes of Health enhance dietary supplement database development and help to better describe the quantitative and qualitative contributions of dietary supplements to total dietary intakes. ODS, in collaboration with the United States Department of Agriculture, is developing a Dietary Supplement Ingredient Database (DSID) verified by chemical analysis. The products chosen initially for analytical verification are adult multivitamin-mineral supplements (MVMs). These products are widely used, analytical methods are available for determining key constituents, and a certified reference material is in development. Also MVMs have no standard scientific, regulatory, or marketplace definitions and have widely varying compositions, characteristics, and bioavailability. Furthermore, the extent to which actual amounts of vitamins and minerals in a product deviate from label values is not known. Ultimately, DSID will prove useful to professionals in permitting more accurate estimation of the contribution of dietary supplements to total dietary intakes of nutrients and better evaluation of the role of dietary supplements in promoting health and well-being. ODS is also collaborating with the National Center for Health Statistics to enhance the National Health and Nutrition Examination Survey dietary supplement label database. The newest ODS effort explores the feasibility and practicality of developing a database of all dietary supplement labels marketed in the US. This article describes these and supporting projects. PMID:25346570

  4. Chemical analyses of coal, coal-associated rocks and coal combustion products collected for the National Coal Quality Inventory

    USGS Publications Warehouse

    Hatch, Joseph R.; Bullock, John H.; Finkelman, Robert B.

    2006-01-01

    In 1999, the USGS initiated the National Coal Quality Inventory (NaCQI) project to address a need for quality information on coals that will be mined during the next 20-30 years. At the time this project was initiated, the publicly available USGS coal quality data was based on samples primarily collected and analyzed between 1973 and 1985. The primary objective of NaCQI was to create a database containing comprehensive, accurate and accessible chemical information on the quality of mined and prepared United States coals and their combustion byproducts. This objective was to be accomplished through maintaining the existing publicly available coal quality database, expanding the database through the acquisition of new samples from priority areas, and analysis of the samples using updated coal analytical chemistry procedures. Priorities for sampling include those areas where future sources of compliance coal are federally owned. This project was a cooperative effort between the U.S. Geological Survey (USGS), State geological surveys, universities, coal burning utilities, and the coal mining industry. Funding support came from the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE).

  5. The distribution of common construction materials at risk to acid deposition in the United States

    NASA Astrophysics Data System (ADS)

    Lipfert, Frederick W.; Daum, Mary L.

    Information on the geographic distribution of various types of exposed materials is required to estimate the economic costs of damage to construction materials from acid deposition. This paper focuses on the identification, evaluation and interpretation of data describing the distributions of exterior construction materials, primarily in the United States. This information could provide guidance on how data needed for future economic assessments might be acquired in the most cost-effective ways. Materials distribution surveys from 16 cities in the U.S. and Canada and five related databases from government agencies and trade organizations were examined. Data on residential buildings are more commonly available than on nonresidential buildings; little geographically resolved information on distributions of materials in infrastructure was found. Survey results generally agree with the appropriate ancillary databases, but the usefulness of the databases is often limited by their coarse spatial resolution. Information on those materials which are most sensitive to acid deposition is especially scarce. Since a comprehensive error analysis has never been performed on the data required for an economic assessment, it is not possible to specify the corresponding detailed requirements for data on the distributions of materials.

  6. Preliminary United States-Mexico border watershed analysis, twin cities area of Nogales, Arizona and Nogales, Sonora

    USGS Publications Warehouse

    Brady, Laura Margaret; Gray, Floyd; Castaneda, Mario; Bultman, Mark; Bolm, Karen Sue

    2002-01-01

    The United States-Mexico border area faces the challenge of integrating aspects of its binational physical boundaries to form a unified or, at least, compatible natural resource management plan. Specified geospatial components such as stream drainages, mineral occurrences, vegetation, wildlife, and land use can be analyzed in terms of their overlapping impacts upon one another. Watersheds have been utilized as a basic unit in resource analysis because they contain components that are interrelated and can be viewed as a single interactive ecological system. In developing and analyzing critical regional natural resource databases, the Environmental Protection Agency (EPA) and other federal and non-governmental agencies have adopted a "watershed by watershed" approach to dealing with such complicated issues as ecosystem health, natural resource use, urban growth, and pollutant transport within hydrologic systems. These watersheds can facilitate the delineation of both large-scale and locally important hydrologic systems and urban management parameters necessary for sustainable, diversified land use. The twin border cities area of Nogales, Sonora and Nogales, Arizona, provides an ideal setting to demonstrate the utility and application of a complete, cross-border, geographic information system (GIS)-based watershed analysis in the characterization of a wide range of natural resource and urban features and their interactions. In addition to the delineation of a unified, cross-border watershed, the database contains sewer/water line locations and status, well locations, geology, hydrology, topography, soils, geomorphology, and vegetation data, as well as remotely sensed imagery. This report is preliminary and part of an ongoing project to develop a GIS database that will be widely accessible to the general public, researchers, and the local land management community, with a broad range of application and utility.

  7. Type 2 Diabetes Research Yield, 1951-2012: Bibliometrics Analysis and Density-Equalizing Mapping

    PubMed Central

    Geaney, Fiona; Scutaru, Cristian; Kelly, Clare; Glynn, Ronan W.; Perry, Ivan J.

    2015-01-01

    The objective of this paper is to provide a detailed evaluation of type 2 diabetes mellitus research output from 1951-2012, using large-scale data analysis, bibliometric indicators, and density-equalizing mapping. Data were retrieved from the Science Citation Index Expanded database, one of the seven curated databases within Web of Science. Using the Boolean operators "OR", "AND", and "NOT", a search strategy was developed to estimate the total number of published items. Only studies with an English abstract were eligible. Type 1 diabetes and gestational diabetes items were excluded. Specific software developed for the database analysed the data. Information including titles, authors' affiliations, and publication years was extracted from all files and exported to Excel. Density-equalizing mapping was conducted as described by Groenberg-Kloft et al., 2008. A total of 24,783 items were published and cited 476,002 times. The greatest number of outputs was published in 2010 (n=2,139). The United States contributed 28.8% to the overall output, followed by the United Kingdom (8.2%) and Japan (7.7%). Bilateral cooperation was most common between the United States and United Kingdom (n=237). Harvard University produced 2% of all publications, followed by the University of California (1.1%). The leading journals were Diabetes, Diabetologia, and Diabetes Care; they contributed 9.3%, 7.3%, and 4.0% of the research yield, respectively. In conclusion, the volume of research is rising in parallel with the increasing global burden of disease due to type 2 diabetes mellitus. Bibliometrics analysis provides useful information to scientists and funding agencies involved in the development and implementation of research strategies to address global health issues. PMID:26208117
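
    The per-year and per-country tallies reported above can be sketched as a simple aggregation over bibliographic records. A minimal illustration in Python; the records below are invented stand-ins for a Web of Science export, not the study's data:

```python
from collections import Counter

# Hypothetical bibliographic records as (year, country) pairs; the field
# names and values are illustrative only.
records = [
    (2010, "United States"), (2010, "United Kingdom"),
    (2010, "United States"), (2011, "Japan"),
    (2011, "United States"), (2012, "United Kingdom"),
]

by_year = Counter(year for year, _ in records)
by_country = Counter(country for _, country in records)
total = len(records)

# Country shares of total output, as in the paper's percentage breakdown.
shares = {c: round(100 * n / total, 1) for c, n in by_country.items()}

print(by_year.most_common(1))   # most productive year and its count
print(shares["United States"])  # share of total output
```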

  8. Worldwide nanotechnology development: a comparative study of USPTO, EPO, and JPO patents (1976-2004)

    NASA Astrophysics Data System (ADS)

    Li, Xin; Lin, Yiling; Chen, Hsinchun; Roco, Mihail C.

    2007-12-01

    To assess worldwide development of nanotechnology, this paper compares the numbers and contents of nanotechnology patents in the United States Patent and Trademark Office (USPTO), European Patent Office (EPO), and Japan Patent Office (JPO). It uses the patent databases as indicators of nanotechnology trends via bibliographic analysis, content map analysis, and citation network analysis on nanotechnology patents per country, institution, and technology field. The numbers of nanotechnology patents published in USPTO and EPO have continued to increase quasi-exponentially since 1980, while those published in JPO stabilized after 1993. Institutions and individuals located in the same region as a repository's patent office have a higher contribution to the nanotechnology patent publication in that repository ("home advantage" effect). The USPTO and EPO databases had similar high-productivity contributing countries and technology fields with large number of patents, but quite different high-impact countries and technology fields after the average number of received cites. Bibliographic analysis on USPTO and EPO patents shows that researchers in the United States and Japan published larger numbers of patents than other countries, and that their patents were more frequently cited by other patents. Nanotechnology patents covered physics research topics in all three repositories. In addition, USPTO showed the broadest representation in coverage in biomedical and electronics areas. The analysis of citations by technology field indicates that USPTO had a clear pattern of knowledge diffusion from highly cited fields to less cited fields, while EPO showed knowledge exchange mainly occurred among highly cited fields.

  9. Completion of the 2011 National Land Cover Database for the Conterminous United States – Representing a Decade of Land Cover Change Information

    EPA Science Inventory

    The National Land Cover Database (NLCD) provides nationwide data on land cover and land cover change at the native 30-m spatial resolution of the Landsat Thematic Mapper (TM). The database is designed to provide five-year cyclical updating of United States land cover and associat...

  10. Thrombotic events associated with C1 esterase inhibitor products in patients with hereditary angioedema: investigation from the United States Food and Drug Administration adverse event reporting system database.

    PubMed

    Gandhi, Pranav K; Gentry, William M; Bottorff, Michael B

    2012-10-01

    To investigate reports of thrombotic events associated with the use of C1 esterase inhibitor products in patients with hereditary angioedema in the United States. Retrospective data mining analysis. The United States Food and Drug Administration (FDA) adverse event reporting system (AERS) database. Case reports of C1 esterase inhibitor products, thrombotic events, and C1 esterase inhibitor product-associated thrombotic events (i.e., combination cases) were extracted from the AERS database, using the time frames of each respective product's FDA approval date through the second quarter of 2011. Bayesian statistical methodology within the neural network architecture was implemented to identify potential signals of a drug-associated adverse event. A potential signal is generated when the lower limit of the 95% 2-sided confidence interval of the information component, denoted by IC₀₂₅ , is greater than zero. This suggests that the particular drug-associated adverse event was reported to the database more often than statistically expected from reports available in the database. Ten combination cases of thrombotic events associated with the use of one C1 esterase inhibitor product (Cinryze) were identified in patients with hereditary angioedema. A potential signal demonstrated by an IC₀₂₅ value greater than zero (IC₀₂₅ = 2.91) was generated for these combination cases. The extracted cases from the AERS indicate continuing reports of thrombotic events associated with the use of one C1 esterase inhibitor product among patients with hereditary angioedema. The AERS is incapable of establishing a causal link and detecting the true frequency of an adverse event associated with a drug; however, potential signals of C1 esterase inhibitor product-associated thrombotic events among patients with hereditary angioedema were identified in the extracted combination cases. © 2012 Pharmacotherapy Publications, Inc.
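
    The signal criterion above (IC₀₂₅ > 0) can be illustrated with a short sketch. It uses one common shrinkage approximation of the information component and its credibility bounds (observed/expected with +0.5 shrinkage and a Norén-style interval); the counts are invented, and production implementations of the Bayesian neural-network method differ in detail:

```python
import math

def ic_with_ci(n_combo, n_drug, n_event, n_total):
    """Information component for a drug-event pair with approximate
    95% credibility bounds. This is a common shrinkage approximation,
    not the exact BCPNN implementation used in the study."""
    expected = n_drug * n_event / n_total
    ic = math.log2((n_combo + 0.5) / (expected + 0.5))
    # Approximate lower/upper credibility bounds (Noren-style constants).
    ic025 = ic - 3.3 * (n_combo + 0.5) ** -0.5 - 2.0 * (n_combo + 0.5) ** -1.5
    ic975 = ic + 2.4 * (n_combo + 0.5) ** -0.5 - 0.5 * (n_combo + 0.5) ** -1.5
    return ic, ic025, ic975

# Illustrative counts (not the paper's data): 10 combination cases,
# 120 reports for the drug, 5000 for the event, 4 million reports total.
ic, ic025, ic975 = ic_with_ci(10, 120, 5000, 4_000_000)
print(round(ic, 2), round(ic025, 2))
# A potential signal is flagged when ic025 > 0.
```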

  11. A Toolkit for Active Object-Oriented Databases with Application to Interoperability

    NASA Technical Reports Server (NTRS)

    King, Roger

    1996-01-01

    In our original proposal we stated that our research would 'develop a novel technology that provides a foundation for collaborative information processing.' The essential ingredient of this technology is the notion of 'deltas,' which are first-class values representing collections of proposed updates to a database. The Heraclitus framework provides a variety of algebraic operators for building up, combining, inspecting, and comparing deltas. Deltas can be directly applied to the database to yield a new state, or used 'hypothetically' in queries against the state that would arise if the delta were applied. The central point here is that the step of elevating deltas to 'first-class' citizens in database programming languages will yield tremendous leverage on the problem of supporting updates in collaborative information processing. In short, our original intention was to develop the theoretical and practical foundation for a technology based on deltas in an object-oriented database context, develop a toolkit for active object-oriented databases, and apply this toward collaborative information processing.

  13. Linked Patient-Reported Outcomes Data From Patients With Multiple Sclerosis Recruited on an Open Internet Platform to Health Care Claims Databases Identifies a Representative Population for Real-Life Data Analysis in Multiple Sclerosis.

    PubMed

    Risson, Valery; Ghodge, Bhaskar; Bonzani, Ian C; Korn, Jonathan R; Medin, Jennie; Saraykar, Tanmay; Sengupta, Souvik; Saini, Deepanshu; Olson, Melvin

    2016-09-22

    An enormous amount of information relevant to public health is being generated directly by online communities. To explore the feasibility of creating a dataset that links patient-reported outcomes data, from a Web-based survey of US patients with multiple sclerosis (MS) recruited on open Internet platforms, to health care utilization information from health care claims databases. The dataset was generated by linkage analysis to a broader MS population in the United States using both pharmacy and medical claims data sources. US Facebook users with an interest in MS were alerted to a patient-reported survey by targeted advertisements. Eligibility criteria were diagnosis of MS by a specialist (primary progressive, relapsing-remitting, or secondary progressive), ≥12-month history of disease, age 18-65 years, and commercial health insurance. Participants completed a questionnaire including data on demographic and disease characteristics, current and earlier therapies, relapses, disability, health-related quality of life, and employment status and productivity. A unique anonymous profile was generated for each survey respondent. Each anonymous profile was linked to a number of medical and pharmacy claims datasets in the United States. Linkage rates were assessed and survey respondents' representativeness was evaluated based on differences in the distribution of characteristics between the linked survey population and the general MS population in the claims databases. The advertisement was placed on 1,063,973 Facebook users' pages generating 68,674 clicks, 3719 survey attempts, and 651 successfully completed surveys, of which 440 could be linked to any of the claims databases for 2014 or 2015 (67.6% linkage rate). Overall, no significant differences were found between patients who were linked and not linked for educational status, ethnicity, current or prior disease-modifying therapy (DMT) treatment, or presence of a relapse in the last 12 months. 
The frequencies of the most common MS symptoms did not differ significantly between linked patients and the general MS population in the databases. Linked patients were slightly younger and less likely to be men than those who were not linkable. Linking patient-reported outcomes data, from a Web-based survey of US patients with MS recruited on open Internet platforms, to health care utilization information from claims databases may enable rapid generation of a large population of representative patients with MS suitable for outcomes analysis.
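
    Linking an anonymous survey profile to claims records can be sketched as deterministic matching on a hashed key built from quasi-identifiers. The study's actual linkage variables and method are not public, so the fields and values below are purely illustrative:

```python
import hashlib

def profile_key(year_of_birth, sex, zip3):
    """Anonymous linkage key from quasi-identifiers. The actual study's
    linkage variables are not public; these fields are illustrative."""
    raw = f"{year_of_birth}|{sex}|{zip3}".encode()
    return hashlib.sha256(raw).hexdigest()

# Two hypothetical survey respondents and one claims record.
survey = [
    {"id": "s1", "yob": 1975, "sex": "F", "zip3": "021"},
    {"id": "s2", "yob": 1988, "sex": "M", "zip3": "606"},
]
claims = {
    profile_key(1975, "F", "021"): {"claims_id": "c9", "dmt": True},
}

linked = []
for resp in survey:
    key = profile_key(resp["yob"], resp["sex"], resp["zip3"])
    if key in claims:
        linked.append((resp["id"], claims[key]["claims_id"]))

rate = len(linked) / len(survey)
print(linked, rate)  # one of two respondents links: 50% linkage rate
```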

  14. Daily Snow Depth Measurements from 195 Stations in the United States (1997) (NDP-059)

    DOE Data Explorer

    Easterling, D. R. [NOAA, National Climatic Data Center; Jamason, P. [NOAA, National Climatic Data Center; Bowman, D. P. [NOAA, National Climatic Data Center; Hughes, P. Y. [NOAA, National Climatic Data Center; Mason, E. H. [NOAA, National Climatic Data Center; Allison, L. J. [ORNL, Carbon Dioxide Information Analysis Center (CDIAC)

    1997-02-01

    This data package provides daily measurements of snow depth at 195 National Weather Service (NWS) first-order climatological stations in the United States. The data have been assembled and made available by the National Climatic Data Center (NCDC) in Asheville, North Carolina. The 195 stations encompass 388 unique sampling locations in 48 of the 50 states; no observations from Delaware or Hawaii are included in the database. Station selection criteria emphasized the quality and length of station records while seeking to provide a network with good geographic coverage. Snow depth at the 388 locations was measured once per day on ground open to the sky. The daily snow depth is the total depth of the snow on the ground at measurement time. The time period covered by the database is 1893-1992; however, not all station records encompass the complete period. While a station record ideally should contain daily data for at least the seven winter months (January through April and October through December), not all stations have complete records. Each logical record in the snow depth database contains one station's daily data values for a period of one month, including data source, measurement, and quality flags. The snow depth data have undergone extensive manual and automated quality assurance checks by NCDC and the Carbon Dioxide Information Analysis Center (CDIAC). These reviews involved examining the data for completeness, reasonableness, and accuracy, and included comparison of some data records with records in NCDC's Summary of the Day First Order online database. Since the snow depth measurements have been taken at NWS first-order stations that have long periods of record, they should prove useful in monitoring climate change.
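
    Each logical record holds one station-month of daily values with data source, measurement, and quality flags. A parser for a hypothetical whitespace-delimited rendering of such a record might look like this; the actual NDP-059 fixed-width layout differs, so the field order, flag codes, and missing-value sentinel here are assumptions:

```python
def parse_month_record(line):
    """Parse one hypothetical monthly snow-depth record: station id,
    year, month, then one 'value,flag' field per day. This layout is
    illustrative -- NDP-059's actual fixed-width format differs."""
    parts = line.split()
    station, year, month = parts[0], int(parts[1]), int(parts[2])
    days = []
    for field in parts[3:]:
        value, flag = field.split(",")
        depth = None if value == "-9999" else int(value)  # missing sentinel
        days.append((depth, flag))
    return {"station": station, "year": year, "month": month, "days": days}

# A four-day fragment: depths 0, 3, missing, 5 with single-letter flags.
rec = parse_month_record("KBOS 1992 01 0,O 3,O -9999,M 5,E")
print(rec["station"], [d for d, _ in rec["days"]])
```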

  15. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    PubMed

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimal impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5%, with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation, including testing with other classifiers and with variation in ECG signals, e.g., normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.
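
    The evaluation protocol, kNN with cross-validation, can be sketched as follows. The features are toy 2-D points standing in for the BEMD-derived dominant-frequency statistics, the fold count is reduced to 5, and only two "subjects" are used, so this is a structural sketch rather than a reproduction of the study:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """kNN on (feature_vector, label) pairs by Euclidean distance."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def cross_validate(data, folds=5, k=3):
    """Simple k-fold cross-validation reporting overall accuracy."""
    correct = 0
    for i in range(folds):
        test = data[i::folds]
        train = [d for j, d in enumerate(data) if j % folds != i]
        correct += sum(knn_predict(train, x, k) == y for x, y in test)
    return correct / len(data)

# Two well-separated toy "subjects": accuracy should be perfect.
data = [((0.1 * j, 0.0), "A") for j in range(10)] + \
       [((5.0 + 0.1 * j, 1.0), "B") for j in range(10)]
print(cross_validate(data, folds=5, k=3))
```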

  16. A dynamic appearance descriptor approach to facial actions temporal modeling.

    PubMed

    Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja

    2014-02-01

    Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs)-onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov Models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, the GEMEP-FERA dataset in database-dependent experiments, in cross-database experiments using the Cohn-Kanade, and the SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
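
    The combination of a frame-by-frame classifier with a Markov model that enforces temporal consistency can be illustrated with a small Viterbi decoder over AU temporal segments. The transition structure below (neutral to onset to apex to offset and back) is a simplified assumption, not the paper's exact model, and the per-frame posteriors are invented:

```python
import math

# Allowed AU temporal-segment transitions (a simplified Markov model).
STATES = ["neutral", "onset", "apex", "offset"]
ALLOWED = {
    "neutral": {"neutral", "onset"},
    "onset": {"onset", "apex"},
    "apex": {"apex", "offset"},
    "offset": {"offset", "neutral"},
}

def viterbi(frame_probs):
    """Most likely state sequence given per-frame classifier posteriors,
    with disallowed transitions given (near-)zero probability."""
    eps = 1e-12
    best = {s: (math.log(frame_probs[0][s] + eps), [s]) for s in STATES}
    for probs in frame_probs[1:]:
        new = {}
        for s in STATES:
            cands = [
                (best[p][0] + math.log(probs[s] + eps), best[p][1])
                for p in STATES if s in ALLOWED[p]
            ]
            score, path = max(cands)
            new[s] = (score, path + [s])
        best = new
    return max(best.values())[1]

# Noisy frame-wise posteriors: the lone 'offset' spike in frame 3 is
# smoothed away because onset -> offset is not an allowed transition.
frames = [
    {"neutral": 0.8, "onset": 0.1, "apex": 0.05, "offset": 0.05},
    {"neutral": 0.2, "onset": 0.6, "apex": 0.1, "offset": 0.1},
    {"neutral": 0.1, "onset": 0.3, "apex": 0.1, "offset": 0.5},
    {"neutral": 0.05, "onset": 0.15, "apex": 0.7, "offset": 0.1},
    {"neutral": 0.1, "onset": 0.1, "apex": 0.2, "offset": 0.6},
]
print(viterbi(frames))
```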

  17. Improved Infrastructure for CDMS and JPL Molecular Spectroscopy Catalogues

    NASA Astrophysics Data System (ADS)

    Endres, Christian; Schlemmer, Stephan; Drouin, Brian; Pearson, John; Müller, Holger S. P.; Schilke, P.; Stutzki, Jürgen

    2014-06-01

    Over the past years, a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established that now allow data to be retrieved from various databases through one portal and combined easily. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter, and submillimeter spectral line catalogue (JPL), and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium, we are working on common user tools to simplify access for new users and to tailor data requests for users with specific needs. This comprises in particular tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL are used to explain the basic concepts and the tools provided by VAMDC. In addition, a new portal to CDMS is presented that has a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies, and improved documentation. Fit files are accessible for download, and queries to other databases are possible.

  18. Uranium Mines and Mills Location Database

    EPA Pesticide Factsheets

    EPA has compiled mine location information from federal, state, and Tribal agencies into a single database as part of its investigation into the potential environmental hazards of wastes from abandoned uranium mines in the western United States.

  19. Very large database of lipids: rationale and design.

    PubMed

    Martin, Seth S; Blaha, Michael J; Toth, Peter P; Joshi, Parag H; McEvoy, John W; Ahmed, Haitham M; Elshazly, Mohamed B; Swiger, Kristopher J; Michos, Erin D; Kwiterovich, Peter O; Kulkarni, Krishnaji R; Chimera, Joseph; Cannon, Christopher P; Blumenthal, Roger S; Jones, Steven R

    2013-11-01

    Blood lipids have major cardiovascular and public health implications. Lipid-lowering drugs are prescribed based in part on categorization of patients into normal or abnormal lipid metabolism, yet relatively little emphasis has been placed on: (1) the accuracy of current lipid measures used in clinical practice, (2) the reliability of current categorizations of dyslipidemia states, and (3) the relationship of advanced lipid characterization to other cardiovascular disease biomarkers. To these ends, we developed the Very Large Database of Lipids (NCT01698489), an ongoing database protocol that harnesses deidentified data from the daily operations of a commercial lipid laboratory. The database includes individuals who were referred for clinical purposes for a Vertical Auto Profile (Atherotech Inc., Birmingham, AL), which directly measures cholesterol concentrations of low-density lipoprotein, very low-density lipoprotein, intermediate-density lipoprotein, high-density lipoprotein, their subclasses, and lipoprotein(a). Individual Very Large Database of Lipids studies, ranging from studies of measurement accuracy, to dyslipidemia categorization, to biomarker associations, to characterization of rare lipid disorders, are investigator-initiated and utilize peer-reviewed statistical analysis plans to address a priori hypotheses/aims. In the first database harvest (Very Large Database of Lipids 1.0) from 2009 to 2011, there were 1 340 614 adult and 10 294 pediatric patients; the adult sample had a median age of 59 years (interquartile range, 49-70 years) with even representation by sex. Lipid distributions closely matched those from the population-representative National Health and Nutrition Examination Survey. The second harvest of the database (Very Large Database of Lipids 2.0) is underway. 
Overall, the Very Large Database of Lipids database provides an opportunity for collaboration and new knowledge generation through careful examination of granular lipid data on a large scale. © 2013 Wiley Periodicals, Inc.

  20. MPA Portable: A Stand-Alone Software Package for Analyzing Metaproteome Samples on the Go.

    PubMed

    Muth, Thilo; Kohrs, Fabian; Heyer, Robert; Benndorf, Dirk; Rapp, Erdmann; Reichl, Udo; Martens, Lennart; Renard, Bernhard Y

    2018-01-02

    Metaproteomics, the mass spectrometry-based analysis of proteins from multispecies samples, faces severe challenges concerning data analysis and results interpretation. To overcome these shortcomings, we here introduce the MetaProteomeAnalyzer (MPA) Portable software. In contrast to the original server-based MPA application, this newly developed tool no longer requires computational expertise for installation and is now independent of any relational database system. In addition, MPA Portable now supports state-of-the-art database search engines and a convenient command line interface for high-performance data processing tasks. While search engine results can easily be combined to increase the protein identification yield, an additional two-step workflow is implemented to provide sufficient analysis resolution for further postprocessing steps, such as protein grouping as well as taxonomic and functional annotation. Our new application has been developed with a focus on intuitive usability, adherence to data standards, and adaptation to Web-based workflow platforms. The open source software package can be found at https://github.com/compomics/meta-proteome-analyzer.
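
    Combining search engine results to raise identification yield amounts, at its simplest, to taking the union of per-engine peptide-spectrum matches, with the intersection available as a stricter consensus. The engine names and PSM tuples below are illustrative, not taken from MPA Portable's internals:

```python
# Hypothetical peptide-spectrum matches (PSMs) from two search engines,
# keyed as (spectrum id, peptide sequence) pairs.
results = {
    "X!Tandem": {("scan_001", "PEPTIDEK"), ("scan_002", "MKLVR")},
    "OMSSA":    {("scan_001", "PEPTIDEK"), ("scan_003", "GGSSR")},
}

combined = set().union(*results.values())        # union: more identifications
consensus = set.intersection(*results.values())  # stricter: agreed by all

print(len(combined), len(consensus))
```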

  1. Rhode Island Water Supply System Management Plan Database (WSSMP-Version 1.0)

    USGS Publications Warehouse

    Granato, Gregory E.

    2004-01-01

    In Rhode Island, the availability of water of sufficient quality and quantity to meet current and future environmental and economic needs is vital to life and the State's economy. Water suppliers, the Rhode Island Water Resources Board (RIWRB), and other State agencies responsible for water resources in Rhode Island need information about available resources, the water-supply infrastructure, and water use patterns. These decision makers need historical, current, and future water-resource information. In 1997, the State of Rhode Island formalized a system of Water Supply System Management Plans (WSSMPs) to characterize and document relevant water-supply information. All major water suppliers (those that obtain, transport, purchase, or sell more than 50 million gallons of water per year) are required to prepare, maintain, and carry out WSSMPs. An electronic database for this WSSMP information has been deemed necessary by the RIWRB for water suppliers and State agencies to consistently document, maintain, and interpret the information in these plans. Availability of WSSMP data in standard formats will allow water suppliers and State agencies to improve the understanding of water-supply systems and to plan for future needs or water-supply emergencies. In 2002, however, the Rhode Island General Assembly passed a law that classifies some of the WSSMP information as confidential to protect the water-supply infrastructure from potential terrorist threats. Therefore the WSSMP database was designed for an implementation method that will balance security concerns with the information needs of the RIWRB, suppliers, other State agencies, and the public. A WSSMP database was developed by the U.S. Geological Survey in cooperation with the RIWRB. The database was designed to catalog WSSMP information in a format that would accommodate synthesis of current and future information about Rhode Island's water-supply infrastructure. 
This report documents the design and implementation of the WSSMP database. All WSSMP information in the database is, ultimately, linked to the individual water suppliers and to a WSSMP 'cycle' (which is currently a 5-year planning cycle for compiling WSSMP information). The database file contains 172 tables - 47 data tables, 61 association tables, 61 domain tables, and 3 example import-link tables. This database is currently implemented in the Microsoft Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. Design documentation facilitates current use and potential modification for future use of the database. Information within the structure of the WSSMP database file (WSSMPv01.mdb), a data dictionary file (WSSMPDD1.pdf), a detailed database-design diagram (WSSMPPL1.pdf), and this database-design report (OFR2004-1231.pdf) documents the design of the database. This report includes a discussion of each WSSMP data structure with an accompanying database-design diagram. Appendix 1 of this report is an index of the diagrams in the report and on the plate; this index is organized by table name in alphabetical order. Each of these products is included in digital format on the enclosed CD-ROM to facilitate use or modification of the database.
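
    The data/association/domain table pattern described above can be sketched in SQL. The table and column names below are illustrative only, not the actual WSSMP schema (that schema is documented in WSSMPDD1.pdf), and SQLite stands in for Microsoft Access:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE supplier (            -- data table
    supplier_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL
);
CREATE TABLE source_type (         -- domain table: allowed category values
    source_type_id INTEGER PRIMARY KEY,
    description    TEXT NOT NULL
);
CREATE TABLE supplier_source (     -- association table: many-to-many link
    supplier_id    INTEGER REFERENCES supplier,
    source_type_id INTEGER REFERENCES source_type,
    PRIMARY KEY (supplier_id, source_type_id)
);
INSERT INTO supplier VALUES (1, 'Example Water District');
INSERT INTO source_type VALUES (10, 'groundwater'), (11, 'surface water');
INSERT INTO supplier_source VALUES (1, 10), (1, 11);
""")
rows = con.execute("""
    SELECT s.name, t.description
    FROM supplier s
    JOIN supplier_source ss USING (supplier_id)
    JOIN source_type t USING (source_type_id)
    ORDER BY t.description
""").fetchall()
print(rows)
```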

  2. Database of Renewable Energy and Energy Efficiency Incentives and Policies Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lips, Brian

    The Database of State Incentives for Renewables and Efficiency (DSIRE) is an online resource that provides summaries of all financial incentives and regulatory policies that support the use of renewable energy and energy efficiency across all 50 states. This project involved making enhancements to the database and website, and the ongoing research and maintenance of the policy and incentive summaries.

  3. Hierarchical Control of Semi-Autonomous Teams Under Uncertainty (HICST)

    DTIC Science & Technology

    2004-05-01

    [Extraction residue from the report's table of contents and Figure 3; only the recoverable fragments are kept.] Table of contents: 2.4 Module 4: Database ... 21; 2.5 ... Figure 3: Integration of modules 1-5. The modules make provision for human intervention, not indicated in the figure. SoW is 'state of the world'. Modules listed: 3. Task execution; 4. Database for state estimation; 5. Java interface to OEP; 6. Robust dynamic programming for ...

  4. Epiphytic Macrolichen Community Composition Database—epiphytic lichen synusiae in forested areas of the US

    Treesearch

    Sarah. Jovan

    2012-01-01

    The Forest Inventory and Analysis (FIA) Program's Lichen Communities Indicator is used for tracking epiphytic macrolichen diversity and is applied for monitoring air quality and climate change effects on forest health in the United States. Started in 1994, the Epiphytic Macrolichen Community Composition Database (GIVD ID NA-US-012) now has over 8,000 surveys of...

  5. Implementation of an open adoption research data management system for clinical studies.

    PubMed

    Müller, Jan; Heiss, Kirsten Ingmar; Oberhoffer, Renate

    2017-07-06

Research institutions need to manage multiple studies with individual data sets, processing rules, and permissions. So far, there is no standard technology that provides an easy-to-use environment for creating databases and user interfaces for clinical trials or research studies. Various software solutions are therefore in use, from custom software explicitly designed for a specific study, to cost-intensive commercial Clinical Trial Management Systems (CTMS), to very basic approaches built on self-designed Microsoft® databases. The technology applied to conduct these studies varies tremendously from study to study, making it difficult to evaluate data across studies (meta-analysis) and to keep a defined level of quality in database design, data processing, display, and export. Furthermore, the systems used to collect study data are often operated redundantly alongside systems used in patient care. As a consequence, data collection in studies is inefficient, and data quality may suffer from unsynchronized datasets, non-normalized database scenarios, and manually executed data transfers. With OpenCampus Research we implemented an open adoption software (OAS) solution on an open-source basis, which provides a standard environment for state-of-the-art research database management at low cost.

  6. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

Several geochemical databases are now available on the Internet. One of the main peculiarities of the information they store is that each sample carries geographical coordinates, yet as a rule the database software uses this spatial information only in the search procedures of the user interface. On the other side, GIS (Geographical Information System) software such as ARC/INFO, which is used to create and analyze specialized geological, geochemical, and geophysical electronic maps, is built around the geographical coordinates of samples. We joined the capabilities of GIS systems and a relational geochemical database through special software. Our geochemical information system was created at the Vernadsky State Geological Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have tested the system with geochemical data on oceanic rocks from the Atlantic and Pacific oceans, about 10,000 chemical analyses. The GIS content consists of electronic map covers of the globe; for the Atlantic these include a gravity map (2'' grid), ocean-bottom heat flow, altimetric maps, seismic activity, a tectonic map, and a geological map. Combining these layers makes it possible to create new geochemical maps and to combine spatial analysis with numerical geochemical modeling of volcanic processes in an ocean segment. The information system has been tested using thick-client technology. The interface between the ArcView GIS and the database resides in a sequence of multiple SQL queries, whose result is a simple DBF file with geographical coordinates; this file is used when creating geochemical and other specialized electronic maps of the ocean region. For geophysical data we used a more complex method: from ArcView we created a grid cover from the polygon spatial geophysical information.
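The thick-client pattern the abstract describes (SQL query against the relational database, flat coordinate file handed to the GIS) can be sketched as follows. The schema, column names, and values are invented for illustration; the original system wrote DBF files for Arc/View, whereas this sketch writes CSV using only the standard library:

```python
# Sketch (hypothetical schema) of exporting a spatial SQL selection to a
# flat coordinate file that a GIS layer can load as a point cover.
import csv
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sample (sample_id INTEGER PRIMARY KEY,
                     lon REAL, lat REAL, sio2_wt_pct REAL);
INSERT INTO sample VALUES (1, -30.1, 10.5, 49.7),
                          (2, -29.8, 11.2, 51.3),
                          (3, -45.0, 20.0, 47.9);
""")

# Select samples inside a study window, keeping their coordinates so the
# GIS can place each analysis as a point on the map.
rows = con.execute("""SELECT sample_id, lon, lat, sio2_wt_pct
                      FROM sample
                      WHERE lon BETWEEN -35 AND -25 AND lat BETWEEN 5 AND 15
                   """).fetchall()

with open("atlantic_points.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["sample_id", "lon", "lat", "sio2_wt_pct"])
    w.writerows(rows)
```

A real deployment would write the attribute table in a GIS-native format (DBF, shapefile, or GeoPackage) rather than CSV, but the query-then-export flow is the same.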

  7. Detecting Spatial Patterns of Natural Hazards from the Wikipedia Knowledge Base

    NASA Astrophysics Data System (ADS)

    Fan, J.; Stewart, K.

    2015-07-01

The Wikipedia database is a data source of immense richness and variety. It includes thousands of geotagged articles offering, for example, almost real-time updates on current and historic natural hazards, with user-contributed information about the location of natural hazards, the extent of the disasters, and many details relating to response, impact, and recovery. In this research, a computational framework is proposed to detect spatial patterns of natural hazards from the Wikipedia database by combining topic modeling methods with spatial analysis techniques. The computation is performed on the Neon Cluster, a high-performance computing cluster at the University of Iowa. This work uses wildfires as the exemplar hazard, but the framework generalizes readily to other types of hazards, such as hurricanes or flooding. Latent Dirichlet Allocation (LDA) modeling is first employed to train on the entire English Wikipedia dump, transforming the dump into a 500-dimensional topic model. Over 230,000 geotagged articles are then extracted from the Wikipedia database, spatially covering the contiguous United States. The geotagged articles are converted into the LDA topic space, with each article represented as a weighted multidimensional topic vector. By treating each article's topic vector as an observed point in geographic space, a probability surface is calculated for each of the topics. Wikipedia articles about wildfires are extracted from the database to form a wildfire corpus, creating a basis for the topic-vector analysis. The spatial distribution of wildfire outbreaks in the US is estimated by calculating the weighted sum of the topic probability surfaces using a map-algebra approach, and mapped using GIS. To evaluate the approach, the estimate is compared to wildfire hazard potential maps created by the USDA Forest Service.
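The map-algebra step (a weighted sum of per-topic probability surfaces) can be illustrated with a toy sketch. Everything here is invented for illustration: a 2-topic model instead of 500 topics, three articles instead of 230,000, Gaussian bumps standing in for the paper's probability surfaces, and made-up coordinates and weights:

```python
# Toy sketch of the weighted-sum-of-topic-surfaces estimate. Each geotagged
# article contributes a Gaussian bump at its location to each topic's
# surface; the hazard estimate weights the surfaces by the wildfire
# corpus's mean topic vector.
import math

articles = [  # (lon, lat, topic_vector) -- all hypothetical
    (-120.0, 38.0, [0.9, 0.1]),
    (-119.5, 38.2, [0.8, 0.2]),
    (-90.0, 35.0, [0.1, 0.9]),
]
wildfire_topic_weights = [0.85, 0.15]  # mean topic vector of wildfire corpus

def surface_value(lon, lat, topic, sigma=1.0):
    """Kernel-density-style probability surface for one topic."""
    return sum(v[topic] * math.exp(-((lon - x)**2 + (lat - y)**2) / (2 * sigma**2))
               for x, y, v in articles)

def wildfire_estimate(lon, lat):
    """Weighted sum of the topic surfaces (the map-algebra step)."""
    return sum(w * surface_value(lon, lat, t)
               for t, w in enumerate(wildfire_topic_weights))

# The estimate is higher near the wildfire-heavy western articles than
# near the unrelated eastern one.
west = wildfire_estimate(-120.0, 38.0)
east = wildfire_estimate(-90.0, 35.0)
```

In the paper's pipeline the surfaces come from the LDA topic vectors of all geotagged articles and the sum is evaluated over a raster grid in GIS, but the arithmetic per location is the same weighted sum.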

  8. Who pays for agricultural injury care?

    PubMed

    Costich, Julia

    2010-01-01

    Analysis of 295 agricultural injury hospitalizations in a single state's hospital discharge database found that workers' compensation covered only 5% of the inpatient stays. Other sources were commercial health insurance (47%), Medicare (31%), and Medicaid (7%); 9% were uninsured. Estimated mean hospital and physician payments (not costs or charges) were $12,056 per hospitalization. Nearly one sixth (16%) of hospitalizations were either unreimbursed or covered by Medicaid, indicating a substantial cost-shift to public funding sources. Problems in characterizing agricultural injuries and states' exceptions to workers' compensation coverage mandates point to the need for comprehensive health coverage.

  9. Vaccines are different: A systematic review of budget impact analyses of vaccines.

    PubMed

    Loze, Priscilla Magalhaes; Nasciben, Luciana Bertholim; Sartori, Ana Marli Christovam; Itria, Alexander; Novaes, Hillegonda Maria Dutilh; de Soárez, Patrícia Coelho

    2017-05-15

Several countries require manufacturers to present a budget impact analysis (BIA), together with a cost-effectiveness analysis, to support national funding requests. However, guidelines for conducting BIA of vaccines are scarce. The aim was to analyze the methodological approaches used in published budget impact analyses (BIA) of vaccines, discussing methodological issues specific to vaccines. This systematic review of the literature on BIA of vaccines was carried out in accordance with the Centre for Reviews and Dissemination (CRD) guidelines. We searched multiple databases: MedLine, Embase, Biblioteca Virtual de Saúde (BVS), Cochrane Library, DARE Database, NHS Economic Evaluation Database (NHS EED), HTA Database (via CRD), and grey literature. Two researchers, working independently, selected the studies and extracted the data. The methodological quality of individual studies was assessed using the ISPOR 2012 Budget Impact Analysis Good Practice II Task Force checklist. A qualitative narrative synthesis was conducted. Twenty-two studies were reviewed. The most frequently evaluated vaccines were pneumococcal (41%), influenza (23%) and rotavirus (18%). The target population was stated in 21 studies (95%) and the perspective was clear in 20 (91%). Only 36% reported the calculations used to complete the BIA, 27% reported the total and disaggregated costs for each time period, and 9% showed the change in resource use for each time period. More than half of the studies (55%, n=12) reported less than 50% of the items recommended in the checklist. The production of BIA of vaccines has increased since 2009. The reporting of methodological steps was unsatisfactory, making it difficult to assess the validity of the results presented. Vaccine-specific issues should be addressed in international guidelines for BIA of vaccines to improve the quality of the studies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Comprehensive national database of tree effects on air quality and human health in the United States.

    PubMed

    Hirabayashi, Satoshi; Nowak, David J

    2016-08-01

Trees remove air pollutants through dry deposition processes that depend upon forest structure, meteorology, and air quality, all of which vary across space and time. Employing nationally available forest, weather, air pollution, and human population data for 2010, computer simulations were performed for deciduous and evergreen trees with varying leaf area index for rural and urban areas in every county in the conterminous United States. The results populated a national database of annual air pollutant removal, concentration changes, and reductions in adverse health incidences and costs for NO2, O3, PM2.5 and SO2. The database enables a first-order approximation of air quality and associated human health benefits provided by trees with any forest configuration anywhere in the conterminous United States over time. A comprehensive national database of tree effects on air quality and human health in the United States was thus developed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Geologic database for digital geology of California, Nevada, and Utah: an application of the North American Data Model

    USGS Publications Warehouse

    Bedford, David R.; Ludington, Steve; Nutt, Constance M.; Stone, Paul A.; Miller, David M.; Miller, Robert J.; Wagner, David L.; Saucedo, George J.

    2003-01-01

The USGS is creating an integrated national database for digital state geologic maps that includes stratigraphic, age, and lithologic information. The majority of the conterminous 48 states have digital geologic base maps available, often at scales of 1:500,000. This product is a prototype, intended to demonstrate the types of derivative maps that will be possible with the national integrated database. The database permits the creation of many types of maps via simple or sophisticated queries; such maps may be useful in a number of areas, including mineral-resource assessment, environmental assessment, and regional tectonic evolution. The database is distributed in three main parts: a Microsoft Access 2000 database containing geologic map attribute data, an Arc/Info (Environmental Systems Research Institute, Redlands, California) Export format file containing points representing the designation of stratigraphic regions for the Geologic Map of Utah, and an ArcView 3.2 (Environmental Systems Research Institute, Redlands, California) project containing scripts and dialogs for performing a series of generalization and mineral resource queries. IMPORTANT NOTE: Spatial data for the respective state geologic maps are not distributed with this report. The digital state geologic maps for the states involved in this report are separate products, and two of them are produced by individual state agencies, which may be legally and/or financially responsible for the data. However, the spatial datasets for maps discussed in this report are available to the public. Questions regarding the distribution, sale, and use of individual state geologic maps should be sent to the respective state agency. We do provide suggestions for obtaining and formatting the spatial data to make it compatible with data in this report. See section 'Obtaining and Formatting Spatial Data' in the PDF version of the report.

  12. Intrusive Rock Database for the Digital Geologic Map of Utah

    USGS Publications Warehouse

    Nutt, C.J.; Ludington, Steve

    2003-01-01

Digital geologic maps offer the promise of rapid and powerful answers to geologic questions using Geographic Information System (GIS) software. Using modern GIS and database methods, a specialized derivative map can be easily prepared. An important limitation can be shortcomings in the information provided in the database associated with the digital map, a database that is often based on the legend of the original map. The purpose of this report is to show how the compilation of additional information, when prepared as a database that can be used with the digital map, can be used to create some types of derivative maps that are not possible with the original digital map and database. This Open-File Report consists of computer files with information about intrusive rocks in Utah that can be linked to the Digital Geologic Map of Utah (Hintze et al., 2000), an explanation of how to link the databases and map, and a list of references for the databases. The digital map, which represents the 1:500,000-scale Geologic Map of Utah (Hintze, 1980), can be obtained from the Utah Geological Survey (Map 179DM). Each polygon in the map has a unique identification number. We selected the polygons identified on the geologic map as intrusive rock and constructed a database (UT_PLUT.xls) that classifies the polygons into plutonic map units (see tables). These plutonic map units are the key information used to relate the compiled information to the polygons on the map. The map includes a few polygons that were coded as intrusive on the state map but are largely volcanic rock; in these cases we note the volcanic rock names (rhyolite and latite) as used in the original sources. Some polygons identified on the digital state map as intrusive rock were misidentified; these polygons are noted in a separate table of the database, along with some information about their true character.
Fields may be empty because of lack of information from references used or difficulty in finding information. The information in the database is from a variety of sources, including geologic maps at scales ranging from 1:500,000 to 1:24,000, and thesis monographs. The references are shown twice: alphabetically and by region. The digital geologic map of Utah (Hintze and others, 2000) classifies intrusive rocks into only 3 categories, distinguished by age. They are: Ti, Tertiary intrusive rock; Ji, Upper to Middle Jurassic granite to quartz monzonite; and pCi, Early Proterozoic to Late Archean intrusive rock. Use of the tables provided in this report will permit selection and classification of those rocks by lithology and age. This database is a pilot study by the Survey and Analysis Project of the U.S. Geological Survey to characterize igneous rocks and link them to a digital map. The database, and others like it, will evolve as the project continues and other states are completed. We release this version now as an example, as a reference, and for those interested in Utah plutonic rocks.
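The linking scheme the report describes (unique polygon IDs on the map, compiled attributes keyed to plutonic map units) amounts to a two-step join. A minimal sketch follows; the polygon IDs, unit names, and attribute values are all hypothetical, not taken from UT_PLUT.xls:

```python
# Sketch of relating compiled attributes to map polygons via plutonic map
# units. All IDs, unit names, and attributes below are invented.

# Step 1: each map polygon carries a unique ID mapped to a plutonic unit.
polygon_to_unit = {
    101: "Example Stock",
    102: "Ti-undivided",
}

# Step 2: the compiled database keys lithology and age to the unit name.
unit_attributes = {
    "Example Stock": {"lithology": "granodiorite", "age": "Oligocene"},
    "Ti-undivided":  {"lithology": "unknown",      "age": "Tertiary"},
}

def attributes_for_polygon(poly_id):
    """Two-step join: polygon ID -> plutonic unit -> compiled attributes."""
    unit = polygon_to_unit.get(poly_id)
    return unit_attributes.get(unit) if unit else None
```

In a GIS this join is expressed by relating the attribute table to the polygon layer on the unit field, which is what allows derivative maps classified by lithology or age rather than by the original three age categories.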

  13. Clear-Sky Probability for the August 21, 2017, Total Solar Eclipse Using the NREL National Solar Radiation Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron M; Roberts, Billy J; Kutchenreiter, Mark C

The National Renewable Energy Laboratory (NREL) and collaborators have created a clear-sky probability analysis to help guide viewers of the August 21, 2017, total solar eclipse, the first continent-spanning eclipse in nearly 100 years in the United States. Using cloud and solar data from NREL's National Solar Radiation Database (NSRDB), the analysis provides cloudless-sky probabilities specific to the date and time of the eclipse. Although this paper is not intended to be an eclipse weather forecast, the detailed maps can help guide eclipse enthusiasts to likely optimal viewing locations. Additionally, high-resolution data are presented for the centerline of the path of totality, representing the likelihood of cloudless skies and atmospheric clarity. The NSRDB provides industry, academia, and other stakeholders with high-resolution solar irradiance data to support feasibility analyses for photovoltaic and concentrating solar power generation projects.

  14. A Priori Analysis of Subgrid-Scale Models for Large Eddy Simulations of Supercritical Binary-Species Mixing Layers

    NASA Technical Reports Server (NTRS)

    Okong'o, Nora; Bellan, Josette

    2005-01-01

    Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.

  15. 47 CFR 54.404 - The National Lifeline Accountability Database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false The National Lifeline Accountability Database... National Lifeline Accountability Database. (a) State certification. An eligible telecommunications carrier... within 90 days of filing. (b) The National Lifeline Accountability Database. In order to receive Lifeline...

  16. 47 CFR 54.404 - The National Lifeline Accountability Database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false The National Lifeline Accountability Database... National Lifeline Accountability Database. (a) State certification. An eligible telecommunications carrier... within 90 days of filing. (b) The National Lifeline Accountability Database. In order to receive Lifeline...

  17. 47 CFR 54.404 - The National Lifeline Accountability Database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false The National Lifeline Accountability Database... National Lifeline Accountability Database. (a) State certification. An eligible telecommunications carrier... within 90 days of filing. (b) The National Lifeline Accountability Database. In order to receive Lifeline...

  18. 78 FR 60861 - Native American Tribal Insignia Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-02

    ... Database ACTION: Proposed collection; comment request. SUMMARY: The United States Patent and Trademark... the report was that the USPTO create and maintain an accurate and comprehensive database containing... this recommendation, the Senate Committee on Appropriations directed the USPTO to create this database...

  19. [Construction and application of special analysis database of geoherbs based on 3S technology].

    PubMed

    Guo, Lan-ping; Huang, Lu-qi; Lv, Dong-mei; Shao, Ai-juan; Wang, Jian

    2007-09-01

In this paper, the structure, data sources, and data codes of "the spatial analysis database of geoherbs," based on 3S technology, are introduced, and the essential functions of the database, such as data management, remote sensing, spatial interpolation, spatial statistics, spatial analysis, and development, are described. Finally, two examples of database usage are given: one is the classification and calculation of the NDVI index from remote-sensing imagery in a geoherbal area of Atractylodes lancea, and the other is an adaptation analysis of A. lancea. These indicate that "the spatial analysis database of geoherbs" has bright prospects for the spatial analysis of geoherbs.
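The NDVI calculation mentioned in the first example is a standard per-pixel index, NDVI = (NIR - Red) / (NIR + Red). A minimal sketch with invented reflectance values and illustrative class thresholds (the paper's own thresholds are not stated):

```python
# Per-pixel NDVI with a coarse threshold classification. Band values and
# class thresholds are illustrative, not taken from the paper.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index; 0.0 if both bands are zero."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def classify(value):
    if value > 0.5:
        return "dense vegetation"
    if value > 0.2:
        return "sparse vegetation"
    return "non-vegetated"

pixels = [(0.60, 0.10), (0.40, 0.25), (0.20, 0.22)]  # (NIR, Red) reflectance
classes = [classify(ndvi(n, r)) for n, r in pixels]
```

Applied over a whole image, the classified raster is the kind of layer a 3S (RS/GIS/GPS) system overlays with habitat variables for adaptation analysis.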

  20. Academic impact of a public electronic health database: bibliometric analysis of studies using the general practice research database.

    PubMed

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. A total of 749 studies published between 1995 and 2009 with 'General Practice Research Database' as their topic, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times on average by successive studies. Moreover, the total number of GPRD studies increased rapidly and is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, three study fields: "Pharmacology and Pharmacy", "General and Internal Medicine", and "Public, Environmental and Occupational Health". The UK and United States were the two most active regions of GPRD studies, and one-third of GPRD studies were internationally co-authored. A public electronic health database such as the GPRD can promote scientific production in many ways. Data owners of national-level electronic health databases should consider how to reduce access barriers and make data more available for research.

  1. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

This paper introduces the Common Hyperspectral Image Database (CHIDB), designed with a demand-oriented database design method, which brings together ground-based spectra, standardized hyperspectral cubes, and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data-mining ideas and functions were incorporated into CHIDB to make it more suitable for service in agricultural, geological, and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET Framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and online analyzer. The original data are stored in SQL Server 2008 for efficient search, query, and update, and advanced spectral-image processing techniques, such as parallel processing in C#, are used. Finally, an application case in agricultural disease detection is presented.

  2. Publication trend, resource utilization, and impact of the US National Cancer Database

    PubMed Central

    Su, Chang; Peng, Cuiying; Agbodza, Ena; Bai, Harrison X.; Huang, Yuqian; Karakousis, Giorgos; Zhang, Paul J.; Zhang, Zishu

    2018-01-01

Abstract Background: The utilization and impact of studies published using the National Cancer Database (NCDB) is currently unclear. In this study, we aim to characterize the published studies and identify relatively unexplored areas for future investigation. Methods: A literature search was performed using PubMed in January 2017 to identify all papers published using NCDB data. Characteristics of the publications were extracted, and citation frequencies were obtained through the Web of Science. Results: Three hundred and two articles written by 230 first authors met the inclusion criteria. The number of publications has grown exponentially since 2013, with 108 articles published in 2016. Articles were published in 86 journals. The majority of the published papers focused on digestive system cancer, while bone and joints, eye and orbit, myeloma, mesothelioma, and Kaposi sarcoma were never studied. Thirteen institutions in the United States were associated with more than 5 publications. The papers have been cited a total of 9858 times since the publication of the first paper in 1992. Frequently appearing keywords congregated into 3 clusters: "demographics," "treatments and survival," and "statistical analysis method." Even though the main focuses of the articles covered an extremely wide range, they can be classified into 2 main categories: survival analysis and characterization. Other focuses include database analysis and/or comparison, and hospital reporting. Conclusion: The surging interest in the use of the NCDB is accompanied by unequal utilization of resources by individuals and institutions. Certain areas remain relatively understudied and should be further explored. PMID:29489679

  3. Methylenetetrahydrofolate reductase polymorphisms and susceptibility to acute lymphoblastic leukemia in a Chinese population: a meta-analysis.

    PubMed

    Xiao, Yi; Deng, Tao-Ran; Su, Chang-Liang; Shang, Zhen

    2014-01-01

Although many epidemiologic studies have investigated methylenetetrahydrofolate reductase (MTHFR) gene polymorphisms and their association with acute lymphoblastic leukemia (ALL), definitive conclusions cannot be drawn. To clarify the effects of MTHFR polymorphisms on the risk of ALL, a meta-analysis was performed in a Chinese population. A computerized literature search was carried out in PubMed, the Chinese Biomedicine (CBM) database, the China National Knowledge Infrastructure (CNKI) platform, and the Wanfang database (Chinese) to collect relevant articles. A total of 11 articles including 1,738 ALL cases and 2,438 controls were included in this meta-analysis. Overall, the MTHFR C677T polymorphism was significantly associated with decreased ALL risk when all studies in Chinese populations were pooled. In subgroup analyses stratified by age, ethnicity, and source of controls, the same results were observed in children, in population-based studies, and in people with no stated ethnicity. However, a significantly increased risk was also found for MTHFR C677T in hospital-based studies, and for MTHFR A1298C in people with no stated ethnicity. Our results suggest that the MTHFR C677T and A1298C polymorphisms may be potential biomarkers for ALL risk in Chinese populations, and studies with a larger sample size and wider population spectrum are required before definitive conclusions can be drawn. © 2014 S. Karger GmbH, Freiburg.

  4. An open access database for the evaluation of heart sound algorithms.

    PubMed

    Liu, Chengyu; Springer, David; Li, Qiao; Moody, Benjamin; Juan, Ricardo Abad; Chorro, Francisco J; Castells, Francisco; Roig, José Millet; Silva, Ikaro; Johnson, Alistair E W; Syed, Zeeshan; Schmidt, Samuel E; Papadaniil, Chrysa D; Hadjileontiadis, Leontios; Naseri, Hosein; Moukadem, Ali; Dieterlen, Alain; Brandt, Christian; Tang, Hong; Samieinasab, Maryam; Samieinasab, Mohammad Reza; Sameni, Reza; Mark, Roger G; Clifford, Gari D

    2016-12-01

    In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially for automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential value to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected from a variety of clinical or nonclinical (such as in-home visits) environments and equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016, including the main aims, the training and test sets, the hand corrected annotations for different heart sound states, the scoring mechanism, and associated open source code are provided. In addition, several potential benefits from the public heart sound database are discussed.
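One common ingredient of the segmentation methods the paper surveys is an energy envelope with threshold-based detection of high-energy segments (candidate S1/S2 sounds). The following is a highly simplified sketch of that idea using a Shannon-energy envelope; the signal, frame length, and threshold are all invented, and real entries in the Challenge use far more elaborate pipelines:

```python
# Very simplified envelope-and-threshold sketch of one PCG segmentation
# ingredient. Signal values, frame size, and threshold are illustrative.
import math

def shannon_energy_envelope(signal, frame=4):
    """Average Shannon energy (-x^2 * log(x^2)) over non-overlapping frames."""
    env = []
    for i in range(0, len(signal) - frame + 1, frame):
        vals = signal[i:i + frame]
        e = -sum(v * v * math.log(v * v) for v in vals if v != 0) / frame
        env.append(e)
    return env

def high_energy_frames(env, threshold):
    """Indices of frames whose envelope exceeds the threshold."""
    return [i for i, e in enumerate(env) if e > threshold]

# A quiet-loud-quiet toy signal: only the middle frame crosses the threshold.
signal = [0.01] * 4 + [0.9] * 4 + [0.01] * 4
env = shannon_energy_envelope(signal)
peaks = high_energy_frames(env, 0.05)
```

Shannon energy is popular in this role because it emphasizes medium-intensity components over both noise and the loudest peaks, giving a smoother envelope than squared amplitude alone.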

  5. Food Composition Database Format and Structure: A User Focused Approach

    PubMed Central

    Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine

    2015-01-01

This study aimed to investigate the needs of Australian food composition database users regarding database format and to relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits, and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes, and schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. Twenty-four dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition databases should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources, so further exploration of data-sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases; therefore, further exploration of user FCDB training should also be considered. PMID:26554836

  6. Short Fiction on Film: A Relational DataBase.

    ERIC Educational Resources Information Center

    May, Charles

    Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…

  7. Online bibliographic sources in hydrology

    USGS Publications Warehouse

    Wild, Emily C.; Havener, W. Michael

    2001-01-01

    Traditional commercial bibliographic databases and indexes provide some access to hydrology materials produced by the government; however, these sources do not provide comprehensive coverage of relevant hydrologic publications. This paper discusses bibliographic information available from the federal government and state geological surveys, water resources agencies, and depositories. In addition to information in these databases, the paper describes the scope, styles of citing, subject terminology, and the ways these information sources are currently being searched, formally and informally, by hydrologists. Information available from the federal and state agencies and from the state depositories might be missed by limiting searches to commercially distributed databases.

  8. Software for quantitative analysis of radiotherapy: overview, requirement analysis and design solutions.

    PubMed

    Zhang, Lanlan; Hub, Martina; Mang, Sarah; Thieke, Christian; Nix, Oliver; Karger, Christian P; Floca, Ralf O

    2013-06-01

Radiotherapy is a fast-developing discipline which plays a major role in cancer care. Quantitative analysis of radiotherapy data can improve the success of the treatment and support the prediction of outcome. In this paper, we first identify functional, conceptual and general requirements on a software system for quantitative analysis of radiotherapy. Further, we present an overview of existing radiotherapy analysis software tools and check them against the stated requirements. As none of them could meet all of the demands presented herein, we analyzed possible conceptual problems and present software design solutions and recommendations to meet the stated requirements (e.g. algorithmic decoupling via a dose iterator pattern; analysis database design). As a proof of concept we developed a software library, "RTToolbox", following the presented design principles. The RTToolbox is available as an open source library and has already been tested in a larger-scale software system for different use cases. These examples demonstrate the benefit of the presented design principles. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
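The "dose iterator pattern" named in the abstract decouples analysis algorithms from how dose data are stored. A minimal sketch of the idea in Python; function and variable names are illustrative assumptions, not RTToolbox's actual API:

```python
from typing import Iterable, Iterator

def dose_iterator(dose_grid: Iterable[float]) -> Iterator[float]:
    """Yield dose values one at a time, hiding the underlying storage layout
    (a hypothetical stand-in for the paper's dose iterator pattern)."""
    for dose in dose_grid:
        yield dose

def volume_fraction_above(doses: Iterator[float], threshold: float) -> float:
    """Fraction of voxels receiving at least `threshold` Gy. Because it
    consumes any iterator, the analysis is decoupled from the data source."""
    total = hits = 0
    for d in doses:
        total += 1
        if d >= threshold:
            hits += 1
    return hits / total if total else 0.0

grid = [10.0, 25.0, 40.0, 55.0, 70.0]  # toy 1-D dose grid in Gy
frac = volume_fraction_above(dose_iterator(grid), 40.0)
```

The same analysis function would work unchanged whether the doses come from an in-memory array, a file, or a database cursor, which is the point of the decoupling.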

  9. R & D on carbon nanostructures in Russia: scientometric analysis, 1990-2011

    NASA Astrophysics Data System (ADS)

    Terekhov, Alexander I.

    2015-02-01

The analysis, based on scientific publications and patents, was conducted to form an understanding of the overall scientific and technology landscape in the field of carbon nanostructures and to determine Russia's place in it. The scientific publications came from the Science Citation Index Expanded database (DB SCIE); the patent information was extracted from databases of the United States Patent and Trademark Office (USPTO), the World Intellectual Property Organization (WIPO), and the Russian Federal Service for Intellectual Property (Rospatent). We also used data about research projects, obtained via information systems of the U.S. National Science Foundation (NSF) and the Russian Foundation for Basic Research (RFBR). Bibliometric methods are used to rank countries, institutions, and scientists contributing to carbon nanostructures research. We analyze the current state and trends of the research in Russia as compared to other countries, and the contribution and impact of its institutions, especially research of the "highest quality." Considerable focus is on research collaboration and its relationship with citation impact. Patent datasets are used to determine the composition of participants in innovative processes and the international patent activity of Russian inventors in the field, and to identify the most active representatives of small and medium business and some technological developments ripe for commercialization. The article contains a critical analysis of the findings, including a policy discussion of the country's scientific authorities.

  10. Utilization and success rates of unstimulated in vitro fertilization in the United States: an analysis of the Society for Assisted Reproductive Technology database.

    PubMed

    Gordon, John David; DiMattina, Michael; Reh, Andrea; Botes, Awie; Celia, Gerard; Payson, Mark

    2013-08-01

To examine the utilization and outcomes of natural cycle (unstimulated) IVF as reported to the Society for Assisted Reproductive Technology (SART) in 2006 and 2007. Retrospective analysis. Dataset analysis from the SART Clinical Outcome Reporting System national database. All patients undergoing IVF as reported to SART in 2006 and 2007. None. Utilization of unstimulated IVF; description of patient demographics; and comparison of implantation and pregnancy rates between unstimulated and stimulated IVF cycles. During 2006 and 2007 a total of 795 unstimulated IVF cycles were initiated. Success rates were age dependent, with patients <35 years of age demonstrating clinical pregnancy rates per cycle start, retrieval, and transfer of 19.2%, 26.8%, and 35.9%, respectively. Implantation rates were statistically higher for unstimulated compared with stimulated IVF in patients who were 35 to 42 years old. Unstimulated IVF represents <1% of the total IVF cycles initiated in the United States. The pregnancy and live birth rates per initiated cycle were 19.2% and 15.2%, respectively, in patients <35 years old. The implantation rates in unstimulated IVF cycles compared favorably to stimulated IVF. Natural cycle IVF may be considered in a wide range of patients as an alternative therapy for the infertile couple. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  11. Automated generation and ensemble-learned matching of X-ray absorption spectra

    NASA Astrophysics Data System (ADS)

    Zheng, Chen; Mathew, Kiran; Chen, Chi; Chen, Yiming; Tang, Hanmei; Dozier, Alan; Kas, Joshua J.; Vila, Fernando D.; Rehr, John J.; Piper, Louis F. J.; Persson, Kristin A.; Ong, Shyue Ping

    2018-12-01

X-ray absorption spectroscopy (XAS) is a widely used materials characterization technique to determine oxidation states, coordination environment, and other local atomic structure information. Analysis of XAS relies on comparison of measured spectra to reliable reference spectra. However, existing databases of XAS spectra are highly limited both in terms of the number of reference spectra available and the breadth of chemistry coverage. In this work, we report the development of XASdb, a large database of computed reference XAS, and an Ensemble-Learned Spectra IdEntification (ELSIE) algorithm for the matching of spectra. XASdb currently hosts more than 800,000 K-edge X-ray absorption near-edge spectra (XANES) for over 40,000 materials from the open-science Materials Project database. We discuss a high-throughput automation framework for FEFF calculations, built on robust, rigorously benchmarked parameters. FEFF is a computer program that uses a real-space Green's function approach to calculate X-ray absorption spectra. We demonstrate that the ELSIE algorithm, which combines 33 weak "learners" comprising a set of preprocessing steps and a similarity metric, can achieve up to 84.2% accuracy in identifying the correct oxidation state and coordination environment of a test set of 19 K-edge XANES spectra encompassing a diverse range of chemistries and crystal structures. The XASdb with the ELSIE algorithm has been integrated into a web application in the Materials Project, providing an important new public resource for the analysis of XAS to all materials researchers. Finally, the ELSIE algorithm itself has been made available as part of veidt, an open source machine-learning library for materials science.
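Spectrum matching of the kind ELSIE performs can be illustrated with a single "weak learner": one preprocessing step (peak normalization) plus one similarity metric (cosine similarity). This is a toy sketch only, not the actual 33-learner ensemble; the labels and reference spectra below are invented:

```python
import math

def normalize(spectrum):
    """Scale intensities to unit maximum (one hypothetical preprocessing step)."""
    peak = max(spectrum)
    return [y / peak for y in spectrum]

def cosine_similarity(a, b):
    """One possible similarity metric between spectra on a common energy grid."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(unknown, references):
    """Return the reference label whose spectrum is most similar to `unknown`."""
    unknown = normalize(unknown)
    scores = {label: cosine_similarity(unknown, normalize(ref))
              for label, ref in references.items()}
    return max(scores, key=scores.get)

# Invented 4-point "spectra" for two hypothetical reference environments.
refs = {
    "Fe2+ octahedral": [0.1, 0.8, 1.0, 0.4],
    "Fe3+ tetrahedral": [0.5, 1.0, 0.3, 0.1],
}
label = best_match([0.12, 0.75, 1.0, 0.38], refs)
```

An ensemble would run many such preprocessing/metric combinations and aggregate their votes rather than trusting a single score.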

  12. Documentation for the U.S. Geological Survey Public-Supply Database (PSDB): A database of permitted public-supply wells, surface-water intakes, and systems in the United States

    USGS Publications Warehouse

    Price, Curtis V.; Maupin, Molly A.

    2014-01-01

    The purpose of this report is to document the PSDB and explain the methods used to populate and update the data from the SDWIS, State datasets, and map and geospatial imagery. This report describes 3 data tables and 11 domain tables, including field contents, data sources, and relations between tables. Although the PSDB database is not available to the general public, this information should be useful for others who are developing other database systems to store and analyze public-supply system and facility data.

  13. Evidence for a QPO structure in the TeV and X-ray light curve during the 1997 high state γ emission of Mkn 501

    NASA Astrophysics Data System (ADS)

    Kranich, D.

    1999-08-01

The BL Lac object Mkn 501 was in a state of high activity in the TeV range in 1997. During this time Mkn 501 was observed by all Cherenkov telescopes of the HEGRA collaboration. Part of the data were also taken during moonshine, thus providing nearly continuous coverage of this object in the TeV range. We carried out a QPO analysis and found evidence for a 23-day periodicity. We applied the same analysis to the 'data by dwell' X-ray light curve from the RXTE/ASM database and also found evidence for the 23-day periodicity. The combined probability was -.
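A periodicity search like the one described can be sketched with a classical periodogram scanned over trial periods. This is a simplified stand-in for the QPO analysis (real work on unevenly sampled light curves would use, e.g., a Lomb-Scargle periodogram plus significance testing); it recovers a 23-day period from a synthetic light curve:

```python
import math

def periodogram_power(times, flux, period):
    """Classical periodogram power at a trial period: projection of the
    mean-subtracted flux onto sine and cosine at that period."""
    omega = 2.0 * math.pi / period
    mean = sum(flux) / len(flux)
    c = sum((f - mean) * math.cos(omega * t) for t, f in zip(times, flux))
    s = sum((f - mean) * math.sin(omega * t) for t, f in zip(times, flux))
    return (c * c + s * s) / len(flux)

# Toy light curve: a 23-day sinusoid sampled daily over 200 days.
times = list(range(200))
flux = [1.0 + 0.5 * math.sin(2.0 * math.pi * t / 23.0) for t in times]

# Scan trial periods of 10-40 days in 0.5-day steps; pick the maximum power.
trial_periods = [p / 2.0 for p in range(20, 81)]
best = max(trial_periods, key=lambda p: periodogram_power(times, flux, p))
```

For a detection claim one would additionally estimate the false-alarm probability of the peak, which is what the (elided) combined probability in the abstract refers to.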

  14. Maritime Situational Awareness Research Infrastructure (MSARI): Requirements and High Level Design

    DTIC Science & Technology

    2013-03-01

Exchange Model (NIEM)-Maritime [16], • Rapid Environmental Assessment (REA) database [17], • 2009 United States AIS Database, • PASTA-MARE project...upper/lower cases, plural, etc.) is very consistent and is pertinent for MSARI. The 2009 United States AIS and PASTA-MARE project databases, exclusively...designed for AIS, were found too restrictive for MSARI, where other types of data are stored. However, some lessons learned of the PASTA-MARE

  15. Data-Based School Improvement: The Role of Principals and School Supervisory Authorities within the Context of Low-Stakes Mandatory Proficiency Testing in Four German States

    ERIC Educational Resources Information Center

    Ramsteck, Carolin; Muslic, Barbara; Graf, Tanja; Maier, Uwe; Kuper, Harm

    2015-01-01

    Purpose: The purpose of this paper is to investigate how principals and school supervisory authorities understand and use feedback from mandatory proficiency tests (VERA) in the low-stakes context of Germany. For the analysis, the authors refer to a theoretical model of schools that differentiates between Autonomous and Managed Professional…

  16. A Review of Research on Metacognition in Science Education: Current and Future Directions

    ERIC Educational Resources Information Center

    Zohar, Anat; Barzilai, Sarit

    2013-01-01

    The goal of this study is to map the current state of research in the field of metacognition in science education, to identify key trends, and to discern areas and questions for future research. We conducted a systematic analysis of 178 studies published in peer-reviewed journals in the years 2000-2012 and indexed in the ERIC database. The…

  17. Impact of ecological and socioeconomic determinants on the spread of tallow tree in southern forest lands

    Treesearch

    Yuan Tan; Joseph Z. Fan; Christopher M. Oswalt

    2010-01-01

    Based on USDA Forest Service Forest Inventory and Analysis (FIA) database, relationships between the presence of tallow tree and related driving variables including forest landscape metrics, stand and site conditions, as well as natural and anthropogenic disturbances were analyzed for the southern states infested by tallow trees. Of the 9,966 re-measured FIA plots in...

  18. Creation of the NaSCoRD Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William

This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and the type of information that can be accessed from them.

  19. High-Performance Secure Database Access Technologies for HEP Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthew Vranicar; John Weicher

    2006-04-17

The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  20. State-by-state variations in cardiac rehabilitation participation are associated with educational attainment, income, and program availability.

    PubMed

    Gaalema, Diann E; Higgins, Stephen T; Shepard, Donald S; Suaya, Jose A; Savage, Patrick D; Ades, Philip A

    2014-01-01

Wide geographic variations in cardiac rehabilitation (CR) participation in the United States have been demonstrated but are not well understood. Socioeconomic factors such as educational attainment are robust predictors of many health-related behaviors, including smoking, obesity, physical activity, substance abuse, and cardiovascular disease. We investigated potential associations between state-level differences in educational attainment, other socioeconomic factors, CR program availability, and variations in CR participation. A retrospective database analysis was conducted using data from the US Census Bureau, the Centers for Disease Control and Prevention, and the 1997 Medicare database. The outcome of interest was CR participation rates by state, and predictors included state-level high school (HS) graduation rates (in 2001 and 1970), median household income, smoking rates, density of CR programs (programs per square mile and per state population), sex and race ratios, and median age. The relationship between HS graduation rates and CR participation by state was significant for both 2001 and 1970 (r = 0.64 and 0.44, respectively, P < .01). Adding the density of CR programs (per population) and income contributed significantly, with cumulative r values of 0.74 and 0.71 for the models using 2001 and 1970, respectively (Ps < .01). The amount of variance accounted for by each of the 3 variables differed between the 2001 and 1970 graduation rates, but both models were unaltered by including additional variables. State-level HS graduation rates, CR programs expressed as programs per population, and median income were strongly associated with geographic variations in CR participation rates.
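The reported state-level associations rest on the Pearson correlation coefficient r. A small self-contained sketch of that statistic, using invented toy data rather than the study's:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient: covariance of x and y divided by
    the product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical state-level pairs: HS graduation rate (%) vs CR participation (%).
grad_rate = [78, 82, 85, 88, 91]
cr_participation = [10, 12, 15, 18, 22]
r = pearson_r(grad_rate, cr_participation)
```

With five states whose participation rises monotonically with graduation rate, r comes out close to 1; the study's r = 0.64 indicates a strong but far from perfect state-level relationship.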

  1. Bibliographic Databases Outside of the United States.

    ERIC Educational Resources Information Center

    McGinn, Thomas P.; And Others

    1988-01-01

    Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…

  2. Assessment of NASA's Aircraft Noise Prediction Capability

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2012-01-01

A goal of NASA's Fundamental Aeronautics Program is the improvement of aircraft noise prediction. This document provides an assessment, conducted from 2006 to 2009, of the current state of the art in aircraft noise prediction, carefully analyzing results from prediction tools and experimental databases to determine errors and uncertainties, and comparing results to validate the predictions. The error analysis covers both the predictions and the experimental data and helps identify where improvements are required. This study is restricted to prediction methods and databases developed or sponsored by NASA, although in many cases they represent the current state of the art for industry. The present document begins with an introduction giving a general background for and a discussion on the process of this assessment, followed by eight chapters covering topics at both the system and the component levels. The topic areas, each with multiple contributors, are aircraft system noise, engine system noise, airframe noise, fan noise, liner physics, duct acoustics, jet noise, and propulsion airframe aeroacoustics.

  3. Common Core of Data (CCD): School Years 1996-1997 through 2000-2001. [CD-ROM].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The Common Core of Data (CCD) is NCES's primary database on elementary and secondary public education in the United States. CCD is a comprehensive, annual, national statistical database of all elementary and secondary schools and school districts, which contains data that are comparable across all states. The 50 states and the District of Columbia…

  4. The economic impact of Clostridium difficile infection: a systematic review.

    PubMed

    Nanwa, Natasha; Kendzerska, Tetyana; Krahn, Murray; Kwong, Jeffrey C; Daneman, Nick; Witteman, William; Mittmann, Nicole; Cadarette, Suzanne M; Rosella, Laura; Sander, Beate

    2015-04-01

    With Clostridium difficile infection (CDI) on the rise, knowledge of the current economic burden of CDI can inform decisions on interventions related to CDI. We systematically reviewed CDI cost-of-illness (COI) studies. We performed literature searches in six databases: MEDLINE, Embase, the Health Technology Assessment Database, the National Health Service Economic Evaluation Database, the Cost-Effectiveness Analysis Registry, and EconLit. We also searched gray literature and conducted reference list searches. Two reviewers screened articles independently. One reviewer abstracted data and assessed quality using a modified guideline for economic evaluations. The second reviewer validated the abstraction and assessment. We identified 45 COI studies between 1988 and June 2014. Most (84%) of the studies were from the United States, calculating costs of hospital stays (87%), and focusing on direct costs (100%). Attributable mean CDI costs ranged from $8,911 to $30,049 for hospitalized patients. Few studies stated resource quantification methods (0%), an epidemiological approach (0%), or a justified study perspective (16%) in their cost analyses. In addition, few studies conducted sensitivity analyses (7%). Forty-five COI studies quantified and confirmed the economic impact of CDI. Costing methods across studies were heterogeneous. Future studies should follow standard COI methodology, expand study perspectives (e.g., patient), and explore populations least studied (e.g., community-acquired CDI).

  5. Distribution System Upgrade Unit Cost Database

    DOE Data Explorer

    Horowitz, Kelsey

    2017-11-30

This database contains unit cost information for different components that may be used to integrate distributed photovoltaic (D-PV) systems onto distribution systems. Some of these upgrades and costs may also apply to the integration of other distributed energy resources (DER). Which components are required, and how many of each, is system-specific and should be determined by analyzing the effects of distributed PV at a given penetration level on the circuit of interest, in combination with engineering assessments on the efficacy of different solutions to increase the ability of the circuit to host additional PV as desired. The current state of the distribution system should always be considered in these types of analysis. The data in this database were collected from a variety of utilities, PV developers, technology vendors, and published research reports. Where possible, we have included information on the source of each data point and relevant notes. In some cases where the data provided are sensitive or proprietary, we were not able to specify the source, but provide other information that may be useful to the user (e.g. year, location where equipment was installed). NREL has carefully reviewed these sources prior to inclusion in this database. Additional information about the database, data sources, and assumptions is included in the "Unit_cost_database_guide.doc" file included in this submission. This guide provides important information on what costs are included in each entry. Please refer to this guide before using the unit cost database for any purpose.

  6. Introducing the CPL/MUW proteome database: interpretation of human liver and liver cancer proteome profiles by referring to isolated primary cells.

    PubMed

    Wimmer, Helge; Gundacker, Nina C; Griss, Johannes; Haudek, Verena J; Stättner, Stefan; Mohr, Thomas; Zwickl, Hannes; Paulitschke, Verena; Baron, David M; Trittner, Wolfgang; Kubicek, Markus; Bayer, Editha; Slany, Astrid; Gerner, Christopher

    2009-06-01

Interpretation of proteome data with a focus on biomarker discovery largely relies on comparative proteome analyses. Here, we introduce a database-assisted interpretation strategy based on proteome profiles of primary cells. Both 2-D-PAGE and shotgun proteomics are applied. We obtain high data concordance with these two different techniques. When applying mass analysis of tryptic spot digests from 2-D gels of cytoplasmic fractions, we typically identify several hundred proteins. Using the same protein fractions, we usually identify more than a thousand proteins by shotgun proteomics. The data consistency obtained when comparing these independent data sets exceeds 99% of the proteins identified in the 2-D gels. Many characteristic differences in protein expression of different cells can thus be independently confirmed. Our self-designed SQL database (CPL/MUW, the database of the Clinical Proteomics Laboratories at the Medical University of Vienna, accessible via www.meduniwien.ac.at/proteomics/database) facilitates (i) quality management of MS-based protein identification data, (ii) the detection of cell type-specific proteins, and (iii) the detection of molecular signatures of specific functional cell states. Here, we demonstrate how the interpretation of proteome profiles obtained from human liver tissue and hepatocellular carcinoma tissue is assisted by the CPL/MUW database. Therefore, we suggest that the use of reference experiments supported by a tailored database may substantially facilitate data interpretation of proteome profiling experiments.

  7. Implementation of a computer database testing and analysis program.

    PubMed

    Rouse, Deborah P

    2007-01-01

    The author is the coordinator of a computer software database testing and analysis program implemented in an associate degree nursing program. Computer software database programs help support the testing development and analysis process. Critical thinking is measurable and promoted with their use. The reader of this article will learn what is involved in procuring and implementing a computer database testing and analysis program in an academic nursing program. The use of the computerized database for testing and analysis will be approached as a method to promote and evaluate the nursing student's critical thinking skills and to prepare the nursing student for the National Council Licensure Examination.

  8. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  9. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  10. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  11. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  12. United States Army Medical Materiel Development Activity: 1997 Annual Report.

    DTIC Science & Technology

    1997-01-01

business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS...MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing...System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was

  13. A case study for a digital seabed database: Bohai Sea engineering geology database

    NASA Astrophysics Data System (ADS)

    Tianyun, Su; Shikui, Zhai; Baohua, Liu; Ruicai, Liang; Yanpeng, Zheng; Yong, Wang

    2006-07-01

This paper discusses the design plan of the ORACLE-based Bohai Sea engineering geology database structure, covering requirements analysis, conceptual structure analysis, logical structure analysis, physical structure analysis and security design. In the study, we used the object-oriented Unified Modeling Language (UML) to model the conceptual structure of the database, and used the powerful data-management functions that the object-oriented and relational database ORACLE provides to organize and manage the storage space and improve its security performance. By this means, the database can provide rapid and highly effective performance in data storage, maintenance and query to satisfy the application requirements of the Bohai Sea Oilfield Paradigm Area Information System.
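The kind of relational design described above can be sketched as a toy two-table schema with a foreign key relating samples to boreholes. All table and column names below are illustrative assumptions (not the paper's schema), and in-memory SQLite stands in for ORACLE:

```python
import sqlite3

# Hypothetical engineering-geology schema: boreholes and their sediment samples.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE borehole (
        borehole_id INTEGER PRIMARY KEY,
        longitude   REAL NOT NULL,
        latitude    REAL NOT NULL
    );
    CREATE TABLE sediment_sample (
        sample_id   INTEGER PRIMARY KEY,
        borehole_id INTEGER NOT NULL REFERENCES borehole(borehole_id),
        depth_m     REAL NOT NULL,
        shear_strength_kpa REAL
    );
""")
conn.execute("INSERT INTO borehole VALUES (1, 120.5, 38.9)")
conn.execute("INSERT INTO sediment_sample VALUES (1, 1, 2.5, 18.0)")
conn.execute("INSERT INTO sediment_sample VALUES (2, 1, 5.0, 24.5)")

# A typical query: all samples for one borehole, ordered by depth.
rows = conn.execute(
    "SELECT depth_m, shear_strength_kpa FROM sediment_sample "
    "WHERE borehole_id = 1 ORDER BY depth_m").fetchall()
```

The paper's logical-structure step is essentially deciding on such tables, keys, and relationships before committing to ORACLE's physical storage and security features.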

  14. Analysis of the evidence-practice gap to facilitate proper medical care for the elderly: investigation, using databases, of utilization measures for National Database of Health Insurance Claims and Specific Health Checkups of Japan (NDB).

    PubMed

    Nakayama, Takeo; Imanaka, Yuichi; Okuno, Yasushi; Kato, Genta; Kuroda, Tomohiro; Goto, Rei; Tanaka, Shiro; Tamura, Hiroshi; Fukuhara, Shunichi; Fukuma, Shingo; Muto, Manabu; Yanagita, Motoko; Yamamoto, Yosuke

    2017-06-06

As Japan becomes a super-aging society, presentation of the best ways to provide medical care for the elderly, and the direction of that care, are important national issues. Elderly people have multi-morbidity, with numerous medical conditions, and use many medical resources for complex treatment patterns. This increases the likelihood of inappropriate medical practices and an evidence-practice gap. The present study aimed to derive findings that are applicable to policy from an elucidation of the actual state of medical care for the elderly; to establish a foundation for the utilization of the National Database of Health Insurance Claims and Specific Health Checkups of Japan (NDB); and to present measures for the utilization of existing databases in parallel with NDB validation. Cross-sectional and retrospective cohort studies were conducted using the NDB built by the Ministry of Health, Labour and Welfare of Japan, private health insurance claims databases, and the Kyoto University Hospital database (including related hospitals). Medical practices (drug prescription, interventional procedures, testing) related to four issues (potentially inappropriate medication, cancer therapy, chronic kidney disease treatment, and end-of-life care) will be described. The relationships between these issues and clinical outcomes (death, initiation of dialysis, and other adverse events) will be evaluated, if possible.

  15. Improving national-scale invasion maps: Tamarisk in the western United States

    USGS Publications Warehouse

    Jarnevich, C.S.; Evangelista, P.; Stohlgren, T.J.; Morisette, J.

    2011-01-01

    New invasions, better field data, and novel spatial-modeling techniques often drive the need to revisit previous maps and models of invasive species. Such is the case with the at least 10 species of Tamarix, which are invading riparian systems in the western United States and expanding their range throughout North America. In 2006, we developed a National Tamarisk Map by using a compilation of presence and absence locations with remotely sensed data and statistical modeling techniques. Since the publication of that work, our database of Tamarix distributions has grown significantly. Using the updated database of species occurrence, new predictor variables, and the maximum entropy (Maxent) model, we have revised our potential Tamarix distribution map for the western United States. Distance-to-water was the strongest predictor in the model (58.1%), while mean temperature of the warmest quarter was the second best predictor (18.4%). Model validation, averaged from 25 model iterations, indicated that our analysis had strong predictive performance (AUC = 0.93) and that the extent of Tamarix distributions is much greater than previously thought. The southwestern United States had the greatest suitable habitat, and this result differed from the 2006 model. Our work highlights the utility of iterative modeling for invasive species habitat modeling as new information becomes available. © 2011.
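
    The AUC reported above can be read as a rank statistic: the probability that the model scores a randomly chosen presence location higher than a randomly chosen background location. A minimal sketch of that computation, using hypothetical scores rather than values from the study:

```python
# Hedged sketch: rank-based AUC (the Mann-Whitney statistic), the kind of
# validation metric reported for the Maxent habitat model above.
# All scores below are hypothetical illustrations.

def auc(presence_scores, background_scores):
    """AUC = P(random presence score > random background score),
    counting ties as 0.5."""
    wins = 0.0
    for p in presence_scores:
        for b in background_scores:
            if p > b:
                wins += 1.0
            elif p == b:
                wins += 0.5
    return wins / (len(presence_scores) * len(background_scores))

presence = [0.9, 0.8, 0.75, 0.6]    # model output at known occurrence sites
background = [0.3, 0.4, 0.55, 0.7]  # model output at background points

print(auc(presence, background))  # -> 0.9375
```

    An AUC of 0.5 would indicate no discrimination; values near 1, as in the abstract's 0.93, indicate strong separation of presence from background.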

  16. Historical Analysis and Characterization of Ground-Level Ozone for Canada and the United States

    NASA Astrophysics Data System (ADS)

    Lin, H.; Li, H.; Auld, H.

    2003-12-01

    Ground-level ozone has long been recognized as an important health and ecosystem-related air quality concern in Canada and the United States. In this work we seek to understand the characteristics of ground-level ozone conditions in Canada and the United States to support the Ozone Annex under the Canada-U.S. Air Quality Agreement. Our analyses are based upon data collected by the Canadian National Air Pollution Surveillance (NAPS) network; the NAPS database has also been expanded to include U.S. EPA ground-level ozone data. Historical ozone data from 1974 to 2002 at a total of 538 stations (253 Canadian stations and 285 U.S. stations) were statistically analyzed using several methodologies, including the Canada Wide Standard (CWS). A more detailed analysis, including hourly, daily, monthly, seasonal and yearly ozone concentration distributions and trends, was undertaken for 54 stations.

  17. Comparative proteome analysis of Milnesium tardigradum in early embryonic state versus adults in active and anhydrobiotic state.

    PubMed

    Schokraie, Elham; Warnken, Uwe; Hotz-Wagenblatt, Agnes; Grohme, Markus A; Hengherr, Steffen; Förster, Frank; Schill, Ralph O; Frohme, Marcus; Dandekar, Thomas; Schnölzer, Martina

    2012-01-01

    Tardigrades have fascinated researchers for more than 300 years because of their extraordinary capability to undergo cryptobiosis and survive extreme environmental conditions. However, the survival mechanisms of tardigrades are still poorly understood mainly due to the absence of detailed knowledge about the proteome and genome of these organisms. Our study was intended to provide a basis for the functional characterization of expressed proteins in different states of tardigrades. High-throughput, high-accuracy proteomics in combination with a newly developed tardigrade specific protein database resulted in the identification of more than 3000 proteins in three different states: early embryonic state and adult animals in active and anhydrobiotic state. This comprehensive proteome resource includes protein families such as chaperones, antioxidants, ribosomal proteins, cytoskeletal proteins, transporters, protein channels, nutrient reservoirs, and developmental proteins. A comparative analysis of protein families in the different states was performed by calculating the exponentially modified protein abundance index which classifies proteins in major and minor components. This is the first step to analyzing the proteins involved in early embryonic development, and furthermore proteins which might play an important role in the transition into the anhydrobiotic state.
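
    The exponentially modified protein abundance index (emPAI) mentioned above is conventionally defined as 10^(N_observed / N_observable) - 1, where N_observed is the number of distinct peptides identified for a protein and N_observable the number theoretically detectable. A minimal sketch; the peptide counts and the major/minor cutoff below are hypothetical illustrations, not values from the study:

```python
# Hedged sketch of the emPAI calculation used above to classify proteins
# as major or minor components. Counts and cutoff are hypothetical.

def empai(n_observed, n_observable):
    """emPAI = 10^(N_observed / N_observable) - 1."""
    return 10 ** (n_observed / n_observable) - 1

def classify(n_observed, n_observable, cutoff=1.0):
    """Label a protein using a hypothetical abundance cutoff."""
    return "major" if empai(n_observed, n_observable) >= cutoff else "minor"

print(round(empai(5, 10), 3))  # 10^0.5 - 1 -> 2.162
print(classify(5, 10))         # -> major
print(classify(1, 20))         # -> minor
```

    The exponential transform compensates for the roughly logarithmic relationship between peptide counts and protein amount, which is why the index, rather than the raw coverage ratio, is used for abundance ranking.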

  18. Comparative proteome analysis of Milnesium tardigradum in early embryonic state versus adults in active and anhydrobiotic state

    PubMed Central

    Schokraie, Elham; Warnken, Uwe; Hotz-Wagenblatt, Agnes; Grohme, Markus A.; Hengherr, Steffen; Förster, Frank; Schill, Ralph O.; Frohme, Marcus; Dandekar, Thomas; Schnölzer, Martina

    2012-01-01

    Tardigrades have fascinated researchers for more than 300 years because of their extraordinary capability to undergo cryptobiosis and survive extreme environmental conditions. However, the survival mechanisms of tardigrades are still poorly understood mainly due to the absence of detailed knowledge about the proteome and genome of these organisms. Our study was intended to provide a basis for the functional characterization of expressed proteins in different states of tardigrades. High-throughput, high-accuracy proteomics in combination with a newly developed tardigrade specific protein database resulted in the identification of more than 3000 proteins in three different states: early embryonic state and adult animals in active and anhydrobiotic state. This comprehensive proteome resource includes protein families such as chaperones, antioxidants, ribosomal proteins, cytoskeletal proteins, transporters, protein channels, nutrient reservoirs, and developmental proteins. A comparative analysis of protein families in the different states was performed by calculating the exponentially modified protein abundance index which classifies proteins in major and minor components. This is the first step to analyzing the proteins involved in early embryonic development, and furthermore proteins which might play an important role in the transition into the anhydrobiotic state. PMID:23029181

  19. Indian Renewable Energy and Energy Efficiency Policy Database (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bushe, S.

    2013-09-01

    This fact sheet provides an overview of the Indian Renewable Energy and Energy Efficiency Policy Database (IREEED), developed jointly by the United States Department of Energy and India's Ministry of New and Renewable Energy. IREEED provides succinct summaries of India's central and state government policies and incentives related to renewable energy and energy efficiency. The online, public database was developed under the U.S.-India Energy Dialogue and the Clean Energy Solutions Center.

  20. Perianesthetic and Anesthesia-Related Mortality in a Southeastern United States Population: A Longitudinal Review of a Prospectively Collected Quality Assurance Data Base.

    PubMed

    Pollard, Richard J; Hopkins, Thomas; Smith, C Tyler; May, Bryan V; Doyle, James; Chambers, C Labron; Clark, Reese; Buhrman, William

    2018-05-21

    Perianesthetic mortality (death occurring within 48 hours of an anesthetic) continues to vary widely depending on the study population examined. The authors conducted this study in a private-practice physician group that covers multiple anesthetizing locations in the Southeastern United States. This group has in place a robust quality assurance (QA) database to follow all patients undergoing anesthesia. With this study, we estimate the incidence of anesthesia-related and perianesthetic mortality in this QA database. Following institutional review board approval, data from 2011 to 2016 were obtained from the QA database of a large, community-based anesthesiology group practice. The physician practice covers 233 anesthetizing locations across 20 facilities in 2 US states. All detected cases of perianesthetic death were extracted from the database and compared to the patients' electronic medical records. These cases were further examined by a committee of 3 anesthesiologists to determine whether the death was anesthesia related (a perioperative death solely attributable to either the anesthesia provider or anesthetic technique), anesthesia contributory (a perioperative death in which anesthesia's role could not be entirely excluded), or not due to anesthesia. A total of 785,467 anesthesia procedures were examined from the study period. A total of 592 cases of perianesthetic death were detected, giving an overall death rate of 75.37 in 100,000 cases (95% CI, 69.5-81.7). Mortality judged to be anesthesia related was found in 4 cases, giving a mortality rate of 0.509 in 100,000 (95% CI, 0.198-1.31). Mortality judged to be anesthesia contributory was found in 18 cases, giving a mortality rate of 2.29 in 100,000 patients (95% CI, 1.45-3.7). A total of 570 cases were judged to be nonanesthesia related, giving an incidence of 72.6 per 100,000 anesthetics (95% CI, 69.3-75.7). In a large, comprehensive database representing the full range of anesthesia practices and locations in the Southeastern United States, the rate of anesthesia-related death was 0.509 in 100,000 (95% CI, 0.198-1.31). Future in-depth analysis of the epidemiology of perianesthetic deaths will be reported in later studies.
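
    The per-100,000 rates quoted above follow directly from the case counts and the 785,467-procedure denominator. A minimal sketch of that arithmetic (the confidence-interval method is not described in the record, so it is not reproduced):

```python
# Hedged sketch of the rate arithmetic from the abstract above:
# deaths per 100,000 anesthetics = events / procedures * 100,000.

def rate_per_100k(events, denominator):
    return events / denominator * 100_000

total = 785_467  # anesthesia procedures examined

print(round(rate_per_100k(592, total), 2))  # overall perianesthetic -> 75.37
print(round(rate_per_100k(4, total), 3))    # anesthesia related -> 0.509
print(round(rate_per_100k(18, total), 2))   # anesthesia contributory -> 2.29
print(round(rate_per_100k(570, total), 1))  # not anesthesia related -> 72.6
```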

  1. The Longhorn Array Database (LAD): An Open-Source, MIAME compliant implementation of the Stanford Microarray Database (SMD)

    PubMed Central

    Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R

    2003-01-01

    Background: The power of microarray analysis can be realized only if data are systematically archived and linked to biological annotations as well as analysis algorithms. Description: The Longhorn Array Database (LAD) is a MIAME-compliant microarray database that operates on PostgreSQL and Linux. It is a fully open-source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is available at Conclusions: Our development of LAD provides a simple, free, open, reliable and proven solution for the storage and analysis of two-color microarray data. PMID:12930545

  2. Methods for Estimating Withdrawal and Return Flow by Census Block for 2005 and 2020 for New Hampshire

    USGS Publications Warehouse

    Hayes, Laura; Horn, Marilee A.

    2009-01-01

    The U.S. Geological Survey, in cooperation with the New Hampshire Department of Environmental Services, estimated the amount of water demand, consumptive use, withdrawal, and return flow for each U.S. Census block in New Hampshire for the years 2005 (current) and 2020. Estimates of domestic, commercial, industrial, irrigation, and other nondomestic water use were derived through the use and innovative integration of several State and Federal databases, and by use of previously developed techniques. The New Hampshire Water Demand database was created as part of this study to store and integrate State of New Hampshire data central to the project. Within the New Hampshire Water Demand database, a lookup table was created to link the State databases and identify water users common to more than one database. The lookup table also allowed identification of withdrawal and return-flow locations of registered and unregistered commercial, industrial, agricultural, and other nondomestic users. Geographic information system data from the State were used in combination with U.S. Census Bureau spatial data to locate and quantify withdrawals and return flow for domestic users in each census block. Analyzing and processing the most recently available data resulted in census-block estimations of 2005 water use. Applying population projections developed by the State to the data sets enabled projection of water use for the year 2020. The results for each census block are stored in the New Hampshire Water Demand database and may be aggregated to larger political areas or watersheds to assess relative hydrologic stress on the basis of current and potential water availability.
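
    The lookup-table idea described above, a common identifier linking water users that appear in more than one State database, can be sketched roughly as follows. All database names, keys, and values are hypothetical illustrations, not structures from the New Hampshire Water Demand database:

```python
# Hedged sketch: a lookup table that maps a common user ID to each source
# database's own key, so users common to more than one database can be
# identified. Everything here is hypothetical.

withdrawal_db = {"W-101": {"name": "Acme Mfg", "mgd": 0.8}}  # withdrawals
discharge_db = {"D-55": {"name": "Acme Mfg", "mgd": 0.6}}    # return flow

# One row per water user; None means the user is absent from that database.
lookup = [
    {"user_id": 1, "withdrawal_key": "W-101", "discharge_key": "D-55"},
    {"user_id": 2, "withdrawal_key": None, "discharge_key": "D-90"},
]

def users_in_both(rows):
    """IDs of users registered in both source databases."""
    return [r["user_id"] for r in rows
            if r["withdrawal_key"] and r["discharge_key"]]

print(users_in_both(lookup))  # -> [1]
```

    In a relational implementation the same idea is a join table keyed on the common identifier; the dictionary form above is only meant to show the linkage.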

  3. The Forest Inventory and Analysis Database: Database description and users manual version 4.0 for Phase 2

    Treesearch

    Sharon W. Woudenberg; Barbara L. Conkling; Barbara M. O' Connell; Elizabeth B. LaPoint; Jeffery A. Turner; Karen L. Waddell

    2010-01-01

    This document is based on previous documentation of the nationally standardized Forest Inventory and Analysis database (Hansen and others 1992; Woudenberg and Farrenkopf 1995; Miles and others 2001). Documentation of the structure of the Forest Inventory and Analysis database (FIADB) for Phase 2 data, as well as codes and definitions, is provided. Examples for...

  4. Analysis of Total Food Intake and Composition of Individual's ...

    EPA Pesticide Factsheets

    EPA released the final report, Analysis of Total Food Intake and Composition of Individual’s Diet Based on USDA’s 1994-1996, 98 Continuing Survey of Food Intakes by Individuals (CSFII). The consumption of food by the general population is a significant route of potential exposure to hazardous substances that are present in the environment. For this reason, a thorough analysis of the dietary habits of the American public would aid in the identification of potential exposure pathways. To that end, the EPA developed per capita food intake rates for various food items and food categories using databases developed by the United States Department of Agriculture (USDA). These intake rates were incorporated into EPA's 1997 Exposure Factors Handbook. Since that time, EPA has recommended that the food intake study be updated and expanded to include a more comprehensive analysis of food intake. That analysis is presented in this document. The purpose of this study is to characterize the consumption of food by the people of the United States.

  5. Integrating forensic information in a crime intelligence database.

    PubMed

    Rossy, Quentin; Ioset, Sylvain; Dessimoz, Damien; Ribaux, Olivier

    2013-07-10

    Since 2008, intelligence units of six states in the western part of Switzerland have been sharing a common database for the analysis of high-volume crimes. On a daily basis, events reported to the police are analysed, filtered and classified to detect crime repetitions and interpret the crime environment. Several forensic outcomes are integrated in the system, such as matches of traces with persons, and links between scenes detected by the comparison of forensic case data. Systematic procedures have been established to integrate links inferred mainly from DNA profiles, shoemark patterns and images. A statistical overview of a retrospective dataset of series from 2009 to 2011 in the database shows, for instance, the number of repetitions detected or confirmed and augmented by forensic case data. The time needed to obtain forensic intelligence, with regard to the type of marks treated, is seen as a critical issue. Furthermore, the underlying process of integrating forensic intelligence into the crime intelligence database raised several difficulties regarding the acquisition of data and the models used in the forensic databases. The solutions found and the operational procedures adopted are described and discussed. This process forms the basis for many other research efforts aimed at developing forensic intelligence models. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. Guidelines for establishing and maintaining construction quality databases : tech brief.

    DOT National Transportation Integrated Search

    2006-12-01

    Construction quality databases contain a variety of construction-related data that characterize the quality of materials and workmanship. The primary purpose of construction quality databases is to help State highway agencies (SHAs) assess the qualit...

  7. The U.S. Geological Survey’s nonindigenous aquatic species database: over thirty years of tracking introduced aquatic species in the United States (and counting)

    USGS Publications Warehouse

    Fuller, Pamela L.; Neilson, Matthew E.

    2015-01-01

    The U.S. Geological Survey’s Nonindigenous Aquatic Species (NAS) Database has tracked introductions of freshwater aquatic organisms in the United States for the past four decades. A website provides access to occurrence reports, distribution maps, and fact sheets for more than 1,000 species. The site also includes an on-line reporting system and an alert system for new occurrences. We provide an historical overview of the database, a description of its current capabilities and functionality, and a basic characterization of the data contained within the database.

  8. A Quantum Private Query Protocol for Enhancing both User and Database Privacy

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Hua; Bai, Xue-Wei; Li, Lei-Lei; Shi, Wei-Min; Yang, Yu-Guang

    2018-01-01

    In order to protect the privacy of both the query user and the database, several QKD-based quantum private query (QPQ) protocols have been proposed. Unfortunately, some of them cannot perfectly resist internal attack from the database, while others ensure better user privacy only at the cost of reduced database privacy. In this paper, a novel two-way QPQ protocol is proposed to ensure the privacy of both sides of the communication. In our protocol, the user prepares the initial quantum states and derives the key bit by comparing the initial quantum state with the outcome state returned from the database under ctrl or shift mode, instead of announcing two non-orthogonal qubits as in other protocols, which may leak partial secret information. In this way, not only is the privacy of the database ensured, but user privacy is also strengthened. Furthermore, our protocol achieves loss tolerance, cheat sensitivity, and resistance to the JM attack. Supported by National Natural Science Foundation of China under Grant Nos. U1636106, 61572053, 61472048, 61602019, 61502016; Beijing Natural Science Foundation under Grant Nos. 4152038, 4162005; Basic Research Fund of Beijing University of Technology (No. X4007999201501); The Scientific Research Common Program of Beijing Municipal Commission of Education under Grant No. KM201510005016

  9. Arkansas Groundwater-Quality Network

    USGS Publications Warehouse

    Pugh, Aaron L.; Jackson, Barry T.; Miller, Roger

    2014-01-01

    Arkansas is the fourth largest user of groundwater in the United States, where groundwater accounts for two-thirds of the total water use. Groundwater use in the State increased by 510 percent between 1965 and 2005 (Holland, 2007). The Arkansas Groundwater-Quality Network is a Web map interface (http://ar.water.usgs.gov/wqx) that provides rapid access to the U.S. Geological Survey’s (USGS) National Water Information System (NWIS) and the U.S. Environmental Protection Agency’s (USEPA) STOrage and RETrieval (STORET) databases of ambient water information. The interface enables users to perform simple graphical analysis and download selected water-quality data.

  10. Multi-Sensor Scene Synthesis and Analysis

    DTIC Science & Technology

    1981-09-01

    [No abstract in this record; only table-of-contents fragments survive: Quad Trees for Image Representation and Processing; Databases; Definitions and Basic Concepts; Use of Databases in Hierarchical Scene Analysis; Use of Relational Tables; Multisensor Image Database Systems (MIDAS); Relational Database System for Pictures; Relational Pictorial Database.]

  11. Very fast road database verification using textured 3D city models obtained from airborne imagery

    NASA Astrophysics Data System (ADS)

    Bulatov, Dimitri; Ziems, Marcel; Rottensteiner, Franz; Pohl, Melanie

    2014-10-01

    Road databases are known to be an important part of any geodata infrastructure, e.g. as the basis for urban planning or emergency services. Updating road databases for crisis events must be performed quickly and with the highest possible degree of automation. We present a semi-automatic algorithm for road verification using textured 3D city models, starting from aerial or even UAV images. This algorithm contains two processes, which exchange input and output but otherwise run independently of each other. These processes are textured urban terrain reconstruction and road verification. The first process contains a dense photogrammetric reconstruction of the 3D geometry of the scene using depth maps. The second process is our core procedure, since it contains various methods for road verification. Each method represents a unique road model and a specific strategy, and thus is able to deal with a specific type of roads. Each method is designed to provide two probability distributions, where the first describes the state of a road object (correct, incorrect) and the second describes the state of its underlying road model (applicable, not applicable). Based on Dempster-Shafer theory, both distributions are mapped to a single distribution that refers to three states: correct, incorrect, and unknown. With respect to the interaction of both processes, the normalized elevation map and the digital orthophoto generated during 3D reconstruction are the necessary input - together with initial road database entries - for the road verification process. If the entries of the database are too obsolete or not available at all, sensor data evaluation enables classification of the road pixels of the elevation map, followed by road map extraction by means of vectorization and filtering of the geometrically and topologically inconsistent objects. Depending on time constraints and the availability of a geo-database for buildings, the urban terrain reconstruction procedure outputs semantic models of buildings, trees, and ground. Buildings and ground are textured by means of the available images. This facilitates orientation in the model and the interactive verification of the road objects that were initially classified as unknown. The three main modules of the texturing algorithm are pose estimation (if the videos are not geo-referenced), occlusion analysis, and texture synthesis.
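
    One way to read the Dempster-Shafer step described above is as the combination of two mass functions over the frame {correct, incorrect}, with mass assigned to the whole frame playing the role of "unknown". A minimal sketch of Dempster's rule of combination; the mass values below are hypothetical, not numbers from the paper:

```python
# Hedged sketch of Dempster's rule of combination on the frame
# {correct, incorrect}; mass on the full frame represents "unknown".
# Input masses are hypothetical illustrations.
from itertools import product

def combine(m1, m2):
    """Dempster's rule for two mass functions keyed by frozensets:
    intersect focal sets, accumulate products, renormalize by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # contradictory evidence is discarded
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

C, I = frozenset({"correct"}), frozenset({"incorrect"})
theta = C | I  # the whole frame, read as "unknown"

road_state = {C: 0.6, I: 0.1, theta: 0.3}  # evidence about the road object
model_fit = {C: 0.5, I: 0.2, theta: 0.3}   # evidence about model applicability

result = combine(road_state, model_fit)
for label, key in [("correct", C), ("incorrect", I), ("unknown", theta)]:
    print(label, round(result[key], 3))
# correct 0.759, incorrect 0.133, unknown 0.108
```

    Whether model applicability is best treated as evidence on the same frame is a modeling choice; the sketch only illustrates the combination mechanics that yield the three output states.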

  12. Analysis of a Meteorological Database for London Heathrow in the Context of Wake Vortex Hazards

    NASA Astrophysics Data System (ADS)

    Agnew, P.; Ogden, D. J.; Hoad, D. J.

    2003-04-01

    A database of meteorological parameters collected by aircraft arriving at LHR has recently been compiled. We have used the recorded variation of temperature and wind with height to deduce the 'wake vortex behaviour class' (WVBC) along the glide slope, as experienced by each flight. The integrated state of the glide slope has been investigated, allowing us to estimate the proportion of time for which the wake vortex threat is reduced, due to either rapid decay or transport off the glide slope. A numerical weather prediction model was used to forecast the meteorological parameters for periods coinciding with the aircraft data. This allowed us to perform a comparison of forecast WVBC with those deduced from the aircraft measurements.

  13. US EPA record of decision review for landfills: Sanitary landfill (740-G), Savannah River Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-06-01

    This report presents the results of a review of the US Environmental Protection Agency (EPA) Record of Decision System (RODS) database search conducted to identify Superfund landfill sites where a Record of Decision (ROD) has been prepared by EPA, the States or the US Army Corps of Engineers describing the selected remedy at the site. ROD abstracts from the database were reviewed to identify site information including site type, contaminants of concern, components of the selected remedy, and cleanup goals. Only RODs from landfill sites were evaluated so that the results of the analysis can be used to support the remedy selection process for the Sanitary Landfill at the Savannah River Site (SRS).

  14. A community effort to construct a gravity database for the United States and an associated Web portal

    USGS Publications Warehouse

    Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.

    2006-01-01

    Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database, which will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.

  15. Measuring ability to assess claims about treatment effects: a latent trait analysis of items from the 'Claim Evaluation Tools' database using Rasch modelling.

    PubMed

    Austvoll-Dahlgren, Astrid; Guttersrud, Øystein; Nsangi, Allen; Semakula, Daniel; Oxman, Andrew D

    2017-05-25

    The Claim Evaluation Tools database contains multiple-choice items for measuring people's ability to apply the key concepts they need to know to assess treatment claims. We assessed items from the database using Rasch analysis to develop an outcome measure for use in two randomised trials in Uganda. Rasch analysis is a form of psychometric testing relying on item response theory; it is a dynamic way of developing outcome measures that are valid and reliable. Our objective was to assess the validity, reliability and responsiveness of 88 items addressing 22 key concepts using Rasch analysis. We administered four sets of multiple-choice items in English to 1114 people in Uganda and Norway, of whom 685 were children and 429 were adults (including 171 health professionals). We scored all items dichotomously. We explored summary and individual fit statistics using the RUMM2030 analysis package. We used SPSS to perform distractor analysis. Most items conformed well to the Rasch model, but some items needed revision. Overall, the four item sets had satisfactory reliability. We did not identify significant response dependence between any pairs of items and, overall, the magnitude of multidimensionality in the data was acceptable. The items had a high level of difficulty. Most of the items conformed well to the Rasch model's expectations. Following revision of some items, we concluded that most of the items were suitable for use in an outcome measure for evaluating the ability of children or adults to assess treatment claims. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
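
    For context, the dichotomous Rasch model underlying this analysis places persons and items on one common scale: the probability of a correct response depends only on the difference between person ability (theta) and item difficulty (b). A minimal sketch with hypothetical values, not estimates from the study:

```python
# Hedged sketch of the dichotomous Rasch model:
# P(correct) = exp(theta - b) / (1 + exp(theta - b)),
# i.e. a logistic function of ability minus difficulty.
# The theta and b values below are hypothetical.
import math

def p_correct(theta, b):
    """Probability of answering a dichotomous item correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(p_correct(0.0, 0.0))            # ability equals difficulty -> 0.5
print(round(p_correct(1.0, 0.0), 3))  # able person, average item -> 0.731
print(round(p_correct(0.0, 2.0), 3))  # hard item -> 0.119
```

    The abstract's observation that "the items had a high level of difficulty" corresponds to item b values sitting above the ability distribution, pushing probabilities toward the low end as in the last line.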

  16. Scientific Data Collection/Analysis: 1994-2004

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic includes technologies for lightweight, temperature-tolerant, radiation-hard sensors. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.

  17. 40 CFR 312.26 - Reviews of Federal, State, Tribal, and local government records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... use restrictions, applicable to the subject property. (c) With regard to nearby or adjoining properties, the review of federal, tribal, state, and local government records or databases of government... records of reported releases or threatened releases. Such records or databases containing such records and...

  18. Transportation-markings database : railway signals, signs, marks and markers. Part I 3, Volume 3, additional studies

    DOT National Transportation Integrated Search

    2009-01-01

    The Database demonstrates the unity and commonality of T-M but presents each one in its separate state. Yet in that process the full panoply of T-M is unfolded, including their shared and connected state. There are thousands of Transportation-Markings...

  19. 42 CFR 435.407 - Types of acceptable documentary evidence of citizenship.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... under penalty of perjury by a residential care facility director or administrator on behalf of an... documented and recorded in a State database subsequent changes in eligibility should not require repeating.... The State need only check its databases to verify that the individual already established citizenship...

  20. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  1. Comparison of costs and discharge outcomes for patients hospitalized for ischemic or hemorrhagic stroke with or without atrial fibrillation in the United States.

    PubMed

    Pan, Xianying; Simon, Teresa A; Hamilton, Melissa; Kuznik, Andreas

    2015-05-01

    This retrospective analysis investigated the impact of baseline clinical characteristics, including atrial fibrillation (AF), on hospital discharge status (to home or continuing care), mortality, length of hospital stay, and treatment costs in patients hospitalized for stroke. The analysis included adult patients hospitalized with a primary diagnosis of ischemic or hemorrhagic stroke between January 2006 and June 2011 from the Premier alliance database, a large nationally representative database of inpatient health records. Patients included in the analysis were categorized as with or without AF, based on the presence or absence of a secondary listed diagnosis of AF. Irrespective of stroke type (ischemic or hemorrhagic), AF was associated with an increased risk of mortality during the index hospitalization event, as well as a higher probability of discharge to a continuing care facility, longer duration of stay, and higher treatment costs. In patients hospitalized for a stroke event, AF appears to be an independent risk factor for in-hospital mortality, discharge to continuing care, length of hospital stay, and increased treatment costs.

  2. Bioinformatics analysis of transcriptome dynamics during growth in Angus cattle longissimus muscle.

    PubMed

    Moisá, Sonia J; Shike, Daniel W; Graugnard, Daniel E; Rodriguez-Zas, Sandra L; Everts, Robin E; Lewin, Harris A; Faulkner, Dan B; Berger, Larry L; Loor, Juan J

    2013-01-01

    Transcriptome dynamics in the longissimus muscle (LM) of young Angus cattle were evaluated at 0, 60, 120, and 220 days from early-weaning. Bioinformatic analysis was performed using the dynamic impact approach (DIA) by means of Kyoto Encyclopedia of Genes and Genomes (KEGG) and Database for Annotation, Visualization and Integrated Discovery (DAVID) databases. Between 0 and 120 days (growing phase), most of the highly-impacted pathways (e.g., ascorbate and aldarate metabolism, drug metabolism, cytochrome P450 and retinol metabolism) were inhibited. The phase between 120 and 220 days (finishing phase) was characterized by the most striking differences, with 3,784 differentially expressed genes (DEGs). Analysis of those DEGs revealed that the most impacted KEGG canonical pathway was glycosylphosphatidylinositol (GPI)-anchor biosynthesis, which was inhibited. Furthermore, inhibition of calpastatin and activation of tyrosine aminotransferase ubiquitination at 220 days promotes proteasomal degradation, while the concurrent activation of ribosomal proteins promotes protein synthesis. Therefore, the balance of these processes likely results in a steady state of protein turnover during the finishing phase. Results underscore the importance of transcriptome dynamics in LM during growth.

  3. Clean Energy in City Codes: A Baseline Analysis of Municipal Codification across the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Jeffrey J.; Aznar, Alexandra; Dane, Alexander

    Municipal governments in the United States are well positioned to influence clean energy (energy efficiency and alternative energy) and transportation technology and strategy implementation within their jurisdictions through planning, programs, and codification. Municipal governments are leveraging planning processes and programs to shape their energy futures. There is limited understanding in the literature related to codification, the primary way that municipal governments enact enforceable policies. The authors fill the gap in the literature by documenting the status of municipal codification of clean energy and transportation across the United States. More directly, we leverage online databases of municipal codes to develop national and state-specific representative samples of municipal governments by population size. Our analysis finds that municipal governments with the authority to set residential building energy codes within their jurisdictions frequently do so. In some cases, communities set codes higher than their respective state governments. Examination of codes across the nation indicates that municipal governments are employing their code as a policy mechanism to address clean energy and transportation.

  4. Regular Consumption of Sauerkraut and Its Effect on Human Health: A Bibliometric Analysis

    PubMed Central

    Ostermann, Thomas; Boehm, Katja; Molsberger, Friedrich

    2014-01-01

    Background: Sauerkraut is one of the most common and oldest forms of preserving cabbage and can be traced back as a food source to the 4th century BC. It contains a large quantity of lactic acid and tyramines, as well as vitamins and minerals, and has few calories. Objective: We aimed to provide an overview regarding the evidence of the effects of sauerkraut on human health by means of a bibliometric analysis. Methodology: Electronic databases (Medline, AMED, CamBase, CamQuest, the Cochrane Central Register of Controlled Trials, the Database of Abstracts of Reviews of Effects, the Cochrane Database of Systematic Reviews, EMBASE, the Karger-Publisher and the Thieme-Publisher databases) were searched from their inception until September 2012. Results: The search revealed 139 publications ranging over a 90-year period from 1921 to 2012. The majority of publications originated from Europe (48.6%), followed by the United States (30.7%) and Asia (10%). More than half of the research (56.8%) focused on food analysis, and 23.7% evaluated the impact of sauerkraut on health, including risk factors or digestive well-being. Direct research in humans was almost constant over time at about 11.5%. The studies found that sauerkraut induced inflammation locally, but repeated intake may result in diarrhea. Some studies pointed out anticarcinogenic effects of sauerkraut, while others concentrated on the interaction with monoamine oxidase inhibitors (MAOIs). Discussion: Sauerkraut, one of the oldest traditional foods, has a variety of beneficial effects on human health. However, unwanted effects such as intolerance reactions must be considered when dealing with sauerkraut as a functional food. PMID:25568828

  5. A New Paradigm to Analyze Data Completeness of Patient Data.

    PubMed

    Nasir, Ayan; Gurupur, Varadraj; Liu, Xinliang

    2016-08-03

    Background: There is a need to develop a tool that will measure data completeness of patient records using sophisticated statistical metrics. Patient data integrity is important in providing timely and appropriate care. Completeness is an important step, with an emphasis on understanding the complex relationships between data fields and their relative importance in delivering care. This tool will not only help understand where data problems are but also help uncover the underlying issues behind them. Objectives: Develop a tool that can be used alongside a variety of health care database software packages to determine the completeness of individual patient records as well as aggregate patient records across health care centers and subpopulations. Methods: The methodology of this project is encapsulated within the Data Completeness Analysis Package (DCAP) tool, with the major components including concept mapping, CSV parsing, and statistical analysis. Results: The results from testing DCAP with Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) data show that this tool is successful in identifying relative data completeness at the patient, subpopulation, and database levels. These results also solidify a need for further analysis and call for hypothesis-driven research to find underlying causes for data incompleteness. Conclusion: DCAP examines patient records and generates statistics that can be used to determine the completeness of individual patient data as well as the general thoroughness of record keeping in a medical database. DCAP uses a component that is customized to the settings of the software package used for storing patient data as well as a Comma Separated Values (CSV) file parser to determine the appropriate measurements. DCAP itself is assessed through a proof of concept exercise using hypothetical data as well as available HCUP SID patient data.
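    The patient-, subpopulation-, and database-level completeness scores described above can be illustrated with a small sketch. This is a simplification, not the actual DCAP code: the field names are invented, and DCAP's concept mapping and field weighting are omitted.

```python
import csv
import io

def completeness(rows, fields):
    # Per-field, per-record, and whole-database completeness as the
    # fraction of non-empty values. A simplification: DCAP also applies
    # concept mapping and field weighting, omitted here.
    field_hits = {f: 0 for f in fields}
    record_scores = []
    for row in rows:
        filled = 0
        for f in fields:
            if (row.get(f) or "").strip():
                field_hits[f] += 1
                filled += 1
        record_scores.append(filled / len(fields))
    n = len(rows)
    field_scores = {f: field_hits[f] / n for f in fields}
    database_score = sum(record_scores) / n
    return field_scores, record_scores, database_score

# Hypothetical three-patient CSV extract (field names invented).
csv_text = "id,age,diagnosis\n1,34,I63\n2,,I61\n3,57,\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
fs, rs, db = completeness(rows, ["id", "age", "diagnosis"])
```

    Here the second patient's record scores 2/3 (missing age), and the database-level score is the mean of the record scores.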

  6. A New Paradigm to Analyze Data Completeness of Patient Data

    PubMed Central

    Nasir, Ayan; Liu, Xinliang

    2016-01-01

    Background: There is a need to develop a tool that will measure data completeness of patient records using sophisticated statistical metrics. Patient data integrity is important in providing timely and appropriate care. Completeness is an important step, with an emphasis on understanding the complex relationships between data fields and their relative importance in delivering care. This tool will not only help understand where data problems are but also help uncover the underlying issues behind them. Objectives: Develop a tool that can be used alongside a variety of health care database software packages to determine the completeness of individual patient records as well as aggregate patient records across health care centers and subpopulations. Methods: The methodology of this project is encapsulated within the Data Completeness Analysis Package (DCAP) tool, with the major components including concept mapping, CSV parsing, and statistical analysis. Results: The results from testing DCAP with Healthcare Cost and Utilization Project (HCUP) State Inpatient Database (SID) data show that this tool is successful in identifying relative data completeness at the patient, subpopulation, and database levels. These results also solidify a need for further analysis and call for hypothesis-driven research to find underlying causes for data incompleteness. Conclusion: DCAP examines patient records and generates statistics that can be used to determine the completeness of individual patient data as well as the general thoroughness of record keeping in a medical database. DCAP uses a component that is customized to the settings of the software package used for storing patient data as well as a Comma Separated Values (CSV) file parser to determine the appropriate measurements. DCAP itself is assessed through a proof of concept exercise using hypothetical data as well as available HCUP SID patient data. PMID:27484918

  7. Example of monitoring measurements in a virtual eye clinic using 'big data'.

    PubMed

    Jones, Lee; Bryan, Susan R; Miranda, Marco A; Crabb, David P; Kotecha, Aachal

    2017-10-26

    To assess the equivalence of measurement outcomes between patients attending a standard glaucoma care service, where patients see an ophthalmologist in a face-to-face setting, and a glaucoma monitoring service (GMS). The average mean deviation (MD) measurements on the visual field (VF) test for 250 patients attending a GMS were compared with a 'big data' repository of patients attending a standard glaucoma care service (reference database). In addition, the speed of VF progression between GMS patients and reference database patients was compared. Reference database patients were used to create expected outcomes against which GMS patients could be compared. For GMS patients falling outside of the expected limits, further analysis was carried out on the clinical management decisions for these patients. The average MD of patients in the GMS ranged from +1.6 dB to -18.9 dB between two consecutive appointments at the clinic. In the first analysis, 12 (4.8%; 95% CI 2.5% to 8.2%) GMS patients scored outside the 90% expected values based on the reference database. In the second analysis, 1.9% (95% CI 0.4% to 5.4%) of GMS patients had VF changes outside of the expected 90% limits. Using 'big data' collected in the standard glaucoma care service, we found that patients attending a GMS have equivalent outcomes on the VF test. Our findings provide support for the implementation of virtual healthcare delivery in the hospital eye service.
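    The comparison against "90% expected limits" can be sketched as a percentile band computed from the reference cohort. This is an illustration only: the abstract does not specify the percentile method, so a nearest-rank rule is assumed here.

```python
import math

def percentile_nearest_rank(sorted_vals, p):
    # Nearest-rank percentile: the value at rank ceil(p/100 * n).
    n = len(sorted_vals)
    rank = max(1, math.ceil(p * n / 100))
    return sorted_vals[rank - 1]

def limits_90(reference_values):
    # 5th-95th percentile band of the reference ("big data") cohort,
    # used as the 90% "expected" band (assumed rule, not from the paper).
    s = sorted(reference_values)
    return percentile_nearest_rank(s, 5), percentile_nearest_rank(s, 95)

def outside_expected(value, reference_values):
    # Flag a monitored patient's measurement that falls outside the band.
    lo, hi = limits_90(reference_values)
    return value < lo or value > hi
```

    A GMS patient's measurement flagged by `outside_expected` would then trigger the further review of clinical management decisions described above.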

  8. Aberrant resting-state brain activity in posttraumatic stress disorder: A meta-analysis and systematic review.

    PubMed

    Koch, Saskia B J; van Zuiden, Mirjam; Nawijn, Laura; Frijling, Jessie L; Veltman, Dick J; Olff, Miranda

    2016-07-01

    About 10% of trauma-exposed individuals develop PTSD. Although a growing number of studies have investigated resting-state abnormalities in PTSD, inconsistent results suggest a need for a meta-analysis and a systematic review. We conducted a systematic literature search in four online databases using keywords for PTSD, functional neuroimaging, and resting-state. In total, 23 studies matched our eligibility criteria. For the meta-analysis, we included 14 whole-brain resting-state studies, reporting data on 663 participants (298 PTSD patients and 365 controls). We used the activation likelihood estimation approach to identify concurrence of whole-brain hypo- and hyperactivations in PTSD patients during rest. Seed-based studies could not be included in the quantitative meta-analysis. Therefore, a separate qualitative systematic review was conducted on nine seed-based functional connectivity studies. The meta-analysis showed consistent hyperactivity in the ventral anterior cingulate cortex and the parahippocampus/amygdala, but hypoactivity in the (posterior) insula, cerebellar pyramis and middle frontal gyrus in PTSD patients, compared to healthy controls. Partly concordant with these findings, the systematic review on seed-based functional connectivity studies showed enhanced salience network (SN) connectivity, but decreased default mode network (DMN) connectivity in PTSD. Combined, these altered resting-state connectivity and activity patterns could represent neurobiological correlates of increased salience processing and hypervigilance (SN), at the cost of awareness of internal thoughts and autobiographical memory (DMN) in PTSD. However, several discrepancies between findings of the meta-analysis and systematic review were observed, stressing the need for future studies on resting-state abnormalities in PTSD patients.

  9. The ClinicalTrials.gov results database--update and key issues.

    PubMed

    Zarin, Deborah A; Tse, Tony; Williams, Rebecca J; Califf, Robert M; Ide, Nicholas C

    2011-03-03

    The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update of relevant policies, and show how the data can be used to gain insight into the state of clinical research. We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010. As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2,000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2,178 results records available as of September 2010. From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry record outcome measures, 61% lacked specificity in describing the metric used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25). Of these trials, 24% reported results for 90% or less of their participants. ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data.

  10. Text Mining Genotype-Phenotype Relationships from Biomedical Literature for Database Curation and Precision Medicine.

    PubMed

    Singhal, Ayush; Simmons, Michael; Lu, Zhiyong

    2016-11-01

    The practice of precision medicine will ultimately require databases of genes and mutations for healthcare providers to reference in order to understand the clinical implications of each patient's genetic makeup. Although the highest quality databases require manual curation, text mining tools can facilitate the curation process, increasing accuracy, coverage, and productivity. However, to date there are no available text mining tools that offer high-accuracy performance for extracting disease-gene-variant triplets from biomedical literature. In this paper we propose a high-performance machine learning approach to automate the extraction of disease-gene-variant triplets from biomedical literature. Our approach is unique because we identify the genes and protein products associated with each mutation from not just the local text content, but from a global context as well (from the Internet and from all literature in PubMed). Our approach also incorporates protein sequence validation and disease association using a novel text-mining-based machine learning approach. We extract disease-gene-variant triplets from all abstracts in PubMed related to a set of ten important diseases (breast cancer, prostate cancer, pancreatic cancer, lung cancer, acute myeloid leukemia, Alzheimer's disease, hemochromatosis, age-related macular degeneration (AMD), diabetes mellitus, and cystic fibrosis). We then evaluate our approach in two ways: (1) a direct comparison with the state of the art using benchmark datasets; (2) a validation study comparing the results of our approach with entries in a popular human-curated database (UniProt) for each of the previously mentioned diseases. In the benchmark comparison, our full approach achieves a 28% improvement in F1-measure (from 0.62 to 0.79) over the state-of-the-art results. For the validation study with UniProt Knowledgebase (KB), we present a thorough analysis of the results and errors.
Across all diseases, our approach returned 272 triplets (disease-gene-variant) that overlapped with entries in UniProt and 5,384 triplets without overlap in UniProt. Analysis of the overlapping triplets and of a stratified sample of the non-overlapping triplets revealed accuracies of 93% and 80% for the respective categories (cumulative accuracy, 77%). We conclude that our process represents an important and broadly applicable improvement to the state of the art for curation of disease-gene-variant relationships.
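    The F1-measure cited in the benchmark comparison above is the harmonic mean of precision and recall over extracted triplets; a minimal sketch of how it is computed, with counts invented purely for illustration:

```python
def precision_recall_f1(tp, fp, fn):
    # Precision, recall, and their harmonic mean (F1) from
    # true-positive, false-positive, and false-negative counts.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Invented counts for illustration only (not from the paper).
p, r, f = precision_recall_f1(tp=60, fp=15, fn=40)
```

    Because F1 is a harmonic mean, it penalizes an extractor that trades recall heavily for precision or vice versa.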

  11. Combining NLCD and MODIS to create a land cover-albedo database for the continental United States

    USGS Publications Warehouse

    Wickham, J.; Barnes, Christopher A.; Nash, M.S.; Wade, T.G.

    2015-01-01

    Land surface albedo is an essential climate variable that is tightly linked to land cover, such that specific land cover classes (e.g., deciduous broadleaf forest, cropland) have characteristic albedos. Despite these characteristic class-specific albedos, there is considerable variability in albedo within a land cover class. The National Land Cover Database (NLCD) and the Moderate Resolution Imaging Spectroradiometer (MODIS) albedo product were combined to produce a long-term (14-year) integrated land cover-albedo database for the continental United States that can be used to examine the temporal behavior of albedo as a function of land cover. The integration identifies areas of homogeneous land cover at the nominal spatial resolution of the MODIS (MCD43A) albedo product (500 m × 500 m) from the NLCD product (30 m × 30 m), and provides an albedo data record per 500 m × 500 m pixel for 14 of the 16 NLCD land cover classes. Individual homogeneous land cover pixels have up to 605 albedo observations, and 75% of the pixels have at least 319 MODIS albedo observations (≥ 50% of the maximum possible number of observations) for the study period (2000–2013). We demonstrated the utility of the database by conducting a multivariate analysis of variance of albedo for each NLCD land cover class, showing that locational (pixel-to-pixel) and inter-annual variability were significant factors in addition to expected seasonal (intra-annual) and geographic (latitudinal) effects.
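    The homogeneity screen at the heart of the integration can be sketched as follows. This is a simplification: the abstract does not state the exact homogeneity criterion, the dominance threshold is an assumption, and a toy 16-cell window stands in for the roughly 278 NLCD cells that overlap one 500 m MODIS pixel.

```python
from collections import Counter

def is_homogeneous(window, threshold=1.0):
    # window: NLCD class codes of the 30 m cells falling inside one
    # 500 m MODIS pixel. The window counts as homogeneous if the
    # dominant class covers at least `threshold` of the cells.
    # Returns (is_homogeneous, dominant_class).
    counts = Counter(window)
    cls, n = counts.most_common(1)[0]
    return n / len(window) >= threshold, cls

# A pure deciduous-forest window (NLCD code 41) passes; a mixed
# 12-of-16 window fails a 90% dominance rule.
pure = is_homogeneous([41] * 16)
mixed = is_homogeneous([41] * 12 + [42] * 4, threshold=0.9)
```

    Only MODIS pixels passing this screen would contribute their albedo time series to the class-specific records described above.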

  12. Monitoring product safety in the postmarketing environment.

    PubMed

    Sharrar, Robert G; Dieck, Gretchen S

    2013-10-01

    The safety profile of a medicinal product may change in the postmarketing environment. Safety issues not identified in clinical development may be seen and need to be evaluated. Methods of evaluating spontaneous adverse experience reports and identifying new safety risks include a review of individual reports, a review of a frequency distribution of a list of the adverse experiences, the development and analysis of a case series, and various ways of examining the database for signals of disproportionality, which may suggest a possible association. Regulatory agencies monitor product safety through a variety of mechanisms including signal detection of the adverse experience safety reports in databases and by requiring and monitoring risk management plans, periodic safety update reports and postauthorization safety studies. The United States Food and Drug Administration is working with public, academic and private entities to develop methods for using large electronic databases to actively monitor product safety. Important identified risks will have to be evaluated through observational studies and registries.

  13. Initial Flight Test Evaluation of the F-15 ACTIVE Axisymmetric Vectoring Nozzle Performance

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Hathaway, Ross; Ferguson, Michael D.

    1998-01-01

    A full envelope database of thrust-vectoring axisymmetric nozzle performance for the Pratt & Whitney Pitch/Yaw Balance Beam Nozzle (P/YBBN) is being developed using the F-15 Advanced Control Technology for Integrated Vehicles (ACTIVE) aircraft. At this time, flight research has been completed for steady-state pitch vector angles up to 20° at an altitude of 30,000 ft from low power settings to maximum afterburner power. The nozzle performance database includes vector forces, internal nozzle pressures, and temperatures, all of which can be used for regression analysis modeling. The database was used to substantiate a set of nozzle performance data from wind tunnel testing and computational fluid dynamic analyses. Findings from initial flight research at Mach 0.9 and 1.2 are presented in this paper. The results show that vector efficiency is strongly influenced by power setting. A significant discrepancy in nozzle performance has been discovered between predicted and measured results during vectoring.

  14. NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction

    NASA Technical Reports Server (NTRS)

    Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan

    2004-01-01

    This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprised of measurements made on several different impellers, an inducer and a diffuser. The data were in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was two-fold: first, to validate the Enigma CFD code for pump diffuser analysis, and second, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium optimized impeller database and then applied to two different concepts for wide flow diffusers.

  15. New Dimensions for the Online Catalog: The Dartmouth College Library Experience [and] TOC/DOC at Caltech: Evolution of Citation Access Online [and] Locally Loaded Databases in Arizona State University's Online Catalog Using the CARL System.

    ERIC Educational Resources Information Center

    Klemperer, Katharina; And Others

    1989-01-01

    Each of three articles describes an academic library's online catalog that includes locally created databases. Topics covered include database and software selection; systems design and development; database producer negotiations; problems encountered during implementation; database loading; training and documentation; and future plans. (CLB)

  16. Verification of road databases using multiple road models

    NASA Astrophysics Data System (ADS)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions: one for the state of a database object (correct or incorrect), and one for the state of the underlying road model (applicable or not applicable). In accordance with the Dempster-Shafer Theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with higher completeness. Additional experiments reveal that, based on the proposed method, a highly reliable semi-automatic approach for road database verification can be designed.
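    The fusion step can be illustrated with a textbook Dempster combination over the frame {correct, incorrect}, treating "unknown" as mass on the whole frame. This is a sketch of the general rule only; the paper's modules also carry the separate applicable/not-applicable distribution, which is not modeled here, and the mass values below are invented.

```python
def dempster(m1, m2):
    # Dempster's rule of combination over the frame {correct, incorrect}.
    # Mass keys: "C" = {correct}, "I" = {incorrect},
    # "U" = the whole frame (unknown).
    conflict = m1["C"] * m2["I"] + m1["I"] * m2["C"]
    norm = 1.0 - conflict  # renormalize after discarding conflict
    return {
        "C": (m1["C"]*m2["C"] + m1["C"]*m2["U"] + m1["U"]*m2["C"]) / norm,
        "I": (m1["I"]*m2["I"] + m1["I"]*m2["U"] + m1["U"]*m2["I"]) / norm,
        "U": (m1["U"] * m2["U"]) / norm,
    }

# Module 1 is fairly sure the road object is correct; module 2 is
# mostly undecided (numbers invented for illustration).
fused = dempster({"C": 0.7, "I": 0.1, "U": 0.2},
                 {"C": 0.4, "I": 0.2, "U": 0.4})
```

    Note how the combined belief in "correct" exceeds either module's individual belief, while the residual "unknown" mass shrinks as evidence accumulates.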

  17. Data Quality Control and Maintenance for the Qweak Experiment

    NASA Astrophysics Data System (ADS)

    Heiner, Nicholas; Spayde, Damon

    2014-03-01

    The Qweak collaboration seeks to quantify the weak charge of the proton through the analysis of the parity-violating electron asymmetry in elastic electron-proton scattering. The asymmetry is calculated by measuring how many electrons deflect from a hydrogen target at the chosen scattering angle for aligned and anti-aligned electron spins, then evaluating the difference between the numbers of deflections that occurred for the two polarization states. The weak charge can then be extracted from these data. Knowing the weak charge will allow us to calculate the electroweak mixing angle for the particular Q2 value of the chosen electrons, for which the Standard Model makes a firm prediction. Any significant deviation from this prediction would be a prime indicator of the existence of physics beyond what the Standard Model describes. After the experiment was conducted at Jefferson Lab, the collected data were stored within a MySQL database for further analysis. I will present an overview of the database and its functions as well as a demonstration of the quality checks and maintenance performed on the data itself. These checks include an analysis of errors occurring throughout the experiment, specifically data acquisition errors within the main detector array, and an analysis of data cuts.
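    The helicity-difference asymmetry described above has the standard counting form A = (N+ - N-)/(N+ + N-); a minimal sketch, omitting the corrections (beam polarization, backgrounds, and the like) that the real analysis applies, with invented counts:

```python
def pv_asymmetry(n_plus, n_minus):
    # Raw counting asymmetry between the two helicity (polarization)
    # states. A sketch only: the experiment's corrections for beam
    # polarization and backgrounds are omitted.
    return (n_plus - n_minus) / (n_plus + n_minus)

# Invented counts: a small excess for the "+" helicity state.
a = pv_asymmetry(1002, 998)
```

    Parity-violating asymmetries of this kind are tiny, which is why large statistics and tight data-quality checks on the stored counts matter.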

  18. 16 CFR 1102.12 - Manufacturer comments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.12 Manufacturer... Database if such manufacturer comment meets the following requirements: (1) Manufacturer comment relates to... publication in the Database. (2) Unique identifier. A manufacturer comment must state the unique identifier...

  19. 16 CFR 1102.12 - Manufacturer comments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.12 Manufacturer... Database if such manufacturer comment meets the following requirements: (1) Manufacturer comment relates to... publication in the Database. (2) Unique identifier. A manufacturer comment must state the unique identifier...

  20. Tree chemistry database (version 1.0)

    Treesearch

    Linda H. Pardo; Molly Robin-Abbott; Natasha Duarte; Eric K. Miller

    2005-01-01

    The Tree Chemistry Database is a relational database of C, N, P, K, Ca, Mg, Mn, and Al concentrations in bole bark, bole wood, branches, twigs, and foliage. Compiled from data in 218 articles and publications, the database contains reported nutrient and biomass values for tree species in the Northeastern United States. Nutrient data can be sorted on parameters such as...

  1. Preparing College Students To Search Full-Text Databases: Is Instruction Necessary?

    ERIC Educational Resources Information Center

    Riley, Cheryl; Wales, Barbara

    Full-text databases allow Central Missouri State University's clients to access some of the serials that libraries have had to cancel due to escalating subscription costs; EbscoHost, the subject of this study, is one such database. The database is available free to all Missouri residents. A survey was designed consisting of 21 questions intended…

  2. Northern Forest Futures reporting tools and database guide

    Treesearch

    Patrick D. Miles; Robert J. Huggett; W. Keith Moser

    2015-01-01

    The Northern Forest Futures database (NFFDB) supports the reporting of both current and projected future forest conditions for the 20 states that make up the U.S. North, an area bounded by Maine, Maryland, Missouri, and Minnesota. The NFFDB database and attendant reporting tools are available to the public as a Microsoft Access™ database. The...

  3. Upper extremity deep venous thrombosis after port insertion: What are the risk factors?

    PubMed

    Tabatabaie, Omidreza; Kasumova, Gyulnara G; Kent, Tara S; Eskander, Mariam F; Fadayomi, Ayotunde B; Ng, Sing Chau; Critchlow, Jonathan F; Tawa, Nicholas E; Tseng, Jennifer F

    2017-08-01

    Totally implantable venous access devices (ports) are widely used, especially for cancer chemotherapy. Although their use has been associated with upper extremity deep venous thrombosis, the risk factors for upper extremity deep venous thrombosis in patients with a port have not been studied adequately. The Healthcare Cost and Utilization Project's Florida State Ambulatory Surgery and Services Database was queried between 2007 and 2011 for patients who underwent outpatient port insertion, identified by Current Procedural Terminology code. Patients were followed in the State Ambulatory Surgery and Services Database, State Inpatient Database, and State Emergency Department Database for upper extremity deep venous thrombosis occurrence. The cohort was divided into a test cohort and a validation cohort based on the year of port placement. A multivariable logistic regression model was developed to identify risk factors for upper extremity deep venous thrombosis in patients with a port. The model then was tested on the validation cohort. Of the 51,049 patients in the derivation cohort, 926 (1.81%) developed an upper extremity deep venous thrombosis. On multivariate analysis, independently significant predictors of upper extremity deep venous thrombosis included age <65 years (odds ratio = 1.22), Elixhauser score of 1 to 2 compared with zero (odds ratio = 1.17), end-stage renal disease (versus no kidney disease; odds ratio = 2.63), history of any deep venous thrombosis (odds ratio = 1.77), all-cause 30-day revisit (odds ratio = 2.36), African American race (versus white; odds ratio = 1.86), and other nonwhite races (odds ratio = 1.35). Additionally, compared with genitourinary malignancies, patients with gastrointestinal (odds ratio = 1.55), metastatic (odds ratio = 1.76), and lung cancers (odds ratio = 1.68) had greater risks of developing an upper extremity deep venous thrombosis. This study identified major risk factors for upper extremity deep venous thrombosis.
Further studies are needed to evaluate the appropriateness of thromboprophylaxis in patients at greater risk of upper extremity deep venous thrombosis. Copyright © 2017 Elsevier Inc. All rights reserved.
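The derivation/validation design in this abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the study's actual model: the cohort, the three binary predictors, and the simulated effect sizes are all made up for demonstration.

```python
# Minimal sketch of a derivation/validation logistic regression, in the
# spirit of the study design above. All data are simulated; the predictor
# names and effect sizes are illustrative, not the study's results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20_000
# Hypothetical binary predictors, e.g. age<65, ESRD, prior DVT
X = rng.integers(0, 2, size=(n, 3)).astype(float)
logit = -4.0 + X @ np.array([0.2, 0.97, 0.57])   # simulated true log-odds
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Split into derivation and validation halves (the study split by year)
X_dev, X_val = X[: n // 2], X[n // 2 :]
y_dev, y_val = y[: n // 2], y[n // 2 :]

model = LogisticRegression().fit(X_dev, y_dev)
odds_ratios = np.exp(model.coef_[0])             # exp(coefficient) = OR
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
```

Odds ratios come from exponentiating the fitted coefficients; scoring the model on the held-out validation half gives an unbiased estimate of its discrimination.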

  4. R2 & NE State - 2010 Census; Housing and Population Summary

    EPA Pesticide Factsheets

    The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or the files can be combined to cover the entire nation. States and equivalent entities are the primary governmental divisions of the United States. In addition to the fifty States, the Census Bureau treats the District of Columbia, Puerto Rico, and each of the Island Areas (American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands) as the statistical equivalents of States for the purpose of data presentation. This table contains housing and population data derived from the U.S. Census 2010 Summary File 1 (SF 1) database for states; SF 1 contains data compiled from the 2010 Decennial Census questions. The housing data cover housing units, owner-occupied and rental; the population data cover ancestry, age, and sex.

  5. Authority of Pharmacists to Administer Human Papillomavirus Vaccine: Alignment of State Laws With Age-Level Recommendations.

    PubMed

    Dingman, Deirdre A; Schmit, Cason D

    One strategy to increase the uptake of human papillomavirus (HPV) vaccine among adolescents is through the use of pharmacists. Our objectives were to (1) use a publicly available database to describe the statutory and regulatory authority of pharmacists to administer the HPV vaccine in the United States and (2) discuss how the current status of laws may influence achievement of the Healthy People 2020 goal of 80% HPV vaccination rate for teenagers aged 13-15. Using information from the Centers for Disease Control and Prevention's (CDC's) Public Health Law Program database, we identified state laws in effect as of January 1, 2016, giving pharmacists authority to administer vaccines. We used a standardized analysis algorithm to determine whether states' laws (1) authorized pharmacists to administer HPV vaccine, (2) required third-party authorization for pharmacist administration, and (3) restricted HPV vaccine administration by pharmacists to certain patient age groups. Of 50 states and the District of Columbia, 40 had laws expressly granting pharmacists authority to administer HPV vaccine to patients, but only 22 had laws that authorized pharmacists to vaccinate preadolescents aged 11 or 12 (ie, the CDC-recommended age group). Pharmacists were granted prescriptive authority by 5 states, and they were given authority pursuant to general (non-patient-specific) third-party authorization (eg, a licensed health care provider) by 32 states or patient-specific third-party authorization by 3 states. Most states permitted pharmacists to administer HPV vaccines only to boys and girls older than 11 or 12, which may hinder achievement of the Healthy People 2020 goal for HPV vaccination. Efforts should be made to strengthen the role of pharmacists in addressing this public health issue.

  6. Methodology for Estimating Ton-Miles of Goods Movements for the U.S. Freight Multimodal Network System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling

    2013-01-01

    Ton-miles is a commonly used measure of freight transportation output. Estimation of ton-miles in the U.S. transportation system requires freight flow data at a disaggregated level (link flows, path flows, or origin-destination flows between small geographic areas). However, the sheer magnitude of the freight data system, as well as industrial confidentiality concerns in Census surveys, limits the freight data made available to the public. Through the years, the Center for Transportation Analysis (CTA) of the Oak Ridge National Laboratory (ORNL) has been working on the development of comprehensive national and regional freight databases and network flow models. One of the main products of this effort is the Freight Analysis Framework (FAF), a public database released by ORNL. FAF provides the general public with a multidimensional matrix of freight flows (weight and dollar value) on the U.S. transportation system between states, major metropolitan areas, and remainders of states. Recently, the CTA research team developed a methodology to estimate ton-miles by mode of transportation between the 2007 FAF regions. This paper describes the data disaggregation methodology. The method relies on the estimation of disaggregation factors that are related to measures of production, attractiveness, and average shipment distances by mode of service. Production and attractiveness of counties are captured by total employment payroll. Likely mileages for shipments between counties are calculated using a geographic database, i.e., the CTA multimodal network system. Results of validation experiments demonstrate the validity of the method. Moreover, the 2007 FAF ton-miles estimates are consistent with the major freight data programs for rail and water movements.
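The two computations this abstract names can be sketched directly: ton-miles is the sum over origin-destination flows of weight times distance, and a region-to-region flow can be split to counties in proportion to a payroll-based production/attraction proxy. All regions, flows, mileages, and payrolls below are made-up illustrative values, not FAF data.

```python
# Hedged sketch of the ton-miles measure: sum over O-D flows of
# shipment weight (tons) times shipped distance (miles).
flows_tons = {("A", "B"): 120.0, ("A", "C"): 45.0, ("B", "C"): 80.0}
miles = {("A", "B"): 300.0, ("A", "C"): 550.0, ("B", "C"): 200.0}

ton_miles = sum(t * miles[od] for od, t in flows_tons.items())

# Disaggregating one region-to-region flow to county pairs in proportion
# to the product of origin and destination payrolls (the production and
# attraction proxy named in the abstract). Counties are hypothetical.
payroll_orig = {"a1": 2.0, "a2": 1.0}   # counties in region A
payroll_dest = {"c1": 3.0, "c2": 1.0}   # counties in region C
total = sum(po * pd for po in payroll_orig.values() for pd in payroll_dest.values())
county_flow = {
    (o, d): flows_tons[("A", "C")] * payroll_orig[o] * payroll_dest[d] / total
    for o in payroll_orig for d in payroll_dest
}
```

By construction the county-level flows sum back to the regional flow, which is the consistency property a disaggregation factor scheme must preserve.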

  7. Health Care Disparity and Pregnancy-Related Mortality in the United States, 2005-2014.

    PubMed

    Moaddab, Amirhossein; Dildy, Gary A; Brown, Haywood L; Bateni, Zhoobin H; Belfort, Michael A; Sangi-Haghpeykar, Haleh; Clark, Steven L

    2018-04-01

    To quantitate the contribution of various demographic factors to the U.S. maternal mortality ratio. This was a retrospective observational study. We analyzed data from the Centers for Disease Control and Prevention (CDC) National Center for Health Statistics database and the Detailed Mortality Underlying Cause of Death database (CDC WONDER) from 2005 to 2014, which contain mortality and population counts for all U.S. counties. Bivariate correlations between the maternal mortality ratio and all maternal demographic, lifestyle, health, and medical service utilization characteristics were calculated. We performed a maximum likelihood factor analysis with varimax rotation, retaining variables that were significant (P<.05) in the univariate analysis, to deal with multicollinearity among the existing variables. The United States has experienced an increase in the maternal mortality ratio since 2005, with rates increasing from 15 per 100,000 live births in 2005 to 21-22 per 100,000 live births in 2013 and 2014 (P<.001). This increase in mortality was most pronounced in non-Hispanic black women, with ratios rising from 39 to 49 per 100,000 live births. A significant correlation between state mortality ranking and the percentage of non-Hispanic black women in the delivery population was demonstrated. Cesarean deliveries, unintended births, unmarried status, percentage of deliveries to non-Hispanic black women, and four or fewer prenatal visits were significantly (P<.05) associated with the increased maternal mortality ratio. The current U.S. maternal mortality ratio is heavily influenced by a higher rate of death among non-Hispanic black or unmarried patients with unplanned pregnancies. Racial disparities in health care availability and access or utilization by underserved populations are important issues faced by states seeking to decrease maternal mortality.
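The maternal mortality ratio used throughout this abstract is simple arithmetic: pregnancy-related deaths per 100,000 live births. The counts below are illustrative, not the study's figures.

```python
# Maternal mortality ratio: deaths per 100,000 live births.
# The example counts are made up for illustration only.
def mortality_ratio(deaths: int, live_births: int) -> float:
    return deaths / live_births * 100_000

ratio = mortality_ratio(860, 4_000_000)  # about 21.5 per 100,000
```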

  8. MIPS: a database for genomes and protein sequences.

    PubMed Central

    Mewes, H W; Heumann, K; Kaps, A; Mayer, K; Pfeiffer, F; Stocker, S; Frishman, D

    1999-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Martinsried near Munich, Germany, develops and maintains genome-oriented databases. The amount of sequence data available is increasing rapidly, but the capacity for qualified manual annotation at the sequence databases is not. Therefore, our strategy aims to cope with the data stream by the comprehensive application of analysis tools to sequences of complete genomes, the systematic classification of protein sequences, and the active support of sequence analysis and functional genomics projects. This report describes the systematic and up-to-date analysis of genomes (PEDANT), a comprehensive database of the yeast genome (MYGD), a database reflecting the progress in sequencing the Arabidopsis thaliana genome (MATD), the database of assembled, annotated human EST clusters (MEST), and the collection of protein sequence data within the framework of the PIR-International Protein Sequence Database (described elsewhere in this volume). MIPS provides access through its WWW server (http://www.mips.biochem.mpg.de) to a spectrum of generic databases, including those mentioned above as well as a database of protein families (PROTFAM), the MITOP database, and the all-against-all FASTA database. PMID:9847138

  9. Image Analysis in Plant Sciences: Publish Then Perish.

    PubMed

    Lobet, Guillaume

    2017-07-01

    Image analysis has become a powerful technique for most plant scientists. In recent years dozens of image analysis tools have been published in plant science journals. These tools cover the full spectrum of plant scales, from single cells to organs and canopies. However, the field of plant image analysis remains in its infancy. It still has to overcome important challenges, such as the lack of robust validation practices or the absence of long-term support. In this Opinion article, I: (i) present the current state of the field, based on data from the plant-image-analysis.org database; (ii) identify the challenges faced by its community; and (iii) propose workable ways of improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. 76 FR 20438 - Proposed Model Performance Measures for State Traffic Records Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-12

    ... what data elements are critical. States should take advantage of these decision-making opportunities to... single database. Error means the recorded value for some data element of interest is incorrect. Error... into the database) and the number of missing (blank) data elements in the records that are in a...

  11. Database of Sources of Environmental Releases of Dioxin-Like Compounds in the United States

    EPA Science Inventory

    The Database of Sources of Environmental Releases of Dioxin-like Compounds in the United States (US)

  Exponential 6 parameterization for the JCZ3-EOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGee, B.C.; Hobbs, M.L.; Baer, M.R.

    1998-07-01

    A database has been created for use with the Jacobs-Cowperthwaite-Zwisler-3 equation-of-state (JCZ3-EOS) to determine thermochemical equilibrium for detonation and expansion states of energetic materials. The JCZ3-EOS uses the exponential 6 intermolecular potential function to describe interactions between molecules. All product species are characterized by r*, the radius of the minimum pair potential energy, and ε/k, the well depth energy normalized by Boltzmann's constant. These parameters constitute the JCZS (S for Sandia) EOS database describing 750 gases (including all the gases in the JANNAF tables), and have been obtained by using Lennard-Jones potential parameters, a corresponding states theory, pure liquid shock Hugoniot data, and fit values using an empirical EOS. This database can be used with the CHEETAH 1.40 or CHEETAH 2.0 interface to the TIGER computer program that predicts the equilibrium state of gas- and condensed-phase product species. The large JCZS-EOS database permits intermolecular-potential-based equilibrium calculations of energetic materials with complex elemental composition.
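The exponential-6 pair potential that the JCZS parameters (r* and ε/k) feed into can be written down explicitly. This is a sketch of the standard exp-6 form, with a well of depth ε at r = r*; the softness parameter alpha = 13.5 is an illustrative value, not a JCZS database entry.

```python
import math

def exp6(r: float, r_star: float, eps: float, alpha: float = 13.5) -> float:
    """Standard exponential-6 pair potential.

    r_star: radius of the potential minimum; eps: well depth
    (the JCZS database tabulates eps/k per species); alpha: softness.
    The minimum value is -eps at r = r_star.
    """
    a = alpha / (alpha - 6.0)
    b = 6.0 / (alpha - 6.0)
    return eps * (b * math.exp(alpha * (1.0 - r / r_star)) - a * (r_star / r) ** 6)
```

At r = r* the exponential term is b and the attractive term is a, so the value is eps*(b - a) = -eps, which is the defining property of the parameterization.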

  12. First Look: TRADEMARKSCAN Database.

    ERIC Educational Resources Information Center

    Fernald, Anne Conway; Davidson, Alan B.

    1984-01-01

    Describes database produced by Thomson and Thomson and available on Dialog which contains over 700,000 records representing all active federal trademark registrations and applications for registrations filed in United States Patent and Trademark Office. A typical record, special features, database applications, learning to use TRADEMARKSCAN, and…

  13. 16 CFR § 1102.12 - Manufacturer comments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Content Requirements § 1102.12 Manufacturer... Database if such manufacturer comment meets the following requirements: (1) Manufacturer comment relates to... publication in the Database. (2) Unique identifier. A manufacturer comment must state the unique identifier...

  14. [THE USE OF OPEN REAL ESTATE DATABASES FOR THE ANALYSIS OF INFLUENCE OF CONCOMITANT FACTORS ON THE STATE OF THE URBAN POPULATION'S HEALTH].

    PubMed

    Zheleznyak, E V; Khripach, L V

    2015-01-01

    A new method is suggested for assessing certain social and lifestyle factors in hygienic health examinations of the urban population, based on open real estate databases covering residential areas of a given city. Using the Moscow FlatInfo portal, the distribution of several readily available factors was studied for a sample of 140 residents of Moscow: the standard design of the building in which a citizen resides, its year of construction, and the market price of 1 m2 of housing space in that building. The latter value is a quantitative integrated assessment of the social and lifestyle quality of housing, depending on the type and technical condition of the building, the neighborhood environment, the infrastructure of the region, and many other factors, and may be a useful supplemental index in hygienic research.

  15. Islamic World Science Citation Center (ISC): Evaluating Scholarly Journals Based on Citation Analysis

    PubMed Central

    Mehrad, Jaffar; Arastoopoor, Sholeh

    2012-01-01

    Introduction: Citation analysis is currently one of the most widely used metrics for analyzing scientific contributions in different fields. The Islamic World Science Citation Center (ISC) aims at promoting technical cooperation among Muslim scientists and their respective centers on this basis. It also facilitates access to knowledge and research contributions among them. This paper reveals some of the foremost features of the ISC databases, in order to give a fairly clear view of what the center is and what its products are. The paper consists of three major parts. After an introduction to the Islamic World Science Citation Center, the paper deals with the major tools and products of ISC. In the third part, ISC's journal submission system is presented as an automatic means by which users can upload journal papers into the respective databases. Conclusion: Some complementary remarks are made regarding the current state of ISC and its future plans. PMID:23322953

  16. Compilation and analysis of multiple groundwater-quality datasets for Idaho

    USGS Publications Warehouse

    Hundt, Stephen A.; Hopkins, Candice B.

    2018-05-09

    Groundwater is an important source of drinking and irrigation water throughout Idaho, and groundwater quality is monitored by various Federal, State, and local agencies. The historical, multi-agency records of groundwater quality constitute a valuable dataset that has yet to be compiled or analyzed on a statewide level. The purpose of this study is to combine groundwater-quality data from multiple sources into a single database, to summarize this dataset, and to perform bulk analyses to reveal spatial and temporal patterns of water quality throughout Idaho. Data were retrieved from the Water Quality Portal (https://www.waterqualitydata.us/), the Idaho Department of Environmental Quality, and the Idaho Department of Water Resources. Analyses included counting the number of times a sample location had concentrations above Maximum Contaminant Levels (MCLs), performing trend tests, and calculating correlations between water-quality analytes. The water-quality database and the analysis results are available through USGS ScienceBase (https://doi.org/10.5066/F72V2FBG).
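The exceedance count this abstract describes is a per-site tally of results above a Maximum Contaminant Level. A minimal sketch, assuming nitrate with its 10 mg/L (as N) federal MCL; the sites and sample values are made up for illustration.

```python
# Count, per sample location, results above a Maximum Contaminant Level.
MCL_NITRATE = 10.0  # mg/L as N, the federal MCL for nitrate

samples = [  # (site_id, nitrate mg/L) -- illustrative values only
    ("site-1", 3.2), ("site-1", 11.5), ("site-1", 12.0),
    ("site-2", 0.8), ("site-2", 9.9),
]

exceedances: dict[str, int] = {}
for site, value in samples:
    exceedances.setdefault(site, 0)
    if value > MCL_NITRATE:
        exceedances[site] += 1
```

The same pass structure extends to multiple analytes by keying the tally on (site, analyte) and looking the MCL up per analyte.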

  17. PrionHome: a database of prions and other sequences relevant to prion phenomena.

    PubMed

    Harbi, Djamel; Parthiban, Marimuthu; Gendoo, Deena M A; Ehsani, Sepehr; Kumar, Manish; Schmitt-Ulms, Gerold; Sowdhamini, Ramanathan; Harrison, Paul M

    2012-01-01

    Prions are units of propagation of an altered state of a protein or proteins; prions can propagate from organism to organism, through cooption of other protein copies. Prions contain no necessary nucleic acids, and are important both as pathogenic agents and as a potential force in epigenetic phenomena. The original prions were derived from a misfolded form of the mammalian Prion Protein PrP. Infection by these prions causes neurodegenerative diseases. Other prions cause non-Mendelian inheritance in budding yeast, and sometimes act as diseases of yeast. We report the bioinformatic construction of the PrionHome, a database of >2000 prion-related sequences. The data was collated from various public and private resources and filtered for redundancy. The data was then processed according to a transparent classification system of prionogenic sequences (i.e., sequences that can make prions), prionoids (i.e., proteins that propagate like prions between individual cells), and other prion-related phenomena. There are eight PrionHome classifications for sequences. The first four classifications are derived from experimental observations: prionogenic sequences, prionoids, other prion-related phenomena, and prion interactors. The second four classifications are derived from sequence analysis: orthologs, paralogs, pseudogenes, and candidate-prionogenic sequences. Database entries list: supporting information for PrionHome classifications, prion-determinant areas (where relevant), and disordered and compositionally-biased regions. Also included are literature references for the PrionHome classifications, transcripts and genomic coordinates, and structural data (including comparative models made for the PrionHome from manually curated alignments). We provide database usage examples for both vertebrate and fungal prion contexts.
Using the database data, we have performed a detailed analysis of the compositional biases in known budding-yeast prionogenic sequences, showing that the only abundant bias pattern is for asparagine bias with subsidiary serine bias. We anticipate that this database will be a useful experimental aid and reference resource. It is freely available at: http://libaio.biol.mcgill.ca/prion.

  18. PrionHome: A Database of Prions and Other Sequences Relevant to Prion Phenomena

    PubMed Central

    Harbi, Djamel; Parthiban, Marimuthu; Gendoo, Deena M. A.; Ehsani, Sepehr; Kumar, Manish; Schmitt-Ulms, Gerold; Sowdhamini, Ramanathan; Harrison, Paul M.

    2012-01-01

    Prions are units of propagation of an altered state of a protein or proteins; prions can propagate from organism to organism, through cooption of other protein copies. Prions contain no necessary nucleic acids, and are important both as pathogenic agents and as a potential force in epigenetic phenomena. The original prions were derived from a misfolded form of the mammalian Prion Protein PrP. Infection by these prions causes neurodegenerative diseases. Other prions cause non-Mendelian inheritance in budding yeast, and sometimes act as diseases of yeast. We report the bioinformatic construction of the PrionHome, a database of >2000 prion-related sequences. The data was collated from various public and private resources and filtered for redundancy. The data was then processed according to a transparent classification system of prionogenic sequences (i.e., sequences that can make prions), prionoids (i.e., proteins that propagate like prions between individual cells), and other prion-related phenomena. There are eight PrionHome classifications for sequences. The first four classifications are derived from experimental observations: prionogenic sequences, prionoids, other prion-related phenomena, and prion interactors. The second four classifications are derived from sequence analysis: orthologs, paralogs, pseudogenes, and candidate-prionogenic sequences. Database entries list: supporting information for PrionHome classifications, prion-determinant areas (where relevant), and disordered and compositionally-biased regions. Also included are literature references for the PrionHome classifications, transcripts and genomic coordinates, and structural data (including comparative models made for the PrionHome from manually curated alignments). We provide database usage examples for both vertebrate and fungal prion contexts.
Using the database data, we have performed a detailed analysis of the compositional biases in known budding-yeast prionogenic sequences, showing that the only abundant bias pattern is for asparagine bias with subsidiary serine bias. We anticipate that this database will be a useful experimental aid and reference resource. It is freely available at: http://libaio.biol.mcgill.ca/prion. PMID:22363733

  19. Component, Context and Manufacturing Model Library (C2M2L)

    DTIC Science & Technology

    2013-03-01

    Penn State team were stored in a relational database for easy access, storage and maintainability. The relational database consisted of a PostGres ...file into a format that can be imported into the PostGres database. This same custom application was used to generate Microsoft Excel templates...Press Break Forming Equipment 4.14 Manufacturing Model Library Database Structure The data storage mechanism for the ARL PSU MML was a PostGres database

  1. A Web application for the management of clinical workflow in image-guided and adaptive proton therapy for prostate cancer treatments.

    PubMed

    Yeung, Daniel; Boes, Peter; Ho, Meng Wei; Li, Zuofeng

    2015-05-08

    Image-guided radiotherapy (IGRT), based on radiopaque markers placed in the prostate gland, was used for proton therapy of prostate patients. Orthogonal X-rays and the IBA Digital Image Positioning System (DIPS) were used for setup correction prior to treatment and were repeated after treatment delivery. Following a rationale for margin estimates similar to that of van Herk,(1) the daily post-treatment DIPS data were analyzed to determine if an adaptive radiotherapy plan was necessary. A Web application using ASP.NET MVC5, Entity Framework, and an SQL database was designed to automate this process. The designed features included state-of-the-art Web technologies, a domain model closely matching the workflow, a database supporting concurrency and data mining, access to the DIPS database, secured user access and roles management, and graphing and analysis tools. The Model-View-Controller (MVC) paradigm allowed clean domain logic, unit testing, and extensibility. Client-side technologies, such as jQuery, jQuery plug-ins, and Ajax, were adopted to achieve a rich user environment and fast response. Data models included patients, staff, treatment fields and records, correction vectors, DIPS images, and association logics. Data entry, analysis, workflow logics, and notifications were implemented. The system effectively modeled the clinical workflow and IGRT process.
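The van Herk-style margin rationale the abstract cites can be sketched from daily shift data: the systematic error Sigma is the spread across patients of each patient's mean setup error, the random error sigma is the quadratic mean of the per-patient standard deviations, and the classic recipe combines them as 2.5*Sigma + 0.7*sigma. The daily shift values below (mm, one axis) are made up for illustration, and this is an assumed reading of the recipe, not the paper's implementation.

```python
import statistics as st

# Illustrative daily setup shifts per patient (mm, single axis)
daily_shifts = {
    "pt1": [1.0, 1.5, 0.5, 1.0],
    "pt2": [-0.5, 0.0, 0.5, 0.0],
    "pt3": [2.0, 2.5, 1.5, 2.0],
}

means = [st.mean(v) for v in daily_shifts.values()]
sds = [st.stdev(v) for v in daily_shifts.values()]
Sigma = st.stdev(means)                              # systematic error SD
sigma = (sum(s * s for s in sds) / len(sds)) ** 0.5  # RMS of random error SDs
margin = 2.5 * Sigma + 0.7 * sigma                   # van Herk-style recipe
```

A workflow like the one described would recompute these quantities as each day's post-treatment DIPS record arrives and flag patients whose accumulating systematic error suggests an adaptive replan.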

  2. S.I.I.A for monitoring crop evolution and anomaly detection in Andalusia by remote sensing

    NASA Astrophysics Data System (ADS)

    Rodriguez Perez, Antonio Jose; Louakfaoui, El Mostafa; Munoz Rastrero, Antonio; Rubio Perez, Luis Alberto; de Pablos Epalza, Carmen

    2004-02-01

    A new remote sensing application was developed and incorporated into the Agrarian Integrated Information System (S.I.I.A), a project that integrates the regional farming databases from a geographical point of view, adding new value and uses to the original information. The project is supported by the Studies and Statistical Service of the Regional Government Ministry of Agriculture and Fisheries (CAP). The process integrates NDVI values from daily NOAA-AVHRR and monthly IRS-WIFS images with crop class location maps. Local agrarian information and meteorological information are being included in the working process to produce a synergistic effect. An updated crop-growing state evaluation is obtained for 10-day periods by crop class, sensor type (including data fusion), and administrative geographical borders. The crop database for the last ten years (1992-2002) has been organized according to these variables. The crop class database can be accessed by an application that helps users with crop statistical analysis. Multi-temporal and multi-geographical comparative analyses can be done by the user, not only for a single year but also from a historical point of view. Moreover, real-time crop anomalies can be detected and analyzed. Most of the output products will be available on the Internet in the near future through an on-line application.

  3. Software for pest-management science: computer models and databases from the United States Department of Agriculture-Agricultural Research Service.

    PubMed

    Wauchope, R Don; Ahuja, Lajpat R; Arnold, Jeffrey G; Bingner, Ron; Lowrance, Richard; van Genuchten, Martinus T; Adams, Larry D

    2003-01-01

    We present an overview of USDA Agricultural Research Service (ARS) computer models and databases related to pest-management science, emphasizing current developments in environmental risk assessment and management simulation models. The ARS has a unique national interdisciplinary team of researchers in surface and sub-surface hydrology, soil and plant science, systems analysis, and pesticide science, who have networked to develop empirical and mechanistic computer models describing the behavior of pests, pest responses to controls, and the environmental impact of pest-control methods. Historically, much of this work has been in support of production agriculture and of the conservation programs of our 'action agency' sister, the Natural Resources Conservation Service (formerly the Soil Conservation Service). Because we are a public agency, our software and database products are generally offered without cost, unless they are developed in cooperation with a private-sector cooperator. Because ARS is a basic and applied research organization, with development of new science as our highest priority, these products tend to be offered on an 'as-is' basis with limited user support, except within cooperative R&D relationships with other scientists. However, rapid changes in the technology for information analysis and communication continually challenge our way of doing business.

  4. Digital Dental X-ray Database for Caries Screening

    NASA Astrophysics Data System (ADS)

    Rad, Abdolvahab Ehsani; Rahim, Mohd Shafry Mohd; Rehman, Amjad; Saba, Tanzila

    2016-06-01

    A standard database is an essential requirement for comparing the performance of image analysis techniques; hence the main issue in dental image analysis is the lack of an available image database, which is provided in this paper. Periapical dental X-ray images that are suitable for analysis and approved by many dental experts were collected. This type of dental radiograph imaging is common and inexpensive, and is normally used for dental disease diagnosis and abnormality detection. The database contains 120 Periapical X-ray images covering the top and bottom jaws. This digital dental database is constructed to provide a source for researchers to use and compare image analysis techniques and to improve or manipulate the performance of each technique.

  5. An application of a relational database system for high-throughput prediction of elemental compositions from accurate mass values.

    PubMed

    Sakurai, Nozomu; Ara, Takeshi; Kanaya, Shigehiko; Nakamura, Yukiko; Iijima, Yoko; Enomoto, Mitsuo; Motegi, Takeshi; Aoki, Koh; Suzuki, Hideyuki; Shibata, Daisuke

    2013-01-15

    High-accuracy mass values detected by high-resolution mass spectrometry analysis enable prediction of elemental compositions, and thus are used for metabolite annotations in metabolomic studies. Here, we report an application of a relational database to significantly improve the speed of elemental composition predictions. By searching a database of pre-calculated elemental compositions with fixed kinds and numbers of atoms, the approach eliminates redundant evaluations of the same formula that occur in repeated calculations with other tools. When our approach is compared with HR2, which is one of the fastest tools available, our database search times were at least 109 times shorter than those of HR2. When a solid-state drive (SSD) was applied, the search time was 488 times shorter at 5 ppm mass tolerance and 1833 times shorter at 0.1 ppm. Even when the search by HR2 was performed with 8 threads on a high-spec Windows 7 PC, the database search times were at least 26 and 115 times shorter without and with the SSD. These improvements were even more pronounced on a low-spec Windows XP PC. We constructed a web service 'MFSearcher' to query the database in a RESTful manner. Available for free at http://webs2.kazusa.or.jp/mfsearcher. The web service is implemented in Java, MySQL, Apache and Tomcat, with all major browsers supported. sakurai@kazusa.or.jp Supplementary data are available at Bioinformatics online.
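The core idea in this abstract, replacing repeated formula enumeration with a range query over pre-calculated exact masses, can be sketched with an in-memory SQLite table. The three formulas use standard monoisotopic masses; the table schema and ppm-window query are an assumed minimal design, not MFSearcher's actual implementation.

```python
import sqlite3

# Pre-calculated elemental compositions with their monoisotopic masses.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE formulas (formula TEXT, mass REAL)")
conn.executemany(
    "INSERT INTO formulas VALUES (?, ?)",
    [("C6H12O6", 180.06339), ("C6H8O7", 192.02700), ("C9H8O4", 180.04226)],
)
conn.execute("CREATE INDEX idx_mass ON formulas(mass)")  # keeps range scans fast

def search(mass: float, ppm: float = 5.0) -> list[str]:
    """Return formulas whose stored mass lies within +/- ppm of `mass`."""
    tol = mass * ppm / 1e6
    rows = conn.execute(
        "SELECT formula FROM formulas WHERE mass BETWEEN ? AND ?",
        (mass - tol, mass + tol),
    )
    return [r[0] for r in rows]

matches = search(180.0634)  # glucose matches at 5 ppm; aspirin (0.02 Da away) does not
```

Because the mass column is indexed, each query is a single B-tree range scan rather than a fresh combinatorial enumeration of candidate formulas, which is where the reported speedup comes from.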

  6. MIPS PlantsDB: a database framework for comparative plant genome research.

    PubMed

    Nussbaumer, Thomas; Martis, Mihaela M; Roessner, Stephan K; Pfeifer, Matthias; Bader, Kai C; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel

    2013-01-01

    The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt, T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPSPlantsDB-plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834-D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building up on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat) as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB.

  7. MIPS PlantsDB: a database framework for comparative plant genome research

    PubMed Central

    Nussbaumer, Thomas; Martis, Mihaela M.; Roessner, Stephan K.; Pfeifer, Matthias; Bader, Kai C.; Sharma, Sapna; Gundlach, Heidrun; Spannagl, Manuel

    2013-01-01

    The rapidly increasing amount of plant genome (sequence) data enables powerful comparative analyses and integrative approaches and also requires structured and comprehensive information resources. Databases are needed for both model and crop plant organisms and both intuitive search/browse views and comparative genomics tools should communicate the data to researchers and help them interpret it. MIPS PlantsDB (http://mips.helmholtz-muenchen.de/plant/genomes.jsp) was initially described in NAR in 2007 [Spannagl,M., Noubibou,O., Haase,D., Yang,L., Gundlach,H., Hindemitt, T., Klee,K., Haberer,G., Schoof,H. and Mayer,K.F. (2007) MIPSPlantsDB–plant database resource for integrative and comparative plant genome research. Nucleic Acids Res., 35, D834–D840] and was set up from the start to provide data and information resources for individual plant species as well as a framework for integrative and comparative plant genome research. PlantsDB comprises database instances for tomato, Medicago, Arabidopsis, Brachypodium, Sorghum, maize, rice, barley and wheat. Building on that, state-of-the-art comparative genomics tools such as CrowsNest are integrated to visualize and investigate syntenic relationships between monocot genomes. Results from novel genome analysis strategies targeting the complex and repetitive genomes of Triticeae species (wheat and barley) are provided and cross-linked with model species. The MIPS Repeat Element Database (mips-REdat) and Catalog (mips-REcat) as well as tight connections to other databases, e.g. via web services, are further important components of PlantsDB. PMID:23203886

  8. Insights on pumping well interpretation from flow dimension analysis: The learnings of a multi-context field database

    NASA Astrophysics Data System (ADS)

    Ferroud, Anouck; Chesnaux, Romain; Rafini, Silvain

    2018-01-01

    The flow dimension parameter n, derived from the Generalized Radial Flow model, is a valuable tool for investigating the flow regimes that actually occur during a pumping test, rather than assuming them to be radial as postulated by the Theis-derived models. A numerical approach has shown that, when the flow dimension is not radial, using derivative analysis rather than the conventional Theis and Cooper-Jacob methods yields much more accurate estimates of the hydraulic conductivity of the aquifer. Although n has been analysed in numerous studies, including field-based studies, there is a striking lack of knowledge about its occurrence in nature and how it may be related to the hydrogeological setting. This study provides an overview of the occurrence of n in natural aquifers located in various geological contexts, including crystalline rock, carbonate rock and granular aquifers. A comprehensive database was compiled from governmental and industrial sources, based on 69 constant-rate pumping tests. By means of a sequential analysis approach, we systematically performed a flow dimension analysis in which straight segments on drawdown log-derivative time series are interpreted as successive, specific and independent flow regimes. To reduce the uncertainties inherent in the identification of n sequences, we used the proprietary SIREN code to execute a simultaneous dual fit on both the drawdown and the drawdown log-derivative signals. Using this database, we investigate the frequency with which radial and non-radial flow regimes occur in fractured rock and granular aquifers, and provide outcomes that indicate the limited applicability of Theis-derived models in representing nature. The results also emphasize the complexity of hydraulic signatures observed in nature by pointing out n sequential signals and non-integer n values that are frequently observed in the database.
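    The derivative analysis at the core of this approach can be illustrated briefly: in the Generalized Radial Flow model the late-time drawdown log-derivative behaves as t**(1 - n/2), so the slope m of a straight segment on a log-log derivative plot gives n = 2(1 - m). The sketch below applies this relation to synthetic data; it is a simplified illustration, not the SIREN code.

```python
import math

# Estimate the flow dimension n from a straight segment of the drawdown
# log-derivative curve: slope m of log(derivative) vs log(time) gives
# n = 2 * (1 - m).  Synthetic data only.

def fit_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def flow_dimension(times, derivs):
    """Estimate n from times and drawdown log-derivative values."""
    m = fit_slope([math.log10(t) for t in times],
                  [math.log10(d) for d in derivs])
    return 2.0 * (1.0 - m)

# Synthetic straight segment for a linear flow regime (n = 1), where the
# derivative grows as t**0.5.
times = [10.0 * 2 ** k for k in range(8)]
derivs = [0.3 * t ** 0.5 for t in times]
n_est = flow_dimension(times, derivs)  # ~1.0
```

A radial regime would give a flat derivative (slope 0, hence n = 2), which is the special case assumed by the Theis and Cooper-Jacob methods.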

  9. MIPS: a database for genomes and protein sequences

    PubMed Central

    Mewes, H. W.; Frishman, D.; Güldener, U.; Mannhaupt, G.; Mayer, K.; Mokrejs, M.; Morgenstern, B.; Münsterkötter, M.; Rudd, S.; Weil, B.

    2002-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF, Neuherberg, Germany) continues to provide genome-related information in a systematic way. MIPS supports both national and European sequencing and functional analysis projects, develops and maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences, and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the databases for the comprehensive set of genomes (PEDANT genomes), the database of annotated human EST clusters (HIB), the database of complete cDNAs from the DHGP (German Human Genome Project), as well as the project-specific databases for the GABI (Genome Analysis in Plants) and HNB (Helmholtz–Netzwerk Bioinformatik) networks. The Arabidopsis thaliana database (MATDB), the database of mitochondrial proteins (MITOP) and our contribution to the PIR International Protein Sequence Database have been described elsewhere [Schoof et al. (2002) Nucleic Acids Res., 30, 91–93; Scharfe et al. (2000) Nucleic Acids Res., 28, 155–158; Barker et al. (2001) Nucleic Acids Res., 29, 29–32]. All databases described, the protein analysis tools provided and the detailed descriptions of our projects can be accessed through the MIPS World Wide Web server (http://mips.gsf.de). PMID:11752246

  10. MIPS: a database for genomes and protein sequences.

    PubMed

    Mewes, H W; Frishman, D; Güldener, U; Mannhaupt, G; Mayer, K; Mokrejs, M; Morgenstern, B; Münsterkötter, M; Rudd, S; Weil, B

    2002-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF, Neuherberg, Germany) continues to provide genome-related information in a systematic way. MIPS supports both national and European sequencing and functional analysis projects, develops and maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences, and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the databases for the comprehensive set of genomes (PEDANT genomes), the database of annotated human EST clusters (HIB), the database of complete cDNAs from the DHGP (German Human Genome Project), as well as the project-specific databases for the GABI (Genome Analysis in Plants) and HNB (Helmholtz-Netzwerk Bioinformatik) networks. The Arabidopsis thaliana database (MATDB), the database of mitochondrial proteins (MITOP) and our contribution to the PIR International Protein Sequence Database have been described elsewhere [Schoof et al. (2002) Nucleic Acids Res., 30, 91-93; Scharfe et al. (2000) Nucleic Acids Res., 28, 155-158; Barker et al. (2001) Nucleic Acids Res., 29, 29-32]. All databases described, the protein analysis tools provided and the detailed descriptions of our projects can be accessed through the MIPS World Wide Web server (http://mips.gsf.de).

  11. A structured vocabulary for indexing dietary supplements in databases in the United States

    PubMed Central

    Saldanha, Leila G; Dwyer, Johanna T; Holden, Joanne M; Ireland, Jayne D.; Andrews, Karen W; Bailey, Regan L; Gahche, Jaime J.; Hardy, Constance J; Møller, Anders; Pilch, Susan M.; Roseland, Janet M

    2011-01-01

    Food composition databases are critical to assess and plan dietary intakes. Dietary supplement databases are also needed because dietary supplements make significant contributions to total nutrient intakes. However, no uniform system exists for classifying dietary supplement products and indexing their ingredients in such databases. Differing approaches to classifying these products make it difficult to retrieve or link information effectively. A consistent approach to classifying information within food composition databases led to the development of LanguaL™, a structured vocabulary. LanguaL™ is being adapted as an interface tool for classifying and retrieving product information in dietary supplement databases. This paper outlines proposed changes to the LanguaL™ thesaurus for indexing dietary supplement products and ingredients in databases. The choice of 12 of the original 14 LanguaL™ facets pertinent to dietary supplements, modifications to their scopes, and applications are described. The 12 chosen facets are: Product Type; Source; Part of Source; Physical State, Shape or Form; Ingredients; Preservation Method, Packing Medium, Container or Wrapping; Contact Surface; Consumer Group/Dietary Use/Label Claim; Geographic Places and Regions; and Adjunct Characteristics of food. PMID:22611303

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laramore, G.E.; Griffin, B.R.; Spence, A.

    The purpose of this work is to establish and maintain a database for patients from the United States who have received BNCT in Japan for malignant gliomas of the brain. This database will serve as a resource for the DOE to aid in decisions relating to BNCT research in the United States, as well as assisting the design and implementation of clinical trials of BNCT for brain cancer patients in this country. The database will also serve as an information resource for patients with brain tumors and their families who are considering this form of therapy.

  13. Visualization Component of Vehicle Health Decision Support System

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph; Turmon, Michael; Stough, Timothy; Siegel, Herbert; Walter, Patrick; Kurt, Cindy

    2008-01-01

    The visualization front-end of a Decision Support System (DSS) provides a graphical, system-level view of vehicle health; the complete DSS also includes an analysis engine linked to vehicle telemetry and a database of learned models for known behaviors. Because the display is graphical rather than text-based, the summarization it provides has a greater information density on one screen for evaluation by a flight controller. This tool provides a system-level visualization of the state of a vehicle, with drill-down capability for more details and interfaces to separate analysis algorithms and sensor data streams. The system-level view is a 3D rendering of the vehicle, with sensors represented as icons tied to appropriate positions within the vehicle body and colored to indicate sensor state (e.g., normal, warning, anomalous). The sensor data are received via an Information Sharing Protocol (ISP) client that connects to an external server for real-time telemetry. Users can interactively pan, zoom, and rotate this 3D view, as well as select sensors for a detail plot of the associated time series data. Subsets of the plotted data can be selected and sent to an external analysis engine, either to search for a similar time series in an historical database or to detect anomalous events. The system overview and plotting capabilities are completely general in that they can be applied to any vehicle instrumented with a collection of sensors. This visualization component can interface with the ISP for data streams used by NASA's Mission Control Center at Johnson Space Center. In addition, it can connect to, and display results from, separate analysis engine components that identify anomalies or that search for past instances of similar behavior. This software supports NASA's Software, Intelligent Systems, and Modeling element in the Exploration Systems Research and Technology Program by augmenting the capability of human flight controllers to make correct decisions, thus increasing safety and reliability. It was designed specifically as a tool for NASA's flight controllers to monitor the International Space Station and a future Crew Exploration Vehicle.

  14. Influence of Deployment on the Use of E-Cigarettes in the United States Army and Air Force

    DTIC Science & Technology

    2018-03-22

    the "Tobacco Use Among Service Members" survey sponsored by the Murtha Cancer Center and the Postgraduate Dental School of the Uniformed Services...the study period, and were willing to complete the survey. The survey was voluntary and anonymous; no personally identifiable information was...collected about participants. Statistical analysis of the data obtained from this survey database was performed using SAS. The independent variables were

  15. Integration of Sustainable Practices into Standard Army MILCON Designs

    DTIC Science & Technology

    2011-09-01

    Sustainable Installations Regional Resource Assessment (SIRRA™) web-based database analysis tool output, ERDC-CERL, 2010. 18 Roy, Sujoy, B. L. Chen, E...Water issues white paper. ERDC/CERL TR-11-27 25 streams. This policy states that in the absence of other flow limits as established by the...89 Note that flush quality in some efficient toilets is undermined by non-flushable recyclable toilet paper that builds up in the

  16. TRICARE Applied Behavior Analysis (ABA) Benefit

    PubMed Central

    Maglione, Margaret; Kadiyala, Srikanth; Kress, Amii; Hastings, Jaime L.; O'Hanlon, Claire E.

    2017-01-01

    This study compared the Applied Behavior Analysis (ABA) benefit provided by TRICARE as an early intervention for autism spectrum disorder with similar benefits in Medicaid and commercial health insurance plans. The sponsor, the Office of the Under Secretary of Defense for Personnel and Readiness, was particularly interested in how a proposed TRICARE reimbursement rate decrease from $125 per hour to $68 per hour for ABA services performed by a Board Certified Behavior Analyst compared with reimbursement rates (defined as third-party payment to the service provider) in Medicaid and commercial health insurance plans. Information on ABA coverage in state Medicaid programs was collected from Medicaid state waiver databases; subsequently, Medicaid provider reimbursement data were collected from state Medicaid fee schedules. Applied Behavior Analysis provider reimbursement in the commercial health insurance system was estimated using Truven Health MarketScan® data. A weighted mean U.S. reimbursement rate was calculated for several services using cross-state information on the number of children diagnosed with autism spectrum disorder. Locations of potential provider shortages were also identified. Medicaid and commercial insurance reimbursement rates varied considerably across the United States. This project concluded that the proposed $68-per-hour reimbursement rate for services provided by a board certified analyst was more than 25 percent below the U.S. mean. PMID:28845348

  17. Database-Centric Method for Automated High-Throughput Deconvolution and Analysis of Kinetic Antibody Screening Data.

    PubMed

    Nobrega, R Paul; Brown, Michael; Williams, Cody; Sumner, Chris; Estep, Patricia; Caffry, Isabelle; Yu, Yao; Lynaugh, Heather; Burnina, Irina; Lilov, Asparouh; Desroches, Jordan; Bukowski, John; Sun, Tingwan; Belk, Jonathan P; Johnson, Kirt; Xu, Yingda

    2017-10-01

    The state-of-the-art industrial drug discovery approach is the empirical interrogation of a library of drug candidates against a target molecule. The advantage of high-throughput kinetic measurements over equilibrium assessments is the ability to measure each of the kinetic components of binding affinity. Although high-throughput capabilities have improved with advances in instrument hardware, three bottlenecks in data processing remain: (1) intrinsic molecular properties that lead to poor biophysical quality in vitro are not accounted for in commercially available analysis models, (2) processing data through a user interface is time-consuming and not amenable to parallelized data collection, and (3) a commercial solution that includes historical kinetic data in the analysis of kinetic competition data does not exist. Herein, we describe a generally applicable method for the automated analysis, storage, and retrieval of kinetic binding data. This analysis can deconvolve poor quality data on-the-fly and store and organize historical data in a queryable format for use in future analyses. Such database-centric strategies afford greater insight into the molecular mechanisms of kinetic competition, allowing for the rapid identification of allosteric effectors and the presentation of kinetic competition data in absolute terms of percent bound to antigen on the biosensor.
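    The database-centric idea can be sketched minimally: store kinetic constants in a queryable table so historical measurements can be retrieved and ranked by the derived affinity KD = koff/kon. The schema, names and values below are invented for illustration and are not the authors' actual implementation.

```python
import sqlite3

# A queryable store for kinetic binding data (illustrative schema):
# each row holds one measurement of association (k_on) and dissociation
# (k_off) rate constants for an antibody against an antigen.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE kinetics (
    antibody TEXT, antigen TEXT, k_on REAL, k_off REAL)""")
conn.executemany("INSERT INTO kinetics VALUES (?, ?, ?, ?)", [
    ("mAb-001", "antigen-X", 1.2e5, 3.0e-4),
    ("mAb-002", "antigen-X", 8.0e4, 1.0e-3),
    ("mAb-003", "antigen-X", 5.0e5, 2.0e-5),
])

# Rank historical measurements by affinity: tightest binder (lowest
# KD = k_off / k_on) first.
best = conn.execute("""
    SELECT antibody, k_off / k_on AS kd
    FROM kinetics
    WHERE antigen = 'antigen-X'
    ORDER BY kd""").fetchall()
```

Once the data live in such a store, the "historical data in future analyses" step reduces to ordinary SQL queries rather than re-processing raw sensorgrams through a user interface.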

  18. QuakeSim 2.0

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay W.; Lyzenga, Gregory A.; Granat, Robert A.; Norton, Charles D.; Rundle, John B.; Pierce, Marlon E.; Fox, Geoffrey C.; McLeod, Dennis; Ludwig, Lisa Grant

    2012-01-01

    QuakeSim 2.0 improves understanding of earthquake processes by providing modeling tools and integrating model applications and various heterogeneous data sources within a Web services environment. QuakeSim is a multisource, synergistic, data-intensive environment for modeling the behavior of earthquake faults individually, and as part of complex interacting systems. Remotely sensed geodetic data products may be explored, compared with faults and landscape features, mined by pattern analysis applications, and integrated with models and pattern analysis applications in a rich Web-based and visualization environment. Integration of heterogeneous data products with pattern informatics tools enables efficient development of models. Federated database components and visualization tools allow rapid exploration of large datasets, while pattern informatics enables identification of subtle, but important, features in large data sets. QuakeSim is valuable for earthquake investigations and modeling in its current state, and also serves as a prototype and nucleus for broader systems under development. The framework provides access to physics-based simulation tools that model the earthquake cycle and related crustal deformation. Spaceborne GPS and Interferometric Synthetic Aperture Radar (InSAR) data provide information on near-term crustal deformation, while paleoseismic geologic data provide longer-term information on earthquake fault processes. These data sources are integrated into QuakeSim's QuakeTables database system, and are accessible by users or various model applications. UAVSAR repeat pass interferometry data products are added to the QuakeTables database, and are available through a browseable map interface or Representational State Transfer (REST) interfaces. Model applications can retrieve data from QuakeTables, or from third-party GPS velocity data services; alternatively, users can manually input parameters into the models.
Pattern analysis of GPS and seismicity data has proved useful for mid-term forecasting of earthquakes, and for detecting subtle changes in crustal deformation. The GPS time series analysis has also proved useful as a data-quality tool, enabling the discovery of station anomalies and data processing and distribution errors. Improved visualization tools enable more efficient data exploration and understanding. Tools provide flexibility to science users for exploring data in new ways through download links, but also facilitate standard, intuitive, and routine uses for science users and end users such as emergency responders.

  19. A structured vocabulary for indexing dietary supplements in databases in the United States

    USDA-ARS?s Scientific Manuscript database

    Food composition databases are critical to assess and plan dietary intakes. Dietary supplement databases are also needed because dietary supplements make significant contributions to total nutrient intakes. However, no uniform system exists for classifying dietary supplement products and indexing ...

  20. DNA profiles, computer searches, and the Fourth Amendment.

    PubMed

    Kimel, Catherine W

    2013-01-01

    Pursuant to federal statutes and to laws in all fifty states, the United States government has assembled a database containing the DNA profiles of over eleven million citizens. Without judicial authorization, the government searches each of these profiles one-hundred thousand times every day, seeking to link database subjects to crimes they are not suspected of committing. Yet, courts and scholars that have addressed DNA databasing have focused their attention almost exclusively on the constitutionality of the government's seizure of the biological samples from which the profiles are generated. This Note fills a gap in the scholarship by examining the Fourth Amendment problems that arise when the government searches its vast DNA database. This Note argues that each attempt to match two DNA profiles constitutes a Fourth Amendment search because each attempted match infringes upon database subjects' expectations of privacy in their biological relationships and physical movements. The Note further argues that database searches are unreasonable as they are currently conducted, and it suggests an adaptation of computer-search procedures to remedy the constitutional deficiency.

  1. SAGEMAP: A web-based spatial dataset for sage grouse and sagebrush steppe management in the Intermountain West

    USGS Publications Warehouse

    Knick, Steven T.; Schueck, Linda

    2002-01-01

    The Snake River Field Station of the Forest and Rangeland Ecosystem Science Center has developed and now maintains a database of the spatial information needed to address management of sage grouse and sagebrush steppe habitats in the western United States. The SAGEMAP project identifies and collects information for the region encompassing the historical extent of sage grouse distribution. State and federal agencies, the primary entities responsible for managing sage grouse and their habitats, need the information to develop an objective assessment of the current status of sage grouse populations and their habitats, or to provide responses and recommendations for recovery if sage grouse are listed as a Threatened or Endangered Species. The spatial data on the SAGEMAP website (http://SAGEMAP.wr.usgs.gov) are an important component in documenting current habitat and other environmental conditions. In addition, the data can be used to identify areas that have undergone significant changes in land cover and to determine underlying causes. As such, the database permits an analysis for large-scale and range-wide factors that may be causing declines of sage grouse populations. The spatial data contained on this site also will be a critical component guiding the decision processes for restoration of habitats in the Great Basin. Therefore, development of this database and the capability to disseminate the information carries multiple benefits for land and wildlife management.

  2. United States Historical Climatology Network (US HCN) monthly temperature and precipitation data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniels, R.C.; Boden, T.A.; Easterling, D.R.

    1996-01-11

    This document describes a database containing monthly temperature and precipitation data for 1221 stations in the contiguous United States. This network of stations, known as the United States Historical Climatology Network (US HCN), and the resulting database were compiled by the National Climatic Data Center, Asheville, North Carolina. These data represent the best available data from the United States for analyzing long-term climate trends on a regional scale. The data for most stations extend through December 31, 1994, and a majority of the station records are serially complete for at least 80 years. Unlike many data sets that have been used in past climate studies, these data have been adjusted to remove biases introduced by station moves, instrument changes, time-of-observation differences, and urbanization effects. These monthly data are available free of charge as a numeric data package (NDP) from the Carbon Dioxide Information Analysis Center. The NDP includes this document and 27 machine-readable data files consisting of supporting data files, a descriptive file, and computer access codes. This document describes how the stations in the US HCN were selected and how the data were processed, defines limitations and restrictions of the data, describes the format and contents of the magnetic media, and provides reprints of literature that discuss the editing and adjustment techniques used in the US HCN.

  3. Rapid in silico cloning of genes using expressed sequence tags (ESTs).

    PubMed

    Gill, R W; Sanseau, P

    2000-01-01

    Expressed sequence tags (ESTs) are short single-pass DNA sequences obtained from either end of cDNA clones. These ESTs are derived from a vast number of cDNA libraries obtained from different species. Human ESTs make up the bulk of the data and have been widely used to identify new members of gene families, to serve as markers on the human chromosomes, to discover polymorphism sites and to compare expression patterns in different tissues or pathological states. Informatics strategies have been devised to query EST databases. Since most of the analysis is performed with a computer, the term "in silico" strategy has been coined. In this chapter we review the current status of EST databases, the pros and cons of EST-type data, and possible strategies to retrieve meaningful information.

  4. Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals

    PubMed Central

    Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A.; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A.; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R.R.; Kasukawa, Takeya; Kawaji, Hideya

    2017-01-01

    Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we expect the scientific community will benefit from this recent update in deepening their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives. PMID:27794045

  5. The Ocean Carbon States Database: A Proof-of-Concept Application of Cluster Analysis in the Ocean Carbon Cycle

    NASA Technical Reports Server (NTRS)

    Latto, Rebecca; Romanou, Anastasia

    2018-01-01

    In this paper, we present a database of the basic regimes of the carbon cycle in the ocean, the 'ocean carbon states', as obtained using a data mining/pattern recognition technique in observation-based as well as model data. The goal of this study is to establish a new data analysis methodology, test it and assess its utility in providing more insights into the regional and temporal variability of the marine carbon cycle. This is important as advanced data mining techniques are becoming widely used in climate and Earth sciences and in particular in studies of the global carbon cycle, where the interaction of physical and biogeochemical drivers confounds our ability to accurately describe, understand, and predict CO2 concentrations and their changes in the major planetary carbon reservoirs. In this proof-of-concept study, we focus on using well-understood data that are based on observations, as well as model results from the NASA Goddard Institute for Space Studies (GISS) climate model. Our analysis shows that ocean carbon states are associated with the subtropical-subpolar gyre during the colder months of the year and the tropics during the warmer season in the North Atlantic basin. Conversely, in the Southern Ocean, the ocean carbon states can be associated with the subtropical and Antarctic convergence zones in the warmer season and the coastal Antarctic divergence zone in the colder season. With respect to model evaluation, we find that the GISS model reproduces the cold and warm season regimes more skillfully in the North Atlantic than in the Southern Ocean and matches the observed seasonality better than the spatial distribution of the regimes. Finally, the ocean carbon states provide useful information in the model error attribution. Model air-sea CO2 flux biases in the North Atlantic stem from wind speed and salinity biases in the subpolar region and nutrient and wind speed biases in the subtropics and tropics. 
Nutrient biases are shown to be most important in the Southern Ocean flux bias.
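    The cluster-analysis approach underlying the 'ocean carbon states' can be illustrated with a toy k-means run on two synthetic regimes; this is a generic sketch of the data mining technique, not the authors' algorithm, variables, or data.

```python
# Minimal k-means on 2-D points standing in for, e.g., temperature and
# pCO2 anomalies: each recovered cluster plays the role of a "state".

def kmeans(points, k, iters=50):
    # Naive deterministic seeding (first k points) for reproducibility.
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            j = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2
                                          + (p[1] - centers[i][1]) ** 2)
            clusters[j].append(p)
        # recompute each center as the mean of its cluster
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Two well-separated synthetic regimes of 10 points each.
pts = ([(0.1 * i, 0.1 * i) for i in range(10)]
       + [(5 + 0.1 * i, 5 + 0.1 * i) for i in range(10)])
centers, clusters = kmeans(pts, 2)
sizes = sorted(len(c) for c in clusters)  # both regimes recovered
```

In the study's setting, the cluster labels (rather than synthetic blobs) would then be mapped back onto space and season to interpret the regimes physically.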

  6. 42 CFR 488.68 - State Agency responsibilities for OASIS collection and data base requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operating the OASIS system: (a) Establish and maintain an OASIS database. The State agency or other entity designated by CMS must— (1) Use a standard system developed or approved by CMS to collect, store, and analyze..., system back-up, and monitoring the status of the database; and (3) Obtain CMS approval before modifying...

  7. Alternative method to validate the seasonal land cover regions of the conterminous United States

    Treesearch

    Zhiliang Zhu; Donald O. Ohlen; Raymond L. Czaplewski; Robert E. Burgan

    1996-01-01

    An accuracy assessment method involving double sampling and the multivariate composite estimator has been used to validate the prototype seasonal land cover characteristics database of the conterminous United States. The database consists of 159 land cover classes, classified using time series of 1990 1-km satellite data and augmented with ancillary data including...

  8. Information Retrieval Center: A Proposal for the Implementation of CD-ROM Database Technology at Memphis State University Libraries.

    ERIC Educational Resources Information Center

    Evans, John; Park, Betsy

    This planning proposal recommends that Memphis State University Libraries make information on CD-ROM (compact disc--read only memory) available in the Reference Department by establishing an Information Retrieval Center (IRC). Following a brief introduction and statement of purpose, the library's databases, users, staffing, facilities, and…

  9. A Quantitative Analysis of the Extrinsic and Intrinsic Turnover Factors of Relational Database Support Professionals

    ERIC Educational Resources Information Center

    Takusi, Gabriel Samuto

    2010-01-01

    This quantitative analysis explored the intrinsic and extrinsic turnover factors of relational database support specialists. Two hundred and nine relational database support specialists were surveyed for this research. The research was conducted based on Hackman and Oldham's (1980) Job Diagnostic Survey. Regression analysis and a univariate ANOVA…

  10. 77 FR 12234 - Changes in Hydric Soils Database Selection Criteria

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-29

    ... Conservation Service [Docket No. NRCS-2011-0026] Changes in Hydric Soils Database Selection Criteria AGENCY... Changes to the National Soil Information System (NASIS) Database Selection Criteria for Hydric Soils of the United States. SUMMARY: The National Technical Committee for Hydric Soils (NTCHS) has updated the...

  11. Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.

    ERIC Educational Resources Information Center

    Gutmann, Myron P.; And Others

    1989-01-01

    Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)

  12. 76 FR 77504 - Notice of Submission for OMB Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... of Review: Extension. Title of Collection: Charter Schools Program Grant Award Database. OMB Control... collect data necessary for the Charter Schools Program (CSP) Grant Award Database. The CSP is authorized... award information from grantees (State agencies and some schools) for a database of current CSP-funded...

  13. XML technology planning database : lessons learned

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

    A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.
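The abstract describes an XML dictionary that relates mission requirements to technology capabilities. As a rough illustration only, the sketch below indexes such a dictionary with Python's standard library; the element names, attributes, and the `trl` field are hypothetical, not taken from XCALIBR.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment of an XML dictionary relating one mission requirement
# to candidate technology capabilities (all names here are illustrative).
doc = """
<dictionary>
  <requirement id="thermal-control" subsystem="thermal">
    <capability ref="loop-heat-pipe" trl="6"/>
    <capability ref="variable-emittance-coating" trl="4"/>
  </requirement>
</dictionary>
"""

root = ET.fromstring(doc)
# Index capabilities by requirement id, as a planning-database front end might.
index = {
    req.get("id"): [(c.get("ref"), int(c.get("trl")))
                    for c in req.findall("capability")]
    for req in root.findall("requirement")
}
```

A front end could then answer "which capabilities satisfy this requirement?" with a single dictionary lookup, which is the kind of query a taxonomy (as opposed to a richer ontology) supports directly.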

  14. Application Analysis and Decision with Dynamic Analysis

    DTIC Science & Technology

    2014-12-01

    pushes the application file and the JSON file containing the metadata from the database. When the 2 files are in place, the consumer thread starts ... human analysts and stores it in a database. It would then use some of these data to generate a risk score for the application. However, static analysis ... and store them in the primary A2D database for future analysis. Subject terms: Android, dynamic analysis.

  15. Digital Mapping Techniques '07 - Workshop Proceedings

    USGS Publications Warehouse

    Soller, David R.

    2008-01-01

    The Digital Mapping Techniques '07 (DMT'07) workshop was attended by 85 technical experts from 49 agencies, universities, and private companies, including representatives from 27 state geological surveys. This year's meeting, the tenth in the annual series, was hosted by the South Carolina Geological Survey, from May 20-23, 2007, on the University of South Carolina campus in Columbia, South Carolina. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and techniques, including the use of LIDAR; 3) digital cartographic techniques; 4) migration of digital maps into ArcGIS Geodatabase format; 5) analytical GIS techniques; and 6) continued development of the National Geologic Map Database.

  16. The mini mental state examination at the time of Alzheimer's disease and related disorders diagnosis, according to age, education, gender and place of residence: a cross-sectional study among the French National Alzheimer database.

    PubMed

    Pradier, Christian; Sakarovitch, Charlotte; Le Duff, Franck; Layese, Richard; Metelkina, Asya; Anthony, Sabine; Tifratene, Karim; Robert, Philippe

    2014-01-01

    The aim of this study was first to describe the MMSE (Mini-Mental State Examination) score upon initial diagnosis of Alzheimer's disease and related disorders among the French population, according to age. Second, education, gender and place of residence were studied as factors potentially associated with delayed Alzheimer's disease diagnosis. We conducted a cross-sectional analysis of the French National Alzheimer database (BNA). Data from 2008 to 2012 were extracted. Patients were selected at the moment of their first diagnosis of AD (n = 39,451). The MMSE score at initial diagnosis dropped significantly with increasing age. The test score increased with the degree of educational background regardless of age. Gender and place of residence were significantly related to the MMSE score, women and persons living in medical institutions having lower MMSE scores under the age of 90 years and at all educational levels. Health care professionals should be aware of these risk factors in order to maximize chances of earliest possible diagnosis of Alzheimer's disease and related disorders.

  17. GRAFLAB 2.3 for UNIX - A MATLAB database, plotting, and analysis tool: User's guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunn, W.N.

    1998-03-01

    This report is a user's manual for GRAFLAB, which is a new database, analysis, and plotting package that has been written entirely in the MATLAB programming language. GRAFLAB is currently used for data reduction, analysis, and archival. GRAFLAB was written to replace GRAFAID, which is a FORTRAN database, analysis, and plotting package that runs on VAX/VMS.

  18. Food Service Guideline Policies on State Government Controlled Properties

    PubMed Central

    Zaganjor, Hatidza; Bishop Kendrick, Katherine; Warnock, Amy Lowry; Onufrak, Stephen; Whitsel, Laurie P.; Ralston Aoki, Julie; Kimmons, Joel

    2017-01-01

    Purpose Food service guidelines (FSG) policies can impact millions of daily meals sold or provided to government employees, patrons, and institutionalized persons. This study describes a classification tool to assess FSG policy attributes and uses it to rate FSG policies. Design Quantitative content analysis. Setting State government facilities in the U.S. Subjects 50 states and District of Columbia. Measures Frequency of FSG policies and percent alignment to tool. Analysis State-level policies were identified using legal research databases to assess bills, statutes, regulations, and executive orders proposed or adopted by December 31, 2014. Full-text reviews were conducted to determine inclusion. Included policies were analyzed to assess attributes related to nutrition, behavioral supports, and implementation guidance. Results A total of 31 policies met inclusion criteria; 15 were adopted. Overall alignment ranged from 0% to 86%, and only 10 policies aligned with a majority of FSG policy attributes. Western states had the most FSG policies proposed or adopted (11 policies). The greatest number of FSG policies were proposed or adopted (8 policies) in 2011, followed by the years 2013 and 2014. Conclusion FSG policies proposed or adopted through 2014 that intended to improve the food and beverage environment on state government property vary considerably in their content. This analysis offers baseline data on the FSG landscape and information for future FSG policy assessments. PMID:27630113
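The study's "percent alignment" measure is the share of checklist attributes a policy addresses. A minimal sketch of that arithmetic is below; the attribute names are hypothetical placeholders, since the actual classification tool's attribute list is not reproduced in the abstract.

```python
# Hypothetical attribute checklist; the real tool's attributes differ.
FSG_ATTRIBUTES = [
    "sodium_standard",
    "fruit_vegetable_access",
    "water_availability",
    "pricing_incentives",
    "implementation_guidance",
]

def percent_alignment(policy_attributes):
    """Percentage of checklist attributes that a policy addresses."""
    matched = sum(1 for a in FSG_ATTRIBUTES if a in policy_attributes)
    return 100.0 * matched / len(FSG_ATTRIBUTES)
```

Under this scoring, a policy addressing two of the five placeholder attributes would rate 40% alignment; the paper's reported range of 0% to 86% reflects the same kind of ratio over its own attribute set.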

  19. The Ocean Carbon States Database: a proof-of-concept application of cluster analysis in the ocean carbon cycle

    NASA Astrophysics Data System (ADS)

    Latto, Rebecca; Romanou, Anastasia

    2018-03-01

    In this paper, we present a database of the basic regimes of the carbon cycle in the ocean, the ocean carbon states, as obtained using a data mining/pattern recognition technique in observation-based as well as model data. The goal of this study is to establish a new data analysis methodology, test it and assess its utility in providing more insights into the regional and temporal variability of the marine carbon cycle. This is important as advanced data mining techniques are becoming widely used in climate and Earth sciences and in particular in studies of the global carbon cycle, where the interaction of physical and biogeochemical drivers confounds our ability to accurately describe, understand, and predict CO2 concentrations and their changes in the major planetary carbon reservoirs. In this proof-of-concept study, we focus on using well-understood data that are based on observations, as well as model results from the NASA Goddard Institute for Space Studies (GISS) climate model. Our analysis shows that ocean carbon states are associated with the subtropical-subpolar gyre during the colder months of the year and the tropics during the warmer season in the North Atlantic basin. Conversely, in the Southern Ocean, the ocean carbon states can be associated with the subtropical and Antarctic convergence zones in the warmer season and the coastal Antarctic divergence zone in the colder season. With respect to model evaluation, we find that the GISS model reproduces the cold and warm season regimes more skillfully in the North Atlantic than in the Southern Ocean and matches the observed seasonality better than the spatial distribution of the regimes. Finally, the ocean carbon states provide useful information in the model error attribution. Model air-sea CO2 flux biases in the North Atlantic stem from wind speed and salinity biases in the subpolar region and nutrient and wind speed biases in the subtropics and tropics. 
Nutrient biases are shown to be most important in the Southern Ocean flux bias. All data and analysis scripts are available at https://data.giss.nasa.gov/oceans/carbonstates/ (DOI: https://doi.org/10.5281/zenodo.996891).
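The abstract describes deriving "ocean carbon states" with a data mining/pattern recognition technique applied to observation and model fields. As one common choice for such regime extraction, a plain k-means clustering sketch is shown below on toy 2-D feature vectors; the specific algorithm, features, and data here are assumptions for illustration, not the paper's exact method.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on 2-D points; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # distinct starting centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center (squared Euclidean)
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:                      # keep old center if a cluster empties
                centers[c] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, clusters

# Toy feature vectors standing in for, e.g., paired physical/biogeochemical
# anomalies at grid points; two regimes are planted deliberately.
obs = [(0.0, 0.0), (0.5, 0.2), (0.1, 0.4),
       (10.0, 10.0), (10.2, 9.8), (9.9, 10.1)]
centers, clusters = kmeans(obs, 2)
```

Each recovered cluster plays the role of one "carbon state": a recurring combination of drivers whose spatial and seasonal footprint can then be mapped and compared between observations and the model.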

  20. Business Faculty Research: Satisfaction with the Web versus Library Databases

    ERIC Educational Resources Information Center

    Dewald, Nancy H.; Silvius, Matthew A.

    2005-01-01

    Business faculty members teaching at undergraduate campuses of the Pennsylvania State University were surveyed in order to assess their satisfaction with free Web sources and with subscription databases for their professional research. Although satisfaction with the Web's ease of use was higher than that for databases, overall satisfaction for…

  1. Database Software Selection for the Egyptian National STI Network.

    ERIC Educational Resources Information Center

    Slamecka, Vladimir

    The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical Information (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…

  2. 77 FR 21618 - 60-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... DEPARTMENT OF STATE [Public Notice 7843] 60-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096... Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB Control Number: 1405-0168...

  3. 77 FR 34211 - Modification of Multiple Compulsory Reporting Points; Continental United States, Alaska and Hawaii

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-11

    ... previously updated in the FAA aeronautical database without accompanying regulatory action being taken. The... to ensure it matches the information contained in the FAA's aeronautical database and to ensure the... position information contained in the FAA's aeronautical database for the reporting points. When these...

  4. 77 FR 47690 - 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... DEPARTMENT OF STATE [Public Notice 7976] 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096.... Title of Information Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB...

  5. Correlates of Access to Business Research Databases

    ERIC Educational Resources Information Center

    Gottfried, John C.

    2010-01-01

    This study examines potential correlates of business research database access through academic libraries serving top business programs in the United States. Results indicate that greater access to research databases is related to enrollment in graduate business programs, but not to overall enrollment or status as a public or private institution.…

  6. Correction to Storm, Tressoldi, and Di Risio (2010).

    PubMed

    2015-03-01

    Reports an error in "Meta-analysis of free-response studies, 1992-2008: Assessing the noise reduction model in parapsychology" by Lance Storm, Patrizio E. Tressoldi and Lorenzo Di Risio (Psychological Bulletin, 2010[Jul], Vol 136[4], 471-485). In the article, the sentence giving the formula in the second paragraph on p. 479 was stated incorrectly. The corrected sentence is included. (The following abstract of the original article appeared in record 2010-12718-001.) [Correction Notice: An erratum for this article was reported in Vol 136(5) of Psychological Bulletin (see record 2010-17510-009). In the article, the second to last sentence of the abstract (p. 471) was stated incorrectly. The sentence should read as follows: "The mean effect size value of the ganzfeld database was significantly higher than the mean effect size of the standard free-response database but was not higher than the effect size of the nonganzfeld noise reduction database."] We report the results of meta-analyses on 3 types of free-response study: (a) ganzfeld (a technique that enhances a communication anomaly referred to as "psi"); (b) nonganzfeld noise reduction using alleged psi-enhancing techniques such as dream psi, meditation, relaxation, or hypnosis; and (c) standard free response (nonganzfeld, no noise reduction). For the period 1997-2008, a homogeneous data set of 29 ganzfeld studies yielded a mean effect size of 0.142 (Stouffer Z = 5.48, p = 2.13 × 10^-8). A homogeneous nonganzfeld noise reduction data set of 16 studies yielded a mean effect size of 0.110 (Stouffer Z = 3.35, p = 2.08 × 10^-4), and a homogeneous data set of 14 standard free-response studies produced a weak negative mean effect size of -0.029 (Stouffer Z = -2.29, p = .989). The mean effect size value of the ganzfeld database was significantly higher than the mean effect size of the nonganzfeld noise reduction and the standard free-response databases. 
We also found that selected participants (believers in the paranormal, meditators, etc.) had a performance advantage over unselected participants, but only if they were in the ganzfeld condition. PsycINFO Database Record (c) 2015 APA, all rights reserved.
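The abstract reports combined Stouffer Z values and one-tailed p-values across study sets. A minimal sketch of the unweighted Stouffer combination is given below; the paper's exact weighting and effect-size-to-z conversion are not specified in the abstract, so this is the textbook form only.

```python
import math

def stouffer_z(z_scores):
    """Unweighted Stouffer combination of per-study z scores:
    Z = sum(z_i) / sqrt(k)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

def one_tailed_p(z):
    """Upper-tail p-value of a combined Z under the standard normal,
    via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))
```

For example, four studies each contributing z = 1.5 combine to Z = 6.0 / 2 = 3.0, and a combined Z of 5.48 (as reported for the ganzfeld set) corresponds to a one-tailed p on the order of 10^-8, matching the magnitude quoted in the abstract.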

  7. Analysis of benzonatate overdoses among adults and children from 1969-2010 by the United States Food and Drug Administration.

    PubMed

    McLawhorn, Melinda W; Goulding, Margie R; Gill, Rajdeep K; Michele, Theresa M

    2013-01-01

    To augment the December 2010 United States Food and Drug Administration (FDA) Drug Safety Communication on accidental ingestion of benzonatate in children less than 10 years old by summarizing data on emergency department visits, benzonatate exposure, and reports of benzonatate overdoses from several data sources. Retrospective review of adverse-event reports and drug utilization data of benzonatate. The FDA Adverse Event Reporting System (AERS) database (1969-2010), the National Electronic Injury Surveillance System-Cooperative Adverse Drug Event Surveillance Project (NEISS-CADES, 2004-2009), and the IMS commercial data vendor (2004-2009). Any patient who reported an adverse event with benzonatate captured in the AERS or NEISS-CADES database or received a prescription for benzonatate according to the IMS commercial data vendor. Postmarketing adverse events with benzonatate were collected from the AERS database, emergency department visits due to adverse events with benzonatate were collected from the NEISS-CADES database, and outpatient drug utilization data were collected from the IMS commercial data vendor. Of 31 overdose cases involving benzonatate reported in the AERS database, 20 had a fatal outcome, and five of these fatalities occurred from accidental ingestions in children 2 years of age and younger. The NEISS-CADES database captured emergency department visits involving 12 cases of overdose from accidental benzonatate ingestions in children aged 1-3 years. Signs and symptoms of overdose included seizures, cardiac arrest, coma, brain edema or anoxic encephalopathy, apnea, tachycardia, and respiratory arrest and occurred in some patients within 15 minutes of ingestion. Dispensed benzonatate prescriptions increased by approximately 52% from 2004 to 2009. 
Although benzonatate has a long history of safe use, accumulating cases of fatal overdose, especially in children, prompted the FDA to notify health care professionals about the risks of benzonatate overdose. Pharmacists may have a role in preventing benzonatate overdoses by counseling patients on signs and symptoms of benzonatate overdose, the need for immediate medical care, and safe storage and disposal of benzonatate. © 2013 Pharmacotherapy Publications, Inc.

  8. National Assessment of Oil and Gas Project: Areas of Historical Oil and Gas Exploration and Production in the United States

    USGS Publications Warehouse

    Biewick, Laura

    2008-01-01

    This report contains maps and associated spatial data showing historical oil and gas exploration and production in the United States. Because of the proprietary nature of many oil and gas well databases, the United States was divided into cells of one-quarter square mile each, and the production status of all wells in a given cell was aggregated. Base-map reference data are included, using the U.S. Geological Survey (USGS) National Map, the USGS and American Geological Institute (AGI) Global GIS, and a World Shaded Relief map service from the ESRI Geography Network. A hardcopy map was created to synthesize recorded exploration data from 1859, when the first oil well was drilled in the U.S., to 2005. In addition to the hardcopy map product, the data have been refined and made more accessible through the use of Geographic Information System (GIS) tools. The cell data are included in a GIS database constructed for spatial analysis via the USGS Internet Map Service or by importing the data into GIS software such as ArcGIS. The USGS internet map service provides a number of useful and sophisticated geoprocessing and cartographic functions via an internet browser. Also included is a video clip of U.S. oil and gas exploration and production through time.

  9. FORAST Database: Forest Responses to Anthropogenic Stress (FORAST)

    DOE Data Explorer

    McLaughlin, S. B. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Downing, D. J. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Blasing, T. J. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Jackson, B. L. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Pack, D. J. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Duvick, D. N. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Mann, L. K. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Doyle, T. W. [ESD, Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)

    1995-01-01

    The Forest Responses to Anthropogenic Stress (FORAST) project was designed to determine whether evidence of alterations of long-term growth patterns of several species of eastern forest trees was apparent in tree-ring chronologies from within the region and to identify environmental variables that were temporally or spatially correlated with any observed changes. The project was supported principally by the U.S. Environmental Protection Agency (EPA) with additional support from the National Park Service. The FORAST project was initiated in 1982 as exploratory research to document patterns of radial growth of forest trees during the previous 50 or more years within 15 states in the northeastern United States. Radial growth measurements from more than 7,000 trees are provided along with data on a variety of measured and calculated indices of stand characteristics (basal area, density, and competitive indices); climate (temperature, precipitation, and drought); and anthropogenic pollutants (state and regional emissions of SO2 and NOX, ozone monitoring data, and frequency of atmospheric-stagnation episodes and atmospheric haze). These data were compiled into a single database to facilitate exploratory analysis of tree growth patterns and responses to local and regional environmental conditions. The project objectives, experimental design, and documentation of procedures for assessing data collected in the 3-year research project are reported in McLaughlin et al. (1986).

  10. Completion of the 2011 National Land Cover Database for the conterminous United States – Representing a decade of land cover change information

    USGS Publications Warehouse

    Homer, Collin G.; Dewitz, Jon; Yang, Limin; Jin, Suming; Danielson, Patrick; Xian, George Z.; Coulston, John; Herold, Nathaniel; Wickham, James; Megown, Kevin

    2015-01-01

    The National Land Cover Database (NLCD) provides nationwide data on land cover and land cover change at the native 30-m spatial resolution of the Landsat Thematic Mapper (TM). The database is designed to provide five-year cyclical updating of United States land cover and associated changes. The recent release of NLCD 2011 products now represents a decade of consistently produced land cover and impervious surface for the Nation across three periods: 2001, 2006, and 2011 (Homer et al., 2007; Fry et al., 2011). Tree canopy cover has also been produced for 2011 (Coulston et al., 2012; Coulston et al., 2013). With the release of NLCD 2011, the database provides the ability to move beyond simple change detection to monitoring and trend assessments. NLCD 2011 represents the latest evolution of NLCD products, continuing its focus on consistency, production efficiency, and product accuracy. NLCD products are designed for widespread application in biology, climate, education, land management, hydrology, environmental planning, risk and disease analysis, telecommunications and visualization, and are available for no cost at http://www.mrlc.gov. NLCD is produced by a Federal agency consortium called the Multi-Resolution Land Characteristics Consortium (MRLC) (Wickham et al., 2014). In the consortium arrangement, the U.S. Geological Survey (USGS) leads NLCD land cover and imperviousness production for the bulk of the Nation; the National Oceanic and Atmospheric Administration (NOAA) completes NLCD land cover for the conterminous U.S. (CONUS) coastal zones; and the U.S. Forest Service (USFS) designs and produces the NLCD tree canopy cover product. Other MRLC partners collaborate through resource or data contribution to ensure NLCD products meet their respective program needs (Wickham et al., 2014).

  11. Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary Storage Device

    DTIC Science & Technology

    2017-09-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary Storage Devices. Approved for public release; distribution is unlimited. Contents include: 1.1 Problem and Motivation; 1.2 DOD Applicability; 1.3 Research ...

  12. The ClinicalTrials.gov Results Database — Update and Key Issues

    PubMed Central

    Zarin, Deborah A.; Tse, Tony; Williams, Rebecca J.; Califf, Robert M.; Ide, Nicholas C.

    2011-01-01

    BACKGROUND The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update of relevant policies, and show how the data can be used to gain insight into the state of clinical research. METHODS We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010. RESULTS As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry record outcome measures, 61% lacked specificity in describing the metric used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25). Of these trials, 24% reported results for 90% or less of their participants. CONCLUSIONS ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data. PMID:21366476

  13. Domain fusion analysis by applying relational algebra to protein sequence and domain databases

    PubMed Central

    Truong, Kevin; Ikura, Mitsuhiko

    2003-01-01

    Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
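Since the abstract states that the analysis runs as standard SQL over a relational database, a toy version of the domain-fusion ("Rosetta stone") join can be sketched with an in-memory SQLite table. The protein and domain names below are made up for illustration; the actual method operates on SWISS-PROT+TrEMBL sequences with Pfam domain assignments.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE protein_domain (protein TEXT, domain TEXT)")
rows = [
    ("yeast_A", "PF_kinase"),
    ("yeast_B", "PF_cyclin"),
    ("human_F", "PF_kinase"),   # hypothetical fusion protein carrying both domains
    ("human_F", "PF_cyclin"),
]
cur.executemany("INSERT INTO protein_domain VALUES (?, ?)", rows)

# Rosetta-stone join: two proteins are candidate functional partners if a
# third "fusion" protein contains a domain from each of them.
query = """
SELECT DISTINCT a.protein, b.protein
FROM protein_domain a
JOIN protein_domain fa ON fa.domain = a.domain
JOIN protein_domain fb ON fb.protein = fa.protein AND fb.domain != fa.domain
JOIN protein_domain b  ON b.domain = fb.domain AND b.protein != a.protein
WHERE a.protein != fa.protein AND b.protein != fb.protein
"""
pairs = set(cur.execute(query).fetchall())
```

Here `human_F` fuses the kinase and cyclin domains, so the query links `yeast_A` and `yeast_B` as putative functional partners, which is exactly the relational-algebra formulation the paper describes scaling up to full sequence databases.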

  14. Resting-state abnormalities in amnestic mild cognitive impairment: a meta-analysis.

    PubMed

    Lau, W K W; Leung, M-K; Lee, T M C; Law, A C K

    2016-04-26

    Amnestic mild cognitive impairment (aMCI) is a prodromal stage of Alzheimer's disease (AD). As no effective drug can cure AD, early diagnosis and intervention for aMCI are urgently needed. The standard diagnostic procedure for aMCI primarily relies on subjective neuropsychological examinations that require the judgment of experienced clinicians. The development of other objective and reliable aMCI markers, such as neural markers, is therefore required. Previous neuroimaging findings revealed various abnormalities in resting-state activity in MCI patients, but the findings have been inconsistent. The current study provides an updated activation likelihood estimation meta-analysis of resting-state functional magnetic resonance imaging (fMRI) data on aMCI. We searched the MEDLINE/PubMed databases for whole-brain resting-state fMRI studies on aMCI published until March 2015. We included 21 whole-brain resting-state fMRI studies that reported a total of 156 distinct foci. Significant regional resting-state differences were consistently found in aMCI patients relative to controls, including the posterior cingulate cortex, right angular gyrus, right parahippocampal gyrus, left fusiform gyrus, left supramarginal gyrus and bilateral middle temporal gyri. Our findings support that abnormalities in resting-state activities of these regions may serve as neuroimaging markers for aMCI.

  15. Academic Impact of a Public Electronic Health Database: Bibliometric Analysis of Studies Using the General Practice Research Database

    PubMed Central

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Background Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used GPRD to demonstrate the scientific production and academic impact by a single public health database. Methodology and Findings A total of 749 studies published between 1995 and 2009 with ‘General Practice Research Database’ as their topics, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of “Pharmacology and Pharmacy”, “General and Internal Medicine”, and “Public, Environmental and Occupational Health”. The UK and United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. Conclusions A public electronic health database such as the GPRD will promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and to make data more available for research. PMID:21731733

  16. Using inventory-based tree-ring data as a proxy for historical climate: Investigating the Pacific decadal oscillation and teleconnections

    Treesearch

    J. DeRose; S. Wang; J. Shaw

    2014-01-01

    In 2009, the Interior West Forest Inventory and Analysis (FIA) program of the U.S. Forest Service started to archive approximately 11,000 increment cores collected in the Interior West states during the periodic inventories of the 1980s and 1990s. The two primary goals for use of the data were to provide a plot-linked database of radial growth to be used for growth...

  17. A “Cookbook” Cost Analysis Procedure for Medical Information Systems*

    PubMed Central

    Torrance, Janice L.; Torrance, George W.; Covvey, H. Dominic

    1983-01-01

    A costing procedure for medical information systems is described. The procedure incorporates state-of-the-art costing methods in an easy to follow “cookbook” format. Application of the procedure consists of filling out a series of Mac-Tor EZ-Cost forms. The procedure and forms have been field tested by application to a cardiovascular database system. This article describes the major features of the costing procedure. The forms and other details are available upon request.

  18. Sociocultural Behavior Sensemaking: State of the Art in Understanding the Operational Environment

    DTIC Science & Technology

    2014-01-01

    and environmental change have expanded the number and frequency of “game-changing moments” that a community can face. Now more than ever, we need...August 2011: Establishment of the rebel interim government in Tripoli (8/26/11). This analysis can also be applied to single communication...information. The Psychological Review, 63, 81–89. Miller, G.A. (1995). WordNet: A lexical database for English. Communications of the ACM 38(11), 39

  19. Reuse of the Cloud Analytics and Collaboration Environment within Tactical Applications (TacApps): A Feasibility Analysis

    DTIC Science & Technology

    2016-03-01

    Representational state transfer, Java messaging service, Java application programming interface (API), Internet relay chat (IRC)/extensible messaging and...JBoss application server or an Apache Tomcat servlet container instance. The relational database management system can be either PostgreSQL or MySQL... Java library called direct web remoting. This library has been part of the core CACE architecture for quite some time; however, there have not been

  20. NLCD - MODIS albedo data

    EPA Pesticide Factsheets

    The NLCD-MODIS land cover-albedo database integrates high-quality MODIS albedo observations with areas of homogeneous land cover from NLCD. The spatial resolution (pixel size) of the database is 480 m x 480 m, aligned to the standardized USGS Albers Equal-Area projection. The spatial extent of the database is the continental United States. This dataset is associated with the following publication: Wickham, J., C.A. Barnes, and T. Wade. Combining NLCD and MODIS to Create a Land Cover-Albedo Dataset for the Continental United States. REMOTE SENSING OF ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 170(0): 143-153, (2015).

  1. 12-Digit Watershed Boundary Data 1:24,000 for EPA Region 2 and Surrounding States (NAT_HYDROLOGY.HUC12_NRCS_REG2)

    EPA Pesticide Factsheets

    12 digit Hydrologic Units (HUCs) for EPA Region 2 and surrounding states (Northeastern states, parts of the Great Lakes, Puerto Rico and the USVI) downloaded from the Natural Resources Conservation Service (NRCS) Geospatial Gateway and imported into the EPA Region 2 Oracle/SDE database. This layer reflects 2009 updates to the national Watershed Boundary Database (WBD) that included new boundary data for New York and New Jersey.

  2. Evaluation and Analysis of Regional Best Management Practices in San Diego, California (USA)

    NASA Astrophysics Data System (ADS)

    Flint, K.; Kinoshita, A. M.

    2017-12-01

    In urban areas, surface water quality is often impaired due to pollutants transported by stormwater runoff. To maintain and improve surface water quality, the United States Clean Water Act (CWA) requires an evaluation of available water quality information to develop a list of impaired water bodies and establish contaminant restrictions. Structural Best Management Practices (BMPs) are designed to reduce runoff volume and/or pollutant concentrations to comply with CWA requirements. Local policy makers and managers require an improved understanding of the costs and benefits associated with BMP installation, performance, and maintenance. The International Stormwater BMP Database (Database) is an online platform for submitting information about existing BMPs, such as cost, design details, and statistical analysis of influent and effluent pollutant concentrations. While the Database provides an aggregation of data that supports analysis of overall BMP performance at international and national scales, the sparse spatial distribution of the data is not suitable for regional and local analysis. This research conducts an extensive review of local inventory and spatial analysis of existing permanent BMPs throughout the San Diego River watershed in California, USA. Information collected from cities within the San Diego River watershed will include BMP types, locations, dates of installation, costs, expected removal efficiencies, monitoring data, and records of maintenance. Aggregating and mapping this information will facilitate BMP evaluation, specifically the identification of spatial trends, inconsistencies in BMP performance, and gaps in current records. Regression analysis will provide insight into the nature and significance of correlations between BMP performance and physical characteristics such as land use, soil type, and proximity to impaired waters. This analysis will also result in a metric of relative BMP performance and will provide a basis for future predictions of BMP effectiveness. Ultimately, results from this work will provide information to local governments and agencies for prioritizing, maintaining, and monitoring BMPs, and for improving hydrologic and water quality modeling in urban systems subject to compliance.

  3. Feminism and psychology: critiques of methods and epistemology.

    PubMed

    Eagly, Alice H; Riger, Stephanie

    2014-10-01

    Starting in the 1960s, many of the critiques of psychological science offered by feminist psychologists focused on its methods and epistemology. This article evaluates the current state of psychological science in relation to this feminist critique. The analysis relies on sources that include the PsycINFO database, the Publication Manual of the American Psychological Association (American Psychological Association, 2010), and popular psychology methods textbooks. After situating the feminist critique within the late-20th-century shift of science from positivism to postpositivism, the inquiry examines feminists' claims of androcentric bias in (a) the underrepresentation of women as researchers and research participants and (b) researchers' practices in comparing women and men and describing their research findings. In most of these matters, psychology manifests considerable change in directions advocated by feminists. However, change is less apparent in relation to some feminists' criticisms of psychology's reliance on laboratory experimentation and quantitative methods. In fact, the analyses documented the rarity in high-citation journals of qualitative research that does not include quantification. Finally, the analysis frames feminist methodological critiques by a consideration of feminist epistemologies that challenge psychology's dominant postpositivism. Scrutiny of methods textbooks and journal content suggests that within psychological science, especially as practiced in the United States, these alternative epistemologies have not yet gained substantial influence. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Danilo Dragoni; Hans Peter Schmid; C.S.B. Grimmond

    During the project period we continued to conduct long-term (multi-year) measurements, analysis, and modeling of energy and mass exchange in and over a deciduous forest in the Midwestern United States, to enhance the understanding of soil-vegetation-atmosphere exchange of carbon. At the time this report was prepared, results from nine years of measurements (1998 - 2006) of above-canopy CO2 and energy fluxes at the AmeriFlux site in the Morgan-Monroe State Forest, Indiana, USA (see Table 1), were available on the Fluxnet database, and the hourly CO2 fluxes for 2007 are presented here (see Figure 1). The annual sequestration of atmospheric carbon by the forest is determined to be between 240 and 420 g C m-2 a-1 for the first ten years. These estimates are based on eddy covariance measurements above the forest, with a gap-filling scheme based on soil temperature and photosynthetically active radiation. Data gaps result from missing data or measurements that were rejected in quality control (e.g., during calm nights). Complementary measurements of ecological variables (i.e., the inventory method) provided an alternative way to quantify net carbon uptake by the forest, partition carbon allocation among ecosystem components, and reduce uncertainty in annual net ecosystem productivity (NEP). Biometric datasets have been available on the Fluxnet database since 1998 (with the exclusion of 2006). Analysis for year 2007 is nearing completion.

  5. Linking U.S. School District Test Score Distributions to a Common Scale. CEPA Working Paper No. 16-09

    ERIC Educational Resources Information Center

    Reardon, Sean F.; Kalogrides, Demetra; Ho, Andrew D.

    2017-01-01

    There is no comprehensive database of U.S. district-level test scores that is comparable across states. We describe and evaluate a method for constructing such a database. First, we estimate linear, reliability-adjusted linking transformations from state test score scales to the scale of the National Assessment of Educational Progress (NAEP). We…
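The linking step described above can be illustrated with a minimal sketch: a linear (mean/SD) transformation that maps scores from a state scale onto a reference scale. The scores and reference-scale moments below are invented for illustration, and the reliability adjustment of the actual method is omitted for brevity.

```python
import statistics

# Hypothetical state-scale scores and reference-scale (NAEP-like) moments;
# all values are illustrative, not real test data.
state_scores = [200.0, 220.0, 240.0, 260.0, 280.0]
ref_mean, ref_sd = 250.0, 35.0

# Linear linking: choose slope and intercept so that the linked scores
# reproduce the reference scale's mean and standard deviation.
b = ref_sd / statistics.pstdev(state_scores)
a = ref_mean - b * statistics.fmean(state_scores)
linked = [a + b * s for s in state_scores]
```

After the transformation, the linked scores have the reference mean and standard deviation by construction, which is what makes estimates from different state scales comparable.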

  6. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    PubMed

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
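The two-stage approach compared above can be sketched in a few lines: each database contributes its own effect estimate, and the estimates are then pooled by fixed-effect inverse-variance weighting. The per-database log odds ratios and standard errors below are invented for illustration and are not from the ARITMO study.

```python
import math

# Hypothetical per-database estimates: (log odds ratio, standard error).
estimates = [(0.45, 0.20), (0.30, 0.25), (0.55, 0.30)]

# Stage 2: fixed-effect inverse-variance pooling of the database-specific
# estimates obtained in stage 1.
weights = [1.0 / se ** 2 for _, se in estimates]
pooled_log_or = sum(w * b for (b, _), w in zip(estimates, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

or_point = math.exp(pooled_log_or)
ci_low = math.exp(pooled_log_or - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_or + 1.96 * pooled_se)
```

A one-stage analysis would instead fit a single model to the pooled individual-level data; as the abstract notes, ignoring the clustering of patients within databases in that model biases the estimate when the effect is heterogeneous.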

  7. Colorado Late Cenozoic Fault and Fold Database and Internet Map Server: User-friendly technology for complex information

    USGS Publications Warehouse

    Morgan, K.S.; Pattyn, G.J.; Morgan, M.L.

    2005-01-01

    Internet mapping applications for geologic data allow simultaneous data delivery and collection, enabling quick data modification while efficiently supplying the end user with information. Utilizing Web-based technologies, the Colorado Geological Survey's Colorado Late Cenozoic Fault and Fold Database was transformed from a monothematic, nonspatial Microsoft Access database into a complex information set incorporating multiple data sources. The resulting user-friendly format supports easy analysis and browsing. The core of the application is the Microsoft Access database, which contains information compiled from available literature about faults and folds that are known or suspected to have moved during the late Cenozoic. The database contains nonspatial fields such as structure type, age, and rate of movement. Geographic locations of the fault and fold traces were compiled from previous studies at 1:250,000 scale to form a spatial database containing information such as length and strike. Integration of the two databases allowed both spatial and nonspatial information to be presented on the Internet as a single dataset (http://geosurvey.state.co.us/pubs/ceno/). The user-friendly interface enables users to view and query the data in an integrated manner, thus providing multiple ways to locate desired information. Retaining the digital data format also allows continuous data updating and quick delivery of newly acquired information. This dataset is a valuable resource to anyone interested in earthquake hazards and the activity of faults and folds in Colorado. Additional geologic hazard layers and imagery may aid in decision support and hazard evaluation. The up-to-date and customizable maps are invaluable tools for researchers or the public.
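The integration of nonspatial attributes with spatial traces described above amounts to joining two tables on a shared structure identifier. A minimal sketch, with hypothetical field names and made-up records:

```python
# Nonspatial attributes (as in the Access database) and spatial trace
# properties, keyed by a shared structure ID; all records are hypothetical.
attributes = {
    "F001": {"structure": "fault", "age": "late Cenozoic", "slip_rate_mm_yr": 0.2},
    "F002": {"structure": "fold", "age": "Quaternary", "slip_rate_mm_yr": 0.1},
}
traces = {
    "F001": {"length_km": 18.5, "strike_deg": 340},
    "F002": {"length_km": 7.2, "strike_deg": 15},
}

# Inner join on the shared key: one merged record per structure, so spatial
# and nonspatial information can be queried as a single dataset.
merged = {sid: {**attributes[sid], **traces[sid]}
          for sid in attributes.keys() & traces.keys()}
```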

  8. MIPS: analysis and annotation of proteins from whole genomes

    PubMed Central

    Mewes, H. W.; Amid, C.; Arnold, R.; Frishman, D.; Güldener, U.; Mannhaupt, G.; Münsterkötter, M.; Pagel, P.; Strack, N.; Stümpflen, V.; Warfsmann, J.; Ruepp, A.

    2004-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein–protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de). PMID:14681354

  9. MIPS: analysis and annotation of proteins from whole genomes.

    PubMed

    Mewes, H W; Amid, C; Arnold, R; Frishman, D; Güldener, U; Mannhaupt, G; Münsterkötter, M; Pagel, P; Strack, N; Stümpflen, V; Warfsmann, J; Ruepp, A

    2004-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein-protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de).

  10. U.S. states and territories national tsunami hazard assessment, historic record and sources for waves

    NASA Astrophysics Data System (ADS)

    Dunbar, P. K.; Weaver, C.

    2007-12-01

    In 2005, the U.S. National Science and Technology Council (NSTC) released a joint report by the sub-committee on Disaster Reduction and the U.S. Group on Earth Observations titled Tsunami Risk Reduction for the United States: A Framework for Action (Framework). The Framework outlines the President's strategy for reducing the United States tsunami risk. The first specific action called for in the Framework is to "Develop standardized and coordinated tsunami hazard and risk assessments for all coastal regions of the United States and its territories." Since NOAA is the lead agency for providing tsunami forecasts and warnings and NOAA's National Geophysical Data Center (NGDC) catalogs information on global historic tsunamis, NOAA/NGDC was asked to take the lead in conducting the first national tsunami hazard assessment. Earthquakes or earthquake-generated landslides caused more than 85% of the tsunamis in the NGDC tsunami database. Since the United States Geological Survey (USGS) conducts research on earthquake hazards facing all of the United States and its territories, NGDC and USGS partnered to conduct the first tsunami hazard assessment for the United States and its territories. A complete tsunami hazard and risk assessment consists of a hazard assessment, an exposure and vulnerability assessment of buildings and people, and a loss assessment. This report is an interim step towards a tsunami risk assessment. The goal of this report is to provide a qualitative assessment of the United States tsunami hazard at the national level. Two different methods are used to assess the U.S. tsunami hazard. The first method involves a careful examination of the NGDC historical tsunami database. This resulted in a qualitative national tsunami hazard assessment based on the distribution of runup heights and the frequency of runups. Although tsunami deaths are a measure of risk rather than hazard, the known tsunami deaths found in the NGDC database search were compared with the qualitative assessments based on frequency and amplitude. The second method to assess tsunami hazard involved using the USGS earthquake databases to search for possible earthquake sources near American coastlines to extend the NOAA/NGDC tsunami databases backward in time. The qualitative tsunami hazard assessment based on the results of the NGDC and USGS database searches will be presented.

  11. MatProps: Material Properties Database and Associated Access Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrenberger, J K; Becker, R C; Goto, D M

    2007-08-13

    Coefficients for analytic constitutive and equation of state (EOS) models, which are used by many hydro codes at LLNL, are currently stored in a legacy material database (Steinberg, UCRL-MA-106349). Parameters for numerous materials are available through this database, and include Steinberg-Guinan and Steinberg-Lund constitutive models for metals, JWL equations of state for high explosives, and Mie-Grüneisen equations of state for metals. These constitutive models are used in most of the simulations done by ASC codes today at Livermore. Analytic EOSs are also still used, but have been superseded in many cases by tabular representations in LEOS (http://leos.llnl.gov). Numerous advanced constitutive models have been developed and implemented into ASC codes over the past 20 years. These newer models have more physics and better representations of material strength properties than their predecessors, and therefore more model coefficients. However, a material database of these coefficients is not readily available. Therefore, incorporating these coefficients with those of the legacy models into a portable database that could be shared amongst codes would be most welcome. The goal of this paper is to describe the MatProp effort at LLNL to create such a database and associated access library that could be used by codes throughout the DOE complex and beyond. We have written an initial version of the MatProp database and access library, and our DOE/ASC code ALE3D (Nichols et al., UCRL-MA-152204) is able to import information from the database. The database, a link to which exists on the Sourceforge server at LLNL, contains coefficients for many materials and models (see Appendix), and includes material parameters in the following categories: flow stress, shear modulus, strength, damage, and equation of state.
    Future versions of the Matprop database and access library will include the ability to read and write material descriptions that can be exchanged between codes. It will also include an ability to do unit changes, i.e., have the library return parameters in user-specified unit systems. In addition, further material categories can be added (e.g., phase change kinetics). The Matprop database and access library is part of a larger set of tools used at LLNL for assessing material model behavior. One of these is MSlib, a shared constitutive material model library. Another is the Material Strength Database (MSD), which allows users to compare parameter fits for specific constitutive models to available experimental data. Together with Matprop, these tools create a suite of capabilities that provide state-of-the-art models and parameters for those models to integrated simulation codes. This document is broken into several appendices. Appendix A contains a code example to retrieve several material coefficients. Appendix B contains the API for the Matprop data access library. Appendix C contains a list of the material names and model types currently available in the Matprop database. Appendix D contains a list of the parameter names for the currently recognized model types. Appendix E contains a full xml description of the material Tantalum.
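The kind of lookup such an access library provides can be sketched as a keyed retrieval of model coefficients. This is a hypothetical illustration of the pattern only: the entry, model name, field names, and values below are placeholders, not actual MatProp contents or its API.

```python
# Hypothetical coefficient store keyed by (material, model); the values are
# illustrative placeholders, not data from the MatProp database.
DATABASE = {
    ("Tantalum", "steinberg_guinan"): {"G0_GPa": 69.0, "Y0_GPa": 0.77},
}

def get_coefficients(material, model):
    """Return the coefficient dictionary for a material/model pair."""
    try:
        return DATABASE[(material, model)]
    except KeyError:
        raise KeyError(f"no entry for {material!r} / {model!r}") from None
```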

  12. Databases for the Global Dynamics of Multiparameter Nonlinear Systems

    DTIC Science & Technology

    2014-03-05

    AFRL-OSR-VA-TR-2014-0078 DATABASES FOR THE GLOBAL DYNAMICS OF MULTIPARAMETER NONLINEAR SYSTEMS Konstantin Mischaikow RUTGERS THE STATE UNIVERSITY OF...University of New Jersey ASB III, Rutgers Plaza New Brunswick, NJ 08807 DATABASES FOR THE GLOBAL DYNAMICS OF MULTIPARAMETER NONLINEAR SYSTEMS...dynamical systems. We refer to the output as a Database for Global Dynamics since it allows the user to query for information about the existence and

  13. mESAdb: microRNA Expression and Sequence Analysis Database

    PubMed Central

    Kaya, Koray D.; Karakülah, Gökhan; Yakıcıer, Cengiz M.; Acar, Aybar C.; Konu, Özlen

    2011-01-01

    microRNA expression and sequence analysis database (http://konulab.fen.bilkent.edu.tr/mirna/) (mESAdb) is a regularly updated database for the multivariate analysis of sequences and expression of microRNAs from multiple taxa. mESAdb is modular and has a user interface implemented in PHP and JavaScript and coupled with statistical analysis and visualization packages written for the R language. The database primarily comprises mature microRNA sequences and their target data, along with selected human, mouse and zebrafish expression data sets. mESAdb analysis modules allow (i) mining of microRNA expression data sets for subsets of microRNAs selected manually or by motif; (ii) pair-wise multivariate analysis of expression data sets within and between taxa; and (iii) association of microRNA subsets with annotation databases, HUGE Navigator, KEGG and GO. The use of existing and customized R packages facilitates future addition of data sets and analysis tools. Furthermore, the ability to upload and analyze user-specified data sets makes mESAdb an interactive and expandable analysis tool for microRNA sequence and expression data. PMID:21177657

  14. mESAdb: microRNA expression and sequence analysis database.

    PubMed

    Kaya, Koray D; Karakülah, Gökhan; Yakicier, Cengiz M; Acar, Aybar C; Konu, Ozlen

    2011-01-01

    microRNA expression and sequence analysis database (http://konulab.fen.bilkent.edu.tr/mirna/) (mESAdb) is a regularly updated database for the multivariate analysis of sequences and expression of microRNAs from multiple taxa. mESAdb is modular and has a user interface implemented in PHP and JavaScript and coupled with statistical analysis and visualization packages written for the R language. The database primarily comprises mature microRNA sequences and their target data, along with selected human, mouse and zebrafish expression data sets. mESAdb analysis modules allow (i) mining of microRNA expression data sets for subsets of microRNAs selected manually or by motif; (ii) pair-wise multivariate analysis of expression data sets within and between taxa; and (iii) association of microRNA subsets with annotation databases, HUGE Navigator, KEGG and GO. The use of existing and customized R packages facilitates future addition of data sets and analysis tools. Furthermore, the ability to upload and analyze user-specified data sets makes mESAdb an interactive and expandable analysis tool for microRNA sequence and expression data.

  15. Rationale and operational plan to upgrade the U.S. gravity database

    USGS Publications Warehouse

    Hildenbrand, Thomas G.; Briesacher, Allen; Flanagan, Guy; Hinze, William J.; Hittelman, A.M.; Keller, Gordon R.; Kucks, R.P.; Plouff, Donald; Roest, Walter; Seeley, John; Stith, David A.; Webring, Mike

    2002-01-01

    A concerted effort is underway to prepare a substantially upgraded digital gravity anomaly database for the United States and to make this data set and associated usage tools available on the internet. This joint effort, spearheaded by the geophysics groups at the National Imagery and Mapping Agency (NIMA), University of Texas at El Paso (UTEP), U.S. Geological Survey (USGS), and National Oceanic and Atmospheric Administration (NOAA), is an outgrowth of the new geoscientific community initiative called Geoinformatics (www.geoinformaticsnetwork.org). This dominantly geospatial initiative reflects the realization by Earth scientists that existing information systems and techniques are inadequate to address the many complex scientific and societal issues. Currently, inadequate standardization and chaotic distribution of geoscience data, inadequate accompanying documentation, and the lack of easy-to-use access tools and computer codes for analysis are major obstacles for scientists, government agencies, and educators. An example of the type of activities envisioned, within the context of Geoinformatics, is the construction, maintenance, and growth of a public domain gravity database and development of the software tools needed to access, implement, and expand it. This product is far more than a high quality database; it is a complete data system for a specific type of geophysical measurement that includes, for example, tools to manipulate the data and tutorials to understand and properly utilize the data. On August 9, 2002, twenty-one scientists from the federal, private and academic sectors met at a workshop to discuss the rationale for upgrading both the United States and North American gravity databases (including offshore regions) and, more importantly, to begin developing an operational plan to effectively create a new gravity data system. We encourage anyone interested in contributing data or participating in this effort to contact G.R. Keller or T.G. Hildenbrand. This workshop was the first step in building a web-based data system for sharing quality gravity data and methodology, and it builds on existing collaborative efforts. This compilation effort will result in significant additions to and major refinement of the U.S. database that is currently released publicly by NOAA’s National Geophysical Data Center and will also include an additional objective to substantially upgrade the North American database, released over 15 years ago (Committee for the Gravity Anomaly Map of North America, 1987).

  16. Incidence of Rocky Mountain spotted fever among American Indians in Oklahoma.

    PubMed Central

    McQuiston, J H; Holman, R C; Groom, A V; Kaufman, S F; Cheek, J E; Childs, J E

    2000-01-01

    OBJECTIVE: Although the state of Oklahoma has traditionally reported very high incidence rates of Rocky Mountain spotted fever (RMSF) cases, the incidence of RMSF among the American Indian population of the state has not been studied. The authors used data from several sources to estimate the incidence of RMSF among American Indians in Oklahoma. METHODS: The authors retrospectively reviewed an Indian Health Service (IHS) hospital discharge database for 1980-1996 and available medical charts from four IHS hospitals. The authors also reviewed RMSF case report forms submitted to the Centers for Disease Control and Prevention (CDC) for 1981-1996. RESULTS: The study data show that American Indians in the IHS Oklahoma City Area were hospitalized with RMSF at an annual rate of 48.2 per million population, compared with an estimated hospitalization rate of 16.9 per million Oklahoma residents. The majority of cases in the IHS database (69%) were diagnosed based on clinical suspicion rather than laboratory confirmation. The incidence of RMSF for Oklahoma American Indians as reported to the CDC was 37.4 cases per million, compared with 21.6 per million for all Oklahoma residents (RR 1.7, 95% confidence interval [CI] 1.5, 2.1). CONCLUSIONS: Rates derived from the IHS database may not be comparable to state and national rates because of differences in case inclusion criteria. However, an analysis of case report forms indicates that American Indians in Oklahoma have a significantly higher incidence of RMSF than that of the overall Oklahoma population. Oklahoma American Indians may benefit from educational campaigns emphasizing prevention of tick bites and exposure to tick habitats. PMID:11236019

  17. Pediatric Spine Trauma in the United States--Analysis of the HCUP Kid's Inpatient Database (KID) 1997-2009.

    PubMed

    Mendoza-Lattes, Sergio; Besomi, Javier; O'Sullivan, Cormac; Ries, Zachary; Gnanapradeep, Gnanapragasam; Nash, Rachel; Gao, Yubo; Weinstein, Stuart

    2015-01-01

    Few references are available describing the epidemiology of pediatric spine injuries. The purpose of this study is to examine the prevalence, risk factors, and trends of pediatric spine injuries in the United States during the period from 1997 to 2009, using a large national database. Data were obtained from the Kid's Inpatient Database (KID) developed by the Healthcare Cost and Utilization Project (HCUP) for the years 1997-2009. These data include >3 million discharges from 44 states and 4121 hospitals for children younger than 20 years. Weighted variables are provided that allow calculation of national prevalence rates. The Nationwide Emergency Department Sample (NEDS), HCUPnet, and National Highway Traffic Safety Administration (NHTSA) data were used for verification and comparison. A prevalence of 107.96 pmp (per million population) spine injuries in children and adolescents was found in 2009, up from the 77.07 pmp observed in 1997. The 15- to 19-year-old group had the highest prevalence of all age groups (345.44 pmp). Neurological injury was present in 14.6% of the cases, for a prevalence of 15.82 pmp. The majority (86.7%) of these injuries occurred in children >15 years. Motor vehicle collisions accounted for 52.9% of all spine injuries, particularly in children >15 years. Between 1997 and 2009 the hospital length of stay decreased, but hospital charges increased significantly. Pediatric spine injuries continue to be a relevant problem, with rates exceeding those of other industrialized nations. Teenagers >15 years of age were at greatest risk, and motor vehicle collisions were the most common mechanism. An increase in prevalence was observed between 1997 and 2009, matched by a similar increase in hospital charges. Level of evidence: III.

  18. FirebrowseR: an R client to the Broad Institute’s Firehose Pipeline

    PubMed Central

    Deng, Mario; Brägelmann, Johannes; Kryukov, Ivan; Saraiva-Agostinho, Nuno; Perner, Sven

    2017-01-01

    With its Firebrowse service (http://firebrowse.org/), the Broad Institute is making large-scale multi-platform omics data analysis results publicly available through a Representational State Transfer (REST) Application Programming Interface (API). Querying this database through an API client from an arbitrary programming environment is an essential task, allowing other developers and researchers to focus on their analysis and avoid data wrangling. Hence, as a first result, we developed a workflow to automatically generate, test and deploy such clients for rapid response to API changes. Its underlying infrastructure, a combination of free and publicly available web services, facilitates the development of API clients. It decouples changes in server software from the client software by reacting to changes in the RESTful service and removing direct dependencies on a specific implementation of an API. As a second result, FirebrowseR, an R client to the Broad Institute’s RESTful Firehose Pipeline, is provided as a working example, built by means of the presented workflow. The package’s features are demonstrated by an example analysis of cancer gene expression data. Database URL: https://github.com/mariodeng/ PMID:28062517
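
    The idea of auto-generating client methods from a declarative description of the RESTful service can be sketched as follows. This is not FirebrowseR's implementation (which is in R); it is a minimal Python illustration in which the base URL, endpoint paths, and stub transport are all invented.

```python
# Illustrative sketch: build API client methods from an endpoint spec, so
# the client can be regenerated whenever the RESTful service changes and
# carries no hard-coded dependency on a specific server implementation.
class GeneratedClient:
    def __init__(self, base_url, spec, transport):
        self._base = base_url
        self._transport = transport  # callable(url, params) -> response dict
        for name, path in spec.items():
            # Bind one method per endpoint described in the spec.
            setattr(self, name, self._make(path))

    def _make(self, path):
        def call(**params):
            return self._transport(self._base + path, params)
        return call

def stub_transport(url, params):
    # Stand-in for an HTTP GET; a real client would perform the request here.
    return {"url": url, "params": params}

spec = {"mrnaseq": "/Samples/mRNASeq", "cohorts": "/Metadata/Cohorts"}
client = GeneratedClient("http://example.org/api/v1", spec, stub_transport)
print(client.mrnaseq(gene="TP53", cohort="PRAD")["url"])
```

    Because the transport is injected, the same generated client can be exercised against a stub in tests and against real HTTP in production.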

  20. Livestock Anaerobic Digester Database

    EPA Pesticide Factsheets

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  1. Bibliometric trend and patent analysis in nano-alloys research for period 2000-2013.

    PubMed

    Živković, Dragana; Niculović, Milica; Manasijević, Dragan; Minić, Duško; Ćosović, Vladan; Sibinović, Maja

    2015-05-04

    This paper presents an overview of the current situation in nano-alloy research based on bibliometric and patent analysis. Bibliometric data for the period from 2000 to September 2013 were obtained using the Scopus database as the index database; the analyzed parameters were: number of scientific papers per year, authors, countries, affiliations, subject areas and document types. Analysis of nano-alloy patents was done with a dedicated patent database, using the International Patent Classification and Patent Scope, for the period from 2003 to 2013. Information found in this database included the number of patents, patent classification by country, patent applicants, main inventors and publication date.

  2. Bibliometric trend and patent analysis in nano-alloys research for period 2000-2013.

    PubMed

    Živković, Dragana; Niculović, Milica; Manasijević, Dragan; Minić, Duško; Ćosović, Vladan; Sibinović, Maja

    2015-01-01

    This paper presents an overview of the current situation in nano-alloy research based on bibliometric and patent analysis. Bibliometric data for the period 2000 to 2013 were obtained using the Scopus database as the index database; the analyzed parameters were: number of scientific papers per year, authors, countries, affiliations, subject areas and document types. Analysis of nano-alloy patents was done with a dedicated patent database, using the International Patent Classification and Patent Scope for the period 2003 to 2013. Information found in this database included the number of patents, patent classification by country, patent applicants, main inventors and publication date.

  3. Completion of the National Land Cover Database (NLCD) 1992–2001 Land Cover Change Retrofit product

    USGS Publications Warehouse

    Fry, J.A.; Coan, Michael; Homer, Collin G.; Meyer, Debra K.; Wickham, J.D.

    2009-01-01

    The Multi-Resolution Land Characteristics Consortium has supported the development of two national digital land cover products: the National Land Cover Dataset (NLCD) 1992 and National Land Cover Database (NLCD) 2001. Substantial differences in imagery, legends, and methods between these two land cover products must be overcome in order to support direct comparison. The NLCD 1992-2001 Land Cover Change Retrofit product was developed to provide more accurate and useful land cover change data than would be possible by direct comparison of NLCD 1992 and NLCD 2001. For the change analysis method to be both national in scale and timely, implementation required production across many Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) path/rows simultaneously. To meet these requirements, a hybrid change analysis process was developed to incorporate both post-classification comparison and specialized ratio differencing change analysis techniques. At a resolution of 30 meters, the completed NLCD 1992-2001 Land Cover Change Retrofit product contains unchanged pixels from the NLCD 2001 land cover dataset that have been cross-walked to a modified Anderson Level I class code, and changed pixels labeled with a 'from-to' class code. Analysis of the results for the conterminous United States indicated that about 3 percent of the land cover dataset changed between 1992 and 2001.
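
    The post-classification "from-to" labeling can be illustrated with a toy sketch: pixels whose class differs between the two dates receive a combined code, while unchanged pixels keep their 2001 class. The class codes and grids below are invented, and the real product's ratio-differencing step is omitted.

```python
# Toy post-classification comparison: unchanged pixels keep the 2001 class,
# changed pixels are labeled with a "from-to" code. Codes are illustrative.
def change_map(grid_1992, grid_2001):
    out = []
    for row_a, row_b in zip(grid_1992, grid_2001):
        out.append([b if a == b else f"{a}-{b}" for a, b in zip(row_a, row_b)])
    return out

g92 = [[4, 4], [2, 4]]   # e.g. 4 = forest, 2 = developed (invented codes)
g01 = [[4, 2], [2, 4]]
print(change_map(g92, g01))  # [[4, '4-2'], [2, 4]]
```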

  4. An Interactive Online Database for Potato Varieties Evaluated in the Eastern U.S.

    USDA-ARS?s Scientific Manuscript database

    Online databases are no longer a novelty. However, for the potato growing and research community little effort has been put into collecting data from multiple states and provinces, and presenting it in a web-based database format for researchers and end users to utilize. The NE1031 regional potato v...

  5. 49 CFR 384.229 - Skills test examiner auditing and monitoring.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... overt monitoring must be performed at least once every year; (c) Establish and maintain a database to...; (d) Establish and maintain a database of all third party testers and examiners, which at a minimum... examiner; (e) Establish and maintain a database of all State CDL skills examiners, which at a minimum...

  6. 49 CFR 384.229 - Skills test examiner auditing and monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... overt monitoring must be performed at least once every year; (c) Establish and maintain a database to...; (d) Establish and maintain a database of all third party testers and examiners, which at a minimum... examiner; (e) Establish and maintain a database of all State CDL skills examiners, which at a minimum...

  7. Specification and Enforcement of Semantic Integrity Constraints in Microsoft Access

    ERIC Educational Resources Information Center

    Dadashzadeh, Mohammad

    2007-01-01

    Semantic integrity constraints are business-specific rules that limit the permissible values in a database. For example, a university rule dictating that an "incomplete" grade cannot be changed to an A constrains the possible states of the database. To maintain database integrity, business rules should be identified in the course of database…

  8. Social Science Data Bases and Data Banks in the United States and Canada.

    ERIC Educational Resources Information Center

    Black, John B.

    This overview of North American social science databases, including scope and services, identifies five trends: (1) growth--in the number of databases, subjects covered, and system availability; (2) increased competition in the retrieval systems marketplace with more databases being offered on multiple systems, improvements being made to the…

  9. Pulotu: Database of Austronesian Supernatural Beliefs and Practices

    PubMed Central

    Watts, Joseph; Sheehan, Oliver; Greenhill, Simon J.; Gomes-Ng, Stephanie; Atkinson, Quentin D.; Bulbulia, Joseph; Gray, Russell D.

    2015-01-01

    Scholars have debated naturalistic theories of religion for thousands of years, but only recently have scientists begun to test predictions empirically. Existing databases contain few variables on religion, and are subject to Galton's Problem because they do not sufficiently account for the non-independence of cultures or systematically differentiate the traditional states of cultures from their contemporary states. Here we present Pulotu: the first quantitative cross-cultural database purpose-built to test evolutionary hypotheses of supernatural beliefs and practices. The Pulotu database documents the remarkable diversity of the Austronesian family of cultures, which originated in Taiwan, spread west to Madagascar and east to Easter Island, a region covering over half the world's longitude. The focus of Austronesian beliefs ranges from localised ancestral spirits to powerful creator gods. A wide range of practices also exist, such as headhunting, elaborate tattooing, and the construction of impressive monuments. Pulotu is freely available, currently contains 116 cultures, and has 80 variables describing supernatural beliefs and practices, as well as social and physical environments. One major advantage of Pulotu is that it has separate sections on the traditional states of cultures, the post-contact history of cultures, and the contemporary states of cultures. A second major advantage is that cultures are linked to a language-based family tree, enabling the use of phylogenetic methods, which can be used to address Galton's Problem by accounting for common ancestry, to infer deep prehistory, and to model patterns of trait evolution over time. We illustrate the power of phylogenetic methods by performing an ancestral state reconstruction on the Pulotu variable "headhunting", finding evidence that headhunting was practiced in proto-Austronesian culture. Quantitative cross-cultural databases explicitly linking cultures to a phylogeny have the potential to revolutionise the field of comparative religious studies in the same way that genetic databases have revolutionised the field of evolutionary biology. PMID:26398231
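
    As a rough illustration of ancestral state reconstruction on a binary trait such as "headhunting", the sketch below applies Fitch parsimony to a tiny invented tree. Pulotu's analyses use full phylogenetic methods on real language-based trees; this is only a conceptual toy.

```python
# Toy ancestral-state reconstruction by Fitch parsimony on a binary trait
# (1 = trait present, 0 = absent). The tree and tip states are invented.
def fitch(tree, states):
    """tree: nested 2-tuples with string tip names; returns the root state set."""
    if isinstance(tree, str):
        return {states[tree]}
    left, right = fitch(tree[0], states), fitch(tree[1], states)
    common = left & right
    # Intersection if the children agree, otherwise union (one implied change).
    return common if common else left | right

tree = (("A", "B"), ("C", "D"))
states = {"A": 1, "B": 1, "C": 1, "D": 0}
print(fitch(tree, states))  # {1}: parsimony infers the trait ancestrally present
```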

  11. Evaluation and comparison of bioinformatic tools for the enrichment analysis of metabolomics data.

    PubMed

    Marco-Ramell, Anna; Palau-Rodriguez, Magali; Alay, Ania; Tulipani, Sara; Urpi-Sarda, Mireia; Sanchez-Pla, Alex; Andres-Lacueva, Cristina

    2018-01-02

    Bioinformatic tools for the enrichment of 'omics' datasets facilitate the interpretation and understanding of data; to date, few are suitable for metabolomics datasets. The main objective of this work is to give, for the first time, a critical overview of the performance of these tools. To that aim, datasets from metabolomic repositories were selected and enriched data were created. Both types of data were analysed with these tools and the outputs were thoroughly examined. An exploratory multivariate analysis of the most used tools for the enrichment of metabolite sets, based on non-metric multidimensional scaling (NMDS) of Jaccard's distances, was performed and mirrored their diversity. Codes (identifiers) of the metabolites of the datasets were searched in different metabolite databases (HMDB, KEGG, PubChem, ChEBI, BioCyc/HumanCyc, LipidMAPS, ChemSpider, METLIN and Recon2). The databases that presented the most identifiers of the metabolites of the dataset were PubChem, followed by METLIN and ChEBI; however, these databases had duplicated entries and might present false positives. The performance of over-representation analysis (ORA) tools, including BioCyc/HumanCyc, ConsensusPathDB, IMPaLA, MBRole, MetaboAnalyst, Metabox, MetExplore, MPEA, PathVisio and Reactome, and of the mapping tool KEGGREST, was examined. Results were mostly consistent among tools and between real and enriched data despite the variability of the tools; nevertheless, a few controversial results, such as differences in the total number of metabolites, were also found. Disease-based enrichment analyses were also assessed, but they were not found to be accurate, probably because metabolite disease sets are not up to date and because predicting diseases from a list of metabolites is difficult. We have extensively reviewed the state of the art of the available range of tools for metabolomic datasets, the completeness of metabolite databases, the performance of ORA methods and disease-based analyses. Despite the variability of the tools, they provided consistent results independent of their analytic approach. However, more work on the completeness of metabolite and pathway databases is required, as this strongly affects the accuracy of enrichment analyses. Improvements will translate into more accurate and global insights of the metabolome.
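
    The core of most ORA tools reviewed above is a one-sided hypergeometric (Fisher-type) test. A minimal sketch, with invented set sizes and without the multiple-testing correction that real tools apply:

```python
# Over-representation analysis (ORA) sketch: one-sided hypergeometric test
# asking whether a metabolite set is enriched in the measured list.
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) when drawing n metabolites from N, of which K are in the set."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 500 background metabolites, 40 in a pathway, 20 measured, 8 of them hits:
p = hypergeom_pvalue(500, 40, 20, 8)
print(p < 0.001)  # strong enrichment in this toy example
```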

  12. Enhancing the American Society of Clinical Oncology workforce information system with geographic distribution of oncologists and comparison of data sources for the number of practicing oncologists.

    PubMed

    Kirkwood, M Kelsey; Bruinooge, Suanna S; Goldstein, Michael A; Bajorin, Dean F; Kosty, Michael P

    2014-01-01

    The American Society of Clinical Oncology (ASCO) 2007 workforce report projected US oncologist shortages by 2020. The intervening years have witnessed shifting trends in both supply and demand, demonstrating the need to capture data in a dynamic manner. The ASCO Workforce Information System (WIS) provides an infrastructure to annually update emerging characteristics of US oncologists (medical oncologists, hematologist/oncologists, and hematologists). Several possible data sources exist to capture the number of oncologists in the United States. The WIS primarily uses the American Medical Association Physician Masterfile database because it provides detailed demographics. This analysis also compares total counts of oncologists from American Board of Internal Medicine (ABIM) certification reports, the National Provider Identifier (NPI) database, and Medicare Physician Compare data, and examines the geographic distribution of oncologists by age and US population data. For each of the data sources, we pulled 2013 data. The Masterfile identified 13,409 oncologists; ABIM reported 13,757 oncologists; NPI listed 11,664 oncologists; Physician Compare identified 11,343 oncologists. Mapping of these data identifies distinct areas (primarily in the central United States, Alaska, and Hawaii) that seem to lack ready access to oncologists. Efforts to survey oncologists about practice patterns will help determine whether productivity and service delivery will change significantly. ASCO is committed to tracking oncologist supply and demand, as well as to providing timely analysis of strategies that will help address any shortages that may occur in specific regions or practice settings.

  13. Bibliometric Analysis of Palliative Care-Related Publication Trends During 2001 to 2016.

    PubMed

    Liu, Chia-Jen; Yeh, Te-Chun; Hsu, Su-Hsuan; Chu, Chao-Mei; Liu, Chih-Kuang; Chen, Mingchih; Huang, Sheng-Jean

    2018-01-01

    Few bibliometric reports have examined the scientific contributions (publications) and international influence (citations) of the palliative care (PC)-related literature. We aimed to analyze the PC-related literature using the Institute for Scientific Information Web of Science (WoS) database. The WoS database was used to retrieve publications with the following key words in the title: "palliative care" OR "end of life care" OR "terminal care". A statistical analysis of the documents published during 2001 to 2016 was performed. The quantity and quality of research were assessed by the number of total publications and citation analysis. In addition, we also analyzed possible correlations between publication output and socioeconomic factors. The total research output was 6273 articles. There was a 3-fold increase in the number of publications during the period and a strong correlation between year and number of PC-related publications (R2 = .96). The United States took a leading position in PC research (2448, 39.0%). The highest average citation count was reported for Norway (21.8). Australia had the highest productivity in PC research (24.9 articles per million population). The annual impact factor rose progressively with time, increasing from 1.13 to 2.24 between 2003 and 2016. The number of publications correlated with gross domestic product (r = .74; P < .001). The United States and United Kingdom contributed most of the publications, but some East Asian countries also performed strongly. In terms of socioeconomic factors, the publication capacity of the top 20 countries correlated with their economic scale.

  14. [Public scientific knowledge distribution in health information, communication and information technology indexed in MEDLINE and LILACS databases].

    PubMed

    Packer, Abel Laerte; Tardelli, Adalberto Otranto; Castro, Regina Célia Figueiredo

    2007-01-01

    This study explores the distribution of international, regional and national scientific output in health information and communication, indexed in the MEDLINE and LILACS databases, between 1996 and 2005. A selection of articles was based on the hierarchical structure of Information Science in MeSH vocabulary. Four specific domains were determined: health information, medical informatics, scientific communications on healthcare and healthcare communications. The variables analyzed were: most-covered subjects and journals, author affiliation and publication countries and languages, in both databases. The Information Science category is represented in nearly 5% of MEDLINE and LILACS articles. The four domains under analysis showed a relative annual increase in MEDLINE. The Medical Informatics domain showed the highest number of records in MEDLINE, representing about half of all indexed articles. The importance of Information Science as a whole is more visible in publications from developed countries and the findings indicate the predominance of the United States, with significant growth in scientific output from China and South Korea and, to a lesser extent, Brazil.

  15. Molecular Oxygen in the Thermosphere: Issues and Measurement Strategies

    NASA Astrophysics Data System (ADS)

    Picone, J. M.; Hedin, A. E.; Drob, D. P.; Meier, R. R.; Bishop, J.; Budzien, S. A.

    2002-05-01

    We review the state of empirical knowledge regarding the distribution of molecular oxygen in the lower thermosphere (100-200 km), as embodied by the new NRLMSISE-00 empirical atmospheric model, its predecessors, and the underlying databases. For altitudes above 120 km, the two major classes of data (mass spectrometer and solar ultraviolet [UV] absorption) disagree significantly regarding the magnitude of the O2 density and the dependence on solar activity. As a result, the addition of the Solar Maximum Mission (SMM) data set (based on solar UV absorption) to the NRLMSIS database has directly impacted the new model, increasing the complexity of the model's formulation and generally reducing the thermospheric O2 density relative to MSISE-90. Beyond interest in the thermosphere itself, this issue materially affects detailed models of ionospheric chemistry and dynamics as well as modeling of the upper atmospheric airglow. Because these are key elements of both experimental and operational systems which measure and forecast the near-Earth space environment, we present strategies for augmenting the database through analysis of existing data and through future measurements in order to resolve this issue.

  16. Combining De Novo Peptide Sequencing Algorithms, A Synergistic Approach to Boost Both Identifications and Confidence in Bottom-up Proteomics.

    PubMed

    Blank-Landeshammer, Bernhard; Kollipara, Laxmikanth; Biß, Karsten; Pfenninger, Markus; Malchow, Sebastian; Shuvaev, Konstantin; Zahedi, René P; Sickmann, Albert

    2017-09-01

    Complex mass spectrometry-based proteomics data sets are mostly analyzed by protein database searches. While this approach performs considerably well for sequenced organisms, direct inference of peptide sequences from tandem mass spectra, i.e., de novo peptide sequencing, is oftentimes the only way to obtain information when protein databases are absent. However, available algorithms suffer from drawbacks such as lack of validation and often high rates of false positive hits (FP). Here we present a simple method of combining results from commonly available de novo peptide sequencing algorithms, which in conjunction with minor tweaks in data acquisition yields a lower empirical FDR compared to the analysis using single algorithms. Results were validated using state-of-the-art database search algorithms as well as specifically synthesized reference peptides. Thus, we could increase the number of PSMs meeting a stringent FDR of 5% more than 3-fold compared to the single best de novo sequencing algorithm alone, accounting for an average of 11,120 PSMs (combined) instead of 3476 PSMs (alone) in triplicate 2 h LC-MS runs of a tryptic HeLa digest.
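
    The combination strategy can be sketched conceptually as majority voting across algorithms: a peptide-spectrum match (PSM) is accepted only when enough algorithms agree on the sequence for a spectrum. The algorithm names and outputs below are invented, and the paper's actual scoring details are not reproduced.

```python
# Conceptual sketch of combining de novo sequencing results: keep a PSM
# only when at least `min_votes` algorithms agree on the same sequence.
from collections import Counter

def consensus_psms(results, min_votes=2):
    """results: {algorithm: {spectrum_id: peptide}}; return agreed PSMs."""
    votes = Counter()
    for calls in results.values():
        for spectrum, peptide in calls.items():
            votes[(spectrum, peptide)] += 1
    return {s: p for (s, p), n in votes.items() if n >= min_votes}

results = {
    "alg1": {"s1": "PEPTIDE", "s2": "SEQENCE"},
    "alg2": {"s1": "PEPTIDE", "s2": "SEQUENC"},
    "alg3": {"s1": "PEPTIDE"},
}
print(consensus_psms(results))  # {'s1': 'PEPTIDE'}: s2 has no agreement
```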

  17. HomeBank: An Online Repository of Daylong Child-Centered Audio Recordings

    PubMed Central

    VanDam, Mark; Warlaumont, Anne S.; Bergelson, Elika; Cristia, Alejandrina; Soderstrom, Melanie; De Palma, Paul; MacWhinney, Brian

    2017-01-01

    HomeBank is introduced here. It is a public, permanent, extensible, online database of daylong audio recorded in naturalistic environments. HomeBank serves two primary purposes. First, it is a repository for raw audio and associated files: one database requires special permissions, and another redacted database allows unrestricted public access. Associated files include metadata such as participant demographics and clinical diagnostics, automated annotations, and human-generated transcriptions and annotations. Many recordings use the child-perspective LENA recorders (LENA Research Foundation, Boulder, Colorado, United States), but various recordings and metadata can be accommodated. The HomeBank database can have both vetted and unvetted recordings, with different levels of accessibility. Additionally, HomeBank is an open repository for processing and analysis tools for HomeBank or similar data sets. HomeBank is flexible for users and contributors, making primary data available to researchers, especially those in child development, linguistics, and audio engineering. HomeBank facilitates researchers’ access to large-scale data and tools, linking the acoustic, auditory, and linguistic characteristics of children’s environments with a variety of variables including socioeconomic status, family characteristics, language trajectories, and disorders. Automated processing applied to daylong home audio recordings is now becoming widely used in early intervention initiatives, helping parents to provide richer speech input to at-risk children. PMID:27111272

  18. Risk of cardiac death among cancer survivors in the United States: a SEER database analysis.

    PubMed

    Abdel-Rahman, Omar

    2017-09-01

    Population-based data on the risk of cardiac death among cancer survivors are needed. This scenario was evaluated in cancer survivors (>5 years) registered within the Surveillance, Epidemiology and End Results (SEER) database. The SEER database was queried using SEER*Stat to determine the frequency of cardiac death compared to other causes of death; and to determine heart disease-specific and cancer-specific survival rates in survivors of each of the 10 most common cancers in men and women in the SEER database. For cancer-specific survival rate, the highest rates were related to thyroid cancer survivors; while the lowest rates were related to lung cancer survivors. For heart disease-specific survival rate, the highest rates were related to thyroid cancer survivors; while the lowest rates were related to both lung cancer survivors and urinary bladder cancer survivors. The following factors were associated with a higher likelihood of cardiac death: male gender, old age at diagnosis, black race and local treatment with radiotherapy rather than surgery (P < 0.0001 for all parameters). Among cancer survivors (>5 years), cardiac death is a significant cause of death and there is a wide variability among different cancers in the relative importance of cardiac death vs. cancer-related death.

  19. The database search problem: a question of rational decision making.

    PubMed

    Gittelson, S; Biedermann, A; Bozza, S; Taroni, F

    2012-10-10

    This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
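
    The decision rule described above can be sketched as a comparison of expected losses. The probabilities and loss values below are illustrative only, not taken from the paper, and the paper's influence-diagram machinery is omitted.

```python
# Decision-theoretic sketch: individualize only when the expected loss of
# doing so is lower than the expected loss of refraining. All numbers are
# illustrative, not from the paper.
def decide(p_source, loss_false_individualization, loss_missed_individualization):
    """p_source: probability the person is the source of the crime stain."""
    exp_loss_individualize = (1 - p_source) * loss_false_individualization
    exp_loss_refrain = p_source * loss_missed_individualization
    return "individualize" if exp_loss_individualize < exp_loss_refrain else "refrain"

# With a false individualization judged 100x worse than a missed one,
# even a fairly high source probability does not justify individualizing:
print(decide(0.98, loss_false_individualization=100, loss_missed_individualization=1))
```

    The example shows how the threshold depends jointly on the state-of-nature probability and the decision maker's loss function, which is the paper's central point.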

  20. Face antispoofing based on frame difference and multilevel representation

    NASA Astrophysics Data System (ADS)

    Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad

    2017-07-01

    Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., a print photo or replay attack) in front of the camera instead of the intruder's genuine face. Face antispoofing has therefore become a hot topic in the face analysis literature, and several applications with an antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also use a multilevel representation that divides the frame difference into multiple blocks. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms other state-of-the-art methods on different media and quality metrics.
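
    The front end of the pipeline (frame differencing plus a block-wise representation) can be sketched as below. Plain intensity histograms stand in for the LBP/LPQ/BSIF descriptors, and the Fisher-score ranking and SVM stages are omitted; the toy frames are invented.

```python
# Simplified sketch of the pipeline's front end: absolute difference of
# successive frames, a block split, and a per-block histogram standing in
# for the texture descriptors. Frames here are tiny toy intensity arrays.
def frame_difference(f1, f2):
    return [[abs(a - b) for a, b in zip(r1, r2)] for r1, r2 in zip(f1, f2)]

def block_histograms(frame, block_size, bins=4, max_val=256):
    feats = []
    for i in range(0, len(frame), block_size):
        for j in range(0, len(frame[0]), block_size):
            hist = [0] * bins
            for row in frame[i:i + block_size]:
                for v in row[j:j + block_size]:
                    hist[v * bins // max_val] += 1
            feats.extend(hist)  # concatenate block descriptors
    return feats

diff = frame_difference([[10, 200], [30, 40]], [[10, 10], [20, 40]])
print(diff)                       # [[0, 190], [10, 0]]
print(block_histograms(diff, 1))  # one 4-bin histogram per 1x1 block
```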

  1. A new version of the RDP (Ribosomal Database Project)

    NASA Technical Reports Server (NTRS)

    Maidak, B. L.; Cole, J. R.; Parker, C. T. Jr; Garrity, G. M.; Larsen, N.; Li, B.; Lilburn, T. G.; McCaughey, M. J.; Olsen, G. J.; Overbeek, R.; et al.

    1999-01-01

    The Ribosomal Database Project (RDP-II), previously described by Maidak et al. [ Nucleic Acids Res. (1997), 25, 109-111], is now hosted by the Center for Microbial Ecology at Michigan State University. RDP-II is a curated database that offers ribosomal RNA (rRNA) nucleotide sequence data in aligned and unaligned forms, analysis services, and associated computer programs. During the past two years, data alignments have been updated and now include >9700 small subunit rRNA sequences. The recent development of an ObjectStore database will provide more rapid updating of data, better data accuracy and increased user access. RDP-II includes phylogenetically ordered alignments of rRNA sequences, derived phylogenetic trees, rRNA secondary structure diagrams, and various software programs for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (ftp.cme.msu.edu) and WWW (http://www.cme.msu.edu/RDP). The WWW server provides ribosomal probe checking, approximate phylogenetic placement of user-submitted sequences, screening for possible chimeric rRNA sequences, automated alignment, and a suggested placement of an unknown sequence on an existing phylogenetic tree. Additional utilities also exist at RDP-II, including distance matrix, T-RFLP, and a Java-based viewer of the phylogenetic trees that can be used to create subtrees.

  2. ISAAC - InterSpecies Analysing Application using Containers.

    PubMed

    Baier, Herbert; Schultz, Jörg

    2014-01-15

    Information about genes, transcripts and proteins is spread over a wide variety of databases. Different tools have been developed that use these databases to identify biological signals in gene lists from large-scale analyses; mostly, they search for enrichments of specific features. However, these tools do not allow an explorative walk through different views, or changing the gene lists as new hypotheses emerge. To fill this niche, we have developed ISAAC, the InterSpecies Analysing Application using Containers. The central idea of this web-based tool is to enable the analysis of sets of genes, transcripts and proteins under different biological viewpoints and to interactively modify these sets at any point of the analysis. Detailed history and snapshot information allows each action to be traced. Furthermore, one can easily switch back to previous states and perform new analyses. Currently, sets can be viewed in the context of genomes, protein functions, protein interactions, pathways, regulation, diseases and drugs. Additionally, users can switch between species with an automatic, orthology-based translation of existing gene sets. As today's research is usually performed in larger teams and consortia, ISAAC provides group-based functionalities whereby sets, as well as analysis results, can be exchanged between group members. ISAAC fills the gap between primary databases and tools for the analysis of large gene lists. With its highly modular, JavaEE-based design, the implementation of new modules is straightforward. Furthermore, ISAAC comes with an extensive web-based administration interface including tools for the integration of third-party data, so a local installation is easily feasible. In summary, ISAAC is tailor-made for highly explorative, interactive analyses of gene, transcript and protein sets in a collaborative environment.

  3. Evaluation of protein spectra cluster analysis for Streptococcus spp. identification from various swine clinical samples.

    PubMed

    Matajira, Carlos E C; Moreno, Luisa Z; Gomes, Vasco T M; Silva, Ana Paula S; Mesquita, Renan E; Doto, Daniela S; Calderaro, Franco F; de Souza, Fernando N; Christ, Ana Paula G; Sato, Maria Inês Z; Moreno, Andrea M

    2017-03-01

    Traditional microbiological methods enable genus-level identification of Streptococcus spp. isolates. However, as the species of this genus show broad phenotypic variation, species-level identification or even differentiation within the genus is difficult. Herein we report the evaluation of protein spectra cluster analysis for the identification of Streptococcus species associated with disease in swine by means of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). A total of 250 S. suis-like isolates obtained from pigs with clinical signs of encephalitis, arthritis, pneumonia, metritis, and urinary or septicemic infection were studied. The isolates came from pigs in different Brazilian states from 2001 to 2014. The MALDI-TOF MS analysis identified 86% (215 of 250) as S. suis and 14% (35 of 250) as S. alactolyticus, S. dysgalactiae, S. gallinaceus, S. gallolyticus, S. gordonii, S. henryi, S. hyointestinalis, S. hyovaginalis, S. mitis, S. oralis, S. pluranimalium, and S. sanguinis. The MALDI-TOF MS identification was confirmed in 99.2% of the isolates by 16S rDNA sequencing, with MALDI-TOF MS misidentifying 2 S. pluranimalium as S. hyovaginalis. Isolates were also tested by a biochemical automated system that correctly identified all isolates of 8 of the 10 species in the database. Neither the isolates of the 3 species not in the database (S. gallinaceus, S. henryi, and S. hyovaginalis) nor the isolates of 2 species that were in the database (S. oralis and S. pluranimalium) could be identified. The topology of the protein spectra cluster analysis appears to sustain the species phylogenetic similarities, further supporting identification by MALDI-TOF MS examination as a rapid and accurate alternative to 16S rDNA sequencing.

  4. Accuracy of a state immunization registry in the pediatric emergency department.

    PubMed

    Stecher, Dawn S; Adelman, Raymond; Brinkman, Traci; Bulloch, Blake

    2008-02-01

    The purpose of this study was to ascertain whether either parental recall or a state immunization registry was as accurate as the medical record in determining immunization status in the emergency department (ED). A convenience sample of children younger than 5 years who presented to the ED between July 2004 and May 2005 was enrolled prospectively. After informed consent was obtained, parents were asked about their child's immunization status. All children then had their immunization data accessed in the Arizona State Immunization Information System. The information obtained from the state registry, as well as the information from the parental interview, was then compared with the information on the medical record obtained from the primary care physician (PCP). Data were analyzed using simple descriptive statistics. A total of 332 children were enrolled in the study. A total of 302 (91%) children enrolled were found in the state database, and 222 (74%) of these had a medical record available for comparison. The database agreed with the PCP record in 130 (59%) cases; parental report agreed with the PCP record in 149 (62%) cases. Although most children can be found in the state immunization registry, it seems to be similar in accuracy to parental recall of immunization status when each is compared with the medical record. This may have been due to either underreporting of immunizations from the community or a delay in updating the state database. At this time, neither parental recall nor the database would accurately determine a child's immunization status during an ED visit.
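
    The agreement comparison described above boils down to simple percent agreement between two information sources. The sketch below is only an illustration of that descriptive statistic, not the study's code, and the up-to-date flags are hypothetical:

```python
def percent_agreement(source_a, source_b):
    """Fraction of children whose immunization status matches between two sources."""
    assert len(source_a) == len(source_b)
    matches = sum(a == b for a, b in zip(source_a, source_b))
    return matches / len(source_a)

# Hypothetical "up to date" flags for six children: registry vs. PCP record
registry = [True, True, False, True, False, True]
pcp      = [True, False, False, True, True, True]
rate = percent_agreement(registry, pcp)   # 4 of 6 agree
```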

  5. A Brief Review of RNA–Protein Interaction Database Resources

    PubMed Central

    Yi, Ying; Zhao, Yue; Huang, Yan; Wang, Dong

    2017-01-01

    RNA–Protein interactions play critical roles in various biological processes. By collecting and analyzing the RNA–Protein interactions and binding sites from experiments and predictions, RNA–Protein interaction databases have become an essential resource for the exploration of the transcriptional and post-transcriptional regulatory network. Here, we briefly review several widely used RNA–Protein interaction database resources developed in recent years to provide a guide to these databases. The content and major functions of each database are presented. These brief descriptions help users quickly choose the database containing the information they are interested in. In short, these RNA–Protein interaction database resources are continually updated, and their current state reflects the ongoing effort to identify and analyze the large number of RNA–Protein interactions. PMID:29657278

  6. Prototype Packaged Databases and Software in Health

    PubMed Central

    Gardenier, Turkan K.

    1980-01-01

    This paper describes the recent demand for packaged databases and software for health applications in light of developments in mini- and micro-computer technology. Specific features for defining prospective user groups are discussed; criticisms generated for large-scale epidemiological data use as a means of replacing clinical trials and associated controls are posed to the reader. The available collaborative efforts for access and analysis of jointly structured health data are stressed, with recommendations for new analytical techniques specifically geared to monitoring data, such as the CTSS (Cumulative Transitional State Score) generated for tracking ongoing patient status over time in clinical trials. Examples of graphic display are given from the Domestic Information Display System (DIDS), a collaborative multi-agency effort to computerize and make accessible user-specified U.S. and local maps relating to health, environment, socio-economic and energy data.

  7. Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals.

    PubMed

    Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R R; Kasukawa, Takeya; Kawaji, Hideya

    2017-01-04

    Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, the primary data and analysis results have been expanded, and the functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update, which deepens understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates to the web resource and provide future perspectives. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Process-based organization design and hospital efficiency.

    PubMed

    Vera, Antonio; Kuntz, Ludwig

    2007-01-01

    The central idea of process-based organization design is that organizing a firm around core business processes leads to cost reductions and quality improvements. We investigated theoretically and empirically whether the implementation of a process-based organization design is advisable in hospitals. The data came from a database compiled by the Statistical Office of the German federal state of Rheinland-Pfalz and from a written questionnaire, which was sent to the chief executive officers (CEOs) of all 92 hospitals in this federal state. We used data envelopment analysis (DEA) to measure hospital efficiency, and factor analysis and regression analysis to test our hypothesis. Our principal finding is that a high degree of process-based organization has a moderate but significant positive effect on the efficiency of hospitals. The main implication is that hospitals should implement a process-based organization to improve their efficiency. However, to actually achieve positive effects on efficiency, it is of paramount importance to observe some implementation rules, in particular to mobilize physician participation and to create an adequate organizational culture.
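
    The efficiency measurement described above uses data envelopment analysis (DEA). The following is only a sketch of the standard input-oriented CCR envelopment model on toy data (one input such as staffed beds, one output such as treated cases), not the study's implementation; it assumes SciPy's `linprog` is available:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o (envelopment form).
    X: (n_units, n_inputs), Y: (n_units, n_outputs).
    Decision variables are [theta, lambda_1..lambda_n]; minimize theta."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lambda_j x_ij - theta x_io <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    b_in = np.zeros(X.shape[1])
    # outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (1 + n))
    return res.fun

# Toy hospitals: inputs (beds) and outputs (treated cases)
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [4.0], [4.0]])
effs = [dea_ccr_efficiency(X, Y, o) for o in range(len(X))]
```

    Here the third unit produces the same output as the second with twice the input, so its efficiency comes out at 0.5 while the first two are on the frontier.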

  9. Smartphone-Based Accurate Analysis of Retinal Vasculature towards Point-of-Care Diagnostics

    PubMed Central

    Xu, Xiayu; Ding, Wenxiang; Wang, Xuemin; Cao, Ruofan; Zhang, Maiye; Lv, Peilin; Xu, Feng

    2016-01-01

    Retinal vasculature analysis is important for the early diagnostics of various eye and systemic diseases, making it a potentially useful biomarker, especially for resource-limited regions and countries. Here we developed a smartphone-based retinal image analysis system for point-of-care diagnostics that is able to load a fundus image, segment retinal vessels, analyze individual vessel width, and store or uplink results. The proposed system was not only evaluated on widely used public databases and compared with the state-of-the-art methods, but also validated on clinical images directly acquired with a smartphone. An Android app is also developed to facilitate on-site application of the proposed methods. Both visual assessment and quantitative assessment showed that the proposed methods achieved comparable results to the state-of-the-art methods that require high-standard workstations. The proposed system holds great potential for the early diagnostics of various diseases, such as diabetic retinopathy, for resource-limited regions and countries. PMID:27698369

  10. Eta photoproduction in a combined analysis of pion- and photon-induced reactions

    DOE PAGES

    Ronchen, D.; Doring, M.; Haberzettl, H.; ...

    2015-06-25

    The $\eta N$ final state is isospin-selective and thus provides access to the spectrum of excited nucleons without being affected by excited $\Delta$ states. To this end, the world database on eta photoproduction off the proton up to a center-of-mass energy of $E \sim 2.3$ GeV is analyzed, including data on differential cross sections, and single and double polarization observables. The resonance spectrum and its properties are determined in a combined analysis of eta and pion photoproduction off the proton together with the reactions $\pi N \to \pi N$, $\eta N$, $K\Lambda$ and $K\Sigma$. For the analysis, the so-called Jülich coupled-channel framework is used, incorporating unitarity, analyticity, and effective three-body channels. Parameters tied to photoproduction and hadronic interactions are varied simultaneously. Furthermore, the influence of recent MAMI $T$ and $F$ asymmetry data on the eta photoproduction amplitude is discussed in detail.

  11. Evaluating the national land cover database tree canopy and impervious cover estimates across the conterminous United States: a comparison with photo-interpreted estimates

    Treesearch

    David J. Nowak; Eric J. Greenfield

    2010-01-01

    The 2001 National Land Cover Database (NLCD) provides 30-m resolution estimates of percentage tree canopy and percentage impervious cover for the conterminous United States. Previous estimates that compared NLCD tree canopy and impervious cover estimates with photo-interpreted cover estimates within selected counties and places revealed that NLCD underestimates tree...

  12. Ground sample data for the Conterminous U.S. Land Cover Characteristics Database

    Treesearch

    Robert Burgan; Colin Hardy; Donald Ohlen; Gene Fosnight; Robert Treder

    1999-01-01

    Ground sample data were collected for a land cover database and raster map that portray 159 vegetation classes at 1 km² resolution for the conterminous United States. Locations for 3,500 1 km² ground sample plots were selected randomly across the United States. The number of plots representing each vegetation class was weighted by the proportionate coverage of each...
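
    The coverage-proportional weighting of plots per class can be illustrated with a largest-remainder allocation. This is a hypothetical sketch of the idea, not the study's actual sampling procedure; the coverage figures are made up:

```python
from math import floor

def allocate_plots(coverage, total_plots):
    """Allocate sample plots to classes in proportion to area coverage,
    using the largest-remainder method so counts sum to total_plots."""
    total = sum(coverage)
    quotas = [c / total * total_plots for c in coverage]
    counts = [floor(q) for q in quotas]
    remainder = total_plots - sum(counts)
    # Give leftover plots to the classes with the largest fractional parts
    order = sorted(range(len(coverage)),
                   key=lambda i: quotas[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts

counts = allocate_plots([50.0, 30.0, 20.0], 10)   # proportional split
```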

  13. The NCBI BioSystems database.

    PubMed

    Geer, Lewis Y; Marchler-Bauer, Aron; Geer, Renata C; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI's Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets.

  14. IAC - INTEGRATED ANALYSIS CAPABILITY

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1994-01-01

    The objective of the Integrated Analysis Capability (IAC) system is to provide a highly effective, interactive analysis tool for the integrated design of large structures. With the goal of supporting the unique needs of engineering analysis groups concerned with interdisciplinary problems, IAC was developed to interface programs from the fields of structures, thermodynamics, controls, and system dynamics with an executive system and database to yield a highly efficient multi-disciplinary system. Special attention is given to user requirements such as data handling and on-line assistance with operational features, and the ability to add new modules of the user's choice at a future date. IAC contains an executive system, a data base, general utilities, interfaces to various engineering programs, and a framework for building interfaces to other programs. IAC has shown itself to be effective in automatic data transfer among analysis programs. IAC 2.5, designed to be compatible as far as possible with Level 1.5, contains a major upgrade in executive and database management system capabilities, and includes interfaces to enable thermal, structures, optics, and control interaction dynamics analysis. The IAC system architecture is modular in design. 1) The executive module contains an input command processor, an extensive data management system, and driver code to execute the application modules. 2) Technical modules provide standalone computational capability as well as support for various solution paths or coupled analyses. 3) Graphics and model generation interfaces are supplied for building and viewing models. Advanced graphics capabilities are provided within particular analysis modules such as INCA and NASTRAN. 4) Interface modules provide for the required data flow between IAC and other modules. 5) User modules can be arbitrary executable programs or JCL procedures with no pre-defined relationship to IAC. 
6) Special purpose modules are included, such as MIMIC (Model Integration via Mesh Interpolation Coefficients), which transforms field values from one model to another; LINK, which simplifies incorporation of user specific modules into IAC modules; and DATAPAC, the National Bureau of Standards statistical analysis package. The IAC database contains structured files which provide a common basis for communication between modules and the executive system, and can contain unstructured files such as NASTRAN checkpoint files, DISCOS plot files, object code, etc. The user can define groups of data and relations between them. A full data manipulation and query system operates with the database. The current interface modules comprise five groups: 1) Structural analysis - IAC contains a NASTRAN interface for standalone analysis or certain structural/control/thermal combinations. IAC provides enhanced structural capabilities for normal modes and static deformation analysis via special DMAP sequences. IAC 2.5 contains several specialized interfaces from NASTRAN in support of multidisciplinary analysis. 2) Thermal analysis - IAC supports finite element and finite difference techniques for steady state or transient analysis. There are interfaces for the NASTRAN thermal analyzer, SINDA/SINFLO, and TRASYS II. FEMNET, which converts finite element structural analysis models to finite difference thermal analysis models, is also interfaced with the IAC database. 3) System dynamics - The DISCOS simulation program which allows for either nonlinear time domain analysis or linear frequency domain analysis, is fully interfaced to the IAC database management capability. 4) Control analysis - Interfaces for the ORACLS, SAMSAN, NBOD2, and INCA programs allow a wide range of control system analyses and synthesis techniques. 
Level 2.5 includes EIGEN, which provides tools for large order system eigenanalysis, and BOPACE, which allows for geometric capabilities and finite element analysis with nonlinear material. Also included in IAC level 2.5 is SAMSAN 3.1, an engineering analysis program which contains a general purpose library of over 600 subroutines.

  15. Conceptions of systemic reform: California science education as an investigative example

    NASA Astrophysics Data System (ADS)

    Sachse, Thomas Paul

    This study explored three perspectives of systemic reform in the context of the California state strategies for improving science education. The three perspectives are those of conceptualizers, implementers, and government administrators. The California case study is examined during the ten-year period from 1983 to 1993. This study is of particular significance because it examines science education reforms during the ten-year period of Bill Honig's state superintendency in the largest and most diverse state. By examining the facets of state science reforms from three rather different perspectives, the study contrasts how definitions of systemic reform vary with role. This qualitative study employs document analysis, archival reviews, and participant interviews as the primary data collection methods. Document analysis included key curriculum frameworks, project proposals and reports, relevant legislation, and professional correspondence. Archival reviews included databases (such as the California Basic Educational Data System), assessment reports (such as the California Assessment Program: Rationale and Content), and policy analyses (such as the Policy Analysis for California Education: Conditions of Education). Interviews were conducted for each of the three perspectives across five segments of the reform strategy, for a total of fifteen interviews. Data analysis consisted of combining detailed reviews of documents, archives, and interview information with an examination of perspectives by role group. The study concludes with an analysis of how each role group perceived the facets of systemic reform in the context of the California case study of science education reform. In addition, the research points to "lessons learned": the strengths and weaknesses of systemic reform strategies at the state level. The study offers recommendations to other large-scale (state-level) policy reformers interested in creating, sustaining, and maintaining lasting change.

  16. Using statistical process control to make data-based clinical decisions.

    PubMed

    Pfadt, A; Wheeler, D J

    1995-01-01

    Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior as well as for the course of treatment to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability based on the writings of Walter Shewart and W. Edwards Deming in the area of industrial quality control helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help to achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.
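
    The control-chart idea above can be made concrete with an individuals (XmR) chart, whose limits are the mean plus or minus 2.66 times the average moving range. This is a generic illustration of the SPC technique, not the paper's code, and the weekly behavior counts are made up:

```python
from statistics import mean

def xmr_limits(values):
    """Individuals (XmR) chart limits: centre line +/- 2.66 * mean moving range.
    2.66 is the conventional constant 3/1.128 for two-point moving ranges."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    centre = mean(values)
    spread = 2.66 * mean(moving_ranges)
    return centre - spread, centre, centre + spread

# Hypothetical weekly counts of a target behavior during a baseline phase
baseline = [12, 15, 11, 14, 13, 16, 12, 14]
lcl, centre, ucl = xmr_limits(baseline)
out_of_control = [x for x in baseline if not lcl <= x <= ucl]
```

    Points outside the limits signal special-cause variation worth a clinical review; here the baseline is stable, so the list is empty.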

  17. A complete database for the Einstein imaging proportional counter

    NASA Technical Reports Server (NTRS)

    Helfand, David J.

    1991-01-01

    A complete database for the Einstein Imaging Proportional Counter (IPC) has been assembled. The original data that make up the archive are described, as well as the structure of the database, the Op-Ed analysis system, the technical advances achieved relative to the analysis of IPC data, the data products produced, and some uses to which the database has been put by scientists outside Columbia University over the past year.

  18. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

    The paper analyses a database management system (DBMS). Expert systems, databases, and database technology have become essential components of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.

  19. Domain fusion analysis by applying relational algebra to protein sequence and domain databases.

    PubMed

    Truong, Kevin; Ikura, Mitsuhiko

    2003-05-06

    Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
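
    The relational-algebra approach can be sketched in standard SQL. The toy example below (run against an in-memory SQLite database) uses a hypothetical `domain_hits` table and made-up protein/domain names, not the paper's actual schema: a fusion protein is one carrying two distinct domains that also occur separately in two other proteins, which are then predicted to be functionally linked.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE domain_hits (protein TEXT, domain TEXT);
-- toy data: P3 fuses domains that occur separately in P1 and P2
INSERT INTO domain_hits VALUES
  ('P1', 'DomA'), ('P2', 'DomB'),
  ('P3', 'DomA'), ('P3', 'DomB'),
  ('P4', 'DomC');
""")

rows = con.execute("""
    SELECT DISTINCT f.protein AS fusion,
                    b.protein AS partner_a,
                    c.protein AS partner_b
    FROM domain_hits f
    JOIN domain_hits g ON g.protein = f.protein AND g.domain <> f.domain
    JOIN domain_hits b ON b.domain = f.domain AND b.protein <> f.protein
    JOIN domain_hits c ON c.domain = g.domain AND c.protein <> f.protein
    WHERE b.protein <> c.protein
""").fetchall()
```

    The query is pure self-joins, so it runs unchanged on any SQL engine and can be re-executed as the underlying sequence and domain tables grow.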

  20. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. Here, a technique that combines simulation and analysis is used to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  1. Reliability database development for use with an object-oriented fault tree evaluation program

    NASA Technical Reports Server (NTRS)

    Heger, A. Sharif; Harringtton, Robert J.; Koen, Billy V.; Patterson-Hine, F. Ann

    1989-01-01

    A description is given of the development of a fault-tree analysis method using object-oriented programming. In addition, the authors discuss the programs that have been developed, or are under development, to connect a fault-tree analysis routine to a reliability database. To assess the performance of the routines, a relational database simulating one of the nuclear power industry databases has been constructed. For a realistic assessment of the results of this project, the use of one of the existing nuclear power reliability databases is planned.
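
    The object-oriented fault-tree idea can be sketched in a few lines. This is a generic illustration in Python rather than the authors' environment: gate nodes combine child failure probabilities with AND/OR logic, the basic-event probabilities are hypothetical, and statistical independence of events is assumed.

```python
class Gate:
    """Minimal fault-tree node: 'basic' leaves carry a failure probability;
    'and'/'or' nodes combine their children assuming independent events."""
    def __init__(self, kind, children=None, p=None):
        self.kind, self.children, self.p = kind, children or [], p

    def probability(self):
        if self.kind == "basic":
            return self.p
        probs = [c.probability() for c in self.children]
        if self.kind == "and":            # all children must fail
            out = 1.0
            for q in probs:
                out *= q
            return out
        none_fail = 1.0                   # "or": at least one child fails
        for q in probs:
            none_fail *= (1.0 - q)
        return 1.0 - none_fail

# Hypothetical system: top event is (pump AND valve) OR loss of power
pump, valve, power = Gate("basic", p=0.01), Gate("basic", p=0.02), Gate("basic", p=0.001)
top = Gate("or", [Gate("and", [pump, valve]), power])
p_top = top.probability()
```

    In practice the leaf probabilities would be pulled from the reliability database instead of being hard-coded.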

  2. Nonlinear and progressive failure aspects of transport composite fuselage damage tolerance

    NASA Technical Reports Server (NTRS)

    Walker, Tom; Ilcewicz, L.; Murphy, Dan; Dopker, Bernhard

    1993-01-01

    The purpose is to provide an end-user's perspective on the state of the art in life prediction and failure analysis by focusing on subsonic transport fuselage issues being addressed in the NASA/Boeing Advanced Technology Composite Aircraft Structure (ATCAS) contract and a related task-order contract. First, some discrepancies between the ATCAS tension-fracture test database and classical prediction methods are discussed, followed by an overview of material modeling work aimed at explaining some of these discrepancies. Finally, analysis efforts associated with a pressure-box test fixture are addressed, as an illustration of the modeling complexities required to model and interpret tests.

  3. Fractures in Kidney Transplant Recipients: A Comparative Study Between England and New York State.

    PubMed

    Arnold, Julia; Mytton, Jemma; Evison, Felicity; Gill, Paramjit S; Cockwell, Paul; Sharif, Adnan; Ferro, Charles J

    2017-11-15

    Fractures are associated with high morbidity and are a major concern for kidney transplant recipients. No comparative analysis has yet been conducted between countries in the contemporary era to inform future international prevention trials. Data were obtained from the Hospital Episode Statistics and the Statewide Planning and Research Cooperative databases on all adult kidney transplants performed in England and New York State from 2003 to 2013, respectively, and on posttransplant fracture-related hospitalization from 2003 to 2014. Our analysis included 18 493 English and 11 602 New York State kidney transplant recipients. Overall, 637 English recipients (3.4%) and 398 New York State recipients (3.4%) sustained a fracture, giving an unadjusted event rate of 7.0 and 5.9 per 1000 years, respectively (P = .948). Of these, 147 English (0.8%) and 101 New York State recipients (0.9%) sustained a hip fracture, giving an unadjusted event rate of 1.6 and 1.5 per 1000 years, respectively (P = .480). There were no differences in the cumulative incidence of all fractures or hip fractures. One-year mortality rates after any fracture (9% and 11%) or after a hip fracture (15% and 17%) were not different between cohorts. Contemporaneous English and New York State kidney transplant recipients have similar fracture rates and mortality rates postfracture.

  4. The Fatality Analysis Reporting System as a tool for investigating racial and ethnic determinants of motor vehicle crash fatalities.

    PubMed

    Briggs, Nathaniel C; Levine, Robert S; Haliburton, William P; Schlundt, David G; Goldzweig, Irwin; Warren, Rueben C

    2005-07-01

    The Fatality Analysis Reporting System (FARS) is a Department of Transportation database in the public domain that contains detailed information about fatalities resulting from motor vehicle crashes on public roadways in the United States since 1975. However, data on race and Hispanic ethnicity were not collected by FARS until 1999. Since then, completeness of reported racial and ethnic information has varied from State to State. To assess utility of FARS for investigating race- and ethnicity-specific risk factors associated with motor vehicle crash mortality, we examined yearly national and State-specific reporting rates of race and Hispanic ethnicity for 168,863 motor vehicle crash fatalities from 1999 to 2002. In 1999, national reporting was 85% for race and 78% for Hispanic ethnicity. Over the 4-year study period, a significant linear increase in annual reporting for both race and Hispanic ethnicity was evident at the national level, as reporting by individual States improved over time. In 2002, national reporting rates reached 90% for race and 88% for Hispanic ethnicity. Our findings indicate that FARS has become a valuable resource for population-based studies of motor vehicle crash mortality disparities that exist among racial and ethnic subpopulations in the United States.

  5. Documentation of a spatial data-base management system for monitoring pesticide application in Washington

    USGS Publications Warehouse

    Schurr, K.M.; Cox, S.E.

    1994-01-01

    The Pesticide-Application Data-Base Management System was created as a demonstration project and was tested with data submitted to the Washington State Department of Agriculture by pesticide applicators from a small geographic area. These data were entered into the Department's relational data-base system and uploaded into the system's ARC/INFO files. Locations for pesticide applications are assigned within the Public Land Survey System grids, and ARC/INFO programs in the Pesticide-Application Data-Base Management System can subdivide each survey section into sixteen idealized quarter-quarter sections for display map grids. The system provides data retrieval and geographic information system plotting capabilities from a menu of seven basic retrieval options. Additionally, ARC/INFO coverages can be created from the retrieved data when required for particular applications. The Pesticide-Application Data-Base Management System, or the general principles used in the system, could be adapted to other applications or to other states.
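
    The sixteen idealized quarter-quarter sections mentioned above can be enumerated mechanically. A sketch, assuming the usual PLSS quarter labels (NE, NW, SW, SE) and a "quarter-quarter of quarter" naming convention, which is not spelled out in the report:

```python
QUARTERS = ["NE", "NW", "SW", "SE"]

def quarter_quarter_labels():
    """The 16 idealized quarter-quarter sections of one PLSS survey section,
    labelled quarter-quarter first (e.g. 'NE NW' = NE quarter of the NW quarter).
    The label convention is assumed, not taken from the USGS report."""
    return [f"{qq} {q}" for q in QUARTERS for qq in QUARTERS]

labels = quarter_quarter_labels()
```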

  6. Radiation Embrittlement Archive Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klasky, Hilda B; Bass, Bennett Richard; Williams, Paul T

    2013-01-01

    The Radiation Embrittlement Archive Project (REAP), which is being conducted by the Probabilistic Integrity Safety Assessment (PISA) Program at Oak Ridge National Laboratory under funding from the U.S. Nuclear Regulatory Commission's (NRC) Office of Nuclear Regulatory Research, aims to provide an archival source of information about the effect of neutron radiation on the properties of reactor pressure vessel (RPV) steels. Specifically, this project is an effort to create an Internet-accessible RPV steel embrittlement database. The project's website, https://reap.ornl.gov, provides information in two forms: (1) a document archive with surveillance capsule reports and related technical reports, in PDF format, for the 104 commercial nuclear power plants (NPPs) in the United States, along with similar reports from other countries; and (2) a relational database archive with detailed information extracted from the reports. The REAP project focuses on data collected from surveillance capsule programs for light-water moderated, nuclear power reactor vessels operated in the United States, including data on Charpy V-notch energy testing results, tensile properties, composition, exposure temperatures, neutron flux (rate of irradiation damage), and fluence (fast neutron fluence, a cumulative measure of irradiation for E > 1 MeV). Additionally, REAP contains data from surveillance programs conducted in other countries. REAP is presently being extended to focus on embrittlement data analysis as well. This paper summarizes the current status of the REAP database and highlights opportunities to access the data and to participate in the project.

  7. PROFESS: a PROtein Function, Evolution, Structure and Sequence database

    PubMed Central

    Triplet, Thomas; Shortridge, Matthew D.; Griep, Mark A.; Stark, Jaime L.; Powers, Robert; Revesz, Peter

    2010-01-01

    The proliferation of biological databases and the easy access enabled by the Internet are having a beneficial impact on the biological sciences and transforming the way research is conducted. There are ∼1100 molecular biology databases dispersed throughout the Internet. To assist in the functional, structural and evolutionary analysis of the abundance of novel proteins continually identified from whole-genome sequencing, we introduce the PROFESS (PROtein Function, Evolution, Structure and Sequence) database. Our database is designed to be versatile and expandable and will not confine analysis to a pre-existing set of data relationships. A fundamental component of this approach is the development of an intuitive query system that incorporates a variety of similarity functions capable of generating data relationships not conceived during the creation of the database. The utility of PROFESS is demonstrated by the analysis of the structural drift of homologous proteins and the identification of potential pancreatic cancer therapeutic targets based on the observation of protein–protein interaction networks. Database URL: http://cse.unl.edu/∼profess/ PMID: 20624718

  8. Identity Recognition Algorithm Using Improved Gabor Feature Selection of Gait Energy Image

    NASA Astrophysics Data System (ADS)

    Chao, LIANG; Ling-yao, JIA; Dong-cheng, SHI

    2017-01-01

    This paper describes an effective gait recognition approach based on Gabor features of the gait energy image. Kernel Fisher analysis combined with a kernel matrix is proposed to select dominant features. A nearest neighbor classifier based on whitened cosine distance is used to discriminate different gait patterns. The proposed approach is tested on the CASIA and USF gait databases. The results show that our approach outperforms other state-of-the-art gait recognition approaches in terms of recognition accuracy and robustness.
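
    A sketch of the classification stage only (whitened cosine distance plus nearest neighbor), with synthetic 2-D features standing in for the Gabor features of gait energy images; the Gabor extraction and kernel Fisher selection steps are omitted:

```python
import numpy as np

def whiten(X):
    """PCA-whitening transform estimated from training features X of shape (n, d)."""
    mu = X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X - mu, rowvar=False))
    W = vecs / np.sqrt(vals + 1e-8)  # scale eigenvectors by 1/sqrt(eigenvalue)
    return mu, W

def cosine_distance(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nn_classify(train_X, train_y, x):
    """Label of the training sample nearest to x under whitened cosine distance."""
    mu, W = whiten(train_X)
    Z = (train_X - mu) @ W   # whitened gallery features
    z = (x - mu) @ W         # whitened probe feature
    dists = [cosine_distance(z, row) for row in Z]
    return train_y[int(np.argmin(dists))]

# Synthetic "features": two gait classes, one probe near class A
train_X = np.array([[10.0, 0.0], [11.0, 0.0], [10.0, 1.0],
                    [0.0, 10.0], [0.0, 11.0], [1.0, 10.0]])
train_y = ["A", "A", "A", "B", "B", "B"]
label = nn_classify(train_X, train_y, np.array([9.0, 1.0]))
```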

  9. Calibration and Validation of the Sage Software Cost/Schedule Estimating System to United States Air Force Databases

    DTIC Science & Technology

    1997-09-01

    factor values are identified. For SASET, revised cost estimating relationships are provided (Apgar et al., 1991). A 1991 AFIT thesis by Gerald Ourada...description of the model is a paragraph directly quoted from the user's manual. This is not to imply that a lack of a thorough analysis indicates...constraints imposed by the system. The effective technology rating is computed from the basic technology rating by the following equation (Apgar et al., 1991

  10. An Analysis of the United States Naval Aviation Schedule Removal Component (SRC) Card Process

    DTIC Science & Technology

    2009-12-01

    JSF has the ability to communicate in flight with its maintenance system, ALIS. Its Prognostic Health Management (PHM) System abilities allow it to...end-users. PLCS allows users of the system, through a central database, visibility of a component's history and lifecycle data. Since both OOMA...used in PLM systems. This research recommends a PLM system that is Web-based and uses DoD-mandated UID technology as the future for data

  11. Cyber Incidents Involving Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert J. Turk

    2005-10-01

    The Analysis Function of the US-CERT Control Systems Security Center (CSSC) at the Idaho National Laboratory (INL) has prepared this report to document cyber security incidents for use by the CSSC. The description and analysis of incidents reported herein support three CSSC tasks: establishing a business case; increasing security awareness and private and corporate participation related to enhanced cyber security of control systems; and providing informational material to support model development and prioritize activities for CSSC. The stated mission of CSSC is to reduce vulnerability of critical infrastructure to cyber attack on control systems. As stated in the Incident Management Tool Requirements (August 2005): "Vulnerability reduction is promoted by risk analysis that tracks actual risk, emphasizes high risk, determines risk reduction as a function of countermeasures, tracks increase of risk due to external influence, and measures success of the vulnerability reduction program." Process control and Supervisory Control and Data Acquisition (SCADA) systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. New research indicates this confidence is misplaced--the move to open standards such as Ethernet, Transmission Control Protocol/Internet Protocol, and Web technologies is allowing hackers to take advantage of the control industry's unawareness. Much of the available information about cyber incidents represents a characterization as opposed to an analysis of events. The lack of good analyses reflects an overall weakness in reporting requirements as well as the fact that to date there have been very few serious cyber attacks on control systems. Most companies prefer not to share cyber attack incident data because of potential financial repercussions.
Uniform reporting requirements will do much to make this information available to the Department of Homeland Security (DHS) and others who require it. This report summarizes the rise in frequency of cyber attacks, describes the perpetrators, and identifies the means of attack. This type of analysis, when used in conjunction with vulnerability analyses, can be used to support a proactive approach to prevent cyber attacks. CSSC will use this document to evolve a standardized approach to incident reporting and analysis. This document will be updated as needed to record additional event analyses and insights regarding incident reporting. This report represents 120 cyber security incidents documented in a number of sources, including: the British Columbia Institute of Technology (BCIT) Industrial Security Incident Database, the 2003 CSI/FBI Computer Crime and Security Survey, the KEMA, Inc., Database, Lawrence Livermore National Laboratory, the Energy Incident Database, the INL Cyber Incident Database, and other open-source data. The National Memorial Institute for the Prevention of Terrorism (MIPT) database was also interrogated but, interestingly, failed to yield any cyber attack incidents. The results of this evaluation indicate that historical evidence provides insight into control system related incidents or failures; however, the limited available information provides little support for future risk estimates. The documented case history shows that activity has increased significantly since 1988. The majority of incidents come from the Internet by way of opportunistic viruses, Trojans, and worms, but a surprisingly large number are directed acts of sabotage. A substantial number of confirmed, unconfirmed, and potential events that directly or potentially impact control systems worldwide are also identified. Twelve selected cyber incidents are presented at the end of this report as examples of the documented case studies (see Appendix B).

  12. Food Service Guideline Policies on State Government-Controlled Properties.

    PubMed

    Zaganjor, Hatidza; Bishop Kendrick, Katherine; Warnock, Amy Lowry; Onufrak, Stephen; Whitsel, Laurie P; Ralston Aoki, Julie; Kimmons, Joel

    2016-09-13

    Food service guideline (FSG) policies can impact millions of daily meals sold or provided to government employees, patrons, and institutionalized persons. This study describes a classification tool to assess FSG policy attributes and uses it to rate FSG policies. Design: quantitative content analysis. Setting: state government facilities in the United States. Participants: policies from the 50 states and the District of Columbia. Measures: frequency of FSG policies and percentage alignment to the tool. State-level policies were identified using legal research databases to assess bills, statutes, regulations, and executive orders proposed or adopted by December 31, 2014. Full-text reviews were conducted to determine inclusion. Included policies were analyzed to assess attributes related to nutrition, behavioral supports, and implementation guidance. A total of 31 policies met the inclusion criteria; 15 were adopted. Overall alignment ranged from 0% to 86%, and only 10 policies aligned with a majority of the FSG policy attributes. Western states had the most FSG policies proposed or adopted (11 policies). The greatest number of FSG policies were proposed or adopted (8 policies) in 2011, followed by the years 2013 and 2014. The FSG policies proposed or adopted through 2014 that intended to improve the food and beverage environment on state government property vary considerably in their content. This analysis offers baseline data on the FSG landscape and information for future FSG policy assessments. © The Author(s) 2016.

  13. Nonequilibrium shock-heated nitrogen flows using a rovibrational state-to-state method

    NASA Astrophysics Data System (ADS)

    Panesi, M.; Munafò, A.; Magin, T. E.; Jaffe, R. L.

    2014-07-01

    A rovibrational collisional model is developed to study the internal energy excitation and dissociation processes behind a strong shock wave in a nitrogen flow. The reaction rate coefficients are obtained from the ab initio database of the NASA Ames Research Center. The master equation is coupled with a one-dimensional flow solver to study the nonequilibrium phenomena encountered in the gas during a hypersonic reentry into Earth's atmosphere. The analysis of the populations of the rovibrational levels demonstrates how rotational and vibrational relaxation proceed at the same rate. This contrasts with the common misconception that translational and rotational relaxation occur concurrently. A significant part of the relaxation process occurs in non-quasi-steady-state conditions. Exchange processes are found to have a significant impact on the relaxation of the gas, while predissociation has a negligible effect. The results obtained by means of the full rovibrational collisional model are used to assess the validity of reduced order models (vibrational collisional and multitemperature) which are based on the same kinetic database. It is found that thermalization and dissociation are drastically overestimated by the reduced order models. The reasons for the failure differ in the two cases. In the vibrational collisional model the overestimation of the dissociation is a consequence of the assumption of equilibrium between the rotational energy and the translational energy. The multitemperature model fails to predict the correct thermochemical relaxation due to the failure of the quasi-steady-state assumption, used to derive the phenomenological rate coefficient for dissociation.
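
    The master-equation formalism described above can be illustrated with a toy system: one ground level, one excited level, and a dissociated pseudo-level, advanced with explicit Euler. The rate coefficients below are arbitrary placeholders, not the NASA Ames ab initio data:

```python
def step(n, k_ex, k_de, k_d, dt):
    """One explicit-Euler step of a toy master equation:
    ground <-> excited (collisional excitation/de-excitation),
    excited -> dissociated (irreversible sink)."""
    n0, n1, nd = n
    dn0 = -k_ex * n0 + k_de * n1
    dn1 = k_ex * n0 - (k_de + k_d) * n1
    dnd = k_d * n1
    return (n0 + dt * dn0, n1 + dt * dn1, nd + dt * dnd)

# March the populations behind the shock; rates and time step are placeholders
n = (1.0, 0.0, 0.0)  # all molecules initially in the ground level
for _ in range(200):
    n = step(n, k_ex=2.0, k_de=1.0, k_d=0.5, dt=0.005)
```

    Because the three rate terms cancel by construction, the total population is conserved at every step, which is a useful sanity check for any master-equation integrator.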

  14. A comprehensive and scalable database search system for metaproteomics.

    PubMed

    Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W

    2016-08-16

    Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. 
The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
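
    The ComPIL idea of splitting one oversized sequence collection into a scalable set of smaller databases can be sketched with simple hash partitioning (hypothetical; the actual ComPIL storage layer and schema differ):

```python
import hashlib

def shard_for(peptide, n_shards):
    """Deterministically assign a peptide sequence to a shard (hash partitioning)."""
    digest = hashlib.md5(peptide.encode("ascii")).hexdigest()
    return int(digest, 16) % n_shards

def build_index(peptide_to_proteins, n_shards=4):
    """Split one large peptide -> proteins mapping into n_shards smaller ones."""
    shards = [dict() for _ in range(n_shards)]
    for pep, prots in peptide_to_proteins.items():
        shards[shard_for(pep, n_shards)][pep] = prots
    return shards

def lookup(shards, peptide):
    """Query only the one shard that can contain the peptide."""
    return shards[shard_for(peptide, len(shards))].get(peptide, [])

# Invented example entries
shards = build_index({"PEPTIDEK": ["P1"], "SEQK": ["P2", "P3"]})
```

    Because the shard is a pure function of the sequence, each spectrum lookup touches only one of the smaller databases, which is the property that keeps query speed roughly flat as the total library grows.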

  15. 49 CFR 384.229 - Skills test examiner auditing and monitoring.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... must be performed at least once every year; (c) Establish and maintain a database to track pass/fail... maintain a database of all third party testers and examiners, which at a minimum tracks the dates and... and maintain a database of all State CDL skills examiners, which at a minimum tracks the dates and...

  16. 49 CFR 384.229 - Skills test examiner auditing and monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... must be performed at least once every year; (c) Establish and maintain a database to track pass/fail... maintain a database of all third party testers and examiners, which at a minimum tracks the dates and... and maintain a database of all State CDL skills examiners, which at a minimum tracks the dates and...

  17. 49 CFR 1570.13 - False statements regarding security background checks by public transportation agency or railroad...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., national security, or of terrorism: (i) Relevant criminal history databases; (ii) In the case of an alien... databases to determine the status of the alien under the immigration laws of the United States; and (iii) Other relevant information or databases, as determined by the Secretary of Homeland Security. (c...

  18. The 2002 RPA Plot Summary database users manual

    Treesearch

    Patrick D. Miles; John S. Vissage; W. Brad Smith

    2004-01-01

    Describes the structure of the RPA 2002 Plot Summary database and provides information on generating estimates of forest statistics from these data. The RPA 2002 Plot Summary database provides a consistent framework for storing forest inventory data across all ownerships for the entire United States. The data represent the best available data as of October 2001....

  19. 49 CFR 1570.13 - False statements regarding security background checks by public transportation agency or railroad...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., national security, or of terrorism: (i) Relevant criminal history databases; (ii) In the case of an alien... databases to determine the status of the alien under the immigration laws of the United States; and (iii) Other relevant information or databases, as determined by the Secretary of Homeland Security. (c...

  20. Checkpointing and Recovery in Distributed and Database Systems

    ERIC Educational Resources Information Center

    Wu, Jiang

    2011-01-01

    A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…
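
    The notion of a transaction-consistent global checkpoint described above can be checked directly: a checkpoint is consistent iff no completed transaction is captured on some data items but missed on others. A minimal model (checkpoint times and write times are hypothetical):

```python
def is_transaction_consistent(checkpoint, transactions):
    """checkpoint: {data_item: time its checkpoint was taken}.
    transactions: one {data_item: write_time} dict per completed transaction.
    A global checkpoint is transaction-consistent iff each transaction's
    writes are either all captured by the checkpoint or all excluded."""
    for writes in transactions:
        captured = [checkpoint[item] >= t for item, t in writes.items()]
        if any(captured) and not all(captured):
            return False  # this transaction is only partially reflected
    return True

# T1 is fully captured; T2 would be split by the second checkpoint
ok = is_transaction_consistent({"x": 5, "y": 5}, [{"x": 3, "y": 4}])
bad = is_transaction_consistent({"x": 5, "y": 3}, [{"x": 4, "y": 4}])
```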

  1. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL (NUDAPT): FACILITATING ADVANCEMENTS IN URBAN METEOROLOGY AND CLIMATE MODELING WITH COMMUNITY-BASED URBAN DATABASES

    EPA Science Inventory

    We discuss the initial design and application of the National Urban Database and Access Portal Tool (NUDAPT). This new project is sponsored by the USEPA and involves collaborations and contributions from many groups from federal and state agencies, and from private and academic i...

  2. 49 CFR 1570.13 - False statements regarding security background checks by public transportation agency or railroad...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., national security, or of terrorism: (i) Relevant criminal history databases; (ii) In the case of an alien... databases to determine the status of the alien under the immigration laws of the United States; and (iii) Other relevant information or databases, as determined by the Secretary of Homeland Security. (c...

  3. 49 CFR 1570.13 - False statements regarding security background checks by public transportation agency or railroad...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., national security, or of terrorism: (i) Relevant criminal history databases; (ii) In the case of an alien... databases to determine the status of the alien under the immigration laws of the United States; and (iii) Other relevant information or databases, as determined by the Secretary of Homeland Security. (c...

  4. Bio-optical data integration based on a 4 D database system approach

    NASA Astrophysics Data System (ADS)

    Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.

    2015-04-01

    Bio-optical characterization of water bodies requires spatio-temporal data about Inherent Optical Properties and Apparent Optical Properties, which allow the comprehension of the underwater light field and support the development of models for monitoring water quality. Measurements are taken to represent optical properties along a column of water, and the spectral data must then be related to depth. However, the spatial positions of measurements may differ because collecting instruments vary, and the records may not refer to the same wavelengths. An additional difficulty is that distinct instruments store data in different formats. A data integration approach is needed to make these large, multi-source data sets suitable for analysis. This makes it possible to evaluate semi-empirical models, even automatically, preceded by preliminary quality-control tasks. This work presents a solution for the stated scenario based on a spatial (geographic) database approach, adopting an object-relational Database Management System (DBMS) able to represent all data collected in the field together with data obtained by laboratory analysis and remote sensing images taken at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates (planimetric and depth) and the time when each datum was taken. The PostgreSQL DBMS, extended by the PostGIS module, was adopted to provide the ability to manage spatial and geospatial data. A prototype was developed that provides the main tools an analyst needs to prepare the data sets for analysis.
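
    Under the stated PostgreSQL/PostGIS design, one way to carry the 4D coordinate (planimetric position, depth, time) is a POINT Z geometry plus a timestamp column. A sketch that only generates the SQL text; the table and column names are invented for illustration and are not the prototype's actual schema:

```python
def insert_reading_sql(table, lon, lat, depth_m, taken_at, wavelength_nm, value):
    """Builds an INSERT for one radiometric reading: planimetric position and
    depth as a PostGIS POINT Z (depth stored as negative z), time as a column.
    Table and column names are hypothetical."""
    return (
        f"INSERT INTO {table} (geom, taken_at, wavelength_nm, value) VALUES ("
        f"ST_SetSRID(ST_MakePoint({lon}, {lat}, {-depth_m}), 4326), "
        f"'{taken_at}', {wavelength_nm}, {value});"
    )

sql = insert_reading_sql("readings", -51.3, -20.8, 2.5,
                         "2014-05-10 13:00", 550, 0.031)
```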

  5. Identification and Analysis of Critical Gaps in Nuclear Fuel Cycle Codes Required by the SINEMA Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adrian Miron; Joshua Valentine; John Christenson

    2009-10-01

    The current state of the art in nuclear fuel cycle (NFC) modeling is an eclectic mixture of codes with various levels of applicability, flexibility, and availability. In support of advanced fuel cycle systems analyses, especially those by the Advanced Fuel Cycle Initiative (AFCI), the University of Cincinnati, in collaboration with Idaho State University, carried out a detailed review of the existing codes describing various aspects of the nuclear fuel cycle and identified the research and development needs required for a comprehensive model of the global nuclear energy infrastructure and the associated nuclear fuel cycles. Relevant information obtained on the NFC codes was compiled into a relational database that allows easy access to various codes' properties. Additionally, the research analyzed the gaps in the NFC computer codes with respect to their potential integration into programs that perform comprehensive NFC analysis.

  6. Framing Child Sexual Abuse: A Longitudinal Content Analysis of Newspaper and Television Coverage, 2002-2012.

    PubMed

    Weatherred, Jane Long

    2017-01-01

    The way in which the news media frame child sexual abuse can influence public perception. This content analysis of the child sexual abuse coverage of eight national news organizations in the United States from 2002 to 2012 includes the two dominant events of the Catholic Church and Pennsylvania State University child sexual abuse scandals. Census and systematic stratified sampling techniques were applied to articles obtained from the Lexis/Nexis Academic database, resulting in a sample of 503 articles. Intercoder reliability was ensured by double coding a randomly selected sample. Study findings indicate a shift in the attribution of responsibility of child sexual abuse among news organizations over the past decade from an individual-level problem with individual-level solutions to a societal-level problem with institutional culpability. Nevertheless, individual-level solutions continue to be framed as the best possible solution.

  7. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model that addresses these challenges and demonstrates its implementation in a relational database system. The data model, referred to as Pathology Analytic Imaging Standards (PAIS), and the database implementation are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes.
Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. Modeling and managing pathology image analysis results in a database provide immediate benefits on the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support by other approaches such as programming languages. 
Standardized, semantically annotated data representation and interfaces also make it possible to more efficiently share image data and analysis results.
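
    A spatial query of the kind the PAIS database supports, "which segmented nuclei fall inside a region of interest", reduces in its simplest form to containment testing of centroids. A toy sketch with invented coordinates; the real PAIS queries run inside DB2 with proper spatial indexing:

```python
def nuclei_in_region(nuclei, region):
    """nuclei: {nucleus_id: (x, y) centroid in slide coordinates}.
    region: (xmin, ymin, xmax, ymax) rectangular region of interest.
    Returns the ids of nuclei whose centroid lies inside the region."""
    xmin, ymin, xmax, ymax = region
    return sorted(nid for nid, (x, y) in nuclei.items()
                  if xmin <= x <= xmax and ymin <= y <= ymax)

hits = nuclei_in_region({1: (10, 10), 2: (200, 50), 3: (15, 12)},
                        (0, 0, 100, 100))
```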

  8. Exploring the single-cell RNA-seq analysis landscape with the scRNA-tools database.

    PubMed

    Zappia, Luke; Phipson, Belinda; Oshlack, Alicia

    2018-06-25

    As single-cell RNA-sequencing (scRNA-seq) datasets have become more widespread, the number of tools designed to analyse these data has dramatically increased. Navigating the vast sea of tools now available is becoming increasingly challenging for researchers. To better facilitate selection of appropriate analysis tools, we have created the scRNA-tools database (www.scRNA-tools.org) to catalogue and curate analysis tools as they become available. Our database collects a range of information on each scRNA-seq analysis tool and categorises them according to the analysis tasks they perform. Exploration of this database gives insights into the areas of rapid development of analysis methods for scRNA-seq data. We see that many tools perform tasks specific to scRNA-seq analysis, particularly clustering and ordering of cells. We also find that the scRNA-seq community embraces an open-source and open-science approach, with most tools available under open-source licenses and preprints being extensively used as a means to describe methods. The scRNA-tools database provides a valuable resource for researchers embarking on scRNA-seq analysis and records the growth of the field over time.
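
    The database's core operation, categorizing tools by the analysis tasks they perform and counting tools per task, can be sketched as follows (the catalogue entries are illustrative, not drawn from scRNA-tools.org):

```python
from collections import Counter

def tools_per_task(catalogue):
    """catalogue: {tool_name: analysis tasks the tool performs}.
    Returns the number of catalogued tools supporting each task."""
    counts = Counter()
    for tasks in catalogue.values():
        counts.update(set(tasks))  # a tool counts once per task
    return counts

# Hypothetical catalogue entries
counts = tools_per_task({"ToolA": ["clustering", "ordering"],
                         "ToolB": ["ordering"],
                         "ToolC": ["clustering", "visualisation"]})
```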

  9. Fullerene data mining using bibliometrics and database tomography

    PubMed

    Kostoff; Braun; Schubert; Toothman; Humenik

    2000-01-01

    Database tomography (DT) is a textual database analysis system consisting of two major components: (1) algorithms for extracting multiword phrase frequencies and phrase proximities (physical closeness of the multiword technical phrases) from any type of large textual database, to augment (2) interpretative capabilities of the expert human analyst. DT was used to derive technical intelligence from a fullerenes database derived from the Science Citation Index and the Engineering Compendex. Phrase frequency analysis by the technical domain experts provided the pervasive technical themes of the fullerenes database, and phrase proximity analysis provided the relationships among the pervasive technical themes. Bibliometric analysis of the fullerenes literature supplemented the DT results with author/journal/institution publication and citation data. Comparisons of fullerenes results with past analyses of similarly structured near-earth space, chemistry, hypersonic/supersonic flow, aircraft, and ship hydrodynamics databases are made. One important finding is that many of the normalized bibliometric distribution functions are extremely consistent across these diverse technical domains and could reasonably be expected to apply to broader chemical topics than fullerenes that span multiple structural classes. Finally, lessons learned about integrating the technical domain experts with the data mining tools are presented.
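
    The two DT components named above, multiword phrase frequency and phrase proximity, can be approximated in a few lines. A crude sketch; the actual DT algorithms are more elaborate:

```python
import re
from collections import Counter

def phrase_frequencies(text, n=2):
    """Multiword (n-gram) phrase frequencies: the first DT component."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def proximity(text, word_a, word_b, window=5):
    """Counts occurrences of word_a with word_b inside a +/- window of words,
    a crude stand-in for DT's phrase-proximity analysis."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(1 for i, w in enumerate(words)
               if w == word_a and word_b in words[max(0, i - window): i + window + 1])

corpus = "carbon nanotube growth and carbon nanotube synthesis"
freqs = phrase_frequencies(corpus)
```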

  10. Post flood damage data collection and assessment in Albania based on DesInventar methodology

    NASA Astrophysics Data System (ADS)

    Toto, Emanuela; Massabo, Marco; Deda, Miranda; Rossello, Laura

    2015-04-01

    In 2013, a systematic collection of disaster loss data based on DesInventar was implemented in Albania. The DesInventar system consists of a methodology and software tool for the systematic collection, documentation and analysis of loss data on disasters. The main sources of information on disasters used for the Albanian database were the Albanian Ministry of Internal Affairs, the National Library and the State Archive. For floods specifically, the database contains nearly 900 records covering a period of 148 years (from 1865 to 2013). The data are georeferenced to the administrative units of Albania: regions, provinces and municipalities. The records describe each event by date of occurrence, duration, localization in administrative units and cause. Additional information covers the effects and damage the event caused to people (deaths, injured, missing, affected, relocated, evacuated, victims) and to houses (damaged or destroyed). Other quantitative indicators are the losses in local currency or US dollars, damage to roads, crops affected, cattle lost and the involvement of social infrastructure such as education and health centers. Qualitative indicators simply register the sectors affected (e.g., transportation, communications, relief, agriculture, water supply, sewerage, power and energy, industry, education, health, other sectors). Through queries and analysis of the collected data it was possible to identify the most affected areas, the economic loss, the damage in agriculture, the houses and people affected and many other variables. The most vulnerable regions in past Albanian floods were identified, as well as the rivers that cause the most damage in the country. Other analyses help to estimate the damage and losses during the main flood events of recent years, which occurred in 2010 and 2011, and to recognize the most affected sectors. The database was used to find the most frequent drivers of floods and to identify the areas with the highest priority for intervention and the areas with the highest economic loss. In the future, the loss and damage database could guide interventions for risk mitigation and decision-making processes. The database can also be used to build empirical loss exceedance curves, which give the average number of times per year that a certain level of loss has occurred. Users of the database information include researchers, students, citizens and policy makers. The operators of the National Operative Center for Civil Emergencies (Albanian Ministry of Internal Affairs) use the database daily to insert new data. At present, Albania has no entity in charge of registering the damage and consequences of floods in a systematic and organized way. In this sense, the DesInventar database provides a basis for the future and helps to identify priorities for creating a national database.
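The Empirical Loss Exceedance Curve mentioned above can be sketched as follows: sort observed event losses in descending order, and the exceedance frequency of each loss level is its rank divided by the number of years of record. This is a minimal formulation assuming event-level losses and the observation period are known; the function name is illustrative.

```python
def empirical_loss_exceedance(event_losses, years):
    """Empirical Loss Exceedance Curve: for each observed loss value,
    the average number of events per year with loss >= that value."""
    losses = sorted(event_losses, reverse=True)
    # rank i (1-based): exactly i recorded events have loss >= losses[i-1]
    return [(loss, rank / years) for rank, loss in enumerate(losses, start=1)]

# Hypothetical example: four flood events over a 4-year record
curve = empirical_loss_exceedance([10, 50, 20, 5], years=4)
```

Here the largest loss (50) is exceeded 0.25 times per year on average, while a loss of 5 or more occurs about once per year.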

  11. Localization of thermal anomalies in electrical equipment using Infrared Thermography and support vector machine

    NASA Astrophysics Data System (ADS)

    Laib dit Leksir, Y.; Mansour, M.; Moussaoui, A.

    2018-03-01

    Analysis and processing of databases obtained from infrared thermal inspections of electrical installations require new tools that extract more information than visual inspection alone provides. Consequently, methods based on the capture of thermal images show great potential and are increasingly employed in this field. However, effective techniques are needed to analyse these databases and extract significant information on the state of the infrastructure. This paper explains how such an approach can be implemented and proposes a system that can help detect faults in thermal images of electrical installations. The proposed method classifies and identifies the region of interest (ROI); the identification is conducted using a support vector machine (SVM) algorithm. The aim is to capture faults in electrical equipment during an inspection of machines using a FLIR A40 camera. Binarization techniques are then employed to select the region of interest. Finally, a comparative analysis of the misclassification errors obtained with the proposed method, Fuzzy c-means, and Otsu thresholding is presented.
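One of the comparison baselines, Otsu thresholding, binarizes a thermal image by choosing the gray-level threshold that maximizes between-class variance; pixels above the threshold form a candidate hot-spot ROI. The sketch below is a minimal NumPy illustration on an assumed synthetic image, not the paper's SVM pipeline.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: pick the threshold maximizing the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()        # per-bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                          # background class weight
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)                # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    k = np.nanargmax(between[:-1])             # exclude degenerate last bin
    return centers[k]

# Hypothetical thermal frame: a hot fault region on a cooler background
img = np.full((64, 64), 30.0)
img[20:30, 20:30] = 80.0                       # simulated 10x10 hot spot
t = otsu_threshold(img)
roi = img > t                                  # binarized region of interest
```

The resulting mask isolates the anomalous region, which a classifier such as an SVM could then label from features extracted within it.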

  12. Development of an Integrated Hydrologic Modeling System for Rainfall-Runoff Simulation

    NASA Astrophysics Data System (ADS)

    Lu, B.; Piasecki, M.

    2008-12-01

    This paper presents the development of an integrated hydrologic model with functionality for digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. The proposed system is intended to work as a back end to the CUAHSI HIS cyberinfrastructure developments. As a first step in developing this system, the physics-based distributed hydrologic model PIHM (Penn State Integrated Hydrologic Model) is wrapped in the OpenMI (Open Modeling Interface and Environment) framework so as to interact seamlessly with OpenMI-compliant meteorological models. The graphical user interface is being developed from the open GIS application MapWindow, which permits functionality expansion through plug-ins. Modules set up through the GUI workboard include those for retrieving meteorological data from existing databases or meteorological prediction models, obtaining geospatial data from the output of digital watershed processing, and importing initial and boundary conditions. They are connected to the OpenMI-compliant PIHM to simulate rainfall-runoff processes, and a module automatically displays output after the simulation. Online databases are accessed through the WaterOneFlow web services, and the retrieved data are stored either in an observation database (OD) following the schema of the Observation Data Model (ODM), in the case of time-series support, or in a grid-based storage facility, which may be a format such as netCDF or a grid-based database schema. Specific development steps include the creation of a bridge to overcome interoperability issues between PIHM and the ODM, as well as the embedding of TauDEM (Terrain Analysis Using Digital Elevation Models) into the model; this module is responsible for delineating the watershed and stream network from digital elevation models. Visualizing and editing geospatial data is achieved through MapWinGIS, an ActiveX control developed by the MapWindow team. After application to a practical watershed, the performance of the model can be tested by the post-event analysis module.

  13. Database searching and accounting of multiplexed precursor and product ion spectra from the data independent analysis of simple and complex peptide mixtures.

    PubMed

    Li, Guo-Zhong; Vissers, Johannes P C; Silva, Jeffrey C; Golick, Dan; Gorenstein, Marc V; Geromanos, Scott J

    2009-03-01

    A novel database search algorithm is presented for the qualitative identification of proteins over a wide dynamic range, both in simple and complex biological samples. The algorithm has been designed for the analysis of data originating from data independent acquisitions, whereby multiple precursor ions are fragmented simultaneously. Measurements used by the algorithm include retention time, ion intensities, charge state, and accurate masses on both precursor and product ions from LC-MS data. The search algorithm uses an iterative process whereby each iteration incrementally increases the selectivity, specificity, and sensitivity of the overall strategy. Increased specificity is obtained by utilizing a subset database search approach, whereby for each subsequent stage of the search, only those peptides from securely identified proteins are queried. Tentative peptide and protein identifications are ranked and scored by their relative correlation to a number of models of known and empirically derived physicochemical attributes of proteins and peptides. In addition, the algorithm utilizes decoy database techniques for automatically determining the false positive identification rates. The search algorithm has been tested by comparing the search results from a four-protein mixture, the same four-protein mixture spiked into a complex biological background, and a variety of other "system" type protein digest mixtures. The method was validated independently by data dependent methods, while concurrently relying on replication and selectivity. Comparisons were also performed with other commercially and publicly available peptide fragmentation search algorithms. The presented results demonstrate the ability to correctly identify peptides and proteins from data independent acquisition strategies with high sensitivity and specificity. They also illustrate a more comprehensive analysis of the samples studied, providing approximately 20% more protein identifications compared to a more conventional data directed approach using the same identification criteria, with a concurrent increase in both sequence coverage and the number of modified peptides.
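The decoy-database estimate of the false positive identification rate mentioned above can be sketched generically: search both a target and a reversed/shuffled decoy database, then estimate the FDR at a score threshold as the ratio of decoy hits to target hits above it. This is a standard target-decoy formulation, not the authors' specific algorithm; the score lists and thresholds below are illustrative.

```python
def decoy_fdr(target_scores, decoy_scores, threshold):
    """Estimate the false-discovery rate at a score threshold:
    FDR ~= (# decoy hits >= threshold) / (# target hits >= threshold)."""
    t = sum(1 for s in target_scores if s >= threshold)
    d = sum(1 for s in decoy_scores if s >= threshold)
    return d / t if t else 0.0

def threshold_for_fdr(target_scores, decoy_scores, max_fdr=0.01):
    """Lowest score threshold whose estimated FDR stays within max_fdr."""
    for s in sorted(set(target_scores)):
        if decoy_fdr(target_scores, decoy_scores, s) <= max_fdr:
            return s
    return None
```

Raising the threshold trades identifications for confidence; the second helper finds the most permissive cutoff that still meets a chosen FDR.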

  14. The NCBI BioSystems database

    PubMed Central

    Geer, Lewis Y.; Marchler-Bauer, Aron; Geer, Renata C.; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H.

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI’s Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets. PMID:19854944

  15. Fine-structure resolved rotational transitions and database for CN+H2 collisions

    NASA Astrophysics Data System (ADS)

    Burton, Hannah; Mysliwiec, Ryan; Forrey, Robert C.; Yang, B. H.; Stancil, P. C.; Balakrishnan, N.

    2018-06-01

    Cross sections and rate coefficients for CN+H2 collisions are calculated using the coupled states (CS) approximation. The calculations are benchmarked against more accurate close-coupling (CC) calculations for transitions between low-lying rotational states, and the two formulations are compared for collision energies greater than 10 cm-1. The CS approximation is used to construct a database that includes highly excited rotational states beyond the practical limitations of the CC method. The database includes fine-structure resolved rotational quenching transitions for v = 0 and j ≤ 40, where v and j are the vibrational and rotational quantum numbers of the initial state of the CN molecule. Rate coefficients are computed for both para-H2 and ortho-H2 colliders. The results are in good agreement with previous calculations; however, the rates differ substantially from the mass-scaled CN+He rates that are often used in astrophysical models.
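Rate coefficients are obtained from cross sections by thermally averaging over a Maxwell-Boltzmann distribution of collision energies, k(T) = sqrt(8/(pi*mu*(kB*T)^3)) * integral of sigma(E)*E*exp(-E/kB*T) dE. The numerical sketch below is illustrative, not the authors' code; the constant test cross section and reduced mass are assumptions.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def rate_coefficient(sigma, mu, T, emax_kT=40.0, n=20000):
    """Thermally averaged rate coefficient k(T) in m^3/s, with
    sigma(E) in m^2, collision energy E in J, reduced mass mu in kg:
    k(T) = sqrt(8/(pi*mu*(kB*T)**3)) * integral sigma(E)*E*exp(-E/kB*T) dE."""
    kT = KB * T
    E = np.linspace(0.0, emax_kT * kT, n)
    f = sigma(E) * E * np.exp(-E / kT)
    integral = np.sum((f[1:] + f[:-1]) * np.diff(E)) / 2.0  # trapezoid rule
    return np.sqrt(8.0 / (np.pi * mu * kT**3)) * integral
```

For an energy-independent cross section the integral is analytic, k = sigma * sqrt(8*kB*T/(pi*mu)), which gives a quick consistency check on the quadrature.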

  16. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects because they are difficult to evaluate through analysis and time-consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  17. U.S. Quaternary Fault and Fold Database Released

    NASA Astrophysics Data System (ADS)

    Haller, Kathleen M.; Machette, Michael N.; Dart, Richard L.; Rhea, B. Susan

    2004-06-01

    A comprehensive online compilation of Quaternary-age faults and folds throughout the United States was recently released by the U.S. Geological Survey, with cooperation from state geological surveys, academia, and the private sector. The Web site at http://Qfaults.cr.usgs.gov/ contains searchable databases and related geo-spatial data that characterize earthquake-related structures that could be potential seismic sources for large-magnitude (M > 6) earthquakes.

  18. United States paper, paperboard, and market pulp capacity trends by process and location, 1970-2000

    Treesearch

    Peter J. Ince; Xiaolei Li; Mo Zhou; Joseph Buongiorno; Mary Reuter

    This report presents a relational database with estimates of annual production capacity for all mill locations in the United States where paper, paperboard, or market pulp were produced from 1970 to 2000. Data for more than 500 separate mill locations are included in the database, with annual capacity data for each year from 1970 to 2000 (more than 17,000 individual...

  19. Use of Hip Arthroscopy and Risk of Conversion to Total Hip Arthroplasty: A Population-Based Analysis.

    PubMed

    Schairer, William W; Nwachukwu, Benedict U; McCormick, Frank; Lyman, Stephen; Mayman, David

    2016-04-01

    To use population-level data to (1) evaluate the conversion rate of total hip arthroplasty (THA) within 2 years of hip arthroscopy and (2) assess the influence of age, arthritis, and obesity on the rate of conversion to THA. We used the State Ambulatory Surgery Databases and State Inpatient Databases for California and Florida from 2005 through 2012, which contain 100% of patient visits. Hip arthroscopy patients were tracked for subsequent primary THA within 2 years. Out-of-state patients and patients with less than 2 years follow-up were excluded. Multivariate analysis identified risks for subsequent hip arthroplasty after arthroscopy. We identified 7,351 patients who underwent hip arthroscopy with 2 years follow-up. The mean age was 43.9 ± 13.7 years, and 58.8% were female patients. Overall, 11.7% of patients underwent THA conversion within 2 years. The conversion rate was lowest in patients aged younger than 40 years (3.0%) and highest in the 60- to 69-year-old group (35.0%) (P < .001). We found an increased risk of THA conversion in older patients and in patients with osteoarthritis or obesity at the time of hip arthroscopy. Patients treated at high-volume hip arthroscopy centers had a lower THA conversion rate than those treated at low-volume centers (15.1% v 9.7%, P < .001). Hip arthroscopy is performed in patients of various ages, including middle-aged and elderly patients. Older patients have a higher rate of conversion to THA, as do patients with osteoarthritis or obesity. Level III, retrospective comparative study. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  20. Analysis of absence seizure generation using EEG spatial-temporal regularity measures.

    PubMed

    Mammone, Nadia; Labate, Domenico; Lay-Ekuakille, Aime; Morabito, Francesco C

    2012-12-01

    Epileptic seizures are thought to be generated, and to evolve, through an underlying anomaly of synchronization in the activity of groups of neuronal populations. The related dynamic scenario of state transitions is revealed by detecting changes in the dynamical properties of electroencephalography (EEG) signals. The recruitment process ending in the crisis can be explored through a spatial-temporal plot from which suitable descriptors can be extracted to monitor and quantify the evolving synchronization level in the EEG tracings. In this paper, a spatial-temporal analysis of EEG recordings based on the concept of permutation entropy (PE) is proposed. The performance of PE is tested on a database of 24 patients affected by absence (generalized) seizures, and the results are compared to the dynamical behavior of the EEG of 40 healthy subjects. Since PE depends on two parameters, an extensive study of the sensitivity of its performance to the parameter settings was carried out on scalp EEG. Once the optimal PE configuration was determined, its ability to detect different brain states was evaluated. According to the results presented here, it seems that the widely accepted model of a "jump" transition to absence seizure should in some cases be coupled with (or replaced by) a gradual-transition model characteristic of self-organizing networks. Indeed, the transition to epileptic status appears to be heralded before the preictal state, as early as the interictal stages. Within the limits of the analyzed database, the frontal-temporal scalp areas are consistently associated with higher PE levels than the remaining electrodes, whereas the parieto-occipital areas are associated with lower PE values. The EEG of healthy subjects neither shows any similar dynamic behavior nor exhibits any recurrent pattern in PE topography.
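Permutation entropy, the feature underlying this analysis, counts the relative frequencies of ordinal patterns in successive signal samples. The sketch below follows the standard Bandt-Pompe formulation; the embedding order and delay are the two parameters the study tunes, and the values here are merely illustrative defaults.

```python
import math

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal in [0, 1].
    `order` (embedding dimension) and `delay` are the two parameters
    whose sensitivity the study explores."""
    n = len(signal) - (order - 1) * delay
    counts = {}
    for i in range(n):
        # ordinal pattern: index permutation that sorts the window
        pattern = tuple(sorted(range(order),
                               key=lambda k: signal[i + k * delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    pe = -sum((c / n) * math.log(c / n) for c in counts.values())
    return pe / math.log(math.factorial(order))  # normalize by log(order!)
```

A perfectly monotonic signal yields a single ordinal pattern and PE of 0, while a signal visiting all patterns equally approaches 1; lowered PE is one way to quantify the growing regularity that heralds a seizure.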

  1. A Descriptive Analysis of Overviews of Reviews Published between 2000 and 2011

    PubMed Central

    Hartling, Lisa; Chisholm, Annabritt; Thomson, Denise; Dryden, Donna M.

    2012-01-01

    Background Overviews of systematic reviews compile data from multiple systematic reviews (SRs) and are a new method of evidence synthesis. Objectives To describe the methodological approaches in overviews of interventions. Design Descriptive study. Methods We searched 4 databases from 2000 to July 2011; we handsearched Evidence-based Child Health: A Cochrane Review Journal. We defined an overview as a study that: stated a clear objective; examined an intervention; used explicit methods to identify SRs; collected and synthesized outcome data from the SRs; and intended to include only SRs. We did not restrict inclusion by population characteristics (e.g., adult or children only). Two researchers independently screened studies and applied eligibility criteria. One researcher extracted data with verification by a second. We conducted a descriptive analysis. Results From 2,245 citations, 75 overviews were included. The number of overviews increased from 1 in 2000 to 14 in 2010. The interventions were pharmacological (n = 20, 26.7%), non-pharmacological (n = 26, 34.7%), or both (n = 29, 38.7%). Inclusion criteria were clearly stated in 65 overviews. Thirty-three (44%) overviews searched at least 2 databases. The majority reported the years and databases searched (n = 46, 61%), and provided key words (n = 58, 77%). Thirty-nine (52%) overviews included Cochrane SRs only. Two reviewers independently screened and completed full text review in 29 overviews (39%). Methods of data extraction were reported in 45 (60%). Information on quality of individual studies was extracted from the original SRs in 27 (36%) overviews. Quality assessment of the SRs was performed in 28 (37%) overviews; at least 9 different tools were used. Quality of the body of evidence was assessed in 13 (17%) overviews. Most overviews provided a narrative or descriptive analysis of the included SRs. One overview conducted indirect analyses and the other conducted mixed treatment comparisons. Publication bias was discussed in 18 (24%) overviews. Conclusions This study shows considerable variation in the methods used for overviews. There is a need for methodological rigor and consistency in overviews, as well as empirical evidence to support the methods employed. PMID:23166744

  2. A descriptive analysis of overviews of reviews published between 2000 and 2011.

    PubMed

    Hartling, Lisa; Chisholm, Annabritt; Thomson, Denise; Dryden, Donna M

    2012-01-01

    Overviews of systematic reviews compile data from multiple systematic reviews (SRs) and are a new method of evidence synthesis. To describe the methodological approaches in overviews of interventions. Descriptive study. We searched 4 databases from 2000 to July 2011; we handsearched Evidence-based Child Health: A Cochrane Review Journal. We defined an overview as a study that: stated a clear objective; examined an intervention; used explicit methods to identify SRs; collected and synthesized outcome data from the SRs; and intended to include only SRs. We did not restrict inclusion by population characteristics (e.g., adult or children only). Two researchers independently screened studies and applied eligibility criteria. One researcher extracted data with verification by a second. We conducted a descriptive analysis. From 2,245 citations, 75 overviews were included. The number of overviews increased from 1 in 2000 to 14 in 2010. The interventions were pharmacological (n = 20, 26.7%), non-pharmacological (n = 26, 34.7%), or both (n = 29, 38.7%). Inclusion criteria were clearly stated in 65 overviews. Thirty-three (44%) overviews searched at least 2 databases. The majority reported the years and databases searched (n = 46, 61%), and provided key words (n = 58, 77%). Thirty-nine (52%) overviews included Cochrane SRs only. Two reviewers independently screened and completed full text review in 29 overviews (39%). Methods of data extraction were reported in 45 (60%). Information on quality of individual studies was extracted from the original SRs in 27 (36%) overviews. Quality assessment of the SRs was performed in 28 (37%) overviews; at least 9 different tools were used. Quality of the body of evidence was assessed in 13 (17%) overviews. Most overviews provided a narrative or descriptive analysis of the included SRs. One overview conducted indirect analyses and the other conducted mixed treatment comparisons. Publication bias was discussed in 18 (24%) overviews. This study shows considerable variation in the methods used for overviews. There is a need for methodological rigor and consistency in overviews, as well as empirical evidence to support the methods employed.

  3. Analysing inter-relationships among water, governance, human development variables in developing countries

    NASA Astrophysics Data System (ADS)

    Dondeynaz, C.; Carmona Moreno, C.; Céspedes Lorente, J. J.

    2012-10-01

    The "Integrated Water Resources Management" principle was formally laid down at the International Conference on Water and Sustainable Development in Dublin in 1992. One of the main results of this conference is that improving Water and Sanitation Services (WSS), a complex and interdisciplinary issue, requires collaboration and coordination among different sectors (environment, health, economic activities, governance, and international cooperation) that influence, or are influenced by, access to WSS. Understanding these interrelations is crucial for decision makers in the water sector. In this framework, the Joint Research Centre (JRC) of the European Commission (EC) has developed a new database (WatSan4Dev) containing 42 indicators (called variables in this paper) of environmental, socio-economic, governance and financial aid flows in developing countries. This paper describes the development of the WatSan4Dev dataset, the statistical processing needed to improve data quality, and finally the analysis performed to verify the database's coherence. Based on 25 relevant variables, the relationships between variables are described and organised into five factors (HDP - Human Development against Poverty, AP - Human Activity Pressure on water resources, WR - Water Resources, ODA - Official Development Aid, CEC - Country Environmental Concern). Linear regression methods are used to identify key variables influencing water supply and sanitation. A first analysis indicates that informal urbanisation is an important factor negatively influencing the percentage of the population with access to WSS. Health, and in particular children's health, benefits from improved WSS. Irrigation also enhances water supply service thanks to multi-purpose infrastructure. Five country profiles are also created to better understand and synthesize the information gathered. This new classification of countries is useful in identifying countries in a less advanced position and the weaknesses to be tackled. The relevance of the indicators gathered to represent the state of the environment and water resources is questioned in the discussion section. The paper concludes with the need to increase the reliability of current indicators and calls for further research on specific indicators, in particular on water quality at the national scale, in order to better include environmental state in analyses of WSS.

  4. Histoplasma capsulatum proteome response to decreased iron availability

    PubMed Central

    Winters, Michael S; Spellman, Daniel S; Chan, Qilin; Gomez, Francisco J; Hernandez, Margarita; Catron, Brittany; Smulian, Alan G; Neubert, Thomas A; Deepe, George S

    2008-01-01

    Background A fundamental pathogenic feature of the fungus Histoplasma capsulatum is its ability to evade innate and adaptive immune defenses. Once ingested by macrophages, the organism is faced with several hostile environmental conditions, including iron limitation, yet H. capsulatum can establish a persistent state within the macrophage. A gap in knowledge exists because the identities and number of proteins regulated by the organism under host conditions have yet to be defined. Lack of such knowledge is an important problem because until these proteins are identified it is unlikely that they can be targeted as new and innovative treatments for histoplasmosis. Results To investigate the proteomic response of H. capsulatum to decreasing iron availability, we created H. capsulatum protein/genomic databases compatible with current mass spectrometric (MS) search engines. Databases were assembled from the H. capsulatum G217B strain genome using gene prediction programs and expressed sequence tag (EST) libraries. Searching these databases with MS data generated from two-dimensional (2D) in-gel digestions of proteins identified over 50% more proteins than searching the publicly available fungal databases alone. Using 2D gel electrophoresis combined with statistical analysis, we discovered 42 H. capsulatum proteins whose abundance was significantly modulated when iron concentrations were lowered. The altered proteins, identified by mass spectrometry and database searching, are involved in glycolysis, the tricarboxylic acid cycle, lysine metabolism, and protein synthesis, along with one protein sequence of unknown function. Conclusion We have created a bioinformatics platform for H. capsulatum and demonstrated the utility of a proteomic approach by identifying a metabolic shift the organism uses to cope with the hostile conditions imposed by the host. We have shown that enzyme transcripts regulated by other fungal pathogens in response to lowered iron availability are also regulated in H. capsulatum at the protein level. We also identified H. capsulatum proteins sensitive to iron level reductions that have yet to be connected to iron availability in other pathogens. These data also indicate the complexity of the response by H. capsulatum to nutritional deprivation. Finally, we demonstrate the importance of a strain-specific gene/protein database for H. capsulatum proteomic analysis. PMID:19108728

  5. Fossil-Fuel CO2 Emissions Database and Exploration System

    NASA Astrophysics Data System (ADS)

    Krassovski, M.; Boden, T.

    2012-04-01

    The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL) quantifies the release of carbon from fossil-fuel use and cement production each year at global, regional, and national spatial scales. These estimates are vital to climate change research given the strong evidence suggesting fossil-fuel emissions are responsible for unprecedented levels of carbon dioxide (CO2) in the atmosphere. The CDIAC fossil-fuel emissions time series are based largely on annual energy statistics published for all nations by the United Nations (UN). Publications containing historical energy statistics make it possible to estimate fossil-fuel CO2 emissions back to 1751, before the Industrial Revolution. From these core fossil-fuel CO2 emission time series, CDIAC has developed a number of additional data products to satisfy modeling needs and to address other questions aimed at improving our understanding of the global carbon cycle budget. For example, CDIAC also produces a time series of gridded fossil-fuel CO2 emission estimates and isotopic (e.g., C13) emission estimates. The gridded data are generated using the methodology described in Andres et al. (2011) and provide monthly and annual estimates for 1751-2008 at 1° latitude by 1° longitude resolution. These gridded emission estimates are being used in the latest IPCC Scientific Assessment (AR4). Isotopic estimates are possible thanks to detailed information for individual nations regarding the carbon content of select fuels (e.g., the carbon signature of natural gas from Russia). CDIAC has recently developed a relational database to house these baseline emission estimates and associated derived products, and a web-based interface to help users worldwide query these data holdings. Users can identify, explore and download desired CDIAC fossil-fuel CO2 emissions data. This presentation introduces the architecture and design of the new relational database and web interface, summarizes the present state and functionality of the Fossil-Fuel CO2 Emissions Database and Exploration System, and highlights future plans for expansion of the relational database and interface.

  6. The structure and dipole moment of globular proteins in solution and crystalline states: use of NMR and X-ray databases for the numerical calculation of dipole moment.

    PubMed

    Takashima, S

    2001-04-05

    The large dipole moments of globular proteins have been well known from detailed studies using dielectric relaxation and electro-optical methods. The search for the origin of these dipole moments, however, must be based on detailed knowledge of protein structure at atomic resolution. At present, we have two sources of information on the structure of protein molecules: (1) x-ray databases obtained in the crystalline state; (2) NMR databases obtained in the solution state. While x-ray databases consist of only one model, NMR databases, because of the fluctuation of protein folding in solution, consist of a number of models, enabling the computation of the dipole moment to be repeated for all of them. The aim of this work, using these databases, is a detailed investigation of the interdependence between the structure and the dipole moment of protein molecules. The dipole moment of a protein molecule has roughly two components: one due to surface charges, and a core dipole moment due to polar groups such as N-H and C=O bonds. The computation of the surface-charge dipole moment consists of two steps: (A) calculation of the pK shifts of charged groups due to electrostatic interactions, and (B) calculation of the dipole moment using the pK values corrected for these shifts. The dipole moments of several proteins were computed using both NMR and x-ray databases. The dipole moments from these two sets of calculations are, with a few exceptions, in good agreement with one another and with measured dipole moments.
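The surface-charge contribution described above is, at bottom, a sum of charge-weighted position vectors over the atomic coordinates taken from an NMR or x-ray model. A minimal point-charge sketch follows; the units, function name, and two-charge example are assumptions, and the pK-shift correction step is omitted.

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # elementary charge, C
DEBYE = 3.33564e-30         # C*m per debye

def dipole_moment(charges, coords_angstrom, origin=None):
    """Magnitude (in debye) of the dipole moment of partial charges
    (units of e) at Cartesian coordinates (angstroms), taken relative
    to the geometric centroid; for a neutral charge set the result is
    independent of the origin choice."""
    q = np.asarray(charges, dtype=float)
    r = np.asarray(coords_angstrom, dtype=float) * 1e-10  # angstrom -> m
    if origin is None:
        origin = r.mean(axis=0)
    p = (q[:, None] * (r - origin)).sum(axis=0) * E_CHARGE  # C*m vector
    return np.linalg.norm(p) / DEBYE
```

A unit positive and negative charge separated by 1 angstrom gives roughly 4.8 D, the textbook reference value; repeating the sum over each model in an NMR ensemble yields the spread of dipole moments the study compares against the single x-ray model.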

  7. 49 CFR 384.209 - Notification of traffic violations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS STATE... that conviction to the Federal Convictions and Withdrawal Database. (b) Required notification with... Convictions and Withdrawal Database. [78 FR 60232, Oct. 1, 2013] ...

  8. 49 CFR 384.209 - Notification of traffic violations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY REGULATIONS STATE... that conviction to the Federal Convictions and Withdrawal Database. (b) Required notification with... Convictions and Withdrawal Database. [78 FR 60232, Oct. 1, 2013] ...

  9. WaveNet: A Web-Based Metocean Data Access, Processing, and Analysis Tool. Part 3 - CDIP Database

    DTIC Science & Technology

    2014-06-01

and Analysis Tool; Part 3 – CDIP Database by Zeki Demirbilek, Lihwa Lin, and Derek Wilson PURPOSE: This Coastal and Hydraulics Engineering...Technical Note (CHETN) describes coupling of the Coastal Data Information Program (CDIP) database to WaveNet, the first module of MetOcnDat (Meteorological...provides a step-by-step procedure to access, process, and analyze wave and wind data from the CDIP database. BACKGROUND: WaveNet addresses a basic

  10. The Development and Evaluation of a User-Friendly Database Describing PA's School Districts. Pennsylvania Educational Policy Studies, Number 13.

    ERIC Educational Resources Information Center

    George, Carole A.

    This document describes a study that designed, developed, and evaluated the Pennsylvania school-district database program for use by educational decision makers. The database contains current information developed from data provided by the Pennsylvania Department of Education and describes each of the 500 active school districts in the state. PEP…

  11. Databases on Vocational Qualifications and Courses Accredited. Report on the Workshop Organised by CEDEFOP (Nurnberg, Germany, November 25-26, 1992).

    ERIC Educational Resources Information Center

    CEDEFOP Flash, 1993

    1993-01-01

    During 1992, CEDEFOP (the European Centre for the Development of Vocational Training) commissioned two projects to investigate the current situation with regard to databases on vocational qualifications in Member States of the European Community (EC) and possibilities for networking such databases. Results of these two studies were presented and…

  12. Development of the method and U.S. normalization database for Life Cycle Impact Assessment and sustainability metrics.

    PubMed

    Bare, Jane; Gloria, Thomas; Norris, Gregory

    2006-08-15

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative contribution by substance and life cycle impact category. Normalization thus can significantly influence LCA-based decisions when tradeoffs exist. The U. S. Environmental Protection Agency (EPA) has developed a normalization database based on the spatial scale of the 48 continental U.S. states, Hawaii, Alaska, the District of Columbia, and Puerto Rico with a one-year reference time frame. Data within the normalization database were compiled based on the impact methodologies and lists of stressors used in TRACI-the EPA's Tool for the Reduction and Assessment of Chemical and other environmental Impacts. The new normalization database published within this article may be used for LCIA case studies within the United States, and can be used to assist in the further development of a global normalization database. The underlying data analyzed for the development of this database are included to allow the development of normalization data consistent with other impact assessment methodologies as well.
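Mechanically, normalization divides each impact-category result by the corresponding reference-region annual total, so results in incommensurable units become comparable relative contributions. A minimal sketch with entirely hypothetical numbers (real reference totals would come from the EPA normalization database):

```python
# Hypothetical LCIA results for one product system, and hypothetical
# annual U.S. reference totals for the same impact categories.
lcia_results = {
    "global_warming_kgCO2eq": 1.2e4,
    "acidification_kgSO2eq":  3.5e1,
}
reference_totals = {
    "global_warming_kgCO2eq": 7.0e12,   # illustrative value only
    "acidification_kgSO2eq":  2.0e10,   # illustrative value only
}

def normalize(results, totals):
    """Express each category result as a fraction of the reference total."""
    return {cat: results[cat] / totals[cat] for cat in results}

normalized = normalize(lcia_results, reference_totals)
for cat, score in normalized.items():
    print(f"{cat}: {score:.2e}")
```

After this step, the categories share a common dimensionless scale (fraction of the reference region's annual burden), which is what makes cross-category tradeoff comparisons possible.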

  13. Mobile Source Observation Database (MSOD)

    EPA Pesticide Factsheets

The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental Protection Agency Office of Transportation and Air Quality (formerly the Office of Mobile Sources). The MSOD contains emission test data from in-use mobile air-pollution sources such as cars, trucks, and engines from trucks and nonroad vehicles. Data in the database have been collected from 1982 to the present and are intended to be representative of in-use vehicle emissions in the United States.

  14. Investigating Mesoscale Convective Systems and their Predictability Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Daher, H.; Duffy, D.; Bowen, M. K.

    2016-12-01

A mesoscale convective system (MCS) is a thunderstorm region that lasts several hours, forms near weather fronts, and can often develop into tornadoes. Here we seek to answer the question of whether these tornadoes are "predictable" by looking for a defining characteristic (or characteristics) separating MCSs that evolve into tornadoes from those that do not. Using NASA's Modern-Era Retrospective analysis for Research and Applications, Version 2 reanalysis data (M2R12K), we apply several state-of-the-art machine learning techniques to investigate this question. The spatial region examined in this experiment is Tornado Alley in the United States during the peak tornado months. A database containing select variables from M2R12K is created using PostgreSQL. This database is then analyzed using machine learning methods such as Symbolic Aggregate approXimation (SAX) and DBSCAN (an unsupervised density-based data clustering algorithm). The motivation behind using these methods is to mathematically define an MCS so that association rule mining techniques can be used to uncover a signal or teleconnection that will help us forecast which MCSs will result in tornadoes, giving society more time to prepare and in turn reducing casualties and destruction.
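SAX, one of the methods named above, converts a numeric time series into a short symbolic word: z-normalize, reduce with Piecewise Aggregate Approximation (PAA), then map each segment mean to a letter using equiprobable breakpoints of the standard normal. A minimal sketch (the breakpoints shown are the standard-normal quartiles for a 4-letter alphabet; segment counts and alphabet size are free parameters):

```python
import statistics

def sax(series, n_segments=4, breakpoints=(-0.6745, 0.0, 0.6745)):
    """Symbolic Aggregate approXimation of a numeric series.

    Assumes len(series) is divisible by n_segments; breakpoints split
    the standard normal into len(breakpoints)+1 equiprobable bins.
    """
    mu = statistics.mean(series)
    sigma = statistics.pstdev(series) or 1.0
    z = [(x - mu) / sigma for x in series]       # z-normalize

    # Piecewise Aggregate Approximation: mean of each equal-length segment.
    seg_len = len(z) // n_segments
    paa = [sum(z[i * seg_len:(i + 1) * seg_len]) / seg_len
           for i in range(n_segments)]

    alphabet = "abcd"
    def symbol(v):
        for i, bp in enumerate(breakpoints):
            if v < bp:
                return alphabet[i]
        return alphabet[-1]
    return "".join(symbol(v) for v in paa)

# A steadily rising series maps to an ascending word.
print(sax([1, 2, 3, 4, 5, 6, 7, 8]))  # 'abcd'
```

Once each atmospheric variable's history is a discrete word, standard association-rule miners (which operate on symbolic itemsets) can search for patterns preceding tornado-producing MCSs.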

  15. Digital Mapping Techniques '08—Workshop Proceedings, Moscow, Idaho, May 18–21, 2008

    USGS Publications Warehouse

    Soller, David R.

    2009-01-01

    The Digital Mapping Techniques '08 (DMT'08) workshop was attended by more than 100 technical experts from 40 agencies, universities, and private companies, including representatives from 24 State geological surveys. This year's meeting, the twelfth in the annual series, was hosted by the Idaho Geological Survey, from May 18-21, 2008, on the University of Idaho campus in Moscow, Idaho. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  16. Dielectronic recombination data for dynamic finite-density plasmas. XV. The silicon isoelectronic sequence

    NASA Astrophysics Data System (ADS)

    Kaur, Jagjit; Gorczyca, T. W.; Badnell, N. R.

    2018-02-01

    Context. We aim to present a comprehensive theoretical investigation of dielectronic recombination (DR) of the silicon-like isoelectronic sequence and provide DR and radiative recombination (RR) data that can be used within a generalized collisional-radiative modelling framework. Aims: Total and final-state level-resolved DR and RR rate coefficients for the ground and metastable initial levels of 16 ions between P+ and Zn16+ are determined. Methods: We carried out multi-configurational Breit-Pauli DR calculations for silicon-like ions in the independent processes, isolated resonance, distorted wave approximation. Both Δnc = 0 and Δnc = 1 core excitations are included using LS and intermediate coupling schemes. Results: Results are presented for a selected number of ions and compared to all other existing theoretical and experimental data. The total dielectronic and radiative recombination rate coefficients for the ground state are presented in tabulated form for easy implementation into spectral modelling codes. These data can also be accessed from the Atomic Data and Analysis Structure (ADAS) OPEN-ADAS database. This work is a part of an assembly of a dielectronic recombination database for the modelling of dynamic finite-density plasmas.

  17. Deep Recurrent Neural Network-Based Autoencoders for Acoustic Novelty Detection

    PubMed Central

    Vesperini, Fabio; Schuller, Björn

    2017-01-01

In the emerging field of acoustic novelty detection, most research efforts are devoted to probabilistic approaches such as mixture models or state-space models. Only recent studies have introduced (pseudo-)generative models for acoustic novelty detection with recurrent neural networks in the form of an autoencoder. In these approaches, auditory spectral features of the next short-term frame are predicted from the previous frames by means of Long Short-Term Memory recurrent denoising autoencoders. The reconstruction error between the input and the output of the autoencoder is used as an activation signal to detect novel events. No prior study has compared these efforts to automatically recognize novel events from audio signals or given a broad and in-depth evaluation of recurrent neural network-based autoencoders. The present contribution aims to consistently evaluate our recent novel approaches to fill this gap in the literature and to provide insight through extensive evaluations carried out on three databases: A3Novelty, PASCAL CHiME, and PROMETHEUS. Besides providing an extensive analysis of novel and state-of-the-art methods, the article shows how RNN-based autoencoders outperform statistical approaches by up to an absolute improvement of 16.4% in average F-measure over the three databases. PMID:28182121
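The decision rule at the heart of this approach is simple: train an autoencoder on "normal" audio, then flag frames whose reconstruction error exceeds a threshold derived from the training-error distribution. A minimal NumPy sketch using a linear autoencoder (a single principal component) as a stand-in for the paper's LSTM denoising autoencoders; data, threshold rule, and model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training frames: 2-D points lying near a 1-D subspace.
train = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0]])
train += 0.05 * rng.normal(size=train.shape)

# A linear "autoencoder": project onto the top principal component and
# reconstruct. (Only illustrates the reconstruction-error rule; the
# paper's models are recurrent denoising autoencoders.)
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:1]  # top component

def reconstruct(x):
    return mean + (x - mean) @ basis.T @ basis

def error(x):
    return np.linalg.norm(x - reconstruct(x), axis=-1)

# Threshold from the training-error distribution (mean + 3 std here).
threshold = error(train).mean() + 3 * error(train).std()

normal_frame = np.array([2.0, 4.0])   # lies on the learned subspace
novel_frame = np.array([2.0, -4.0])   # far off the subspace
print("novel flagged:", bool(error(novel_frame) > threshold))
print("normal flagged:", bool(error(normal_frame) > threshold))
```

Replacing the linear projection with a trained LSTM autoencoder changes only the `reconstruct` step; the error-thresholding logic is unchanged.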

  18. Concepts to Support HRP Integration Using Publications and Modeling

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Lumpkins, S.; Shelhamer, M.

    2014-01-01

    Initial efforts are underway to enhance the Human Research Program (HRP)'s identification and support of potential cross-disciplinary scientific collaborations. To increase the emphasis on integration in HRP's science portfolio management, concepts are being explored through the development of a set of tools. These tools are intended to enable modeling, analysis, and visualization of the state of the human system in the spaceflight environment; HRP's current understanding of that state with an indication of uncertainties; and how that state changes due to HRP programmatic progress and design reference mission definitions. In this talk, we will discuss proof-of-concept work performed using a subset of publications captured in the HRP publications database. The publications were tagged in the database with words representing factors influencing health and performance in spaceflight, as well as with words representing the risks HRP research is reducing. Analysis was performed on the publication tag data to identify relationships between factors and between risks. Network representations were then created as one type of visualization of these relationships. This enables future analyses of the structure of the networks based on results from network theory. Such analyses can provide insights into HRP's current human system knowledge state as informed by the publication data. The network structure analyses can also elucidate potential improvements by identifying network connections to establish or strengthen for maximized information flow. The relationships identified in the publication data were subsequently used as inputs to a model captured in the Systems Modeling Language (SysML), which functions as a repository for relationship information to be gleaned from multiple sources. Example network visualization outputs from a simple SysML model were then also created to compare to the visualizations based on the publication data only. 
We will also discuss ideas for building upon this proof-of-concept work to further support an integrated approach to human spaceflight risk reduction.
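The network-building step described above (tags co-occurring on a publication become weighted edges between factors) can be sketched in a few lines. The tags here are hypothetical placeholders; the actual HRP database uses its own controlled vocabulary of factors and risks:

```python
from collections import Counter
from itertools import combinations

# Hypothetical publication records, each reduced to its set of tags.
publications = [
    {"bone_loss", "exercise", "nutrition"},
    {"bone_loss", "radiation"},
    {"exercise", "nutrition", "cardiovascular"},
    {"bone_loss", "exercise"},
]

# Edge weight = number of publications in which two tags co-occur.
edges = Counter()
for tags in publications:
    for a, b in combinations(sorted(tags), 2):
        edges[(a, b)] += 1

for (a, b), w in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {w}")
```

The resulting weighted edge list is exactly the input that network-analysis libraries (or a SysML relationship repository) need for the structural analyses the abstract mentions, such as finding weakly connected factors whose links should be strengthened.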

  19. Time-patterns of antibiotic exposure in poultry production--a Markov chains exploratory study of nature and consequences.

    PubMed

    Chauvin, C; Clement, C; Bruneau, M; Pommeret, D

    2007-07-16

This article describes the use of Markov chains to explore the time-patterns of antimicrobial exposure in broiler poultry. Transitions in antimicrobial exposure status (exposed/not exposed to an antimicrobial, with a distinction between exposures to the different antimicrobial classes) were investigated in extensive data collected from broiler chicken flocks from November 2003 onwards. All Markov chains were first-order chains. Mortality rate, geographical location, and slaughter semester were sources of heterogeneity between transition matrices. Transitions towards a 'no antimicrobial' exposure state were highly predominant, whatever the initial state. From a 'no antimicrobial' exposure state, the transition to beta-lactams was predominant among transitions to an antimicrobial exposure state. Transitions between antimicrobial classes were rare and variable. Both switches between antimicrobial classes and repeats of a particular class were observed. Application of Markov chain analysis to the database of the nation-wide antimicrobial resistance monitoring programme showed that transition probabilities between antimicrobial exposure states increased with the number of resistances in Escherichia coli strains.
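Estimating a first-order Markov chain of this kind amounts to counting consecutive state pairs and normalizing each row into transition probabilities. A minimal sketch on a hypothetical exposure sequence (states and sequence are invented for illustration, not taken from the study's data):

```python
from collections import defaultdict

# Hypothetical exposure sequence for one flock over successive periods;
# states are 'none' or an antimicrobial class name.
sequence = ["none", "none", "beta-lactam", "none", "none",
            "beta-lactam", "none", "tetracycline", "none", "none"]

# Count first-order transitions (pairs of consecutive states).
counts = defaultdict(lambda: defaultdict(int))
for prev, cur in zip(sequence, sequence[1:]):
    counts[prev][cur] += 1

# Normalize each row into a probability distribution.
matrix = {}
for prev, nxt in counts.items():
    total = sum(nxt.values())
    matrix[prev] = {cur: n / total for cur, n in nxt.items()}

print(matrix["none"])
```

Fitting one such matrix per stratum (region, slaughter semester, mortality class) and comparing the rows is how heterogeneity between transition matrices, like the predominance of returns to the 'no antimicrobial' state, can be detected.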

  20. The effect of age and gender on motor vehicle driver injury severity at highway-rail grade crossings in the United States.

    PubMed

    Hao, Wei; Kamga, Camille; Daniel, Janice

    2015-12-01

Based on the Federal Railway Administration (FRA) database, there were 25,945 highway-rail crossing accidents in the United States between 2002 and 2011. Using this extensive database, the study applied an ordered probit model to explore the determinants of driver injury severity for motor vehicle drivers at highway-rail grade crossings, and estimation results showed substantial differences in injury severity across age/gender groups. The analysis found important behavioral and physical differences between male and female drivers involved in highway-rail grade crossing accidents. Older drivers have higher fatality probabilities when driving in open space under passive control, especially in bad weather conditions. Younger male drivers are more likely to sustain severe injuries when passing unpaved highway-rail grade crossings under passive control at rush hour with high vehicle speed. Synthesizing these results led to the conclusion that the primary problem with young drivers is risk-taking and a lack of vehicle-handling skills. The strength of older drivers lies in their aversion to risk, but physical degradation, resulting in longer reaction/perception times and declines in vision and hearing, often counterbalances this attribute. Copyright © 2015 Elsevier Ltd and National Safety Council. All rights reserved.
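In an ordered probit model, the probability of each ordered outcome j is P(y = j) = Φ(c_j − xβ) − Φ(c_{j−1} − xβ), where Φ is the standard normal CDF and the c_j are estimated cutpoints. A minimal sketch of that probability calculation; the linear predictor and cutpoints below are hypothetical, not estimates from the FRA data:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb, cutpoints):
    """P(y = j) = phi(c_j - xb) - phi(c_{j-1} - xb),
    with c_0 = -inf and c_J = +inf."""
    cuts = [-math.inf] + list(cutpoints) + [math.inf]
    return [phi(c_hi - xb) - phi(c_lo - xb)
            for c_lo, c_hi in zip(cuts, cuts[1:])]

# Hypothetical linear predictor and cutpoints for three ordered severity
# levels (no injury < injury < fatality); real values would be estimated
# by maximum likelihood from the crash records.
probs = ordered_probit_probs(xb=0.4, cutpoints=[0.0, 1.5])
print([round(p, 3) for p in probs])
```

Covariates such as age group, gender, weather, and crossing control type enter through the linear predictor xβ, shifting probability mass toward more or less severe outcomes.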
