Sample records for database pc database

  1. Electron Inelastic-Mean-Free-Path Database

    National Institute of Standards and Technology Data Gateway

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  2. ThermoData Engine Database

    National Institute of Standards and Technology Data Gateway

    SRD 103a NIST ThermoData Engine Database (PC database for purchase)   ThermoData Engine is the first product fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST/TRC.

  3. ThermoData Engine Database - Pure Compounds and Binary Mixtures

    National Institute of Standards and Technology Data Gateway

    SRD 103b NIST ThermoData Engine Database Version 6.0 - Pure Compounds and Binary Mixtures (PC database for purchase)   This database contains property data for more than 21,000 pure compounds, 37,500 binary mixtures, 10,000 ternary mixtures, and 6,000 chemical reactions.

  4. Reference Fluid Thermodynamic and Transport Properties Database (REFPROP)

    National Institute of Standards and Technology Data Gateway

    SRD 23 NIST Reference Fluid Thermodynamic and Transport Properties Database (REFPROP) (PC database for purchase)   NIST 23 contains revised data in a Windows version of the database, including 105 pure fluids and allowing mixtures of up to 20 components. The fluids include the environmentally acceptable HFCs, traditional HFCs and CFCs, and 'natural' refrigerants such as ammonia.

  5. Electron Effective-Attenuation-Length Database

    National Institute of Standards and Technology Data Gateway

    SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge)   This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).

  6. Inorganic Crystal Structure Database (ICSD)

    National Institute of Standards and Technology Data Gateway

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe (FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  7. CHAD USER’S GUIDE: Extracting Human Activity Information from CHAD on the PC

    EPA Science Inventory

    The Consolidated Human Activity Database (CHAD) User Guide offers a short tutorial about CHAD Access; background on the CHAD Databases; background on individual studies in CHAD; and information about using CHAD data, caveats, known problems, notes, and database design and develop...

  8. Digital Video of Live-Scan Fingerprint Data

    National Institute of Standards and Technology Data Gateway

    NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase)   NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in the development and testing of fingerprint verification systems.

  9. Go Figure: Computer Database Adds the Personal Touch.

    ERIC Educational Resources Information Center

    Gaffney, Jean; Crawford, Pat

    1992-01-01

    A database for recordkeeping for a summer reading club was developed for a public library system using an IBM PC and Microsoft Works. Use of the database resulted in more efficient program management, giving librarians more time to spend with patrons and enabling timely awarding of incentives. (LAE)

  10. Bluetooth wireless database for scoliosis clinics.

    PubMed

    Lou, E; Fedorak, M V; Hill, D L; Raso, J V; Moreau, M J; Mahood, J K

    2003-05-01

    A database system with Bluetooth wireless connectivity has been developed so that scoliosis clinics can be run more efficiently and data can be mined for research studies without significant increases in equipment cost. The wireless database system consists of a Bluetooth-enabled laptop or PC and a Bluetooth-enabled handheld personal data assistant (PDA). Each patient has a profile in the database, which has all of his or her clinical history. Immediately prior to the examination, the orthopaedic surgeon selects a patient's profile from the database and uploads that data to the PDA over a Bluetooth wireless connection. The surgeon can view the entire clinical history of the patient while in the examination room and, at the same time, enter in any new measurements and comments from the current examination. After seeing the patient, the surgeon synchronises the newly entered information with the database wirelessly and prints a record for the chart. This combination of the database and the PDA both improves efficiency and accuracy and can save significant time, as there is less duplication of work, and no dictation is required. The equipment required to implement this solution is a Bluetooth-enabled PDA and a Bluetooth wireless transceiver for the PC or laptop.

  11. Dietary Exposure Potential Model

    EPA Science Inventory

    Existing food consumption and contaminant residue databases, typically products of nutrition and regulatory monitoring, contain useful information to characterize dietary intake of environmental chemicals. A PC-based model with resident database system, termed the Die...

  12. CHAD User's Guide: Extracting Human Activity Information from CHAD on the PC

    EPA Pesticide Factsheets

    User manual that includes tutorials, what's inside the CHAD databases, background on individual studies in CHAD, using data from individual studies, caveats, problems, notes, and database design and development.

  13. Microcomputer-Based Genetics Office Database System

    PubMed Central

    Cutts, James H.; Mitchell, Joyce A.

    1985-01-01

    A database management system (Genetics Office Automation System, GOAS) has been developed for the Medical Genetics Unit of the University of Missouri. The system, which records patients' visits to the Unit's genetic and prenatal clinics, has been implemented on an IBM PC/XT microcomputer. A description of the system, the reasons for implementation, its databases, and uses are presented.

  14. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    National Institute of Standards and Technology Data Gateway

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase)   This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  15. Scoring Package

    National Institute of Standards and Technology Data Gateway

    NIST Scoring Package (PC database for purchase)   The NIST Scoring Package (Special Database 1) is a reference implementation of the draft Standard Method for Evaluating the Performance of Systems Intended to Recognize Hand-printed Characters from Image Data Scanned from Forms.

  16. NIST/ASME Steam Properties Database

    National Institute of Standards and Technology Data Gateway

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  17. 75 FR 53000 - Privacy Act of 1974; Report of an Altered System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-30

    ..., and PC-18-- Former Peace Corps Volunteers Database. The first revision adds a specific routine use to both PC-17 and PC-18. This specific routine use indicates that the Peace Corps may share Peace Corps... information. The second revision adds another specific routine use to both PC-17 and PC-18 indicating that...

  18. The integration of digital orthophotographs with GISs in a microcomputer environment

    NASA Technical Reports Server (NTRS)

    Steiner, David R.

    1992-01-01

    The issues involved in the use of orthoimages as a data source for GIS databases are examined. The integration of digital photographs into a GIS is discussed. A prototype PC-based program for the production of GIS databases using orthoimages is described.

  19. Free text databases in an Integrated Academic Information System (IAIMS) at Columbia Presbyterian Medical Center.

    PubMed Central

    Clark, A. S.; Shea, S.

    1991-01-01

    The use of Folio Views, a PC DOS based product for free text databases, is explored in three applications in an Integrated Academic Information System (IAIMS): (1) a telephone directory, (2) a grants and contracts newsletter, and (3) nursing care plans. PMID:1666967

  20. BIOREMEDIATION IN THE FIELD SEARCH SYSTEM (BFSS) - VERSION 2.0 (DISKETTE)

    EPA Science Inventory

    BFSS is a PC-based software product that provides access to a database of information on waste sites in the United States and Canada where bioremediation is being tested or implemented, or has been completed. BFSS allows users to search the database electronically, view data on s...

  1. NIST Libraries of Peptide Fragmentation Mass Spectra Database

    National Institute of Standards and Technology Data Gateway

    SRD 4 NIST Libraries of Peptide Fragmentation Mass Spectra Database (PC database for purchase)   Interactive computer program for predicting thermodynamic and transport properties of pure fluids and fluid mixtures containing up to 20 components. The components are selected from a database of 196 components, mostly hydrocarbons.

  2. Advanced instrumentation: Technology database enhancement, volume 4, appendix G

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The purpose of this task was to expand the McDonnell Douglas Space Systems Company's Sensors Database by providing additional information on instruments and sensors applicable to the physical/chemical Environmental Control and Life Support System (P/C ECLSS) or the Closed Ecological Life Support System (CELSS) that were not previously included. The Sensors Database was reviewed in order to determine the types of data required, define the data categories, and develop an understanding of the data record structure. An assessment of the MDSSC Sensors Database identified limitations and problems in the database. Guidelines and solutions were developed to address these limitations and problems in order that the requirements of the task could be fulfilled.

  3. Hawaii bibliographic database

    USGS Publications Warehouse

    Wright, T.L.; Takahashi, T.J.

    1998-01-01

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  4. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  5. Computerization of the Arkansas Fishes Database

    Treesearch

    Henry W. Robison; L. Gayle Henderson; Melvin L. Warren; Janet S. Rader

    2004-01-01

    Abstract - Until recently, distributional data for the fishes of Arkansas existed in the form of museum records, field notebooks of various ichthyologists, and published fish survey data, none of which was in a digital format. In 1995, a relational database system was used to design a PC platform data entry module for the capture of information on...

  6. Vapor Compression Cycle Design Program (CYCLE_D)

    National Institute of Standards and Technology Data Gateway

    SRD 49 NIST Vapor Compression Cycle Design Program (CYCLE_D) (PC database for purchase)   The CYCLE_D database package simulates vapor compression refrigeration cycles. It is fully compatible with REFPROP 9.0 and covers 62 single-compound refrigerants. Fluids can be used in mixtures comprising up to five components.

  7. Reliability Information Analysis Center 1st Quarter 2007, Technical Area Task (TAT) Report

    DTIC Science & Technology

    2007-02-05

    Created a new SQL Server database for the "PC Configuration" web application; added roles for security, closed 4235, and posted the application to production. Wrote... and ran SQL Server scripts to migrate production databases to the new server. Created backup jobs for the new SQL Server databases. Continued... second phase of the TENA demo. Extensive tasking was established and assigned. A TENA interface to EW Server was reaffirmed after some uncertainty about

  8. An application of a relational database system for high-throughput prediction of elemental compositions from accurate mass values.

    PubMed

    Sakurai, Nozomu; Ara, Takeshi; Kanaya, Shigehiko; Nakamura, Yukiko; Iijima, Yoko; Enomoto, Mitsuo; Motegi, Takeshi; Aoki, Koh; Suzuki, Hideyuki; Shibata, Daisuke

    2013-01-15

    High-accuracy mass values detected by high-resolution mass spectrometry analysis enable prediction of elemental compositions, and thus are used for metabolite annotations in metabolomic studies. Here, we report an application of a relational database to significantly improve the rate of elemental composition predictions. By searching a database of pre-calculated elemental compositions with fixed kinds and numbers of atoms, the approach eliminates redundant evaluations of the same formula that occur in repeated calculations with other tools. When our approach is compared with HR2, which is one of the fastest tools available, our database search times were at least 109 times shorter than those of HR2. When a solid-state drive (SSD) was used, the search time was 488 times shorter at 5 ppm mass tolerance and 1833 times shorter at 0.1 ppm. Even if the search by HR2 was performed with 8 threads on a high-spec Windows 7 PC, the database search times were at least 26 and 115 times shorter without and with the SSD. These improvements were even greater on a low-spec Windows XP PC. We constructed a web service 'MFSearcher' to query the database in a RESTful manner. Available for free at http://webs2.kazusa.or.jp/mfsearcher. The web service is implemented in Java, MySQL, Apache and Tomcat, with all major browsers supported. sakurai@kazusa.or.jp Supplementary data are available at Bioinformatics online.
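
    The core idea in this record - looking up pre-calculated elemental compositions by mass rather than re-enumerating candidate formulas for every query - can be illustrated with a small sketch. The snippet below is a hypothetical illustration, not the MFSearcher code; the table name, column names, and tolerance handling are assumptions.

        # Minimal sketch: look up pre-calculated elemental compositions whose
        # monoisotopic mass falls within a ppm tolerance of the query mass.
        # Table and column names are hypothetical.
        import sqlite3

        def search_compositions(db_path, query_mass, ppm=5.0):
            tol = query_mass * ppm * 1e-6          # convert ppm tolerance to Da
            lo, hi = query_mass - tol, query_mass + tol
            conn = sqlite3.connect(db_path)
            try:
                # An index on monoisotopic_mass turns this into a fast range scan,
                # avoiding the repeated combinatorial formula generation done by
                # enumeration tools.
                rows = conn.execute(
                    "SELECT formula, monoisotopic_mass FROM compositions "
                    "WHERE monoisotopic_mass BETWEEN ? AND ? "
                    "ORDER BY ABS(monoisotopic_mass - ?)",
                    (lo, hi, query_mass),
                ).fetchall()
            finally:
                conn.close()
            return rows

        # Example: candidates for a neutral mass of 180.06339 (glucose) at 5 ppm.
        # print(search_compositions("compositions.sqlite", 180.06339, ppm=5.0))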

  9. Electron and Positron Stopping Powers of Materials

    National Institute of Standards and Technology Data Gateway

    SRD 7 NIST Electron and Positron Stopping Powers of Materials (PC database for purchase)   The EPSTAR database provides rapid calculations of stopping powers (collisional, radiative, and total), CSDA ranges, radiation yields and density effect corrections for incident electrons or positrons with kinetic energies from 1 keV to 10 GeV, and for any chemically defined target material.

  10. Reviews.

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 1988

    1988-01-01

    Reviews two computer programs: "Molecular Graphics," which allows molecule manipulation in three-dimensional space (requiring IBM PC with 512K, EGA monitor, and math coprocessor); and "Periodic Law," a database which contains up to 20 items of information on each of the first 103 elements (Apple II or IBM PC). (MVL)

  11. Building an Integrated Environment for Multimedia

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Multimedia courseware on the solar system and earth science suitable for use in elementary, middle, and high schools was developed under this grant. The courseware runs on Silicon Graphics, Incorporated (SGI) workstations and personal computers (PCs). There is also a version of the courseware accessible via the World Wide Web. Accompanying multimedia database systems were also developed to enhance the multimedia courseware. The database systems accompanying the PC software are based on the relational model, while the database systems accompanying the SGI software are based on the object-oriented model.

  12. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  13. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Chuong

    1995-01-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC using Excel software under Windows environment or a Macintosh using Excel software for Macintosh. All the steps necessary to use the software are easy and many times self explanatory.

  14. A data analysis expert system for large established distributed databases

    NASA Technical Reports Server (NTRS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural easy-to-use error-free database query language; user ability to alter query language vocabulary and data analysis heuristic; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  15. Online drug databases: a new method to assess and compare inclusion of clinically relevant information.

    PubMed

    Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro

    2013-08-01

    Evidence-Based Practice requires health care decisions to be based on the best available evidence. The model "Information Mastery" proposes that clinicians should use sources of information that have previously evaluated relevance and validity, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. To develop and test a method to evaluate relevancy of DB contents, by assessing the inclusion of information items deemed relevant for effective and safe drug use. Hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in literature with morbidity-mortality and also being widely consumed in Portugal. Main outcome measure: calculation of individual and global scores for clinically relevant information items of drug monographs in databases, using the categorisation and quantification system created. A--Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure. B--Pilot test: calculated scores for the 9 databases; globally, all databases evaluated significantly differed from the "ideal" database; some DB performed better but performance was inconsistent at subsections level, within the same DB. The method developed allows quantification of the inclusion of relevant information items in DB and comparison with an "ideal database". It is necessary to consult diverse DB in order to find all the relevant information needed to support clinical drug use.
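
    The scoring approach described (quantify inclusion of each relevant information item, sum to a global score, and compute a relative difference against an "ideal" database that attains the maximum for every item) can be sketched as follows. This is a minimal illustration under an assumed three-level point scale and hypothetical item names, not the authors' published instrument.

        # Minimal sketch of the scoring idea: quantify inclusion per item, sum to a
        # global score, and compute the relative difference (RD) against an "ideal"
        # database that achieves the best possible quantification for every item.
        # Item names and point values are hypothetical.
        INCLUSION_POINTS = {"absent": 0, "partial": 1, "complete": 2}   # assumed scale

        def global_score(assessment):
            """assessment maps item name -> 'absent' | 'partial' | 'complete'."""
            return sum(INCLUSION_POINTS[level] for level in assessment.values())

        def relative_difference(assessment):
            ideal = len(assessment) * max(INCLUSION_POINTS.values())    # "ideal" database score
            return (ideal - global_score(assessment)) / ideal

        db_a = {"contraindications": "complete", "interactions": "partial", "dose adjustment": "absent"}
        print(global_score(db_a), round(relative_difference(db_a), 2))  # prints: 3 0.5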

  16. OAP- OFFICE AUTOMATION PILOT GRAPHICS DATABASE SYSTEM

    NASA Technical Reports Server (NTRS)

    Ackerson, T.

    1994-01-01

    The Office Automation Pilot (OAP) Graphics Database system offers the IBM PC user assistance in producing a wide variety of graphs and charts. OAP uses a convenient database system, called a chartbase, for creating and maintaining data associated with the charts, and twelve different graphics packages are available to the OAP user. Each of the graphics capabilities is accessed in a similar manner. The user chooses creation, revision, or chartbase/slide show maintenance options from an initial menu. The user may then enter or modify data displayed on a graphic chart. The cursor moves through the chart in a "circular" fashion to facilitate data entries and changes. Various "help" functions and on-screen instructions are available to aid the user. The user data is used to generate the graphics portion of the chart. Completed charts may be displayed in monotone or color, printed, plotted, or stored in the chartbase on the IBM PC. Once completed, the charts may be put in a vector format and plotted for color viewgraphs. The twelve graphics capabilities are divided into three groups: Forms, Structured Charts, and Block Diagrams. There are eight Forms available: 1) Bar/Line Charts, 2) Pie Charts, 3) Milestone Charts, 4) Resources Charts, 5) Earned Value Analysis Charts, 6) Progress/Effort Charts, 7) Travel/Training Charts, and 8) Trend Analysis Charts. There are three Structured Charts available: 1) Bullet Charts, 2) Organization Charts, and 3) Work Breakdown Structure (WBS) Charts. The Block Diagram available is an N x N Chart. Each graphics capability supports a chartbase. The OAP graphics database system provides the IBM PC user with an effective means of managing data which is best interpreted as a graphic display. The OAP graphics database system is written in IBM PASCAL 2.0 and assembler for interactive execution on an IBM PC or XT with at least 384K of memory, and a color graphics adapter and monitor. Printed charts require an Epson, IBM, OKIDATA, or HP Laser printer (or equivalent). Plots require the Tektronix 4662 Penplotter. Source code is supplied to the user for modification and customizing. Executables are also supplied for all twelve graphics capabilities. This system was developed in 1983, and Version 3.1 was released in 1986.

  17. [International bibliographic databases--Current Contents on disk and in FTP format (Internet): presentation and guide].

    PubMed

    Bloch-Mouillet, E

    1999-01-01

    This paper aims to provide technical and practical advice about finding references using Current Contents on disk (Macintosh or PC) or via the Internet (FTP). Seven editions are published each week. They are all organized in the same way and have the same search engine. The Life Sciences edition, extensively used in medical research, is presented here in detail, as an example. This methodological note explains, in French, how to use this reference database. It is designed to be a practical guide for browsing and searching the database, and particularly for creating search profiles adapted to the needs of researchers.

  18. Ocean Optical Database

    DTIC Science & Technology

    1992-05-01

    ocean color for retrieving ocean k(490) values are examined. The validation of the optical database from the satellite is assessed through comparison... for sharing results of this validation study. We wish to thank J. Mueller for helpful discussions in optics and satellite processing and for sharing his... of these data products are displayable as 512 x 512 8-bit image maps compatible with the PC-SeaPak image format. Valid data ranges are from 1 to 255

  19. A multivariate and stochastic approach to identify key variables to rank dairy farms on profitability.

    PubMed

    Atzori, A S; Tedeschi, L O; Cannas, A

    2013-05-01

    The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21 farms had dRI<-1 SD, 32 farms were between -1 SD and 0, 67 farms were between 0 and +1 SD, and 15 farms had dRI>+1 SD. The top 10% of the farms had a dRI greater than 0.170 €/L, whereas the bottom 10% farms had a dRI lower than 0.116 €/L. This stochastic approach allowed us to understand the relationships among the inputs of the studied dairy farms and to develop a ranking index for comparison purposes. The developed methodology may be improved by using more inputs at the dairy farm level and considering the actual cost to measure profitability. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
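
    The ranking-index pipeline described (PCA on the synthetic farm inputs with IOFC excluded, regression of IOFC on the retained components, then projection of the original farms onto the same eigenvectors to predict IOFC as the index dRI) can be sketched as below, assuming scikit-learn and NumPy are available. The arrays and the choice of inputs are placeholders, not the authors' data.

        # Minimal sketch of the ranking-index pipeline; data and column choices are
        # hypothetical, not the study's actual datasets.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        def build_ranking_index(X_synth, iofc_synth, X_farms, n_components=4):
            # 1) PCA on the synthetic database inputs (IOFC excluded from X_synth).
            scaler = StandardScaler().fit(X_synth)
            pca = PCA(n_components=n_components).fit(scaler.transform(X_synth))
            pcs_synth = pca.transform(scaler.transform(X_synth))

            # 2) Regress IOFC on the retained principal components.
            reg = LinearRegression().fit(pcs_synth, iofc_synth)

            # 3) Project the original farms onto the same eigenvectors and predict
            #    IOFC; the prediction serves as the ranking index (dRI).
            pcs_farms = pca.transform(scaler.transform(X_farms))
            return reg.predict(pcs_farms)

        # rng = np.random.default_rng(0)
        # dRI = build_ranking_index(rng.normal(size=(5000, 10)),
        #                           rng.normal(size=5000),
        #                           rng.normal(size=(135, 10)))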

  20. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Choung

    1994-01-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC using Excel software under Windows environment or a Macintosh using Excel software for Macintosh. All the steps necessary to use the software are easy and many times self-explanatory; however, a sample run of the CCIR rain attenuation model is presented.

  1. REPDOSE: A database on repeated dose toxicity studies of commercial chemicals--A multifunctional tool.

    PubMed

    Bitsch, A; Jacobi, S; Melber, C; Wahnschaffe, U; Simetska, N; Mangelsdorf, I

    2006-12-01

    A database for repeated dose toxicity data has been developed. Studies were selected by data quality. Review documents or risk assessments were used to get a pre-screened selection of available valid data. The structure of the chemicals should be rather simple for well defined chemical categories. The database consists of three core data sets for each chemical: (1) structural features and physico-chemical data, (2) data on study design, (3) study results. To allow consistent queries, a high degree of standardization was applied: categories and glossaries were developed for relevant parameters. At present, the database consists of 364 chemicals investigated in 1018 studies which resulted in a total of 6002 specific effects. Standard queries have been developed, which allow analyzing the influence of structural features or PC data on LOELs, target organs and effects. Furthermore, it can be used as an expert system. First queries have shown that the database is a very valuable tool.
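
    The three-core-data-set structure and the "standard queries" mentioned above can be sketched with a toy relational schema. Table and column names here are hypothetical, not the actual REPDOSE schema; the query illustrates relating physico-chemical (PC) data to the lowest LOEL per target organ.

        # Minimal sketch of a three-table layout mirroring the core data sets, plus
        # one example "standard query"; schema and names are hypothetical.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE chemical (id INTEGER PRIMARY KEY, name TEXT, log_kow REAL);       -- structure / PC data
            CREATE TABLE study    (id INTEGER PRIMARY KEY, chemical_id INTEGER, species TEXT,
                                   route TEXT, duration_days INTEGER);                      -- study design
            CREATE TABLE effect   (study_id INTEGER, target_organ TEXT, loel_mg_kg REAL);   -- study results
        """)

        rows = conn.execute("""
            SELECT c.name, c.log_kow, e.target_organ, MIN(e.loel_mg_kg) AS lowest_loel
            FROM chemical c
            JOIN study s  ON s.chemical_id = c.id
            JOIN effect e ON e.study_id = s.id
            GROUP BY c.id, e.target_organ
            ORDER BY c.log_kow
        """).fetchall()
        # With data loaded, 'rows' lists the lowest LOEL per chemical and target
        # organ alongside log Kow, the kind of PC-data-versus-LOEL view described.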

  2. 76 FR 77575 - Privacy Act of 1974; Report of an Altered System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... the Privacy Act of 1974, (5 U.S.C. 552a), PC-21-- Peace Corps Response Database. The first revision... Corps' General Routine Uses apply to PC-21. DATES: This proposed action will be effective without..., provides that the public will be given a 30-day period in which to comment on a revised routine use. The...

  3. Tribology and Friction of Soft Materials: Mississippi State Case Study

    DTIC Science & Technology

    2010-03-18

    elastomers, foams, and fabrics. B. Develop an internal state variable (ISV) material model; the model will be calibrated using the database and verified... Materials listed include rubbers (natural rubber, Santoprene vulcanized elastomer, styrene butadiene rubber (SBR)), foams (polypropylene foam, polyurethane foam), and fabrics (Kevlar)... Other topics indicated include an axially symmetric model (PC disk), numerical implementation of the ISV model in FEM codes, experiments (SEM and optical methods), void nucleation, and FEM analysis.

  4. Using CLIPS in a distributed system: The Network Control Center (NCC) expert system

    NASA Technical Reports Server (NTRS)

    Wannemacher, Tom

    1990-01-01

    This paper describes an intelligent troubleshooting system for the Help Desk domain. It was developed on an IBM-compatible 80286 PC using Microsoft C and CLIPS and an AT&T 3B2 minicomputer using the UNIFY database and a combination of shell script, C programs and SQL queries. The two computers are linked by a LAN. The functions of this system are to help non-technical NCC personnel handle trouble calls, to keep a log of problem calls with complete, concise information, and to keep a historical database of problems. The database helps identify hardware and software problem areas and provides a source of new rules for the troubleshooting knowledge base.

  5. REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures

    National Institute of Standards and Technology Data Gateway

    SRD 73 NIST REFLEAK: NIST Leak/Recharge Simulation Program for Refrigerant Mixtures (PC database for purchase)   REFLEAK estimates composition changes of zeotropic mixtures in leak and recharge processes.

  6. A SLAM II simulation model for analyzing space station mission processing requirements

    NASA Technical Reports Server (NTRS)

    Linton, D. G.

    1985-01-01

    Space station mission processing is modeled via the SLAM 2 simulation language on an IBM 4381 mainframe and an IBM PC microcomputer with 620K RAM, two double-sided disk drives and an 8087 coprocessor chip. Using a time phased mission (payload) schedule and parameters associated with the mission, orbiter (space shuttle) and ground facility databases, estimates for ground facility utilization are computed. Simulation output associated with the science and applications database is used to assess alternative mission schedules.

  7. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.

  8. Gas-Phase Infrared; JCAMP Format

    National Institute of Standards and Technology Data Gateway

    SRD 35 NIST/EPA Gas-Phase Infrared; JCAMP Format (PC database for purchase)   This data collection contains 5,228 infrared spectra in the JCAMP-DX (Joint Committee for Atomic and Molecular Physical Data "Data Exchange") format.

  9. A proposed computer diagnostic system for malignant melanoma (CDSMM).

    PubMed

    Shao, S; Grams, R R

    1994-04-01

    This paper describes a computer diagnostic system for malignant melanoma. The diagnostic system is a rule base system based on image analyses and works under the PC windows environment. It consists of seven modules: I/O module, Patient/Clinic database, image processing module, classification module, rule base module and system control module. In the system, the image analyses are automatically carried out, and database management is efficient and fast. Both final clinic results and immediate results from various modules such as measured features, feature pictures and history records of the disease lesion can be presented on screen or printed out from each corresponding module or from the I/O module. The system can also work as a doctor's office-based tool to aid dermatologists with details not perceivable by the human eye. Since the system operates on a general purpose PC, it can be made portable if the I/O module is disconnected.

  10. T-LECS: The Control Software System for MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, T.; Omata, K.; Konishi, M.; Ichikawa, T.; Suzuki, R.; Tokoku, C.; Katsuno, Y.; Nishimura, T.

    2006-07-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru Telescope. We present the system design of the control software system for MOIRCS, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS is a PC-Linux based network distributed system. Two PCs equipped with the focal plane array system operate two HAWAII2 detectors, respectively, and another PC is used for user interfaces and a database server. Moreover, these PCs control various devices for observations distributed on a TCP/IP network. T-LECS has three interfaces; interfaces to the devices and two user interfaces. One of the user interfaces is to the integrated observation control system (Subaru Observation Software System) for observers, and another one provides the system developers the direct access to the devices of MOIRCS. In order to help the communication between these interfaces, we employ an SQL database system.

  11. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

    The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database; this contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods. The best literature as viewed by experts is highlighted. (2) treatability study database; this provides performance information on technologies to remove contaminants from wastewaters and soils. It is derived from treatability studies. This database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database; this presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database; this provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  12. Concentrations of indoor pollutants database: User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-05-01

    This manual describes the computer-based database on indoor air pollutants. This comprehensive database helps utility personnel perform rapid searches on literature related to indoor air pollutants. Besides general information, it provides guidance for finding specific information on concentrations of indoor air pollutants. The manual includes information on installing and using the database as well as a tutorial to assist the user in becoming familiar with the procedures involved in doing bibliographic and summary section searches. The manual demonstrates how to search for information by going through a series of questions that provide search parameters such as pollutant type, year, building type, keywords (from a specific list), country, geographic region, author's last name, and title. As more and more parameters are specified, the list of references found in the data search becomes smaller and more specific to the user's needs. Appendixes list types of information that can be input into the database when making a request. The CIP database allows individual utilities to obtain information on indoor air quality based on building types and other factors in their own service territory. This information is useful for utilities with concerns about indoor air quality and the control of indoor air pollutants. The CIP database itself is distributed by the Electric Power Software Center and runs on IBM PC-compatible computers.
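
    The progressive narrowing the manual describes - each answered question adds another search parameter, shrinking the reference list - amounts to stacking filters, as in the following sketch. Field names and records are hypothetical, not the actual CIP record layout.

        # Minimal sketch of progressive narrowing: each supplied parameter adds a
        # filter, so the hit list shrinks as criteria accumulate.
        def narrow(references, **criteria):
            hits = references
            for field, wanted in criteria.items():          # apply filters one by one
                hits = [r for r in hits if r.get(field) == wanted]
            return hits

        refs = [
            {"pollutant": "radon",        "building_type": "residential", "year": 1990},
            {"pollutant": "radon",        "building_type": "office",      "year": 1988},
            {"pollutant": "formaldehyde", "building_type": "residential", "year": 1990},
        ]
        print(len(narrow(refs, pollutant="radon")))                          # 2
        print(len(narrow(refs, pollutant="radon", building_type="office")))  # 1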

  13. Bibliometric Analysis of Palliative Care-Related Publication Trends During 2001 to 2016.

    PubMed

    Liu, Chia-Jen; Yeh, Te-Chun; Hsu, Su-Hsuan; Chu, Chao-Mei; Liu, Chih-Kuang; Chen, Mingchih; Huang, Sheng-Jean

    2018-01-01

    There are few bibliometric reports on the scientific contributions (publications) and international influence (citations) of authors producing the palliative care (PC)-related literature. We aimed to analyze PC-related literature using the Institute for Scientific Information Web of Science (WoS) database. The WoS database was used to retrieve publications with the following key words in the title: "palliative care" OR "End of Life care" OR "terminal care." A statistical analysis of the documents published during 2001 to 2016 was performed. The quantity and quality of research were assessed by the number of total publications and citation analysis. In addition, we also analyzed whether there were possible correlations between publication and socioeconomic factors. The total research output was 6273 articles for PC. There was a 3-fold increase in the number of publications during the period and a strong correlation between year and number of PC-related publications (R2 = .96). The United States took a leading position in PC research (2448 articles, 39.0%). The highest average citation count was reported for Norway (21.8). Australia had the highest productivity in PC research (24.9 articles per million population). The annual impact factor rose progressively with time, increasing from 1.13 to 2.24 between 2003 and 2016. The number of publications correlated with gross domestic product (r = .74; P < .001). The United States and United Kingdom contributed most of the publications, but some East Asian countries also performed strongly. Among the socioeconomic factors, the publication capacity of the top 20 countries correlated with their economic scale.

  14. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  15. The Computer Bulletin Board.

    ERIC Educational Resources Information Center

    Batt, Russell H., Ed.

    1989-01-01

    Describes two chemistry computer programs: (1) "Eureka: A Chemistry Problem Solver" (problem files may be written by the instructor, MS-DOS 2.0, IBM with 384K); and (2) "PC-File+" (database management, IBM with 416K and two floppy drives). (MVL)

  16. Automatic comparison of striation marks and automatic classification of shoe prints

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Keijzer, Jan; Keereweer, Isaac

    1995-09-01

    A database for toolmarks (named TRAX) and a database for footwear outsole designs (named REBEZO) have been developed on a PC. The databases are filled with video images and administrative data about the toolmarks and the footwear designs. An algorithm for the automatic comparison of the digitized striation patterns has been developed for TRAX. The algorithm appears to work well for deep and complete striation marks and will be implemented in TRAX. For REBEZO, some effort has been put into the automatic classification of outsole patterns. The algorithm first segments the shoe profile. Fourier features are selected for the separate elements and are classified with a neural network. In future developments, information on invariant moments of the shape and the rotation angle will be included in the neural network.
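
    The classification step described for REBEZO (segment the outsole elements, compute Fourier features, classify with a neural network) can be sketched as below, assuming NumPy and scikit-learn. The contour-based feature and the toy data are placeholders; the actual REBEZO algorithm is not reproduced here.

        # Minimal sketch: Fourier-based contour features of segmented outsole
        # elements fed to a small neural network classifier. Toy data only.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def fourier_features(contour_xy, n_coeffs=8):
            """Translation-invariant magnitude spectrum of a closed contour (complex form)."""
            z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
            spectrum = np.fft.fft(z - z.mean())          # removing the mean gives translation invariance
            mags = np.abs(spectrum[1:n_coeffs + 1])
            return mags / (mags[0] + 1e-12)              # scale-normalise by the first harmonic

        # Hypothetical training data: contours of segmented outsole elements + labels.
        rng = np.random.default_rng(0)
        contours = [rng.normal(size=(64, 2)) for _ in range(40)]
        labels = rng.integers(0, 3, size=40)             # e.g. circle / bar / zigzag element classes
        X = np.array([fourier_features(c) for c in contours])
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)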

  17. Development and Experimental Verification of a High Resolution, Tunable LIDAR Computer Simulation Model for Atmospheric Laser Remote Sensing

    NASA Astrophysics Data System (ADS)

    Wilcox, William Edward, Jr.

    1995-01-01

    A computer program (LIDAR-PC) and associated atmospheric spectral databases have been developed which accurately simulate the laser remote sensing of the atmosphere and the system performance of a direct-detection Lidar or tunable Differential Absorption Lidar (DIAL) system. This simulation program allows, for the first time, the use of several different large atmospheric spectral databases to be coupled with Lidar parameter simulations on the same computer platform to provide a real-time, interactive, and easy-to-use design tool for atmospheric Lidar simulation and modeling. LIDAR-PC has been used for a range of different Lidar simulations and compared to experimental Lidar data. In general, the simulations agreed very well with the experimental measurements. In addition, the simulation offered, for the first time, the analysis and comparison of experimental Lidar data to easily determine the range-resolved attenuation coefficient of the atmosphere and the effect of telescope overlap factor. The software and databases operate on an IBM-PC or compatible computer platform, and thus are very useful to the research community for Lidar analysis. The complete Lidar and atmospheric spectral transmission modeling program uses the HITRAN database for high-resolution molecular absorption lines of the atmosphere, the BACKSCAT/LOWTRAN computer databases and models for the effects of aerosol and cloud backscatter and attenuation, and the range-resolved Lidar equation. The program can calculate the Lidar backscattered signal-to-noise for a slant path geometry from space and simulate the effect of high resolution, tunable, single frequency, and moderate line width lasers on the Lidar/DIAL signal. The program was used to model and analyze the experimental Lidar data obtained from several measurements. A fixed wavelength, Ho:YSGG aerosol Lidar (Sugimoto, 1990) developed at USF and a tunable Ho:YSGG DIAL system (Cha, 1991) for measuring atmospheric water vapor at 2.1 μm were analyzed. The simulations agreed very well with the measurements, and also yielded, for the first time, the ability to easily deduce the atmospheric attenuation coefficient, alpha, from the Lidar data. Simulations and analysis of other Lidar measurements included that of a 1.57 μm OPO aerosol Lidar system developed at USF (Harrell, 1995) and of the NASA LITE (Lidar In-space Technology Experiment) Lidar recently flown on the Space Shuttle. Finally, an extensive series of laboratory experiments was performed with the 1.57 μm OPO Lidar system to test calculations of the telescope/laser overlap and the effect of different telescope sizes and designs. The simulations agreed well with the experimental data for the telescope diameter and central obscuration test cases. The LIDAR-PC programs are available on the Internet from the USF Lidar Home Page Web site, http://www.cas.usf.edu/physics/lidar.html/.
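
    The range-resolved Lidar equation that such a simulation is built around can be sketched as follows. This is not the LIDAR-PC program itself; it only shows a simplified single-scattering form of the return signal, and all system parameter values are hypothetical placeholders.

        # Minimal sketch of the range-resolved elastic lidar return,
        # P(R) = E0*eta*O(R)*(A/R^2)*beta*dR*exp(-2*alpha*R), with constant
        # backscatter and attenuation coefficients for simplicity.
        import numpy as np

        def lidar_return(R, E0=0.1, A=0.2, eta=0.3, beta=1e-6, alpha=1e-4, dR=15.0, overlap=1.0):
            """Received energy per range bin for range R (metres); parameters are illustrative."""
            return E0 * eta * overlap * (A / R**2) * beta * dR * np.exp(-2.0 * alpha * R)

        R = np.arange(500.0, 10000.0, 15.0)       # range bins in metres
        signal = lidar_return(R)
        # A DIAL-style retrieval would difference the logarithms of on-line and
        # off-line returns to recover the absorber's range-resolved attenuation.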

  18. Organization's Orderly Interest Exploration: Inception, Development and Insights of AIAA's Topics Database

    NASA Technical Reports Server (NTRS)

    Marshall, Joseph R.; Morris, Allan T.

    2007-01-01

    Since 2003, AIAA's Computer Systems and Software Systems Technical Committees (TCs) have developed a database that aids technical committee management in mapping technical topics to their members. This Topics/Interest (T/I) database grew out of a collection of charts and spreadsheets maintained by the TCs. Since its inception, the tool has evolved into a multi-dimensional database whose dimensions include the importance, interest and expertise of TC members and whether or not a member and/or a TC is actively involved with the topic. In 2005, the database was expanded to include the TCs in AIAA's Information Systems Group and then expanded further to include all AIAA TCs. It was field tested at an AIAA Technical Activities Committee (TAC) Workshop in early 2006 through live access by over 80 users. Through the use of the topics database, TC and program committee (PC) members can accomplish relevant tasks such as identifying topic experts (for Aerospace America articles or external contacts), determining the interest of their members, identifying overlapping topics between diverse TCs and PCs, guiding new member drives, and revealing emerging topics. This paper will describe the origins, inception, initial development, field test and current version of the tool as well as elucidate the benefits and insights gained by using the database to aid the management of various TC functions. Suggestions will be provided to guide future development of the database for the purpose of providing dynamic and system-level benefits to AIAA that currently do not exist in any technical organization.

  19. GIS-based accident location and analysis system (GIS-ALAS): project report: phase I

    DOT National Transportation Integrated Search

    1998-04-06

    This report summarizes progress made in Phase I of the geographic information system (GIS) based Accident Location and Analysis System (GIS-ALAS). The GIS-ALAS project builds on PC-ALAS, a locationally-referenced highway crash database query system d...

  1. Data for Known Geothermal Resource Areas (KGRA) and Identified Hydrothermal Resource Areas (IHRA) in Southern Idaho and Southeastern Oregon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neupane, Ghanashyam; McLing, Travis; Mattson, Earl

    The presented database includes water chemistry data and structural rating values for various geothermal features used for performing principal component (PC) and cluster analyses to identify promising KGRAs and IHRAs in southern Idaho and southeastern Oregon. A brief note on various KGRAs/IHRAs is also included herewith. Results of the PC and cluster analyses are presented in a separate paper (Lindsey et al., 2017) that is, as of the time of this submission, in 'revision' status.

  2. Protein Identification Using Top-Down Spectra*

    PubMed Central

    Liu, Xiaowen; Sirotkin, Yakov; Shen, Yufeng; Anderson, Gordon; Tsai, Yihsuan S.; Ting, Ying S.; Goodlett, David R.; Smith, Richard D.; Bafna, Vineet; Pevzner, Pavel A.

    2012-01-01

    In the last two years, because of advances in protein separation and mass spectrometry, top-down mass spectrometry moved from analyzing single proteins to analyzing complex samples and identifying hundreds and even thousands of proteins. However, computational tools for database search of top-down spectra against protein databases are still in their infancy. We describe MS-Align+, a fast algorithm for top-down protein identification based on spectral alignment that enables searches for unexpected post-translational modifications. We also propose a method for evaluating statistical significance of top-down protein identifications and further benchmark various software tools on two top-down data sets from Saccharomyces cerevisiae and Salmonella typhimurium. We demonstrate that MS-Align+ significantly increases the number of identified spectra as compared with MASCOT and OMSSA on both data sets. Although MS-Align+ and ProSightPC have similar performance on the Salmonella typhimurium data set, MS-Align+ outperforms ProSightPC on the (more complex) Saccharomyces cerevisiae data set. PMID:22027200

  3. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  4. BioSYNTHESIS: access to a knowledge network of health sciences databases.

    PubMed

    Broering, N C; Hylton, J S; Guttmann, R; Eskridge, D

    1991-04-01

    Users of the IAIMS Knowledge Network at the Georgetown University Medical Center have access to multiple in-house and external databases from a single point of entry through BioSYNTHESIS. The IAIMS project has developed a rich environment of biomedical information resources that represent a medical decision support system for campus physicians and students. The BioSYNTHESIS system is an information navigator that provides transparent access to a Knowledge Network of over a dozen databases. These multiple health sciences databases consist of bibliographic, informational, diagnostic, and research systems which reside on diverse computers such as DEC VAXs, SUN 490, AT&T 3B2s, Macintoshes, IBM PC/PS2s and the AT&T ISN and SYTEK network systems. Ethernet and TCP/IP protocols are used in the network architecture. BioSYNTHESIS also provides network links to the other campus libraries and to external institutions. As additional knowledge resources and technological advances have become available, BioSYNTHESIS has evolved from a two-phase to a three-phase program. Major components of the system including recent achievements and future plans are described.

  5. Integration of Web-based and PC-based clinical research databases.

    PubMed

    Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M

    2004-01-01

    We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.
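
    As a rough illustration of the schema-mapping step described above, the sketch below translates rows from two differently named source schemas into one common semantic model; every table and column name here is invented and does not reflect the actual GRIL schemas.

    ```python
    # Sketch: resolving table- and attribute-level name conflicts by mapping each
    # source schema onto a shared semantic model (names are illustrative only).

    COMMON_MODEL = {"instrument": ["instrument_id", "name", "keywords"]}

    MAPPINGS = {
        "web_system": {"instrument": ("measures",   {"instrument_id": "meas_id",
                                                     "name": "title",
                                                     "keywords": "kw_list"})},
        "pc_system":  {"instrument": ("instrument", {"instrument_id": "id",
                                                     "name": "instr_name",
                                                     "keywords": "keywords"})},
    }

    def to_common(source, table, row):
        """Translate one row from a source schema into the common model."""
        src_table, col_map = MAPPINGS[source][table]   # src_table would drive the SELECT
        return {common_col: row[src_col] for common_col, src_col in col_map.items()}

    row = {"meas_id": 7, "title": "Mini-Mental State Exam", "kw_list": "cognition"}
    print(to_common("web_system", "instrument", row))
    ```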

  6. Unobtrusive Social Network Data From Email

    DTIC Science & Technology

    2008-12-01

    A Visual Basic for Applications (VBA) program was installed on the personal computers (PCs) of all participants, in the session window of their Microsoft Outlook; it reads Outlook archived files and stores that data in an SQL database. Only fragments of the abstract are available in this record.

  7. Method for a dummy CD mirror server based on NAS

    NASA Astrophysics Data System (ADS)

    Tang, Muna; Pei, Jing

    2002-09-01

    With the development of computer networks, information sharing is becoming a necessity in daily life. The rapid development of CD-ROM and CD-ROM drive techniques makes it possible to issue large databases online. After comparing many designs of dummy CD mirror databases, which embody a main class of CD-ROM database products now and in the near future, we proposed and realized a new PC-based scheme. Our system has the following merits: support for all kinds of CD formats; support for many network protocols; independence of the mirror network server from the main server; and low price and very large capacity, without the need for any special hardware. Preliminary experiments have verified the validity of the proposed scheme. Encouraged by its promising application prospects, we are now preparing to put it on the market. This paper discusses the design and implementation of the CD-ROM server in detail.

  8. A systematic review and meta-analysis of the relative efficacy and safety of treatment regimens for HIV-associated cerebral toxoplasmosis: is trimethoprim-sulfamethoxazole a real option?

    PubMed

    Hernandez, A V; Thota, P; Pellegrino, D; Pasupuleti, V; Benites-Zapata, V A; Deshpande, A; Penalva de Oliveira, A C; Vidal, J E

    2017-02-01

    The objective of this study was to perform a systematic review and meta-analysis of the literature to evaluate the efficacy and safety of therapies for cerebral toxoplasmosis in HIV-infected adults. The pyrimethamine plus sulfadiazine (P-S) combination is considered the mainstay therapy for cerebral toxoplasmosis and pyrimethamine plus clindamycin (P-C) is the most common alternative treatment. Although trimethoprim-sulfamethoxazole (TMP-SMX) has potential advantages, its use is infrequent. We searched PubMed and four other databases to identify randomized controlled trials (RCTs) and cohort studies. Two independent reviewers searched the databases, identified studies and extracted data. Risk ratios (RRs) were pooled across studies using random-effects models. Nine studies were included (five RCTs, three retrospective cohort studies and one prospective cohort study). In comparison to P-S, treatment with P-C or TMP-SMX was associated with similar rates of partial or complete clinical response [P-C: RR 0.87; 95% confidence interval (CI) 0.70-1.08; TMP-SMX: RR 0.97; 95% CI 0.78-1.21], radiological response (P-C: RR 0.92; 95% CI 0.82-1.03), skin rash (P-C: RR 0.81; 95% CI 0.56-1.17; TMP-SMX: RR 0.17; 95% CI 0.02-1.29), gastrointestinal impairment (P-C: RR 5.16; 95% CI 0.66-40.11), and drug discontinuation because of adverse events (P-C: RR 0.32; 95% CI 0.07-1.47). Liver impairment was more frequent with P-S than P-C (P-C vs. P-S: RR 0.48; 95% CI 0.24-0.97). The current evidence fails to identify a superior regimen in terms of relative efficacy or safety for the treatment of HIV-associated cerebral toxoplasmosis. Use of TMP-SMX as preferred treatment may be consistent with the available evidence and other real-world considerations. Larger comparative studies are needed. © 2016 British HIV Association.
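
    The pooled risk ratios above come from a random-effects meta-analysis; the sketch below shows a DerSimonian-Laird pooling of log risk ratios reconstructed from 95% confidence intervals, using made-up numbers rather than the study data.

    ```python
    # Sketch of DerSimonian-Laird random-effects pooling of risk ratios, the kind
    # of calculation behind pooled RRs; the inputs below are illustrative only.
    import math

    def pool_rr(rrs, lowers, uppers):
        y  = [math.log(r) for r in rrs]                    # log risk ratios
        se = [(math.log(u) - math.log(l)) / (2 * 1.96)     # SE from 95% CI width
              for l, u in zip(lowers, uppers)]
        w  = [1 / s**2 for s in se]                        # fixed-effect weights
        y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        q = sum(wi * (yi - y_fixed)**2 for wi, yi in zip(w, y))
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
        w_star = [1 / (s**2 + tau2) for s in se]
        y_pool = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
        se_pool = math.sqrt(1 / sum(w_star))
        return (math.exp(y_pool),
                math.exp(y_pool - 1.96 * se_pool),
                math.exp(y_pool + 1.96 * se_pool))

    # pooled RR with its 95% CI for three hypothetical studies
    print(pool_rr([0.90, 1.05, 0.85], [0.70, 0.80, 0.60], [1.15, 1.40, 1.20]))
    ```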

  9. Development of a prototype commonality analysis tool for use in space programs

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1988-01-01

    A software tool to aid in performing commonality analyses, called the Commonality Analysis Problem Solver (CAPS), was designed, and a prototype version (CAPS 1.0) was implemented and tested. CAPS 1.0 runs in an MS-DOS or IBM PC-DOS environment. CAPS is designed around a simple input language which provides a natural syntax for the description of feasibility constraints. It provides its users with the ability to load a database representing a set of design items, describe the feasibility constraints on items in that database, and perform a comprehensive cost analysis to find the most economical substitution pattern.

  10. CD-ROM and Libraries.

    ERIC Educational Resources Information Center

    Murphy, Brower

    1985-01-01

    The Compact Disc-Read Only Memory (CD-ROM) data format is explained and illustrated, noting current and potential applications. The "5-inch" compact laserdisc is described and photographs of an IBM PC/Hitachi CD-ROM system adopted by Library Corporation to support its MARC database--BiblioFile--are presented. Screen displays for…

  11. Computer Series, 102: Bits and Pieces, 40.

    ERIC Educational Resources Information Center

    Birk, James P., Ed.

    1989-01-01

    Discussed are seven computer programs: (1) a computer graphics experiment for organic chemistry laboratory; (2) a gel filtration simulation; (3) judging spelling correctness; (4) interfacing the TLC548 ADC; (5) a digitizing circuit for the Apple II game port; (6) a chemical information base; and (7) an IBM PC article database. (MVL)

  12. Crystal Data

    National Institute of Standards and Technology Data Gateway

    SRD 3 NIST Crystal Data (PC database for purchase)   NIST Crystal Data contains chemical, physical, and crystallographic information useful to characterize more than 237,671 inorganic and organic crystalline materials. The data include the standard cell parameters, cell volume, space group number and symbol, calculated density, chemical formula, chemical name, and classification by chemical type.

  13. Emergent cholecystectomy is superior to percutaneous cholecystostomy tube placement in critically ill patients with emergent calculous cholecystitis.

    PubMed

    Hall, Bradley R; Armijo, Priscila R; Krause, Crystal; Burnett, Tyler; Oleynikov, Dmitry

    2018-07-01

    The role of percutaneous cholecystostomy (PC) is undefined in patients with multiple comorbidities presenting with emergent calculous cholecystitis (CC). This study compared outcomes between PC, laparoscopic (LC), and open cholecystectomy (OC). The Vizient UHC database was queried for high-risk patients with CC who underwent PC, LC, OC, or laparoscopic converted to open cholecystectomy (CONV). Demographics, outcomes, mortality, length of stay (LOS), and direct cost were compared between the groups. LC was the most common approach with the lowest risk of death, complications, LOS, and cost. Complication risk was highest in OC. Nearly 20% of patients underwent PC. Complication rate, LOS, infection, aspiration pneumonia, and mortality were higher in PC. Direct cost was lowest in LC, followed by CONV, PC, and OC. Emergent cholecystectomy for CC in high-risk patients is safer and more cost effective than PC and this study supports the use of cholecystectomy as the primary treatment approach in these patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Cost Effective Computer-Assisted Legal Research, or When Two Are Better Than One.

    ERIC Educational Resources Information Center

    Griffith, Cary

    1986-01-01

    An analysis of pricing policies and costs of LEXIS and WESTLAW indicates that it is less expensive to subscribe to both using a PC microcomputer rather than a dedicated terminal. Rules for when to use each database are essential to lowering the costs of online legal research. (EM)

  15. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    NASA Astrophysics Data System (ADS)

    Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal

    2005-11-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.
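
    A minimal sketch of the general XML data-handling idea (serializing records for exchange between stand-alone modules and a database, then parsing them back); the element names and well-record fields are invented and are not GEMINI's actual schema.

    ```python
    # Sketch: XML serialization and parsing of a small record set, the kind of
    # exchange format used to move data between stand-alone modules and databases.
    import xml.etree.ElementTree as ET

    def wells_to_xml(wells):
        root = ET.Element("wells")
        for w in wells:
            el = ET.SubElement(root, "well", api=w["api"])
            ET.SubElement(el, "depth").text = str(w["depth"])
            ET.SubElement(el, "porosity").text = str(w["porosity"])
        return ET.tostring(root, encoding="unicode")

    def xml_to_wells(xml_text):
        return [{"api": el.get("api"),
                 "depth": float(el.findtext("depth")),
                 "porosity": float(el.findtext("porosity"))}
                for el in ET.fromstring(xml_text).findall("well")]

    doc = wells_to_xml([{"api": "15-001-00001", "depth": 1250.0, "porosity": 0.18}])
    print(xml_to_wells(doc))
    ```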

  16. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    USGS Publications Warehouse

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. ?? 2005 Elsevier Ltd. All rights reserved.

  17. Pc-5 wave power in the plasmasphere and trough: CRRES observations

    NASA Astrophysics Data System (ADS)

    Hartinger, M.; Moldwin, M.; Angelopoulos, V.; Takahashi, K.; Singer, H. J.; Anderson, R. R.

    2009-12-01

    The CRRES (Combined Release and Radiation Effects Satellite) mission provides an opportunity to study the distribution of MHD wave power in the inner magnetosphere both inside the high-density plasmasphere and in the low-density trough. We present a statistical survey of Pc-5 wave power using CRRES magnetometer and plasma wave data separated into plasmasphere and trough intervals. Using a database of plasmapause crossings, we examined differences in power spectral density between the plasmasphere and trough regions. We found significant differences between the plasmasphere and trough in the radial profiles of Pc-5 wave power. On average, wave power was higher in the trough, but the difference in power depended on magnetic local time. Our study shows that determining the plasmapause location is important for understanding and modeling the MHD wave environment in the Pc-5 frequency band.
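
    As a small illustration of the kind of band-limited power estimate used in such surveys, the sketch below integrates a Welch power spectral density over an approximate Pc-5 band (about 1.7-6.7 mHz); the synthetic signal and 1 Hz sampling rate are assumptions, not CRRES data.

    ```python
    # Sketch: estimating Pc-5 band power from a magnetometer time series.
    import numpy as np
    from scipy.signal import welch

    fs = 1.0                                   # 1 Hz sampling (assumed)
    t = np.arange(0, 7200, 1 / fs)             # two hours of synthetic data
    b = 2.0 * np.sin(2 * np.pi * 3e-3 * t) + np.random.normal(0, 0.5, t.size)

    f, psd = welch(b, fs=fs, nperseg=2048)     # PSD in nT^2/Hz for a field in nT
    band = (f >= 1.7e-3) & (f <= 6.7e-3)       # approximate Pc-5 frequency band
    pc5_power = np.sum(psd[band]) * (f[1] - f[0])   # integrated band power (nT^2)
    print(f"Pc-5 band power: {pc5_power:.2f} nT^2")
    ```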

  18. A Review of Stellar Abundance Databases and the Hypatia Catalog Database

    NASA Astrophysics Data System (ADS)

    Hinkel, Natalie Rose

    2018-01-01

    The astronomical community is interested in elements from lithium to thorium, from solar twins to peculiarities of stellar evolution, because they give insight into different regimes of star formation and evolution. However, while some trends between elements and other stellar or planetary properties are well known, many other trends are not as obvious and are a point of conflict. For example, stars that host giant planets are found to be consistently enriched in iron, but the same cannot be definitively said for any other element. Therefore, it is time to take advantage of large stellar abundance databases in order to better understand not only the large-scale patterns, but also the more subtle, small-scale trends within the data.In this overview to the special session, I will present a review of large stellar abundance databases that are both currently available (i.e. RAVE, APOGEE) and those that will soon be online (i.e. Gaia-ESO, GALAH). Additionally, I will discuss the Hypatia Catalog Database (www.hypatiacatalog.com) -- which includes abundances from individual literature sources that observed stars within 150pc. The Hypatia Catalog currently contains 72 elements as measured within ~6000 stars, with a total of ~240,000 unique abundance determinations. The online database offers a variety of solar normalizations, stellar properties, and planetary properties (where applicable) that can all be viewed through multiple interactive plotting interfaces as well as in a tabular format. By analyzing stellar abundances for large populations of stars and from a variety of different perspectives, a wealth of information can be revealed on both large and small scales.

  19. Using commercial software products for atmospheric remote sensing

    NASA Astrophysics Data System (ADS)

    Kristl, Joseph A.; Tibaudo, Cheryl; Tang, Kuilian; Schroeder, John W.

    2002-02-01

    The Ontar Corporation (www.Ontar.com) has developed several products for atmospheric remote sensing to calculate radiative transport, atmospheric transmission, and sensor performance in both the normal atmosphere and the atmosphere disturbed by battlefield conditions of smoke, dust, explosives and turbulence. These products include: PcModWin: Uses the USAF standard MODTRAN model to compute the atmospheric transmission and radiance at medium spectral resolution (2 cm-1) from the ultraviolet/visible into the infrared and microwave regions of the spectrum. It can be used for any geometry and atmospheric conditions such as aerosols, clouds and rain. PcLnWin: Uses the USAF standard FASCOD model to compute atmospheric transmission and emission at high (line-by-line) spectral resolution using the HITRAN 2000 database. It can be used over the same spectrum from the UV/visible into the infrared and microwave regions of the spectrum. HitranPC: Computes the absolute high (line-by-line) spectral resolution transmission spectrum of the atmosphere for different temperatures and pressures. HitranPC is a user-friendly program developed by the University of South Florida (USF) and uses the international standard molecular spectroscopic database, HITRAN. LidarPC: A computer program to calculate the laser radar (lidar) equation for hard targets and atmospheric backscatter, using manually input atmospheric parameters or the HitranPC and BETASPEC transmission and backscatter calculations of the atmosphere. Also developed by the University of South Florida (USF). PcEosael: A library of programs that mathematically describe aspects of electromagnetic propagation in battlefield environments. Its 25 modules are connected but can be exercised individually, covering eight general categories of atmospheric effects, including gases, aerosols and laser propagation. Based on codes developed by the Army Research Lab. NVTherm: Models parallel scan, serial scan, and staring thermal imagers that operate in the mid and far infrared spectral bands (3 to 12 micrometers wavelength). It predicts the Minimum Resolvable Temperature Difference (MRTD, or just MRT) that can be discriminated by a human when using a thermal imager. NVTherm also predicts the target acquisition range performance likely to be achieved using the sensor.

  20. Simultaneous real-time data collection methods

    NASA Technical Reports Server (NTRS)

    Klincsek, Thomas

    1992-01-01

    This paper describes the development of electronic test equipment which executes, supervises, and reports on various tests. This validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and the data collection and reporting unit. The PC software, display screens, and real-time database are described. Pass-fail procedures and data replay are discussed. The OS/2 operating system and Presentation Manager user interface system were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS-DOS format files which may be used as input for other PC programs.

  1. Assessment of Postural Control in Children with Cerebral Palsy: A Review

    ERIC Educational Resources Information Center

    Pavao, Silvia Leticia; dos Santos, Adriana Neves; Woollacott, Marjorie Hines; Rocha, Nelci Adriana Cicuto Ferreira

    2013-01-01

    This paper aimed to review studies that assessed postural control (PC) in children with cerebral palsy (CP) and describe the methods used to investigate postural control in this population. It also intended to describe the performance of children with CP in postural control. An extensive database search was performed using the keywords: postural…

  2. The IBM PC at NASA Ames

    NASA Technical Reports Server (NTRS)

    Peredo, James P.

    1988-01-01

    Like many large companies, Ames relies heavily on its computing power to get work done, and, finding the IBM PC a reliable tool, it uses the PC for many of the same types of functions as other companies. Presentation and clarification needs demand capable graphics packages. Programming and text editing call for simpler yet powerful packages. The storage space needed by NASA's scientists and users for the monumental amounts of data that Ames must keep demands database packages that are both large and easy to use. Access to the Micom Switching Network combines the power of the IBM PC with the capabilities of other computers and mainframes and allows users to communicate electronically. These four primary capabilities of the PC are vital to the needs of NASA's users and help to support the vast amount of work done by NASA employees.

  3. A multimedia perioperative record keeper for clinical research.

    PubMed

    Perrino, A C; Luther, M A; Phillips, D B; Levin, F L

    1996-05-01

    To develop a multimedia perioperative recordkeeper that provides: 1. synchronous, real-time acquisition of multimedia data, 2. on-line access to the patient's chart data, and 3. advanced data analysis capabilities through integrated, multimedia database and analysis applications. To minimize cost and development time, the system design utilized industry standard hardware components and graphical software development tools. The system was configured to use a Pentium PC complemented with a variety of hardware interfaces to external data sources. These sources included physiologic monitors with data in digital, analog, video, and audio as well as paper-based formats. The development process was guided by trials in over 80 clinical cases and by the critiques from numerous users. As a result of this process, a suite of custom software applications was created to meet the design goals. The Perioperative Data Acquisition application manages data collection from a variety of physiological monitors. The Charter application provides for rapid creation of an electronic medical record from the patient's paper-based chart and investigator's notes. The Multimedia Medical Database application provides a relational database for the organization and management of multimedia data. The Triscreen application provides an integrated data analysis environment with simultaneous, full-motion data display. With recent technological advances in PC power, data acquisition hardware, and software development tools, the clinical researcher now has the ability to collect and examine a more complete perioperative record. It is hoped that the description of the MPR and its development process will assist and encourage others to advance these tools for perioperative research.

  4. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

    How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia database and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) Concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) Versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content- based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe visual content of videos by content-based analysis method. (3) Query profiling database which records the `histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of users. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.

  5. Introducing a New Interface for the Online MagIC Database by Integrating Data Uploading, Searching, and Visualization

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.

    2013-12-01

    The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily-accessible rich datasets. Previously uploading of data to the MagIC database required the use of an Excel spreadsheet using either a Mac or PC. The new method of uploading data utilizes an HTML 5 web interface where the only computer requirement is a modern browser. This web interface will highlight all errors discovered in the dataset at once instead of the iterative error checking process found in the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system where the data from each contribution is displayed in tables similar to how the data is uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired. The saved search URL could then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, ARAI, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.

  6. Is there a benefit to adjuvant radiation in stage III penile cancer after lymph node dissection? Findings from the National Cancer Database.

    PubMed

    Winters, Brian R; Kearns, James T; Holt, Sarah K; Mossanen, Matthew; Lin, Daniel W; Wright, Jonathan L

    2018-03-01

    The role of adjuvant radiation in advanced penile cancer (PC) is unknown. We used the National Cancer Database (NCDB) to determine factors associated with receiving adjuvant radiation (aXRT) and their influence on prognosis in men who underwent inguinal lymph node dissection (ILND) for stage III disease. We queried the NCDB from 1998-2012 for all men with PC who had pathologic nodal status and aXRT data available. Clinical and pathologic variables associated with aXRT were examined using chi-square testing. Logistic regression evaluated the odds of receiving aXRT while multivariate Cox regression analysis evaluated the influence of aXRT on overall survival (OS). A total of 589 patients underwent ILND for stage III PC with 23% (N = 136) receiving aXRT. Mean age was 61.8 ±13.7 years. Factors associated with receiving aXRT included higher pathologic nodal stage (MV OR 1.85, 95% CI: 1.13-3.05), while greater distance of travel (MV OR 0.48, 95% CI: 0.25-0.92), and treatment in an academic setting (MV OR 0.53, 95% CI: 0.35-0.81) were inversely associated with receiving aXRT. On Cox regression analysis, aXRT improved OS (combined HR 0.58, 95% CI: 0.39-0.86), which appeared to have been driven by higher nodal burden (N2: HR 0.53, 95% CI: 0.32-0.88; N1: HR 1.36, 95% CI: 0.60-3.09). Determinants of aXRT delivery in stage III PC appear to be related to the proximity to community cancer centers and greater nodal burden. We find evidence of a survival benefit with the use of aXRT, particularly in those with higher nodal stage. Multi-institutional studies are needed to confirm these findings and improve treatment algorithms for high-stage PC. Copyright © 2018 Elsevier Inc. All rights reserved.
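
    The hazard ratios above come from multivariate Cox models; a minimal sketch of that kind of fit on synthetic data follows, assuming the pandas, numpy, and lifelines packages, with invented column names.

    ```python
    # Sketch: a multivariate Cox proportional-hazards fit on a synthetic cohort,
    # illustrating the type of model used to estimate the effect of adjuvant
    # radiation on overall survival (not the actual NCDB analysis).
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "months":   rng.exponential(24, n),    # follow-up time
        "death":    rng.integers(0, 2, n),     # 1 = died, 0 = censored
        "adj_xrt":  rng.integers(0, 2, n),     # adjuvant radiation yes/no
        "n2_stage": rng.integers(0, 2, n),     # higher nodal burden
        "age":      rng.normal(62, 13, n),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="death")
    print(cph.summary)                          # hazard ratios and 95% CIs
    ```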

  7. National Software Reference Library (NSRL)

    National Institute of Standards and Technology Data Gateway

    National Software Reference Library (NSRL) (PC database for purchase)   A collaboration of the National Institute of Standards and Technology (NIST), the National Institute of Justice (NIJ), the Federal Bureau of Investigation (FBI), the Defense Computer Forensics Laboratory (DCFL), the U.S. Customs Service, software vendors, and state and local law enforcement organizations, the NSRL is a tool to assist in fighting crime involving computers.

  8. Mass Spectral Library with Search Program, Data Version: NIST v17

    National Institute of Standards and Technology Data Gateway

    SRD 1A NIST/EPA/NIH Mass Spectral Library with Search Program, Data Version: NIST v17 (PC database for purchase)   Available with full-featured NIST MS Search Program for Windows integrated tools, the NIST '98 is a fully evaluated collection of electron-ionization mass spectra. (147,198 Compounds with Spectra; 147,194 Chemical Structures; 174,948 Spectra )

  9. LRRTM4-C538Y novel gene mutation is associated with hereditary macular degeneration with novel dysfunction of ON-type bipolar cells.

    PubMed

    Kawamura, Yuichi; Suga, Akiko; Fujimaki, Takuro; Yoshitake, Kazutoshi; Tsunoda, Kazushige; Murakami, Akira; Iwata, Takeshi

    2018-05-14

    The macula is a unique structure in higher primates, where cone and rod photoreceptors show highest density in the fovea and the surrounding area, respectively. The hereditary macular dystrophies represent a heterogeneous group of rare disorders characterized by central visual loss and atrophy of the macula and surrounding retina. Here we report an atypical absence of ON-type bipolar cell response in a Japanese patient with autosomal dominant macular dystrophy (adMD). To identify a causal genetic mutation for the adMD, we performed whole-exome sequencing (WES) on four affected and four non-affected members of the family across three generations, and identified a novel p.C538Y mutation in a post-synaptic gene, LRRTM4. WES analysis revealed seven rare genetic variations in patients. We further referred to our in-house WES data from 1360 families with inherited retinal diseases, and found that only the p.C538Y mutation in LRRTM4 was associated with adMD-affected patients. Combinatorial filtration using a public database of single-nucleotide polymorphism frequencies and a genotype-phenotype annotated database identified this novel mutation in atypical adMD.

  10. Association between Periodontal disease and Prostate cancer: Results of a 12-year Longitudinal Cohort Study in South Korea

    PubMed Central

    Lee, Jae-Hong; Kweon, Helen Hye-In; Choi, Jung-Kyu; Kim, Young-Taek; Choi, Seong-Ho

    2017-01-01

    The incidence of prostate cancer (PC) accompanying periodontal disease (PD) is anticipated to increase due to population aging. The aim of this study was to determine the association between PD and PC using data in the National Health Insurance Service-Health Examinee Cohort (NHIS-HEC). A random stratified sample of 187,934 South Koreans was collected from the NHIS database from 2002 to 2013. We assessed the relationship between PD and PC while adjusting for potential confounding factors (sex, age, household income, insurance status, residence area, hypertension, diabetes mellitus, cerebral infarction, angina pectoris, myocardial infarction, smoking status, alcohol intake, and regular exercise). The overall incidence of PC with PD among those aged 40 years and older was 0.28% (n = 531). In the multivariate Cox proportional-hazard regression analysis with adjustment for confounding factors, PD was associated with a 14% higher risk of PC (HR = 1.14, 95% CI = 1.01-1.31, P = 0.042). The findings of this study suggest that PD is significantly and positively associated with PC. Further studies are required to identify the mechanisms underlying the links between PD and PC. PMID:28928887

  11. Speeding-up Bioinformatics Algorithms with Heterogeneous Architectures: Highly Heterogeneous Smith-Waterman (HHeterSW).

    PubMed

    Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel

    2016-10-01

    The Smith-Waterman algorithm has a great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementations of the Smith-Waterman algorithm on different hardware architectures, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and up to a 2.58-fold performance gain compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
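
    A minimal sketch of the splitting idea: divide the sequence database into chunks, score each chunk in a separate worker (standing in for GPU/CPU/coprocessor back-ends), and merge the results; the toy Smith-Waterman scorer and the example database are illustrative, not the HHeterSW implementation.

    ```python
    # Sketch: split a sequence database into chunks, score each chunk in parallel,
    # then merge and rank the hits.
    from concurrent.futures import ProcessPoolExecutor

    def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        h = [[0] * cols for _ in range(rows)]
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = h[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                h[i][j] = max(0, diag, h[i-1][j] + gap, h[i][j-1] + gap)
                best = max(best, h[i][j])
        return best

    def search_chunk(args):
        query, chunk = args
        return [(name, smith_waterman(query, seq)) for name, seq in chunk]

    def parallel_search(query, database, n_workers=4):
        chunks = [database[i::n_workers] for i in range(n_workers)]
        with ProcessPoolExecutor(max_workers=n_workers) as pool:
            results = pool.map(search_chunk, [(query, c) for c in chunks])
        return sorted((hit for part in results for hit in part),
                      key=lambda x: x[1], reverse=True)

    if __name__ == "__main__":
        db = [("seq%d" % i, "MKTAYIAKQR"[i % 5:] * 3) for i in range(40)]
        print(parallel_search("MKTAYIAKQR", db)[:5])
    ```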

  12. Field Measurement and Model Evaluation Program for Assessment of the Environmental Effects of Military Smokes: The Atterbury-87 Field Study of Smoke Dispersion Model

    DTIC Science & Technology

    1989-02-01

    ... satisfies these criteria and is a major reason for the enduring popularity of the Prairie Grass database. Taller or slightly less homogeneous vegetation only ... by California Measurements, Inc. (Sierra Madre, CA). The cascade impactor of the PC-2 comprises ten aerodynamic inertial impactors arranged in ... (Only fragments of the abstract are available in this record.)

  13. QA procedures and emissions from nonstandard sources in AQUIS, a PC-based emission inventory and air permit manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Tschanz, J.; Monarch, M.

    1996-05-01

    The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.

  14. Fast and Sensitive Alignment of Microbial Whole Genome Sequencing Reads to Large Sequence Datasets on a Desktop PC: Application to Metagenomic Datasets and Pathogen Identification

    PubMed Central

    2014-01-01

    Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance between speed and sensitivity and as a result, species or strain level identification is often inaccurate and low abundance pathogens can sometimes be missed. We have developed Taxoner, an open source, taxon assignment pipeline that includes a fast aligner (e.g. Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than BLAST, but requires two orders of magnitude less running time meaning that it can be run on desktop or laptop computers. Taxoner is slower than the approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner. PMID:25077800

  15. Fast and sensitive alignment of microbial whole genome sequencing reads to large sequence datasets on a desktop PC: application to metagenomic datasets and pathogen identification.

    PubMed

    Pongor, Lőrinc S; Vera, Roberto; Ligeti, Balázs

    2014-01-01

    Next generation sequencing (NGS) of metagenomic samples is becoming a standard approach to detect individual species or pathogenic strains of microorganisms. Computer programs used in the NGS community have to balance between speed and sensitivity and as a result, species or strain level identification is often inaccurate and low abundance pathogens can sometimes be missed. We have developed Taxoner, an open source, taxon assignment pipeline that includes a fast aligner (e.g. Bowtie2) and a comprehensive DNA sequence database. We tested the program on simulated datasets as well as experimental data from Illumina, IonTorrent, and Roche 454 sequencing platforms. We found that Taxoner performs as well as, and often better than BLAST, but requires two orders of magnitude less running time meaning that it can be run on desktop or laptop computers. Taxoner is slower than the approaches that use small marker databases but is more sensitive due to the comprehensive reference database. In addition, it can be easily tuned to specific applications using small tailored databases. When applied to metagenomic datasets, Taxoner can provide a functional summary of the genes mapped and can provide strain level identification. Taxoner is written in C for Linux operating systems. The code and documentation are available for research applications at http://code.google.com/p/taxoner.
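
    A rough sketch of a Taxoner-style flow, assuming Bowtie2 is installed and that the indexed references carry a 'taxid|' prefix in their names; the index and file paths are invented, and the real pipeline differs in its database layout and post-processing.

    ```python
    # Sketch: align reads with Bowtie2, then tally hits per taxon from the
    # reference names in the SAM output (illustrative paths and naming only).
    import subprocess
    from collections import Counter

    def align_and_count(index="refdb", reads="reads.fastq"):
        proc = subprocess.run(
            ["bowtie2", "-x", index, "-U", reads, "--no-unal"],
            capture_output=True, text=True, check=True)
        counts = Counter()
        for line in proc.stdout.splitlines():
            if line.startswith("@"):          # skip SAM header lines
                continue
            ref = line.split("\t")[2]         # RNAME: reference of the best hit
            if ref != "*" and ref.startswith("taxid|"):
                counts[ref.split("|")[1]] += 1
        return counts.most_common(10)
    ```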

  16. Design of real-time communication system for image recognition based colony picking instrument

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Zhang, Rongfu; Yan, Hua; Wu, Huamin

    2017-11-01

    In order to achieve automated observation and picking of monoclonal colonies, an overall design and realization of a real-time communication system based on a high-throughput monoclonal auto-picking instrument is proposed. The real-time communication system is composed of a PC-PLC communication system and a Central Control Computer (CCC)-PLC communication system. A set of dedicated short-range communication protocols between the PC and PLC was developed based on the RS232 synchronous serial communication method. Furthermore, the system uses an SQL SERVER database to realize the data interaction between the PC and CCC. Moreover, the communication between the CCC and PC adopts Socket Ethernet communication based on the TCP/IP protocol; a TCP full-duplex data channel ensures real-time data exchange as well as improved system reliability and security. We tested the communication system using specially developed test software; the test results show that the system can realize communication between the PLC, PC and CCC in an efficient, safe and stable way, keeping real-time control of the PLC and colony information collection.
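
    A minimal sketch of the PC-to-CCC link described above: a TCP server receives one colony record over a full-duplex socket and acknowledges it; the host, port, and line-based message format are assumptions for illustration.

    ```python
    # Sketch: PC <-> CCC exchange over a TCP socket (illustrative message format).
    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5005

    def ccc_server():
        # Central Control Computer side: accept one connection, read one record, ACK it
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(1024).decode()
                print("CCC received:", data.strip())
                conn.sendall(b"ACK\n")

    threading.Thread(target=ccc_server, daemon=True).start()
    time.sleep(0.2)                                       # let the server start listening

    with socket.create_connection((HOST, PORT)) as pc:    # PC side
        pc.sendall(b"colony,plate=3,x=12.5,y=40.2,ok=1\n")
        print("PC got:", pc.recv(16).decode().strip())
    ```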

  17. Global Prostate Cancer Incidence and Mortality Rates According to the Human Development Index.

    PubMed

    Khazaei, Salman; Rezaeian, Shahab; Ayubi, Erfan; Gholamaliee, Behzad; Pishkuhi, Mahin Ahmadi; Khazaei, Somayeh; Mansori, Kamyar; Nematollahi, Shahrzad; Sani, Mohadeseh; Hanis, Shiva Mansouri

    2016-01-01

    Prostate cancer (PC) is one of the leading causes of death, especially in developed countries. The human development index (HDI) and its dimensions seem correlated with incidence and mortality rates of PC. This study aimed to assess the association of the specific components of HDI (life expectancy at birth, education, gross national income per 1000 capita, health, and living standards) with burden indicators of PC worldwide. Information on the incidence and mortality rates of PC was obtained from the GLOBOCAN cancer project in 2012, and data about the HDI 2013 were obtained from the World Bank database. The correlation between incidence, mortality rates, and the HDI parameters was assessed using STATA software. A significant inequality in PC incidence rates was observed, with a concentration index of 0.25 (95% CI: 0.22, 0.34), along with a negative mortality concentration index of -0.04 (95% CI: -0.09, 0.01). A positive significant correlation was detected between the incidence rates of PC and the HDI and its dimensions including life expectancy at birth, education, income, urbanization level and obesity. However, there was a negative significant correlation between the standardized mortality rates and the life expectancy, income and HDI.

  18. Comparison of U.S. Environmental Protection Agency’s CAP88 PC versions 3.0 and 4.0

    DOE PAGES

    Jannik, Tim; Farfan, Eduardo B.; Dixon, Ken; ...

    2015-08-01

    The Savannah River National Laboratory (SRNL), with the assistance of Georgia Regents University, completed a comparison of the U.S. Environmental Protection Agency's (EPA) environmental dosimetry code CAP88 PC V3.0 with the recently developed V4.0. CAP88 is a set of computer programs and databases used for estimation of dose and risk from radionuclide emissions to air. At the U.S. Department of Energy's Savannah River Site, CAP88 is used by SRNL for determining compliance with EPA's National Emission Standards for Hazardous Air Pollutants (40 CFR 61, Subpart H) regulations. Using standardized input parameters, individual runs were conducted for each radionuclide within its corresponding database. Some radioactive decay constants, human usage parameters, and dose coefficients changed between the two versions, directly causing a proportional change in the total effective dose; a comparison for selected radionuclides (137Cs, 3H, 129I, 239Pu, and 90Sr) is provided. In general, the total effective doses will decrease for alpha/beta emitters because of reduced inhalation and ingestion rates in V4.0. However, for gamma emitters, such as 60Co and 137Cs, the total effective doses will increase because of changes EPA made in the external ground shine calculations.

  19. An integrated tool for the diagnosis of voice disorders.

    PubMed

    Godino-Llorente, Juan I; Sáenz-Lechón, Nicolás; Osma-Ruiz, Víctor; Aguilera-Navarro, Santiago; Gómez-Vilda, Pedro

    2006-04-01

    A PC-based integrated aid tool has been developed for the analysis and screening of pathological voices. With it the user can simultaneously record speech, electroglottographic (EGG), and videoendoscopic signals, and synchronously edit them to select the most significant segments. These multimedia data are stored on a relational database, together with a patient's personal information, anamnesis, diagnosis, visits, explorations and any other comment the specialist may wish to include. The speech and EGG waveforms are analysed by means of temporal representations and the quantitative measurements of parameters such as spectrograms, frequency and amplitude perturbation measurements, harmonic energy, noise, etc. are calculated using digital signal processing techniques, giving an idea of the degree of hoarseness and quality of the voice register. Within this framework, the system uses a standard protocol to evaluate and build complete databases of voice disorders. The target users of this system are speech and language therapists and ear nose and throat (ENT) clinicians. The application can be easily configured to cover the needs of both groups of professionals. The software has a user-friendly Windows style interface. The PC should be equipped with standard sound and video capture cards. Signals are captured using common transducers: a microphone, an electroglottograph and a fiberscope or telelaryngoscope. The clinical usefulness of the system is addressed in a comprehensive evaluation section.
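
    As a small illustration of two of the measurements mentioned (a spectrogram and a cycle-to-cycle jitter estimate), the sketch below analyzes a synthetic vowel; the signal, sampling rate, and peak-picking threshold are assumptions, and this is not the clinical tool's code.

    ```python
    # Sketch: spectrogram plus a crude jitter (period perturbation) estimate.
    import numpy as np
    from scipy.signal import spectrogram, find_peaks

    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    f0 = 120 + 2 * np.sin(2 * np.pi * 3 * t)                # slightly unsteady pitch
    voice = np.sin(2 * np.pi * np.cumsum(f0) / fs) + 0.01 * np.random.randn(t.size)

    f, seg_t, sxx = spectrogram(voice, fs=fs, nperseg=512)   # time-frequency view
    print("spectrogram bins:", sxx.shape)

    peaks, _ = find_peaks(voice, height=0.5, distance=fs // 300)
    periods = np.diff(peaks) / fs                            # cycle periods (s)
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100
    print(f"approximate jitter: {jitter:.2f} %")
    ```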

  20. Using SIR (Scientific Information Retrieval System) for data management during a field program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tichler, J.L.

    As part of the US Department of Energy's program, PRocessing of Emissions by Clouds and Precipitation (PRECP), a team of scientists from four laboratories conducted a study in north central New York State, to characterize the chemical and physical processes occurring in winter storms. Sampling took place from three aircraft, two instrumented motor homes and a network of 26 surface precipitation sampling sites. Data management personnel were part of the field program, using a portable IBM PC-AT computer to enter information as it became available during the field study. Having the same database software on the field computer and on the cluster of VAX 11/785 computers in use aided database development and the transfer of data between machines. 2 refs., 3 figs., 5 tabs.

  1. Substrate analysis of the Pneumocystis carinii protein kinases PcCbk1 and PcSte20 using yeast proteome microarrays provides a novel method for Pneumocystis signalling biology.

    PubMed

    Kottom, Theodore J; Limper, Andrew H

    2011-10-01

    Pneumocystis carinii (Pc) undergoes morphological transitions between cysts and trophic forms. We have previously described two Pc serine/threonine kinases, termed PcCbk1 and PcSte20, with PcSte20 belonging to a family of kinases involved in yeast mating, while PcCbk1 is a member of a group of protein kinases involved in regulation of cell cycle, shape, and proliferation. As Pc remains genetically intractable, knowledge on specific substrates phosphorylated by these kinases remains limited. Utilizing the phylogenetic relatedness of Pc to Saccharomyces cerevisiae, we interrogated a yeast proteome microarray containing >4000 purified protein based peptides, leading to the identification of 18 potential PcCbk1 and 15 PcSte20 substrates (Z-score > 3.0). A number of these potential protein substrates are involved in bud site selection, polarized growth, and response to mating α factor and pseudohyphal and invasive growth. Full-length open reading frames suggested by the PcCbk1 and PcSte20 protoarrays were amplified and expressed. These five proteins were used as substrates for PcCbk1 or PcSte20, with each being highly phosphorylated by the respective kinase. Finally, to demonstrate the utility of this method to identify novel PcCbk1 and PcSte20 substrates, we analysed DNA sequence data from the partially complete Pc genome database and detected partial sequence information of potential PcCbk1 kinase substrates PcPxl1 and PcInt1. We additionally identified the potential PcSte20 kinase substrate PcBdf2. Full-length Pc substrates were cloned and expressed in yeast, and shown to be phosphorylated by the respective Pc kinases. In conclusion, the yeast protein microarray represents a novel crossover technique for identifying unique potential Pc kinase substrates. Copyright © 2011 John Wiley & Sons, Ltd.
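
    A minimal sketch of the Z-score screen used to flag candidate substrates (signal more than 3 standard deviations above a background distribution); the protein names and intensities are invented.

    ```python
    # Sketch: flag array spots whose signal exceeds Z > 3.0 relative to background.
    import statistics

    background = [980, 1010, 1050, 990, 1005, 1020, 995, 1000, 1040, 1015]  # control spots
    mu = statistics.mean(background)
    sd = statistics.stdev(background)

    signals = {"YDR001C": 1030, "YGR040W": 1190, "YML064C": 1085, "YPL115C": 1210}
    z = {name: (s - mu) / sd for name, s in signals.items()}
    candidates = {n: round(v, 1) for n, v in z.items() if v > 3.0}
    print(candidates)   # proteins exceeding the Z > 3.0 threshold
    ```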

  2. The reliability paradox of the Parent-Child Conflict Tactics Corporal Punishment Subscale.

    PubMed

    Lorber, Michael F; Slep, Amy M Smith

    2018-02-01

    In the present investigation we consider and explain an apparent paradox in the measurement of corporal punishment with the Parent-Child Conflict Tactics Scale (CTS-PC): How can it have poor internal consistency and still be reliable? The CTS-PC was administered to a community sample of 453 opposite sex couples who were parents of 3- to 7-year-old children. Internal consistency was marginal, yet item response theory analyses revealed that reliability rose sharply with increasing corporal punishment, exceeding .80 in the upper ranges of the construct. The results suggest that the CTS-PC Corporal Punishment subscale reliably discriminates among parents who report average to high corporal punishment (64% of mothers and 56% of fathers in the present sample), despite low overall internal consistency. These results have straightforward implications for the use and reporting of the scale. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Possible costs associated with investigating and mitigating geologic hazards in rural areas of western San Mateo County, California with a section on using the USGS website to determine the cost of developing property for residences in rural parts of San Mateo County, California

    USGS Publications Warehouse

    Brabb, Earl E.; Roberts, Sebastian; Cotton, William R.; Kropp, Alan L.; Wright, Robert H.; Zinn, Erik N.; Digital database by Roberts, Sebastian; Mills, Suzanne K.; Barnes, Jason B.; Marsolek, Joanna E.

    2000-01-01

    This publication consists of a digital map database on a geohazards web site, http://kaibab.wr.usgs.gov/geohazweb/intro.htm, this text, and 43 digital map images available for downloading at this site. The report is stored as several digital files, in ARC export (uncompressed) format for the database, and Postscript and PDF formats for the map images. Several of the source data layers for the images have already been released in other publications by the USGS and are available for downloading on the Internet. These source layers are not included in this digital database, but rather a reference is given for the web site where the data can be found in digital format. The exported ARC coverages and grids lie in UTM zone 10 projection. The pamphlet, which only describes the content and character of the digital map database, is included as Postscript, PDF, and ASCII text files and is also available on paper as USGS Open-File Report 00-127. The full versatility of the spatial database is realized by importing the ARC export files into ARC/INFO or an equivalent GIS. Other GIS packages, including MapInfo and ARCVIEW, can also use the ARC export files. The Postscript map image can be used for viewing or plotting in computer systems with sufficient capacity, and the considerably smaller PDF image files can be viewed or plotted in full or in part from Adobe ACROBAT software running on Macintosh, PC, or UNIX platforms.

  4. GenomeVista

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poliakov, Alexander; Couronne, Olivier

    2002-11-04

    Aligning large vertebrate genomes that are structurally complex poses a variety of problems not encountered on smaller scales. Such genomes are rich in repetitive elements and contain multiple segmental duplications, which increases the difficulty of identifying true orthologous DNA segments in alignments. The sizes of the sequences make many alignment algorithms designed for comparing single proteins extremely inefficient when processing large genomic intervals. We integrated both local and global alignment tools and developed a suite of programs for automatically aligning large vertebrate genomes and identifying conserved non-coding regions in the alignments. Our method uses the BLAT local alignment program to find anchors on the base genome to identify regions of possible homology for a query sequence. These regions are postprocessed to find the best candidates, which are then globally aligned using the AVID global alignment program. In the last step conserved non-coding segments are identified using VISTA. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. The GenomeVISTA software is a suite of Perl programs that is built on a MySQL database platform. The scheduler gets control data from the database, builds a queue of jobs, and dispatches them to a PC cluster for execution. The main program, running on each node of the cluster, processes individual sequences. A Perl library acts as an interface between the database and the above programs. The use of a separate library allows the programs to function independently of the database schema. The library also improves on the standard Perl MySQL database interface package by providing auto-reconnect functionality and improved error handling.
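
    A rough sketch of the scheduler idea, pulling pending jobs from a database table and dispatching them to workers; the real system used MySQL, Perl, and a PC cluster, whereas this self-contained stand-in uses sqlite3 and a thread pool, with the alignment steps reduced to a placeholder.

    ```python
    # Sketch: read pending jobs from a database, dispatch them to workers, and
    # mark them done (illustrative stand-in for the MySQL/cluster setup).
    import sqlite3
    from concurrent.futures import ThreadPoolExecutor

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT, status TEXT)")
    db.executemany("INSERT INTO jobs (name, status) VALUES (?, 'pending')",
                   [("query_%03d.fa" % i,) for i in range(6)])

    def align(job):
        job_id, name = job
        # placeholder for the BLAT anchoring -> AVID global alignment -> VISTA steps
        return job_id, f"{name}: aligned"

    pending = db.execute("SELECT id, name FROM jobs WHERE status='pending'").fetchall()
    with ThreadPoolExecutor(max_workers=3) as pool:
        for job_id, message in pool.map(align, pending):
            db.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))
            print(message)
    db.commit()
    ```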

  5. Managing Rock and Paleomagnetic Data Flow with the MagIC Database: from Measurement and Analysis to Comprehensive Archive and Visualization

    NASA Astrophysics Data System (ADS)

    Koppers, A. A.; Minnett, R. C.; Tauxe, L.; Constable, C.; Donadini, F.

    2008-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by rock and paleomagnetic data. The goal of MagIC is to archive all measurements and derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Organizing data for presentation in peer-reviewed publications or for ingestion into databases is a time-consuming task, and to facilitate these activities, three tightly integrated tools have been developed: MagIC-PY, the MagIC Console Software, and the MagIC Online Database. A suite of Python scripts is available to help users port their data into the MagIC data format. They allow the user to add important metadata, perform basic interpretations, and average results at the specimen, sample and site levels. These scripts have been validated for use as Open Source software under the UNIX, Linux, PC and Macintosh© operating systems. We have also developed the MagIC Console Software program to assist in collating rock and paleomagnetic data for upload to the MagIC database. The program runs in Microsoft Excel© on both Macintosh© computers and PCs. It performs routine consistency checks on data entries, and assists users in preparing data for uploading into the online MagIC database. The MagIC website is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual FlashMap interface to browse and select locations. Users can also browse the database by data type (inclination, intensity, VGP, hysteresis, susceptibility) or by data compilation to view all contributions associated with previous databases, such as PINT, GMPDB or TAFI or other user-defined compilations. Query results are displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, when supported by the data, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams.
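
    As an illustration of one routine such helper scripts typically provide, the sketch below averages specimen directions to a site mean with a Fisher mean; the input directions are invented and this is not the MagIC-PY code itself.

    ```python
    # Sketch: Fisher mean of paleomagnetic directions (declination/inclination).
    import numpy as np

    def fisher_mean(decs, incs):
        d, i = np.radians(decs), np.radians(incs)
        x = np.cos(i) * np.cos(d)                   # north component
        y = np.cos(i) * np.sin(d)                   # east component
        z = np.sin(i)                               # down component
        xs, ys, zs = x.sum(), y.sum(), z.sum()
        r = np.sqrt(xs**2 + ys**2 + zs**2)          # resultant vector length
        dec = np.degrees(np.arctan2(ys, xs)) % 360
        inc = np.degrees(np.arcsin(zs / r))
        k = (len(decs) - 1) / (len(decs) - r)       # Fisher precision parameter
        return dec, inc, k

    decs = np.array([352.1, 355.4, 350.2, 358.0])
    incs = np.array([48.5, 51.0, 47.2, 49.9])
    print(fisher_mean(decs, incs))
    ```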

  6. The RECONS 25 Parsec Database: Who Are the Stars? Where Are the Planets?

    NASA Astrophysics Data System (ADS)

    Henry, Todd J.; Dieterich, S.; Hosey, A. D.; Ianna, P. A.; Jao, W.; Koerner, D. W.; Riedel, A. R.; Slatten, K. J.; Subasavage, J.; Winters, J. G.; RECONS

    2013-01-01

    Since 1994, RECONS (www.recons.org, REsearch Consortium On Nearby Stars) has been discovering and characterizing the Sun's neighbors. Nearby stars provide increased fluxes, larger astrometric perturbations, and higher probabilities for eventual resolution and detailed study of planets than similar stars at larger distances. Examination of the nearby stellar sample will reveal the prevalence and structure of solar systems, as well as the balance of Jovian and terrestrial worlds. These are the stars and planets that will ultimately be key in our search for life elsewhere. Here we outline what we know ... and what we don't know ... about the population of the nearest stars. We have expanded the original RECONS 10 pc horizon to 25 pc and are constructing a database that currently includes 2124 systems. By using the CTIO 0.9m telescope --- now operated by RECONS as part of the SMARTS Consortium --- we have published the first accurate parallaxes for 149 systems within 25 pc and currently have an additional 213 unpublished systems to add. Still, we predict that roughly two-thirds of the systems within 25 pc do not yet have accurate distance measurements. In addition to revealing the Sun's stellar neighbors, we have been using astrometric techniques to search for massive planets orbiting roughly 200 of the nearest red dwarfs. Unlike radial velocity searches, our astrometric effort is most sensitive to Jovian planets in Jovian orbits, i.e. those that span decades. We have now been monitoring stars for up to 13 years with positional accuracies of a few milliarcseconds per night. We have detected stellar and brown dwarf companions, as well as enigmatic, unseen secondaries, but have yet to reveal a single super-Jupiter ... a somewhat surprising result. In total, only 3% of stars within 25 pc are known to possess planets. It seems clear that we have a great deal of work to do to map out the stars, planets, and perhaps life in the solar neighborhood. This effort is supported by the NSF through grant AST-0908402 and via observations made possible by the SMARTS Consortium.
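
    For readers unfamiliar with the unit, the 25 pc horizon is set directly by the measured trigonometric parallax: distance in parsecs is the reciprocal of the parallax in arcseconds. A one-line worked example:

```python
def distance_pc(parallax_mas):
    """Distance in parsecs from a trigonometric parallax given in milliarcseconds."""
    return 1000.0 / parallax_mas

# The 25 pc horizon corresponds to parallaxes of at least 40 mas:
assert distance_pc(40.0) == 25.0
```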

  7. Automatic image database generation from CAD for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.

    1993-06-01

    The development and evaluation of Multiple-View 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.
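
    The abstract does not give the view-generation algorithm; one common way to place roughly uniform viewpoints on the unit Gaussian sphere is a Fibonacci (golden-angle) lattice, sketched below in Python purely as an illustration.

```python
import numpy as np

def viewpoints(n=80):
    """n roughly evenly spaced camera directions on the unit sphere."""
    k = np.arange(n)
    polar = np.arccos(1.0 - 2.0 * (k + 0.5) / n)     # polar angles
    azim = np.pi * (1.0 + 5.0 ** 0.5) * k            # golden-angle azimuths
    return np.column_stack((np.sin(polar) * np.cos(azim),
                            np.sin(polar) * np.sin(azim),
                            np.cos(polar)))
```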

  8. Serum Immunoglobulin G4 in Discriminating Autoimmune Pancreatitis From Pancreatic Cancer: A Diagnostic Meta-analysis.

    PubMed

    Dai, Cong; Cao, Qin; Jiang, Min; Sun, Ming-Jun

    2018-03-01

    Differentiation between autoimmune pancreatitis (AIP) and pancreatic cancer (PC) is a clinical challenge. Emerging published data on the accuracy of serum immunoglobulin G4 (IgG4) for the differential diagnosis between AIP and PC are inconsistent. The objective of our study was to perform a meta-analysis evaluating the clinical utility of serum IgG4 in the differential diagnosis between AIP and PC. We performed a systematic literature search of multiple electronic databases. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. Random-effects model was used to summarize the diagnostic odds ratio and other measures of accuracy. Eleven studies comprising 523 AIP patients and 771 PC patients were included in the meta-analysis. The summary estimates for serum IgG4 in distinguishing AIP from PC were as follows: diagnostic odds ratio, 57.30 (95% confidence interval [CI], 23.17-141.67); sensitivity, 0.72 (95% CI, 0.68-0.76); specificity, 0.93 (95% CI, 0.91-0.94). The area under the curve of serum IgG4 in distinguishing AIP from PC was 0.9200. Our meta-analysis found that serum IgG4 has high specificity and relatively low sensitivity in the differential diagnosis between AIP and PC. Therefore, serum IgG4 is useful in distinguishing AIP from PC.
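
    For context, the diagnostic odds ratio of a single study is computed from its 2x2 table, or equivalently from its sensitivity and specificity, as in the illustrative snippet below. The pooled DOR of 57.30 is combined across study-level DORs under the random-effects model, so it need not equal the value implied by the pooled sensitivity and specificity.

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR from one study's 2x2 table: odds of a positive IgG4 test in AIP vs. PC."""
    return (tp * tn) / (fp * fn)

def dor_from_sens_spec(sens, spec):
    """Equivalent expression using that study's sensitivity and specificity."""
    return (sens * spec) / ((1.0 - sens) * (1.0 - spec))
```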

  9. MAPPER: A personal computer map projection tool

    NASA Technical Reports Server (NTRS)

    Bailey, Steven A.

    1993-01-01

    MAPPER is a set of software tools designed to let users create and manipulate map projections on a personal computer (PC). The capability exists to generate five popular map projections. These include azimuthal, cylindrical, Mercator, Lambert, and sinusoidal projections. Data for projections are contained in five coordinate databases at various resolutions. MAPPER is managed by a system of pull-down windows. This interface allows the user to intuitively create, view and export maps to other platforms.
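
    As an illustration of the kind of transform MAPPER implements (not its actual source code), the forward equations of two of the listed projections on a sphere of radius R are:

```python
import math

def mercator(lon_deg, lat_deg, lon0_deg=0.0, R=1.0):
    """Forward spherical Mercator projection (latitudes near the poles excluded)."""
    lam, phi = math.radians(lon_deg - lon0_deg), math.radians(lat_deg)
    return R * lam, R * math.log(math.tan(math.pi / 4 + phi / 2))

def sinusoidal(lon_deg, lat_deg, lon0_deg=0.0, R=1.0):
    """Forward sinusoidal (equal-area) projection."""
    lam, phi = math.radians(lon_deg - lon0_deg), math.radians(lat_deg)
    return R * lam * math.cos(phi), R * phi
```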

  10. PCACE-Personal-Computer-Aided Cabling Engineering

    NASA Technical Reports Server (NTRS)

    Billitti, Joseph W.

    1987-01-01

    PCACE computer program developed to provide inexpensive, interactive system for learning and using engineering approach to interconnection systems. Basically database system that stores information as files of individual connectors and handles wiring information in circuit groups stored as records. Directly emulates typical manual engineering methods of handling data, thus making interface between user and program very natural. Apple version written in P-Code Pascal and IBM PC version of PCACE written in TURBO Pascal 3.0.
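
    The record layout is not given in the abstract; a minimal modern sketch of the kind of data PCACE stores (connector files whose wiring is grouped into circuits) might look like the following, with all field names hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Wire:
    circuit_group: str      # circuit group the wire record belongs to
    from_pin: str
    to_connector: str
    to_pin: str
    gauge: int = 22

@dataclass
class Connector:
    ref_designator: str     # e.g. a hypothetical "J101"
    part_number: str
    wires: list = field(default_factory=list)   # list of Wire records
```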

  11. Prostate cancer-related anxiety in long-term survivors after radical prostatectomy.

    PubMed

    Meissner, Valentin H; Herkommer, Kathleen; Marten-Mittag, Birgitt; Gschwend, Jürgen E; Dinkel, Andreas

    2017-12-01

    Knowledge of the psychological distress of long- and very long-term (>10 years) prostate cancer (PC) survivors is limited. This study intended to examine the parameters influencing anxiety related to prostate-specific antigen (PSA) and PC in long-term survivors after radical prostatectomy. We surveyed 4719 PC survivors from the German multicenter prospective database "Familial Prostate Cancer." We evaluated the association of PC-related anxiety (MAX-PC) with sociodemographic characteristics, family history of PC, global health status/quality of life (EORTC QLQ-C30), depression and anxiety (PHQ-2; GAD-2), latest PSA level, time since radical prostatectomy, and current therapy. The survey participants' mean age was 75.2 years (SD = 6.5). Median follow-up was 11.5 years, and 19.5% of participants had survived more than 15 years since the initial treatment. The final regression analysis found that younger age, lower global health status/quality of life, higher depression and anxiety scores, higher latest PSA level, and shorter time since radical prostatectomy predicted increased PSA-related anxiety and PC anxiety. Familial PC was predictive only of PSA anxiety (all p < 0.05). The final model explained 12% of the variance for PSA anxiety and 24% for PC anxiety. PC-related anxiety remained relevant many years after prostatectomy and was influenced by younger age, psychological status, rising PSA level, and shorter time since initial treatment. Survivors with these characteristics are at increased risk of PC-related anxieties, which should be considered by the treating physician during follow-up.

  12. Discovery of small molecules binding to the normal conformation of prion by combining virtual screening and multiple biological activity evaluation methods

    NASA Astrophysics Data System (ADS)

    Li, Lanlan; Wei, Wei; Jia, Wen-Juan; Zhu, Yongchang; Zhang, Yan; Chen, Jiang-Huai; Tian, Jiaqi; Liu, Huanxiang; He, Yong-Xing; Yao, Xiaojun

    2017-12-01

    Conformational conversion of the normal cellular prion protein, PrPC, into the misfolded isoform, PrPSc, is considered to be a central event in the development of fatal neurodegenerative diseases. Stabilization of the prion protein in its normal cellular form (PrPC) with small molecules is a rational and efficient strategy for the treatment of prion-related diseases. However, few compounds have been identified as potent prion inhibitors that bind to the normal conformation of prion. In this work, to rationally screen for inhibitors capable of stabilizing the cellular form of the prion protein, multiple approaches combining docking-based virtual screening, steady-state fluorescence quenching, surface plasmon resonance and a thioflavin T fluorescence assay were used to discover new compounds that interrupt the PrPC to PrPSc conversion. Compound 3253-0207, which can bind to PrPC with micromolar affinity and inhibit prion fibrillation, was identified from small-molecule databases. Molecular dynamics simulation indicated that compound 3253-0207 can bind to the hotspot residues in the binding pocket composed of β1, β2 and α2, which are significant structural moieties in the conversion from PrPC to PrPSc.

  13. Influence of Different Factors on Relative Air Humidity in Zaragoza, Spain

    NASA Astrophysics Data System (ADS)

    Cuadrat, José M.

    2015-03-01

    In this study, the spatial patterns of relative air humidity and their relation to urban, geographical and meteorological factors in the city of Zaragoza (Spain) are discussed. We created a relative humidity database by means of 32 urban transects. Data were taken on different days and under different weather types. This data set was used to map the mean spatial distribution of the urban dry island (UDI). Using stepwise multiple regression analysis and Landsat ETM+ images, the relationships between the mean UDI and the main geographic and urban factors (topography, land cover and surface reflectivity) were analyzed. Different spatial patterns of the UDI were determined using Principal Component Analysis (Varimax rotation). The three components extracted accounted for 91% of the total variance. PC1 accounted for the most general pattern (similar to the mean UDI); PC2 showed a shift of dry areas to the SE and PC3 a shift to the NW. Using data on wind direction in Zaragoza, we found that the displacement of dry areas to the SE (PC2) was greater during NW winds, while the shift to the NW (PC3) was produced mainly by SE winds.
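
    As a schematic of the analysis step (not the authors' code or data), a principal component decomposition of the transect humidity matrix and the variance explained by the first three components can be obtained as follows; the Varimax rotation applied in the study is omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 25))        # hypothetical: 32 transect surveys x 25 sites

pca = PCA(n_components=3).fit(X)
print(pca.explained_variance_ratio_)         # per-component fraction of variance
print(pca.explained_variance_ratio_.sum())   # total captured by PC1-PC3
```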

  14. Monitoring patron use of CD-ROM databases using SignIn-Stat.

    PubMed Central

    Silver, H; Dennis, S

    1990-01-01

    SignIn-Stat, a PC-based, menu-driven program, collects information from users of the library's public access computer systems. It was used to collect patron use data for the library's four CD-ROM workstations for the period September 1987 to April 1988 and to survey users for the period December 1987 to March 1988. During the sample period, 5,909 CD-ROM uses were recorded. MEDLINE was the most heavily used database, followed by PsycLIT and Micromedex CCIS. Students accounted for 61% of the use, while faculty, residents, and staff were responsible for 31%. Graduate students had the highest rate of use per student. Nineteen percent of use was by patrons who had never used CD-ROMs before, while 37% was by patrons who had used CD-ROMs ten or more times. Residents were the least experienced user group, while graduate students and faculty were the most experienced. PMID:2203498

  15. A meta-analysis of patient outcomes with subcentimeter disease after chemotherapy for metastatic non-seminomatous germ cell tumor

    PubMed Central

    Ravi, P.; Gray, K. P.; O'Donnell, E. K.; Sweeney, C. J.

    2014-01-01

    Background Approximately a quarter of men with metastatic non-seminomatous germ cell tumor (NSGCT) have a residual mass, typically in the retroperitoneum, after chemotherapy. The management of small residual masses (≤1 cm) is controversial, with good outcomes seen with either post-chemotherapy retroperitoneal lymph node dissection (PC-RPLND) or surveillance. We sought to review our experience of surveillance and synthesize the cumulative findings with the current literature in the form of a meta-analysis. Patients and methods We searched PubMed, EMBASE and abstracts from ASCO and AUA to identify relevant, English-language studies for the meta-analysis. The DFCI (Dana Farber Cancer Institute) database was constructed from a database of men undergoing cisplatin-based chemotherapy for metastatic NSGCT. The outcomes of interest were the proportion with necrosis, teratoma or active cancer on histology at PC-RPLND (literature) and the total number of relapses, RP-only relapses and overall survival in men undergoing surveillance (literature and DFCI cohort). Results Three of 47 men undergoing post-chemotherapy surveillance at our institution relapsed over a median follow-up of 5.4 years. All three were alive at a median of 4.2 years after relapse. On meta-analysis, the pooled estimates of necrosis, teratoma and active cancer in the 588 men who underwent PC-RPLND were 71, 24 and 4%, respectively. Of the combined 455 men who underwent surveillance, the pooled estimate of the relapse rate was 5%, with an RP-only relapse rate of 3%. Of the 15 men who suffered an RP-only relapse on surveillance, two died of disease. Conclusion Surveillance is a reasonable strategy for men with minimal residual RP disease after chemotherapy and avoids an RPLND in ∼97% of men who are cured with chemotherapy alone. PMID:24276027

  16. Validation of Living Donor Nephrectomy Codes

    PubMed Central

    Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.

    2018-01-01

    Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
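
    The operating characteristics reported here follow the usual definitions against the chart-review reference standard; the snippet below is illustrative only, with hypothetical counts chosen to reproduce roughly 97% sensitivity and 90% PPV for the 1199 true donor nephrectomies.

```python
def sensitivity_ppv(tp, fp, fn):
    """tp: donors flagged by the code algorithm and confirmed by chart review;
    fp: flagged records that were not true donors; fn: true donors missed."""
    return tp / (tp + fn), tp / (tp + fp)

print(sensitivity_ppv(tp=1163, fp=129, fn=36))   # ~ (0.97, 0.90)
```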

  17. Targeted agents for patients with advanced/metastatic pancreatic cancer: A protocol for systematic review and network meta-analysis.

    PubMed

    Di, Baoshan; Pan, Bei; Ge, Long; Ma, Jichun; Wu, Yiting; Guo, Tiankang

    2018-03-01

    Pancreatic cancer (PC) is a devastating malignant tumor. Although surgical resection may offer a good prognosis and prolong survival, approximately 80% of patients with PC are diagnosed with unresectable tumors. The National Comprehensive Cancer Network (NCCN) recommends gemcitabine-based chemotherapy as an effective treatment, while, according to recent studies, targeted agents might be a better option for patients with advanced or metastatic pancreatic cancer. The aim of this systematic review and network meta-analysis will be to examine the differences between targeted interventions for advanced/metastatic PC patients. We will conduct this systematic review and network meta-analysis using Bayesian methods and according to the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) statement. To identify relevant studies, 6 electronic databases, including PubMed, EMBASE, the Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science, CNKI (Chinese National Knowledge Infrastructure), and CBM (Chinese Biological Medical Database), will be searched. The risk of bias in the included randomized controlled trials (RCTs) will be assessed using the Cochrane Handbook version 5.1.0, and the GRADE approach will be used to assess the quality of evidence from the network meta-analysis. Data will be analyzed using R 3.4.1 software. To the best of our knowledge, this systematic review and network meta-analysis will be the first to use both direct and indirect evidence to compare different targeted agents and targeted agents plus chemotherapy for advanced/metastatic pancreatic cancer patients. As this is a protocol for a systematic review and meta-analysis, ethical approval and patient consent are not required. We will disseminate the results of this review by submitting them to a peer-reviewed journal.

  18. High-Intensity Focused Ultrasound (HIFU) in Localized Prostate Cancer Treatment.

    PubMed

    Alkhorayef, Mohammed; Mahmoud, Mustafa Z; Alzimami, Khalid S; Sulieman, Abdelmoneim; Fagiri, Maram A

    2015-01-01

    High-intensity focused ultrasound (HIFU) applies high-intensity focused ultrasound energy to locally heat and destroy diseased or damaged tissue through ablation. This study intended to review HIFU to explain the fundamentals of HIFU, evaluate the evidence concerning the role of HIFU in the treatment of prostate cancer (PC), review the technologies used to perform HIFU and the published clinical literature regarding the procedure as a primary treatment for PC. Studies addressing HIFU in localized PC were identified in a search of internet scientific databases. The analysis of outcomes was limited to journal articles written in English and published between 2000 and 2013. HIFU is a non-invasive approach that uses precisely delivered ultrasound energy to achieve tumor cell necrosis without radiation or surgical excision. In current urological oncology, HIFU is used clinically in the treatment of PC. Clinical research on HIFU therapy for localized PC began in the 1990s, and the majority of PC patients were treated with the Ablatherm device. HIFU treatment for localized PC can be considered as an alternative minimally invasive therapeutic modality for patients who are not candidates for radical prostatectomy. Patients with lower pre-HIFU PSA level and favourable pathologic Gleason score seem to present better oncologic outcomes. Future advances in technology and safety will undoubtedly expand HIFU's role in this indication as more patient series are published, with a longer follow-up period.

  19. Proceedings of the CASE Adoption Workshop Held in Pittsburgh, Pennsylvania on 13-14 November 1990

    DTIC Science & Technology

    1992-05-01

    This report has been reviewed and is published in the interest of scientific and technical information exchange; it is available through the Defense Technical Information Center (DTIC), which provides access to and transfer of scientific and technical information for DoD. Only a fragment of the workshop's vendor listing is legible in the scanned abstract; it cites CASEbase, a PC-based CASE database offered by P-Cube Corporation, 572 East Lambert Rd, Brea, CA 92621, priced at $495.

  20. Evidence for a Pneumocystis carinii Flo8-like transcription factor: insights into organism adhesion.

    PubMed

    Kottom, Theodore J; Limper, Andrew H

    2016-02-01

    Pneumocystis carinii (Pc) adhesion to alveolar epithelial cells is well established and is thought to be a prerequisite for the initiation of Pneumocystis pneumonia. Pc binding events occur in part through the major Pc surface glycoprotein Msg, as well as an integrin-like molecule termed PcInt1. Recent data from the Pc sequencing project also demonstrate DNA sequences homologous to other genes important in Candida spp. binding to mammalian host cells, as well as organism binding to polystyrene surfaces and in biofilm formation. One of these genes, flo8, a transcription factor needed for downstream cAMP/PKA-pathway-mediated activation of the major adhesin/flocculin Flo11 in yeast, was cloned from a Pc cDNA library utilizing a partial sequence available in the Pc genome database. A CHEF blot of Pc genomic DNA yielded a single band, providing evidence that this gene is present in the organism. BLASTP analysis of the predicted protein demonstrated 41% homology to the Saccharomyces cerevisiae Flo8. Northern blotting demonstrated greatest expression at pH 6.0-8.0, a pH comparable to the reported fungal biofilm milieu. Western blot and immunoprecipitation assays of PcFlo8 protein in isolated cyst and tropic life forms confirmed the presence of the cognate protein in these Pc life forms. Heterologous expression of Pcflo8 cDNA in flo8-deficient (flo8Δ) yeast strains demonstrated that Pcflo8 was able to restore yeast binding to polystyrene and invasive growth of yeast flo8Δ cells. Furthermore, Pcflo8 promoted yeast binding to HEK293 human epithelial cells, strengthening its functional classification as a Flo8 transcription factor. Taken together, these data suggest that PcFlo8 is expressed by Pc and may exert activity in organism adhesion and biofilm formation.

  1. Evidence for a Pneumocystis carinii Flo8-like Transcription Factor: Insights into Organism Adhesion

    PubMed Central

    Kottom, Theodore J.; Limper, Andrew H.

    2015-01-01

    Pneumocystis carinii (Pc) adhesion to alveolar epithelial cells is well established and is thought to be a prerequisite for initiation of Pneumocystis pneumonia. Pc binding events occur in part through the major Pc surface glycoprotein Msg, as well as an integrin-like molecule termed PcInt1. Recent data from the Pc sequencing project also demonstrate DNA sequences homologous to other genes important in Candida spp. binding to mammalian host cells, as well as organism binding to polystyrene surfaces and in biofilm formation. One of these genes, flo8, a transcription factor needed for downstream cAMP/PKA-pathway-mediated activation of the major adhesin/flocculin Flo11 in yeast, was cloned from a Pc cDNA library utilizing a partial sequence available in the Pc genome database. A CHEF blot of Pc genomic DNA yielded a single band, providing evidence that this gene is present in the organism. BLASTP analysis of the predicted protein demonstrated 41% homology to the Saccharomyces cerevisiae Flo8. Northern blotting demonstrated greatest expression at pH 6.0–8.0, a pH comparable to the reported fungal biofilm milieu. Western blot and immunoprecipitation assays of PcFlo8 protein in isolated cyst and tropic life forms confirmed the presence of the cognate protein in these Pc life forms. Heterologous expression of Pcflo8 cDNA in flo8-deficient (flo8Δ) yeast strains demonstrated that Pcflo8 was able to restore yeast binding to polystyrene and invasive growth of yeast flo8Δ cells. Furthermore, Pcflo8 promoted yeast binding to HEK293 human epithelial cells, strengthening its functional classification as a Flo8 transcription factor. Taken together, these data suggest that PcFlo8 is expressed by Pc and may exert activity in organism adhesion and biofilm formation. PMID:26215665

  2. Comparison of U.S. Environmental Protection Agency's CAP88 PC Versions 3.0 and 4.0.

    PubMed

    Jannik, Tim; Farfan, Eduardo B; Dixon, Ken; Newton, Joseph; Sailors, Christopher; Johnson, Levi; Moore, Kelsey; Stahman, Richard

    2015-08-01

    The Savannah River National Laboratory (SRNL), with the assistance of Georgia Regents University, completed a comparison of the U.S. Environmental Protection Agency's (U.S. EPA) environmental dosimetry code CAP88 PC V3.0 with the recently developed V4.0. CAP88 is a set of computer programs and databases used for estimation of dose and risk from radionuclide emissions to air. At the U.S. Department of Energy's Savannah River Site, CAP88 is used by SRNL for determining compliance with U.S. EPA's National Emission Standards for Hazardous Air Pollutants (40 CFR 61, Subpart H) regulations. Using standardized input parameters, individual runs were conducted for each radionuclide within its corresponding database. Some radioactive decay constants, human usage parameters, and dose coefficients changed between the two versions, directly causing a proportional change in the total effective dose. A detailed summary for select radionuclides of concern at the Savannah River Site (60Co, 137Cs, 3H, 129I, 239Pu, and 90Sr) is provided. In general, the total effective doses will decrease for alpha/beta emitters because of reduced inhalation and ingestion rates in V4.0. However, for gamma emitters, such as 60Co and 137Cs, the total effective doses will increase because of changes U.S. EPA made in the external ground shine calculations.

  3. Quality tools and resources to support organisational improvement integral to high-quality primary care: a systematic review of published and grey literature.

    PubMed

    Janamian, Tina; Upham, Susan J; Crossland, Lisa; Jackson, Claire L

    2016-04-18

    To conduct a systematic review of the literature to identify existing online primary care quality improvement tools and resources to support organisational improvement related to the seven elements in the Primary Care Practice Improvement Tool (PC-PIT), with the identified tools and resources to progress to a Delphi study for further assessment of relevance and utility. Systematic review of the international published and grey literature. CINAHL, Embase and PubMed databases were searched in March 2014 for articles published between January 2004 and December 2013. GreyNet International and other relevant websites and repositories were also searched in March-April 2014 for documents dated between 1992 and 2012. All citations were imported into a bibliographic database. Published and unpublished tools and resources were included in the review if they were in English, related to primary care quality improvement and addressed any of the seven PC-PIT elements of a high-performing practice. Tools and resources that met the eligibility criteria were then evaluated for their accessibility, relevance, utility and comprehensiveness using a four-criteria appraisal framework. We used a data extraction template to systematically extract information from eligible tools and resources. A content analysis approach was used to explore the tools and resources and collate relevant information: name of the tool or resource, year and country of development, author, name of the organisation that provided access and its URL, accessibility information or problems, overview of each tool or resource and the quality improvement element(s) it addresses. If available, a copy of the tool or resource was downloaded into the bibliographic database, along with supporting evidence (published or unpublished) on its use in primary care. This systematic review identified 53 tools and resources that can potentially be provided as part of a suite of tools and resources to support primary care practices in improving the quality of their practice, to achieve improved health outcomes.

  4. [Characteristics of acupoint application for the sub-healthy condition treated with ancient and modern acupuncture based on data mining exploration].

    PubMed

    Cai, Liyan; Wu, Jie; Ma, Tingting; Yang, Lijie

    2015-10-01

    The acupoint selection was retrieved from the ancient and modern literature on the treatment of the sub-healthy condition with acupuncture. The law of acupoint application was analyzed so as to provide a reference for the determination of acupoint prescriptions in clinical acupuncture. The ancient literature was retrieved from the Chinese basic ancient literature database. The modern literature was retrieved from the Cochrane Library, Medline, PubMed, the Ovid evidence-based medicine database, the Chinese biomedical literature database, the China journal full-text database, the VIP journal full-text database and the Wanfang database. Data mining software was used to explore the law of acupoint application in the treatment of the sub-healthy condition with ancient and modern acupuncture. The acupoint use frequency, compatibility association rules, laws for meridian use and the use regularity of specific points were analyzed. In the ancient treatment of the sub-healthy condition, the top five commonly used acupoints are Shenmen (HT 7), Zhaohai (KI 6), Taibai (SP 3), Daling (PC 7) and Taixi (KI 3). The most commonly combined points are Zhangmen (LR 13), Taibai (SP 3) and Zhaohai (KI 6). The most commonly used meridians are the bladder meridian of foot-taiyang, the kidney meridian of foot-shaoyin and the liver meridian of foot-jueyin. The most commonly used specific points are the five-shu points. The most commonly used acupoints are located in the lower limbs. In the modern treatment, the top five commonly used acupoints are Zusanli (ST 36), Sanyinjiao (SP 6), Baihui (GV 20), Shenshu (BL 23) and Guanyuan (CV 4). The most commonly supplemented points are Hegu (LI 4) and Taichong (LR 3). The most commonly used meridians are the bladder meridian of foot-taiyang, the conception vessel and the governor vessel. The most commonly used specific points are the back-shu points. The most commonly used acupoints are located in the lower limbs. After systematic comprehension of the relevant ancient and modern literature, the most commonly used acupoints are selected along the bladder meridian of foot-taiyang, the most commonly used specific points are the back-shu points, the five-shu points and the front-mu points, and the acupoints are mostly located in the lower limbs.

  5. Storage and retrieval of digital images in dermatology.

    PubMed

    Bittorf, A; Krejci-Papa, N C; Diepgen, T L

    1995-11-01

    Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Up until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome inherent limitations of those storage media with respect to the number of images stored, display, and search parameters available, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. A database was designed using the Entity-Relationship approach. It was implemented on a PC-Windows platform using MS Access and MS Visual Basic. As WWW server, a Sparc 10 workstation was used, running the CERN Hypertext Transfer Protocol Daemon (httpd) 3.0 pre 6 software. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database featuring clinical and histopathological images was constructed, which allows for multi-parameter searches and world-wide remote access.

  6. THRIVE: threshold homomorphic encryption based secure and privacy preserving biometric verification system

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Kiraz, Mehmet Sabir; Erdogan, Hakan; Savas, Erkay

    2015-12-01

    In this paper, we introduce a new biometric verification and template protection system which we call THRIVE. The system includes novel enrollment and authentication protocols based on threshold homomorphic encryption where a private key is shared between a user and a verifier. In the THRIVE system, only encrypted binary biometric templates are stored in a database and verification is performed via homomorphically randomized templates, thus, original templates are never revealed during authentication. Due to the underlying threshold homomorphic encryption scheme, a malicious database owner cannot perform full decryption on encrypted templates of the users in the database. In addition, security of the THRIVE system is enhanced using a two-factor authentication scheme involving user's private key and biometric data. Using simulation-based techniques, the proposed system is proven secure in the malicious model. The proposed system is suitable for applications where the user does not want to reveal her biometrics to the verifier in plain form, but needs to prove her identity by using biometrics. The system can be used with any biometric modality where a feature extraction method yields a fixed size binary template and a query template is verified when its Hamming distance to the database template is less than a threshold. The overall connection time for the proposed THRIVE system is estimated to be 336 ms on average for 256-bit biometric templates on a desktop PC running with quad core 3.2 GHz CPUs at 10 Mbit/s up/down link connection speed. Consequently, the proposed system can be efficiently used in real-life applications.
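
    The plaintext decision rule underlying the verification step (which THRIVE evaluates under homomorphic encryption rather than in the clear) is a simple Hamming-distance threshold on fixed-size binary templates, sketched below for illustration.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length binary templates."""
    assert len(a) == len(b)
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def verify(query: bytes, enrolled: bytes, threshold: int) -> bool:
    # Accept the identity claim when the templates are close enough in Hamming distance.
    return hamming_distance(query, enrolled) < threshold
```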

  7. Germline BRCA mutation in male carriers-ripe for precision oncology?

    PubMed

    Leão, Ricardo Romão Nazário; Price, Aryeh Joshua; James Hamilton, Robert

    2018-04-01

    Prostate cancer (PC) is one of the known heritable cancers with individual variations attributed to genetic factors. BRCA1 and BRCA2 are tumour suppressor genes with crucial roles in repairing DNA and thereby maintaining genomic integrity. Germline BRCA mutations predispose to multiple familial tumour types including PC. We performed a PubMed database search along with review of reference lists from prominent articles to capture papers exploring the association between BRCA mutations and prostate cancer risk and prognosis. Articles were retrieved until May 2017 and filtered for relevance and publication type. We explored familial PC genetics; discussed the discovery and magnitude of the association between BRCA mutations and PC risk and outcome; examined implications of factoring BRCA mutations into PC screening; and discussed the rationale for chemoprevention in this high-risk population. We confirmed that BRCA1/2 mutations confer an up to 4.5-fold and 8.3-fold increased risk of PC, respectively. BRCA2 mutations are associated with an increased risk of high-grade disease, progression to metastatic castration-resistant disease, and 5-year cancer-specific survival rates of 50 to 60%. Despite the growing body of research on DNA repair genes, deeper analysis is needed to understand the aetiological role of germline BRCA mutations in the natural history of PC. There is a need for awareness to screen for this marker of PC risk. There is similarly an opportunity for structured PC screening programs for BRCA mutation carriers. Finally, further research is required to identify potential chemopreventive strategies for this high-risk subgroup.

  8. Discovery of Novel Anti-prion Compounds Using In Silico and In Vitro Approaches

    PubMed Central

    Hyeon, Jae Wook; Choi, Jiwon; Kim, Su Yeon; Govindaraj, Rajiv Gandhi; Jam Hwang, Kyu; Lee, Yeong Seon; An, Seong Soo A.; Lee, Myung Koo; Joung, Jong Young; No, Kyoung Tai; Lee, Jeongmin

    2015-01-01

    Prion diseases are associated with the conformational conversion of the physiological form of cellular prion protein (PrPC) to the pathogenic form, PrPSc. Compounds that inhibit this process by blocking conversion to the PrPSc could provide useful anti-prion therapies. However, no suitable drugs have been identified to date. To identify novel anti-prion compounds, we developed a combined structure- and ligand-based virtual screening system in silico. Virtual screening of a 700,000-compound database, followed by cluster analysis, identified 37 compounds with strong interactions with essential hotspot PrP residues identified in a previous study of PrPC interaction with a known anti-prion compound (GN8). These compounds were tested in vitro using a multimer detection system, cell-based assays, and surface plasmon resonance. Some compounds effectively reduced PrPSc levels and one of these compounds also showed a high binding affinity for PrPC. These results provide a promising starting point for the development of anti-prion compounds. PMID:26449325

  9. Key technology research of HILS based on real-time operating system

    NASA Astrophysics Data System (ADS)

    Wang, Fankai; Lu, Huiming; Liu, Che

    2018-03-01

    To address the long development cycle of traditional simulation and the lack of real-time behavior in purely digital simulation, this paper describes a HILS (Hardware-In-the-Loop Simulation) system based on the real-time operating platform xPC. The system solves the communication between the HMI and the Simulink models through the MATLAB engine interface, and provides system configuration, offline simulation, model compilation and downloading, and related functions. Monitoring of the real-time target application is realized with the xPC application interface and the integrated TeeChart ActiveX chart component. Each functional block in the system is encapsulated as a DLL, and data interaction between modules is realized with a MySQL database. When the HILS system runs, the address of the online xPC target is located with a ping command, and a TCP/IP connection is then established between the two machines. The technical effectiveness of the developed system is verified on a typical power-station control system.
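
    The host-to-target startup sequence described above (locate the online xPC target with a ping, then open a TCP/IP connection) can be illustrated with the short Python sketch below; the IP address, port and command string are hypothetical, and the real system performs these steps from the MATLAB/xPC environment.

```python
import socket
import subprocess

TARGET_IP = "192.168.0.10"   # hypothetical xPC target address
TARGET_PORT = 22222          # hypothetical TCP port of the target service

def target_online(ip: str) -> bool:
    """Reachability check via ping (use '-n' instead of '-c' on Windows)."""
    return subprocess.run(["ping", "-c", "1", ip],
                          stdout=subprocess.DEVNULL).returncode == 0

if target_online(TARGET_IP):
    with socket.create_connection((TARGET_IP, TARGET_PORT), timeout=2.0) as sock:
        sock.sendall(b"START\n")      # hypothetical command to the target model
        print(sock.recv(1024))
```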

  10. The spectra program library: A PC based system for gamma-ray spectra analysis and INAA data reduction

    USGS Publications Warehouse

    Baedecker, P.A.; Grossman, J.N.

    1995-01-01

    A PC-based system has been developed for the analysis of gamma-ray spectra and for the complete reduction of data from INAA experiments, including software to average the results from multiple lines and multiple countings and to produce a final report of analysis. Graphics algorithms may be called for the analysis of complex spectral features, to compare the data from alternate photopeaks and to evaluate detector performance during a given counting cycle. A database of results for control samples can be used to prepare quality control charts to evaluate long-term precision and to search for systematic variations in data on reference samples as a function of time. The entire software library can be accessed through a user-friendly menu interface with internal help.
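
    The abstract does not state how results from multiple photopeaks and countings are combined; one standard choice, shown here only as an illustration, is an inverse-variance weighted mean.

```python
import numpy as np

def combine_determinations(values, sigmas):
    """Inverse-variance weighted mean of several concentration determinations
    (e.g. different photopeaks or countings) and its combined uncertainty."""
    values, sigmas = np.asarray(values, float), np.asarray(sigmas, float)
    w = 1.0 / sigmas ** 2
    mean = np.sum(w * values) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

print(combine_determinations([10.2, 9.8, 10.5], [0.3, 0.4, 0.6]))
```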

  11. "On-screen" writing and composing: two years experience with Manuscript Manager, Apple II and IBM-PC versions.

    PubMed

    Offerhaus, L

    1989-06-01

    The problems of the direct composition of a biomedical manuscript on a personal computer are discussed. Most word processing software is unsuitable because literature references, once stored, cannot be rearranged if major changes are necessary. These obstacles have been overcome in Manuscript Manager, a combination of word processing and database software. As it follows Council of Biology Editors and Vancouver rules, the printouts should be technically acceptable to most leading biomedical journals.

  12. A PC-controlled microwave tomographic scanner for breast imaging

    NASA Astrophysics Data System (ADS)

    Padhi, Shantanu; Howard, John; Fhager, A.; Bengtsson, Sebastian

    2011-01-01

    This article presents the design and development of a personal computer based controller for a microwave tomographic system for breast cancer detection. The system uses motorized, dual-polarized antennas and a custom-made GUI interface to control stepper motors, a wideband vector network analyzer (VNA) and to coordinate data acquisition and archival in a local MDSPlus database. Both copolar and cross-polar scattered field components can be measured directly. Experimental results are presented to validate the various functionalities of the scanner.

  13. Gene therapy in pancreatic cancer

    PubMed Central

    Liu, Si-Xue; Xia, Zhong-Sheng; Zhong, Ying-Qiang

    2014-01-01

    Pancreatic cancer (PC) is a highly lethal disease and notoriously difficult to treat. Only a small proportion of PC patients are eligible for surgical resection, whilst conventional chemoradiotherapy only has a modest effect with substantial toxicity. Gene therapy has become a new, widely investigated therapeutic approach for PC. This article reviews the basic rationale, gene delivery methods, therapeutic targets and developments of laboratory research and clinical trials in gene therapy of PC by searching the literature published in English using the PubMed database and analyzing clinical trials registered on the Gene Therapy Clinical Trials Worldwide website (http://www.wiley.co.uk/genmed/clinical). Viral vectors are the main gene delivery tools in cancer gene therapy, and oncolytic viruses in particular show brighter prospects due to their tumor-targeting properties. Efficient therapeutic targets for gene therapy include the tumor suppressor gene p53, the mutant oncogene K-ras, the anti-angiogenesis gene VEGFR, the suicide gene HSV-TK, cytosine deaminase and cytochrome p450, multiple cytokine genes and so on. Combining different targets or combination strategies with traditional chemoradiotherapy may be a more effective approach to improve the efficacy of cancer gene therapy. Cancer gene therapy is not yet applied in clinical practice, but basic and clinical studies have demonstrated its safety and clinical benefits. Gene therapy will be a new and promising field for the treatment of PC. PMID:25309069

  14. Performance Confirmation Data Aquisition System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D.W. Markman

    2000-10-27

    The purpose of this analysis is to identify and analyze concepts for the acquisition of data in support of the Performance Confirmation (PC) program at the potential subsurface nuclear waste repository at Yucca Mountain. The scope and primary objectives of this analysis are to: (1) Review the criteria for design as presented in the Performance Confirmation Data Acquisition/Monitoring System Description Document, by way of the Input Transmittal, Performance Confirmation Input Criteria (CRWMS M&O 1999c). (2) Identify and describe existing and potential new trends in data acquisition system software and hardware that would support the PC plan. The data acquisition software and hardware will support the field instruments and equipment that will be installed for the observation and perimeter drift borehole monitoring, and in-situ monitoring within the emplacement drifts. The exhaust air monitoring requirements will be supported by a data communication network interface with the ventilation monitoring system database. (3) Identify the concepts and features that a data acquisition system should have in order to support the PC process and its activities. (4) Based on PC monitoring needs and available technologies, further develop concepts of a potential data acquisition system network in support of the PC program and the Site Recommendation and License Application.

  15. Percentage of positive prostate biopsies independently predicts biochemical outcome following radiation therapy for prostate cancer.

    PubMed

    Gabriele, Domenico; Garibaldi, Monica; Girelli, Giuseppe; Taraglio, Stefano; Duregon, Eleonora; Gabriele, Pietro; Guiot, Caterina; Bollito, Enrico

    2016-06-01

    This work aims to definitively show the ability of the percentage of positive biopsy cores (%PC) to independently predict biochemical outcome beyond traditional pretreatment risk factors in prostate cancer (PCa) patients treated with radiotherapy. A cohort of 2493 men belonging to the EUREKA-2 retrospective multicentric database on PCa and treated with external-beam radiation therapy (EBRT) as primary treatment comprised the study population (median follow-up 50 months). A Cox regression analysis of time to prostate-specific antigen (PSA) failure was performed to evaluate the predictive power of %PC, both in univariate and multivariate settings, with age, pretreatment PSA, clinical-radiological staging, bioptic Gleason score (bGS), RT dose and RT +/- ADT as covariates. The P value for %PC is lower than 0.001 in both univariate and multivariate models. %PC as a continuous variable yields an AUC of 69% in ROC curve analysis for biochemical relapse. Four classes of %PC (1-20%, 21-50%, 51-80% and 81-100%) distinctly stratify patients by risk of biochemical relapse (overall log-rank test P<0.0001), with 5-year biochemical progression-free survival (bPFS) ranging from 88% to 58% and 10-year bPFS ranging from 80% to 38%. We strongly affirm the usefulness of %PC information beyond the main risk factors (PSA, staging and bGS) in predicting biochemical recurrence after EBRT for PCa. The stratification of patients according to %PC may be valuable to further discriminate cases with favourable or adverse prognosis.
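
    For clarity, %PC is simply the share of biopsy cores containing tumor, and the study stratifies patients into the four classes listed above; a trivial illustration follows.

```python
def pct_positive_cores(n_positive, n_total):
    return 100.0 * n_positive / n_total

def pc_risk_class(pct):
    """Assign one of the four %PC strata used in the study."""
    if pct <= 20:
        return "1-20%"
    if pct <= 50:
        return "21-50%"
    if pct <= 80:
        return "51-80%"
    return "81-100%"

print(pc_risk_class(pct_positive_cores(5, 12)))   # 41.7% of cores -> "21-50%"
```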

  16. Ethical Challenges and Solutions Regarding Delirium Studies in Palliative Care

    PubMed Central

    Sweet, Lisa; Adamis, Dimitrios; Meagher, David; Davis, Daniel; Currow, David; Bush, Shirley H.; Barnes, Christopher; Hartwick, Michael; Agar, Meera; Simon, Jessica; Breitbart, William; MacDonald, Neil; Lawlor, Peter G.

    2014-01-01

    Context Delirium occurs commonly in settings of palliative care (PC), in which patient vulnerability in the unique context of end-of-life care and delirium-associated impairment of decision-making capacity may together present many ethical challenges. Objectives Based on deliberations at the Studies to Understand Delirium in Palliative Care Settings (SUNDIPS) meeting and an associated literature review, this article discusses ethical issues central to the conduct of research on delirious PC patients. Methods Together with an analysis of the ethical deliberations at the SUNDIPS meeting, we conducted a narrative literature review by key words searching of relevant databases and a subsequent hand search of initially identified articles. We also reviewed statements of relevance to delirium research in major national and international ethics guidelines. Results Key issues identified include the inclusion of PC patients in delirium research, capacity determination, and the mandate to respect patient autonomy and ensure maintenance of patient dignity. Proposed solutions include designing informed consent statements that are clear, concise, and free of complex phraseology; use of concise, yet accurate, capacity assessment instruments with a minimally burdensome schedule; and use of PC friendly consent models, such as facilitated, deferred, experienced, advance, and proxy models. Conclusion Delirium research in PC patients must meet the common standards for such research in any setting. Certain features unique to PC establish a need for extra diligence in meeting these standards and the employment of assessments, consent procedures, and patient-family interactions that are clearly grounded on the tenets of PC. PMID:24388124

  17. Holocene Ostracoda from the Herald Canyon, Eastern Siberian Sea from the SWERUS-C3 Expedition 2014

    NASA Astrophysics Data System (ADS)

    Gemery, L.; Cronin, T. M.; Jakobsson, M.; Barrientos, N.; O'Regan, M.; Muschitiello, F.; Koshurnikov, A.; Gukov, A.

    2015-12-01

    We analyzed Arctic benthic ostracode assemblages from two piston cores (PC) and their complementary multicores from Herald Canyon in the Eastern Siberian Sea. The cores (SWERUS-L2-2-PC1 [8.1 m], 2-MUC4, 71.7 m water depth, and SWERUS-L2-4-PC1 [6.2 m], 4-MUC4, 119.7 m water depth) were collected during Leg 2 of the 2014 SWERUS-C3 Expedition. Radiocarbon dates on mollusks indicate that sediments from 2-PC1 and 4-PC1 were deposited over the last 5,000 and 10,000 years respectively. The dominant ostracode species include: Acanthocythereis dunelmensis, Cytheropteron elaeni, Elofsonella concinna, Kotoracythere janae, Normanicythere leioderma, Semicytherura complanata. Based on species' distributions obtained from a 1,200-sample modern ostracode database, these species are known to be typical of shallow mid- to outer-continental shelf environments in the modern Arctic Ocean. The abundant and diverse benthic ostracode assemblages found in these cores suggest the influence of nutrient-rich Pacific water flowing in through the Bering Strait. The faunal assemblages are fairly uniform throughout 2-PC1, suggesting minimal variability in Pacific water inflow since at least 5 ka. In the lower section of 4-PC1, there is a major change to ostracode assemblages containing typical inner shelf, often brackish-water species, such as Cytheromorpha macchesneyi, and associated shallow Arctic shelf species (Sarsicytheridea punctillata and several Cytheropteron species), reflecting a period of lower, deglacial sea level.

  18. High-Intensity Focused Ultrasound (HIFU) in Localized Prostate Cancer Treatment

    PubMed Central

    Alkhorayef, Mohammed; Mahmoud, Mustafa Z.; Alzimami, Khalid S.; Sulieman, Abdelmoneim; Fagiri, Maram A.

    2015-01-01

    Summary Background High-intensity focused ultrasound (HIFU) applies high-intensity focused ultrasound energy to locally heat and destroy diseased or damaged tissue through ablation. This study intended to review HIFU to explain the fundamentals of HIFU, evaluate the evidence concerning the role of HIFU in the treatment of prostate cancer (PC), review the technologies used to perform HIFU and the published clinical literature regarding the procedure as a primary treatment for PC. Material/Methods Studies addressing HIFU in localized PC were identified in a search of internet scientific databases. The analysis of outcomes was limited to journal articles written in English and published between 2000 and 2013. Results HIFU is a non-invasive approach that uses precisely delivered ultrasound energy to achieve tumor cell necrosis without radiation or surgical excision. In current urological oncology, HIFU is used clinically in the treatment of PC. Clinical research on HIFU therapy for localized PC began in the 1990s, and the majority of PC patients were treated with the Ablatherm device. Conclusions HIFU treatment for localized PC can be considered as an alternative minimally invasive therapeutic modality for patients who are not candidates for radical prostatectomy. Patients with lower pre-HIFU PSA level and favourable pathologic Gleason score seem to present better oncologic outcomes. Future advances in technology and safety will undoubtedly expand HIFU's role in this indication as more patient series are published, with a longer follow-up period. PMID:25806099

  19. Challenges of audit of care on clinical quality indicators for hypertension and type 2 diabetes across four European countries.

    PubMed

    Suija, Kadri; Kivisto, Katrin; Sarria-Santamera, Antonio; Kokko, Simo; Liseckiene, Ida; Bredehorst, Maren; Jaruseviciene, Lina; Papp, Renata; Oona, Marje; Kalda, Ruth

    2015-02-01

    The purpose of the study was to measure clinical quality by doing an audit of clinical records and to compare the performance based on clinical quality indicators (CQI) for hypertension and type 2 diabetes across seven European countries: Estonia, Finland, Germany, Hungary, Italy, Lithuania and Spain. Two common chronic conditions in primary care (PC), hypertension and type 2 diabetes, were selected for audit. The assessment of CQI started with a literature review of different databases: Organization for Economic Co-operation and Development, World Health Organization, European Commission European Community Health Indicators, US National Library of Medicine. Data were collected from clinical records. Although it was agreed to obtain the clinical indicators in a similar way from each country, the specific data collection process in every country varied greatly, due to different traditions in collecting and keeping the patients' data, as well as differences in regulation regarding access to clinical information. Also, there was a huge variability across countries in the level of compliance with the indicators. Measurement of clinical performance in PC by audit is methodologically challenging: different databases provide different information, indicators of quality of care have insufficient scientific proof and there are country-specific regulations. There are large differences not only in quality of health care across Europe but also in how it is measured.

  20. Development of a platform-independent receiver control system for SISIFOS

    NASA Astrophysics Data System (ADS)

    Lemke, Roland; Olberg, Michael

    1998-05-01

    Up to now, receiver control software has been a time-consuming development, usually written by receiver engineers who had mainly the hardware in mind. We present a low-cost and very flexible system which uses a minimal interface to the real hardware, and which makes it easy to adapt to new receivers. Our system uses Tcl/Tk as a graphical user interface (GUI), SpecTcl as a GUI builder, Pgplot as plotting software, a simple query language (SQL) database for information storage and retrieval, Ethernet socket-to-socket communication and SCPI as a command control language. The complete system is in principle platform independent, but for cost-saving reasons we are actually running it on a PC486 under Linux 2.0.30, which is a copylefted Unix. The only hardware-dependent parts are the digital input/output boards and the analog-to-digital and digital-to-analog converters. In the case of the Linux PC we are using a device driver development kit to integrate the boards fully into the kernel of the operating system, which indeed makes them look like an ordinary device. The advantage of this system is firstly the low price and secondly the clear separation between the different software components, which are available for many operating systems. If it is not possible, due to CPU performance limitations, to run all the software on a single machine, the SQL database or the graphical user interface could be installed on separate computers.
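
    SCPI instruments of the kind driven by such a system are typically commanded by sending text strings over a socket; the snippet below is a generic illustration (the address, port and transport are assumptions, not details of the SISIFOS system).

```python
import socket

INSTRUMENT = ("192.168.1.50", 5025)   # hypothetical instrument IP and SCPI port

def scpi_query(command: str) -> str:
    """Send one SCPI command over a raw TCP socket and return the reply line."""
    with socket.create_connection(INSTRUMENT, timeout=2.0) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        return sock.makefile().readline().strip()

print(scpi_query("*IDN?"))   # *IDN? is the standard IEEE 488.2 identification query
```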

  1. JAX Colony Management System (JCMS): an extensible colony and phenotype data management system.

    PubMed

    Donnelly, Chuck J; McFarland, Mike; Ames, Abigail; Sundberg, Beth; Springer, Dave; Blauth, Peter; Bult, Carol J

    2010-04-01

    The Jackson Laboratory Colony Management System (JCMS) is a software application for managing data and information related to research mouse colonies, associated biospecimens, and experimental protocols. JCMS runs directly on computers that run one of the PC Windows operating systems, but can be accessed via web browser interfaces from any computer running a Windows, Macintosh, or Linux operating system. JCMS can be configured for a single user or multiple users in small- to medium-size work groups. The target audience for JCMS includes laboratory technicians, animal colony managers, and principal investigators. The application provides operational support for colony management and experimental workflows, sample and data tracking through transaction-based data entry forms, and date-driven work reports. Flexible query forms allow researchers to retrieve database records based on user-defined criteria. Recent advances in handheld computers with integrated barcode readers, middleware technologies, web browsers, and wireless networks add to the utility of JCMS by allowing real-time access to the database from any networked computer.

  2. Web interfaces to relational databases

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. The laboratory comprises the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems (UNIX, Windows NT, Windows for Workgroups, and MacOS). The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  3. Users Guide to the JPL Doppler Gravity Database

    NASA Technical Reports Server (NTRS)

    Muller, P. M.; Sjogren, W. L.

    1986-01-01

    Local gravity accelerations and gravimetry have been determined directly from spacecraft Doppler tracking data near the Moon and various planets by the Jet Propulsion Laboratory. Researchers in many fields have an interest in planet-wide global gravimetric mapping and its applications. Many of them use their own computers in support of their studies and would benefit from being able to directly manipulate these gravity data for inclusion in their own modeling computations. Publication of some 150 Apollo 15 subsatellite low-altitude, high-resolution, single-orbit data sets is covered. The Doppler residuals, together with a determination of the derivative function providing line-of-sight gravity, are both listed and plotted (on microfilm) and can be ordered in computer-readable form (tape and floppy disk). The form and format of this database as well as the methods of data reduction are explained and referenced. A skeleton computer program is provided which can be modified to support re-reductions and re-formatted presentations suitable to a wide variety of research needs undertaken on mainframe or PC class microcomputers.
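
    The derivative step mentioned above (line-of-sight gravity from Doppler residuals) can be pictured as a simple finite difference, as below. This is a generic Python sketch with synthetic samples, not the skeleton program distributed with the database, and it omits the smoothing and orbit-geometry corrections of the actual reduction.

    ```python
    import numpy as np

    def line_of_sight_acceleration(t, doppler_velocity):
        """Finite-difference derivative of line-of-sight velocity residuals (m/s)
        with respect to time (s), giving line-of-sight acceleration (m/s^2)."""
        return np.gradient(np.asarray(doppler_velocity, float), np.asarray(t, float))

    # Synthetic 10 s samples over 10 minutes (illustrative only).
    t = np.linspace(0.0, 600.0, 61)
    v = 1e-3 * np.sin(2 * np.pi * t / 600.0)   # mm/s-level residual signature
    a = line_of_sight_acceleration(t, v)
    ```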

  4. Integration of targeted metabolomics and transcriptomics identifies deregulation of phosphatidylcholine metabolism in Huntington's disease peripheral blood samples.

    PubMed

    Mastrokolias, Anastasios; Pool, Rene; Mina, Eleni; Hettne, Kristina M; van Duijn, Erik; van der Mast, Roos C; van Ommen, GertJan; 't Hoen, Peter A C; Prehn, Cornelia; Adamski, Jerzy; van Roon-Mom, Willeke

    Metabolic changes have been frequently associated with Huntington's disease (HD). At the same time, peripheral blood represents a minimally invasive sampling avenue that causes little distress to Huntington's disease patients, especially when brain or other tissue samples are difficult to collect. We investigated the levels of 163 metabolites in HD patient and control serum samples in order to identify disease-related changes. Additionally, we integrated the metabolomics data with our previously published next generation sequencing-based gene expression data from the same patients in order to interconnect the metabolomics changes with transcriptional alterations. This analysis was performed using targeted metabolomics and flow injection electrospray ionization tandem mass spectrometry in 133 serum samples from 97 Huntington's disease patients (29 pre-symptomatic and 68 symptomatic) and 36 controls. By comparing HD mutation carriers with controls we identified 3 metabolites significantly changed in HD (serine, threonine, and one phosphatidylcholine, PC ae C36:0) and an additional 8 phosphatidylcholines (PC aa C38:6, PC aa C36:0, PC ae C38:0, PC aa C38:0, PC ae C38:6, PC ae C42:0, PC aa C36:5 and PC ae C36:0) that exhibited a significant association with disease severity. Using workflow-based exploitation of pathway databases and by integrating our metabolomics data with our gene expression data from the same patients, we identified 4 deregulated phosphatidylcholine metabolism related genes (ALDH1B1, MBOAT1, MTRR and PLB1) that showed significant association with the changes in metabolite concentrations. Our results support the notion that phosphatidylcholine metabolism is deregulated in HD blood and that these metabolite alterations are associated with specific gene expression changes.

  5. Association between prostate cancer and urinary calculi: a population-based study.

    PubMed

    Chung, Shiu-Dong; Liu, Shih-Ping; Lin, Herng-Ching

    2013-01-01

    Understanding the reasons underlying the emerging trend and the changing demographics of Asian prostate cancer (PC) has become an important field of study. This study set out to explore the possibility that urinary calculi (UC) and PC may share an association by conducting a case-control study on a population-based database in Taiwan. The cases of this study included 2,900 subjects aged ≥40 years who had received a first-time diagnosis of PC and 14,500 randomly selected controls without PC. Conditional logistic regressions were employed to explore the association between PC and having been previously diagnosed with UC. Prior UC was found among 608 (21.0%) cases and 2,037 (14.1%) controls (p<0.001). Conditional logistic regression analysis revealed that, compared to controls, the odds ratio (OR) of prior UC for cases was 1.63 (95% CI = 1.47-1.80). Furthermore, we found that cases were more likely than controls to have been previously diagnosed with kidney calculus (OR = 1.71; 95% CI = 1.42-2.05), bladder calculus (OR = 2.06; 95% CI = 1.32-3.23), unspecified calculus (OR = 1.66; 95% CI = 1.37-2.00), and ≥2 locations of UC (OR = 1.73; 1.47-2.02). However, there was no significant relationship between PC and prior ureter calculus. We also found that, among the patients with UC, the association with PC did not differ significantly by UC treatment method. This investigation detected an association between PC and prior UC. These results highlight a potential target population for PC screening.
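
    For readers who want to check the headline figure, the crude odds ratio can be recomputed from the counts reported above (608 of 2,900 cases and 2,037 of 14,500 controls with prior UC); the small Python sketch below uses Woolf's method for the confidence interval. The paper's 1.63 comes from conditional logistic regression on the matched sets, so the crude value only approximates it.

    ```python
    from math import exp, log, sqrt

    a, b = 608, 2900 - 608      # cases: with / without prior UC
    c, d = 2037, 14500 - 2037   # controls: with / without prior UC

    or_crude = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)            # Woolf's method
    lo, hi = exp(log(or_crude) - 1.96 * se), exp(log(or_crude) + 1.96 * se)
    print(f"crude OR = {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # ~1.62 (1.47-1.80)
    ```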

  6. Trends in Radical Prostatectomy Risk Group Distribution in a European Multicenter Analysis of 28 572 Patients: Towards Tailored Treatment.

    PubMed

    van den Bergh, Roderick; Gandaglia, Giorgio; Tilki, Derya; Borgmann, Hendrik; Ost, Piet; Surcel, Christian; Valerio, Massimo; Sooriakumaran, Prasanna; Salomon, Laurent; Briganti, Alberto; Graefen, Markus; van der Poel, Henk; de la Taille, Alexandre; Montorsi, Francesco; Ploussard, Guillaume

    2017-08-08

    Active surveillance (AS) has been increasingly proposed as the preferential initial management strategy for low-risk prostate cancer (PC), while in high-risk PC the indication for surgery has widened. To evaluate the development of risk group distribution of patients undergoing radical prostatectomy (RP). Retrospective review of the combined RP databases (2000-2015) of four large European centers (Créteil, Paris; San Raffaele, Milan; Martini Klinik, Hamburg; NKI-AvL, Amsterdam). Clinical and pathological characteristics per year of surgery. Eligibility for AS was defined according to Prostate Cancer Research International Active Surveillance criteria: cT≤2c, cN0/X, cM0/X, PSA ≤10 ng/ml, prostate-specific antigen density <0.2 ng/ml/ml, one to two positive biopsies, and Gleason score ≤6; high-risk disease was defined as cT≥3, cN1, cM1, PSA >20 ng/ml, and/or Gleason ≥8. In total, 28 572 patients had complete clinical and 24 790 complete pathological data available. The absolute number of RPs increased: 401, 975, 2344, and 2504 in 2000, 2005, 2010, and 2015, respectively. The proportion of cases considered suitable for AS decreased: 31%, 32%, 18%, and 5%, while the cases considered high risk increased: 10%, 8%, 16%, and 30%. The percentage of patients having only localized Gleason 6 disease after RP decreased: 46%, 34%, 14%, and 8% for all patients (p<0.01), as well as for AS-suitable patients: 70%, 54%, 41%, and 38% (p<0.01). Comparisons between centers were outside the scope of this article. Developments in diagnostics may have impacted the results. This European analysis confirmed that the risk profile of patients undergoing RP is shifting away from the most favorable end of the disease spectrum. Patients with PC clinically considered suitable for AS and men having only localized Gleason 6 disease pathologically comprised a decreasing share of all RPs performed. High-risk disease comprised an increasing share of all RPs. The databases of four large European centers of prostate cancer surgery were analyzed. In recent years, the risk profile of patients shifted away from low-risk cancer, while high-risk cancer comprised a larger part of cases. This confirms the introduction of active surveillance for low-risk prostate cancer and an increase in potentially curative options for high-risk disease. Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  7. A PC-Based Free Text DSS for Health Care

    NASA Technical Reports Server (NTRS)

    Grams, Ralph R.; Buchanan, Paul; Massey, James K.; Jin, Ming

    1987-01-01

    A free-text Decision Support System (DSS) has been constructed for health care professionals that allows the analysis of complex medical cases and the creation of diagnostic lists of potential diseases for clinical evaluation. The system uses a PC-based text management system specifically designed for desktop operation. The texts employed in the decision support package include the Merck Manual (published by Merck Sharpe & Dohme) and Control of Communicable Diseases in Man (published by the American Public Health Association). The background and design of the database are discussed along with a structured analysis procedure for handling a free-text DSS. A case study is presented to show the application of this technology, and conclusions are drawn in the summary that point to expanded areas of professional attention and new frontiers yet to be explored in this rapidly progressing field.
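
    The free-text matching idea can be illustrated with a toy keyword-overlap ranker, as in the sketch below. This is only a simplified Python stand-in with made-up disease texts; the original DSS used a commercial PC text-management engine and a structured analysis procedure that are not reproduced here.

    ```python
    from collections import Counter

    def rank_diagnoses(case_terms, disease_texts):
        """Rank diseases by how many case findings appear in each reference text."""
        scores = Counter()
        for disease, text in disease_texts.items():
            words = set(text.lower().split())
            scores[disease] = sum(term.lower() in words for term in case_terms)
        return scores.most_common()

    # Toy reference texts (illustrative only, not the Merck Manual content).
    texts = {"Measles": "fever cough coryza conjunctivitis rash koplik spots",
             "Influenza": "fever cough myalgia headache sore throat"}
    print(rank_diagnoses(["fever", "rash", "cough"], texts))
    ```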

  8. Identifying potential selective fluorescent probes for cancer-associated protein carbonic anhydrase IX using a computational approach.

    PubMed

    Kamstra, Rhiannon L; Floriano, Wely B

    2014-11-01

    Carbonic anhydrase IX (CAIX) is a biomarker for tumor hypoxia. Fluorescent inhibitors of CAIX have been used to study hypoxic tumor cell lines. However, these inhibitor-based fluorescent probes may have a therapeutic effect that is not appropriate for monitoring treatment efficacy. In the search for novel fluorescent probes that are not based on known inhibitors, a database of 20,860 fluorescent compounds was virtually screened against CAIX using hierarchical virtual ligand screening (HierVLS). The screening database contained 14,862 compounds tagged with the ATTO680 fluorophore plus an additional 5998 intrinsically fluorescent compounds. Overall ranking of compounds to identify hit molecular probe candidates utilized a principal component analysis (PCA) approach. Four potential binding sites, including the catalytic site, were identified within the structure of the protein and targeted for virtual screening. Available sequence information for 23 carbonic anhydrase isoforms was used to prioritize the four sites based on the estimated "uniqueness" of each site in CAIX relative to the other isoforms. A database of 32 known inhibitors and 478 decoy compounds was used to validate the methodology. A receiver-operating characteristic (ROC) analysis using the first principal component (PC1) as predictive score for the validation database yielded an area under the curve (AUC) of 0.92. AUC is interpreted as the probability that a binder will have a better score than a non-binder. The use of first component analysis of binding energies for multiple sites is a novel approach for hit selection. The very high prediction power for this approach increases confidence in the outcome from the fluorescent library screening. Ten of the top scoring candidates for isoform-selective putative binding sites are suggested for future testing as fluorescent molecular probe candidates. Copyright © 2014 Elsevier Inc. All rights reserved.
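
    The ranking-and-validation step described above (first principal component of the multi-site binding energies used as a score, checked by ROC analysis) can be sketched as follows. The energies here are random placeholders, so the AUC will be near 0.5 rather than the reported 0.92; the block only shows the mechanics, assuming scikit-learn.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    energies = rng.normal(size=(510, 4))   # placeholder scores: 510 compounds x 4 sites
    labels = np.zeros(510, dtype=int)
    labels[:32] = 1                        # 32 known binders, 478 decoys

    pc1 = PCA(n_components=1).fit_transform(energies).ravel()
    if roc_auc_score(labels, pc1) < 0.5:   # orient PC1 so higher means more binder-like
        pc1 = -pc1
    print("AUC =", roc_auc_score(labels, pc1))
    ```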

  9. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.
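
    Once the SGT database supplies the kernels that map subfault slip to waveform samples, the slip estimate is, at its core, a constrained linear least-squares fit. The sketch below is a generic damped non-negative least-squares version with a random toy kernel; the study's actual inversion (rupture timing, rake constraints, data weighting) is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def invert_slip(G, d, damping=0.1):
        """Solve d ~= G @ m for non-negative subfault slip m with simple damping."""
        n = G.shape[1]
        G_aug = np.vstack([G, damping * np.eye(n)])
        d_aug = np.concatenate([d, np.zeros(n)])
        m, _ = nnls(G_aug, d_aug)
        return m

    rng = np.random.default_rng(1)
    G = rng.normal(size=(200, 20))               # toy kernel: 200 samples, 20 subfaults
    true_slip = np.abs(rng.normal(size=20))
    d = G @ true_slip + 0.01 * rng.normal(size=200)
    print(np.round(invert_slip(G, d), 2))
    ```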

  10. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  11. The Effects of Truth Bias on Artifact-User Relationships: An Investigation of Factors for Improving Deception Detection in Artifact Produced Information

    DTIC Science & Technology

    1998-08-07

    Excerpt (front matter and figure caption): ... Scenarios; APPENDIX B - Information Manipulation Descriptions; APPENDIX C - PC-III Screens; APPENDIX D - Discrepancy Reporting Sheet ... Figure 3-2 - Artifact Truth Bias and Deception Detection Ability ... A discrepancy recording sheet is provided in Appendix D. For each of the courses, a series of data manipulations was incorporated into the database.

  12. UNIVIEW: A computer graphics platform bringing information databases to life

    NASA Astrophysics Data System (ADS)

    Warnstam, J.

    2008-06-01

    Uniview is a PC-based software platform for three-dimensional exploration of the Universe and the visualisation of information located at any position in this Universe, be it on the surface of the Earth or many light-years away from home. What began as a collaborative project with the American Museum of Natural History in New York in 2003 has now evolved into one of the leading visualisation platforms for the planetarium and science centre market, with customers in both Europe and the USA.

  13. Current Status of an Implementation of a System Monitoring for Seamless Auxiliary Data at the Geodetic Observatory Wettzell

    NASA Astrophysics Data System (ADS)

    Neidhardt, Alexander; Kirschbauer, Katharina; Plötz, Christian; Schönberger, Matthias; Böer, Armin; Wettzell VLBI Team

    2016-12-01

    A first test implementation of an auxiliary data archive is being evaluated at the Geodetic Observatory Wettzell. The software builds on the Wettzell SysMon, extending its database and data sensors with the functionality of a professional monitoring environment, Zabbix. Some extensions to the remote control server on the NASA Field System PC enable the inclusion of data from external antennas. The presentation demonstrates the implementation and discusses the current possibilities, to encourage other antennas to join the auxiliary archive.

  14. Computed Tomography Perfusion Improves Diagnostic Accuracy in Acute Posterior Circulation Stroke.

    PubMed

    Sporns, Peter; Schmidt, Rene; Minnerup, Jens; Dziewas, Rainer; Kemmling, André; Dittrich, Ralf; Zoubi, Tarek; Heermann, Philipp; Cnyrim, Christian; Schwindt, Wolfram; Heindel, Walter; Niederstadt, Thomas; Hanning, Uta

    2016-01-01

    Computed tomography perfusion (CTP) has a high diagnostic value in the detection of acute ischemic stroke in the anterior circulation. However, the diagnostic value in suspected posterior circulation (PC) stroke is uncertain, and whole brain volume perfusion is not yet in widespread use. We therefore studied the additional value of whole brain volume perfusion to non-contrast CT (NCCT) and CT angiography source images (CTA-SI) for infarct detection in patients with suspected acute ischemic PC stroke. This is a retrospective review of patients with suspected stroke in the PC in a database of our stroke center (n = 3,011) who underwent NCCT, CTA and CTP within 9 h after stroke onset and CT or MRI on follow-up. Images were evaluated for signs and pc-ASPECTS locations of ischemia. Three imaging models - A (NCCT), B (NCCT + CTA-SI) and C (NCCT + CTA-SI + CTP) - were compared with regard to the misclassification rate relative to gold standard (infarction in follow-up imaging) using the McNemar's test. Of 3,011 stroke patients, 267 patients had a suspected stroke in the PC and 188 patients (70.4%) evidenced a PC infarct on follow-up imaging. The sensitivity of Model C (76.6%) was higher compared with that of Model A (21.3%) and Model B (43.6%). CTP detected significantly more ischemic lesions, especially in the cerebellum, posterior cerebral artery territory and thalami. Our findings in a large cohort of consecutive patients show that CTP detects significantly more ischemic strokes in the PC than CTA and NCCT alone. © 2016 S. Karger AG, Basel.

  15. Ethical challenges and solutions regarding delirium studies in palliative care.

    PubMed

    Sweet, Lisa; Adamis, Dimitrios; Meagher, David J; Davis, Daniel; Currow, David C; Bush, Shirley H; Barnes, Christopher; Hartwick, Michael; Agar, Meera; Simon, Jessica; Breitbart, William; MacDonald, Neil; Lawlor, Peter G

    2014-08-01

    Delirium occurs commonly in settings of palliative care (PC), in which patient vulnerability in the unique context of end-of-life care and delirium-associated impairment of decision-making capacity may together present many ethical challenges. Based on deliberations at the Studies to Understand Delirium in Palliative Care Settings (SUNDIPS) meeting and an associated literature review, this article discusses ethical issues central to the conduct of research on delirious PC patients. Together with an analysis of the ethical deliberations at the SUNDIPS meeting, we conducted a narrative literature review by key words searching of relevant databases and a subsequent hand search of initially identified articles. We also reviewed statements of relevance to delirium research in major national and international ethics guidelines. Key issues identified include the inclusion of PC patients in delirium research, capacity determination, and the mandate to respect patient autonomy and ensure maintenance of patient dignity. Proposed solutions include designing informed consent statements that are clear, concise, and free of complex phraseology; use of concise, yet accurate, capacity assessment instruments with a minimally burdensome schedule; and use of PC friendly consent models, such as facilitated, deferred, experienced, advance, and proxy models. Delirium research in PC patients must meet the common standards for such research in any setting. Certain features unique to PC establish a need for extra diligence in meeting these standards and the employment of assessments, consent procedures, and patient-family interactions that are clearly grounded on the tenets of PC. Copyright © 2014 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  16. Influence of carbonation on the acid neutralization capacity of cements and cement-solidified/stabilized electroplating sludge.

    PubMed

    Chen, Quanyuan; Zhang, Lina; Ke, Yujuan; Hills, Colin; Kang, Yanming

    2009-02-01

    Portland cement (PC) and blended cements containing pulverized fuel ash (PFA) or granulated blast-furnace slag (GGBS) were used to solidify/stabilize an electroplating sludge in this work. The acid neutralization capacity (ANC) of the hydrated pastes increased in the order of PC > PC/GGBS > PC/PFA. The GGBS or PFA replacement (80 wt%) reduced the ANC of the hydrated pastes by 30-50%. The ANC of the blended cement-solidified electroplating sludge (cement/sludge 1:2) was 20-30% higher than that of the hydrated blended cement pastes. Upon carbonation, there was little difference in the ANC of the three cement pastes, but the presence of electroplating sludge (cement/sludge 1:2) increased the ANC by 20%. Blended cements were more effective binders for immobilization of Ni, Cr and Cu, compared with PC, whereas Zn was encapsulated more effectively in the latter. Accelerated carbonation improved the immobilization of Cr, Cu and Zn, but not Ni. The geochemical code PHREEQC, with the edited database from EQ3/6 and HATCHES, was used to calculate the saturation index and solubility of likely heavy metal precipitates in cement-based solidification/stabilization systems. The release of heavy metals could be related to the disruption of cement matrices and the remarkable variation of solubility of heavy metal precipitates at different pH values.
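
    The saturation index used in such PHREEQC calculations is simply SI = log10(IAP/Ksp); positive values indicate oversaturation (a tendency to precipitate). The worked Python example below uses hypothetical activities for a generic divalent-metal hydroxide, not values taken from the study or from PHREEQC output.

    ```python
    from math import log10

    def saturation_index(iap: float, ksp: float) -> float:
        """SI = log10(IAP / Ksp): >0 oversaturated, ~0 at equilibrium, <0 undersaturated."""
        return log10(iap / ksp)

    # Hypothetical activities for M(OH)2, with IAP = {M2+} * {OH-}^2 (illustrative only).
    a_metal, a_oh, ksp = 1e-6, 1e-4, 3e-17
    print(round(saturation_index(a_metal * a_oh**2, ksp), 2))   # ~2.5 -> oversaturated
    ```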

  17. Curative cytoreductive surgery followed by hyperthermic intraperitoneal chemotherapy in patients with peritoneal carcinomatosis and synchronous resectable liver metastases arising from colorectal cancer.

    PubMed

    Lorimier, G; Linot, B; Paillocher, N; Dupoiron, D; Verrièle, V; Wernert, R; Hamy, A; Capitain, O

    2017-01-01

    This study describes the outcomes of patients with colorectal peritoneal carcinomatosis (PC) with or without liver metastases (LMs) after curative surgery combined with hyperthermic intraperitoneal chemotherapy, in order to assess prognostic factors. Cytoreductive surgery (CRS) followed by hyperthermic intraperitoneal chemotherapy (HIPEC) increases overall survival (OS) in patients with PC. The optimal treatment both for PC and for LMs within one surgical operation remains controversial. Patients with PC who underwent CRS followed by HIPEC were evaluated from a prospective database. Overall survival and disease free survival (DFS) rates in patients with PC and with or without LMs were compared. Univariate and multivariate analyses were performed to evaluate predictive variables for survival. From 1999 to 2011, 22 patients with PC and synchronous LMs (PCLM group), were compared to 36 patients with PC alone (PC group). No significant difference was found between the two groups. The median OS were 36 months [range, 20-113] for the PCLM group and 25 months [14-82] for the PC group (p > 0.05) with 5-year OS rates of 38% and 40% respectively (p > 0.05). The median DFS were 9 months [9-20] and 11.8 months [6.5-23] respectively (p = 0.04). The grade III-IV morbidity and cytoreduction score (CCS) >0 (p < 0.05) were identified as independent factors for poor OS. Resections of LMs and CCS >0 impair significantly DFS. Synchronous complete CRS of PC and LMs from a colorectal origin plus HIPEC is a feasible therapeutic option. The improvement in OS is similar to that provided for patients with PC alone. Copyright © 2016 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.

  18. Prediction of proprotein convertase cleavage sites.

    PubMed

    Duckert, Peter; Brunak, Søren; Blom, Nikolaj

    2004-01-01

    Many secretory proteins and peptides are synthesized as inactive precursors that in addition to signal peptide cleavage undergo post-translational processing to become biologically active polypeptides. Precursors are usually cleaved at sites composed of single or paired basic amino acid residues by members of the subtilisin/kexin-like proprotein convertase (PC) family. In mammals, seven members have been identified, with furin being the one first discovered and best characterized. Recently, the involvement of furin in diseases ranging from Alzheimer's disease and cancer to anthrax and Ebola fever has created additional focus on proprotein processing. We have developed a method for prediction of cleavage sites for PCs based on artificial neural networks. Two different types of neural networks have been constructed: a furin-specific network based on experimental results derived from the literature, and a general PC-specific network trained on data from the Swiss-Prot protein database. The method predicts cleavage sites in independent sequences with a sensitivity of 95% for the furin neural network and 62% for the general PC network. The ProP method is made publicly available at http://www.cbs.dtu.dk/services/ProP.
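
    The general recipe behind such predictors - encode a fixed residue window around a candidate dibasic site and feed it to a small neural network - is sketched below with scikit-learn and a few made-up toy windows. It illustrates the approach only; the published ProP networks, their window sizes and their Swiss-Prot training data are not reproduced.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def encode_window(window: str) -> np.ndarray:
        """Sparse (one-hot) encoding of a fixed-length residue window."""
        vec = np.zeros(len(window) * len(AMINO_ACIDS))
        for i, aa in enumerate(window):
            if aa in AMINO_ACIDS:
                vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
        return vec

    # Toy windows and labels (illustrative only, not curated furin sites).
    windows = ["SRSRRSLAV", "AGKRRTKRD", "LLQAASLAV", "GGSTASLAV"]
    labels = [1, 1, 0, 0]
    X = np.array([encode_window(w) for w in windows])
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, labels)
    print(clf.predict([encode_window("SRSKRSLAV")]))
    ```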

  19. Effectiveness of Emergency Department Based Palliative Care for Adults with Advanced Disease: A Systematic Review

    PubMed Central

    Nunes, Cristina Moura; Gomes, Barbara

    2016-01-01

    Abstract Background: Emergency departments (EDs) are seeing more patients with palliative care (PC) needs, but evidence on best practice is scarce. Objectives: To examine the effectiveness of ED-based PC interventions on hospital admissions (primary outcome), length of stay (LOS), symptoms, quality of life, use of other health care services, and PC referrals for adults with advanced disease. Methods: We searched five databases until August 2014, checked reference lists/conference abstracts, and contacted experts. Eligible studies were controlled trials, pre-post studies, cohort studies, and case series reporting outcomes of ED-based PC. Results: Five studies with 4374 participants were included: three case series and two cohort studies. Interventions included a screening tool, traditional ED-PC, and integrated ED-PC. Two studies reported on hospital admissions: in one study there was no statistically significant difference in 90-day readmission rates between patients who initiated integrated PC at the ED (11/50 patients, 22%) compared to those who initiated PC after hospital admission (179/1385, 13%); another study showed a high admission rate (90%) in 14 months following ED-PC, but without comparison. One study showed an LOS reduction (mean 4.32 days in ED-initiated PC group versus 8.29 days in postadmission-initiated group; p < 0.01). There was scarce evidence on other outcomes except for conflicting findings on survival: in one study, ED-PC patients were more likely to experience an interval between ED presentation and death >9 hours (OR 2.75, 95% CI 2.21–3.41); another study showed increased mortality risk in the intervention group; and a case series described a higher in-hospital death rate when PC was ED-initiated (62%), compared to ward (16%) or ICU (50%) (unknown p-value). Conclusions: There is yet no evidence that ED-based PC affects patient outcomes except for indication from one study of no association with 90-day hospital readmission but a possible reduction in LOS if integrated PC is introduced early at ED rather than after hospital admission. There is an urgent need for trials to confirm these findings alongside other potential benefits and survival effects. PMID:27115914

  20. A materials accounting system for an IBM PC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearse, R.C.; Thomas, R.J.; Henslee, S.P.

    1986-01-01

    The authors have adapted the Los Alamos MASS accounting system for use on an IBM PC/AT at the Fuels Manufacturing Facility (FMF) at Argonne National Laboratory West (ANL-WEST). Cost of hardware and proprietary software was less than $10,000 per station. The system consists of three stations between which accounting information is transferred using floppy disks accompanying special nuclear material shipments. The programs were implemented in dBASE III and were compiled using the proprietary software CLIPPER. Modifications to the inventory can be posted in just a few minutes, and operator/computer interaction is nearly instantaneous. After the records are built by the user, it takes 4-5 seconds to post the results to the database files. A version of this system was specially adapted and is currently in use at the FMF facility at Argonne National Laboratory. Initial satisfaction is adequate and software and hardware problems are minimal.

  1. Development of yarn breakage detection software system based on machine vision

    NASA Astrophysics Data System (ADS)

    Wang, Wenyuan; Zhou, Ping; Lin, Xiangyu

    2017-10-01

    In spinning mills, yarn breakage often cannot be detected in a timely manner, which increases costs for textile enterprises. This paper presents a software system based on computer vision for real-time detection of yarn breakage. The system uses a tablet PC running Windows 8.1 and a cloud server to perform yarn breakage detection and management. The software running on the tablet PC collects yarn and location information for analysis and processing. The processed information is then sent over Wi-Fi via the HTTP protocol to the cloud server, where it is stored in a Microsoft SQL Server 2008 database for subsequent query and management of yarn-break information. The results are finally shown on a local display in real time to remind the operator to deal with the broken yarn. The experimental results show that the missed-detection rate of the system is not more than 5‰, with no false detections.
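
    The tablet-to-cloud reporting step can be pictured as a simple HTTP POST, as in the hedged sketch below; the endpoint URL and JSON field names are hypothetical, and the deployed system's server-side storage (SQL Server 2008) is not shown.

    ```python
    import requests

    def report_yarn_break(spindle_id: int, position: str, timestamp: str) -> bool:
        """POST one yarn-break event to the cloud server over HTTP (hypothetical API)."""
        payload = {"spindle": spindle_id, "position": position, "time": timestamp}
        resp = requests.post("https://cloud.example.org/api/yarn-breaks",
                             json=payload, timeout=5)
        return resp.status_code == 200

    # Example: report_yarn_break(17, "frame-3/row-2", "2017-06-01T10:23:05")
    ```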

  2. Agent Orange and long-term outcomes after radical prostatectomy.

    PubMed

    Ovadia, Aaron E; Terris, Martha K; Aronson, William J; Kane, Christopher J; Amling, Christopher L; Cooperberg, Matthew R; Freedland, Stephen J; Abern, Michael R

    2015-07-01

    To investigate the association between Agent Orange (AO) exposure and long-term prostate cancer (PC) outcomes. Data from 1,882 men undergoing radical prostatectomy for PC between 1988 and 2011 at Veterans Affairs Health Care Facilities were analyzed from the Shared Equal Access Regional Cancer Hospital database. Men were stratified by AO exposure (binary). Associations between AO exposure and biopsy and pathologic Gleason sum (GS) and pathologic stage were determined by logistic regression models adjusted for preoperative characteristics. Hazard ratios for biochemical recurrence (BCR), secondary treatment, metastases, and PC-specific mortality were determined by Cox models adjusted for preoperative characteristics. There were 333 (17.7%) men with AO exposure. AO-exposed men were younger (median 59 vs. 62 y), had lower preoperative prostate-specific antigen levels (5.8 vs. 6.7 ng/ml), lower clinical category (25% vs. 38% palpable), and higher body mass index (28.2 vs. 27.6 kg/m(2)), all P<0.01. Biopsy GS, pathologic GS, positive surgical margins, lymph node positivity, and extracapsular extension did not differ with AO exposure. At a median follow-up of 85 months, 702 (37.4%) patients had BCR, 603 (32.2%) patients received secondary treatment, 78 (4.1%) had metastases, and 39 (2.1%) died of PC. On multivariable analysis, AO exposure was not associated with BCR, secondary treatment, metastases, or PC mortality. AO exposure was not associated with worse preoperative characteristics such as elevated prostate-specific antigen levels or biopsy GS nor with BCR, secondary treatment, metastases, or PC death. Thus, as data on AO-exposed men mature, possible differences in PC outcomes observed previously are no longer apparent. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Evaluation of an Aggressive Prostate Biopsy Strategy in Men Younger than 50 years of Age.

    PubMed

    Goldberg, Hanan; Klaassen, Zachary; Chandrasekar, Thenappan; Wallis, Christopher J D; Toi, Ants; Sayyid, Rashid; Bhindi, Bimal; Nesbitt, Michael; Evans, Andrew; van der Kwast, Theo; Sweet, Joan; Perlis, Nathan; Hamilton, Robert J; Kulkarni, Girish S; Finelli, Antonio; Zlotta, Alexandre; Fleshner, Neil

    2018-05-11

    Longitudinal cohort studies and guidelines demonstrate that a PSA ≥1 ng/mL in younger patients confers an increased risk of delayed prostate cancer (PC) death. In our institution we have used an aggressive biopsy strategy among younger patients with a PSA of >1 ng/ml. Our objective was to determine the proportion of detected cancer and, specifically, clinically significant cancer, with this strategy. The prostate biopsy (PB) database at Princess Margaret Cancer Centre was queried for patients younger than 50 who underwent a first PB between 2000 and 2016. We included only patients undergoing PB due to PSA >1 ng/mL, suspicious digital rectal examination, positive family history (PFH), or suspicious lesion on trans-rectal ultrasound. All clinical and pathological parameters were analyzed. Patients were stratified according to their specific PSA values. Multivariable logistic regression was performed to ascertain predictors of any PC diagnosis, and of clinically significant PC. Of 199 patients who met the inclusion criteria, 37 (19%) were diagnosed with PC and 8 (22%) had a Gleason score (GS) >7. Of those diagnosed with PC, 25 (68%) had a PSA >1.5 ng/ml and all men with GS >7 had a PSA >1.5 ng/ml. Notably, 19 (51%) patients had PC exceeding Epstein criteria for active surveillance. Factors predicting PC included PFH, rising PSA and lower prostate volumes. Our results justify adopting an aggressive PB strategy for young men <50 years old with a PSA >1.5 ng/ml, while patients with a PSA <1.5 ng/ml are unlikely to have significant cancer. Special attention should be paid to patients with smaller prostates and a PFH. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  4. Do We Reap What We Sow? Exploring the Association between the Strength of European Primary Healthcare Systems and Inequity in Unmet Need

    PubMed Central

    Hanssens, Lise; Vyncke, Veerle; De Maeseneer, Jan; Willems, Sara

    2017-01-01

    Access to healthcare is inequitably distributed across different socioeconomic groups. Several vulnerable groups experience barriers in accessing healthcare, compared to their wealthier counterparts. In response to this, many countries use resources to strengthen their primary care (PC) system, because in many European countries PC is the first entry point to the healthcare system and plays a central role in the coordination of patients through the healthcare system. However, it is unclear whether this strengthening of PC leads to less inequity in access to the whole healthcare system. This study investigates the association between strength indicators of PC and inequity in unmet need by merging data from the European Union Statistics on Income and Living Conditions database (2013) and the Primary Healthcare Activity Monitor for Europe (2010). The analyses reveal a significant association between the Gini coefficient for income inequality and inequity in unmet need. When the Gini coefficient of a country is one SD higher, the social inequity in unmet need in that particular country will be 4.960 higher. Furthermore, the accessibility and the workforce development of a country's PC system are inversely associated with the social inequity of unmet need. More specifically, when the access and workforce-development indicators of a country's PC system are one standard deviation higher, the inequity in unmet healthcare need is respectively 2.200 and 4.951 lower. Therefore, policymakers should focus on reducing income inequality to tackle inequity in access, and strengthen PC (by increasing accessibility and better developing its workforce) as this can influence inequity in unmet need. PMID:28046051

  5. [A comparative study of aggression towards Primary Care and Hospital Health professionals in a Madrid health area (2009-2014)].

    PubMed

    de-San-Segundo, M; Granizo, J J; Camacho, I; Martínez-de-Aramayona, M J; Fernández, M; Sánchez-Úriz, M Á

    2017-03-01

    The aim of this paper is to analyse the incidents and attacks against health personnel that occurred in the area covered by the Prevention Service Group, comparing the results in Primary Care (PC) with those in Hospital Care (HC). The information available in the database of the regional Madrid Register of Aggressions and Conflicts against Health Workers between 2009 and 2014 was analysed. This covered a total of 8,056 workers, of whom 1,605 were from PC. A total of 1,262 incidents were reported, of which 61.2% took place in HC and 38.8% in PC (32.2 notifications/100,000 inhabitants, or 12.88 incidents/100 hospital workers, compared with 168.98 notifications/100,000 inhabitants, or 30.53 incidents/100 PC workers). Nurses in PC have a higher incidence of assaults (47.4%), while in HC it is the physicians (53.1%) (P<.001). In PC the aggressor is usually the patient (56.9%), while in HC it is a relative or companion (45.3%) (P<.001). In HC, aggressions occur most frequently in emergency departments (35.5%), whereas in PC 63.9% occur in the consulting room (P<.001). Although it is difficult to make comparisons with previous studies due to methodological differences, a higher incidence of aggression is observed in PC compared with HC. It is necessary to establish improvements in the Madrid Register of Aggressions and Conflicts, designed to optimise data quality and the use of the data for preventive purposes. Copyright © 2016 Sociedad Española de Médicos de Atención Primaria (SEMERGEN). Published by Elsevier España, S.L.U. All rights reserved.

  6. Progressive calibration and averaging for tandem mass spectrometry statistical confidence estimation: Why settle for a single decoy?

    PubMed Central

    Keich, Uri; Noble, William Stafford

    2017-01-01

    Estimating the false discovery rate (FDR) among a list of tandem mass spectrum identifications is mostly done through target-decoy competition (TDC). Here we offer two new methods that can use an arbitrarily small number of additional randomly drawn decoy databases to improve TDC. Specifically, “Partial Calibration” utilizes a new meta-scoring scheme that allows us to gradually benefit from the increase in the number of identifications calibration yields and “Averaged TDC” (a-TDC) reduces the liberal bias of TDC for small FDR values and its variability throughout. Combining a-TDC with “Progressive Calibration” (PC), which attempts to find the “right” number of decoys required for calibration we see substantial impact in real datasets: when analyzing the Plasmodium falciparum data it typically yields almost the entire 17% increase in discoveries that “full calibration” yields (at FDR level 0.05) using 60 times fewer decoys. Our methods are further validated using a novel realistic simulation scheme and importantly, they apply more generally to the problem of controlling the FDR among discoveries from searching an incomplete database. PMID:29326989
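
    As background for the methods above, the baseline decoy-counting FDR estimate at a score threshold is just (decoys + 1) / targets; a-TDC, roughly speaking, averages such estimates over several independent decoy draws. The minimal sketch below shows only the baseline estimate with made-up scores, not the paper's meta-scoring or averaging procedures.

    ```python
    def decoy_fdr(target_scores, decoy_scores, threshold):
        """Baseline decoy-based FDR estimate among target identifications scoring
        at or above `threshold` (the '+1' is the usual conservative correction)."""
        n_targets = sum(s >= threshold for s in target_scores)
        n_decoys = sum(s >= threshold for s in decoy_scores)
        return (n_decoys + 1) / max(n_targets, 1)

    # Illustrative scores only.
    print(decoy_fdr([9.1, 8.4, 7.7, 6.9, 5.2], [6.5, 4.8, 3.9, 3.1, 2.7], threshold=6.0))
    ```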

  7. WAIS Searching of the Current Contents Database

    NASA Astrophysics Data System (ADS)

    Banholzer, P.; Grabenstein, M. E.

    The Homer E. Newell Memorial Library of NASA's Goddard Space Flight Center is developing capabilities to permit Goddard personnel to access electronic resources of the Library via the Internet. The Library's support services contractor, Maxima Corporation, and their subcontractor, SANAD Support Technologies have recently developed a World Wide Web Home Page (http://www-library.gsfc.nasa.gov) to provide the primary means of access. The first searchable database to be made available through the HomePage to Goddard employees is Current Contents, from the Institute for Scientific Information (ISI). The initial implementation includes coverage of articles from the last few months of 1992 to present. These records are augmented with abstracts and references, and often are more robust than equivalent records in bibliographic databases that currently serve the astronomical community. Maxima/SANAD selected Wais Incorporated's WAIS product with which to build the interface to Current Contents. This system allows access from Macintosh, IBM PC, and Unix hosts, which is an important feature for Goddard's multiplatform environment. The forms interface is structured to allow both fielded (author, article title, journal name, id number, keyword, subject term, and citation) and unfielded WAIS searches. The system allows a user to: Retrieve individual journal article records. Retrieve Table of Contents of specific issues of journals. Connect to articles with similar subject terms or keywords. Connect to other issues of the same journal in the same year. Browse journal issues from an alphabetical list of indexed journal names.

  8. A Unified and Coherent Land Surface Emissivity Earth System Data Record

    NASA Astrophysics Data System (ADS)

    Knuteson, R. O.; Borbas, E. E.; Hulley, G. C.; Hook, S. J.; Anderson, M. C.; Pinker, R. T.; Hain, C.; Guillevic, P. C.

    2014-12-01

    Land Surface Temperature and Emissivity (LST&E) data are essential for a wide variety of studies from calculating the evapo-transpiration of plant canopies to retrieving atmospheric water vapor. LST&E products are generated from data acquired by sensors in low Earth orbit (LEO) and by sensors in geostationary Earth orbit (GEO). Although these products represent the same measure, they are produced at different spatial, spectral and temporal resolutions using different algorithms. The different approaches used to retrieve the temperatures and emissivities result in discrepancies and inconsistencies between the different products. NASA has identified a major need to develop long-term, consistent, and calibrated data and products that are valid across multiple missions and satellite sensors. This poster will introduce the land surface emissivity product of the NASA MEASUREs project called A Unified and Coherent Land Surface Temperature and Emissivity (LST&E) Earth System Data Record (ESDR). To develop a unified high spectral resolution emissivity database, the MODIS baseline-fit emissivity database (MODBF) produced at the University of Wisconsin-Madison and the ASTER Global Emissivity Database (ASTER GED) produced at JPL will be merged. The unified Emissivity ESDR will be produced globally at 5km in mean monthly time-steps and for 12 bands from 3.6-14.3 micron and extended to 417 bands using a PC regression approach. The poster will introduce this data product. LST&E is a critical ESDR for a wide variety of studies in particular ecosystem and climate modeling.
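
    The "PC regression" extension from 12 bands to 417 bands can be sketched as ordinary principal-component regression against a high-spectral-resolution emissivity library, as below. The library here is random placeholder data and the band selection is arbitrary; the project's actual laboratory spectra, band definitions and regression coefficients are not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    lib_hires = rng.uniform(0.90, 1.00, size=(500, 417))   # placeholder library spectra
    band_index = np.linspace(0, 416, 12).astype(int)       # arbitrary 12-band subsampling
    lib_12band = lib_hires[:, band_index]

    pca = PCA(n_components=6).fit(lib_hires)                # compress the hi-res library
    reg = LinearRegression().fit(lib_12band, pca.transform(lib_hires))

    def extend_to_hires(e12):
        """Estimate a 417-channel emissivity spectrum from 12 band values."""
        return pca.inverse_transform(reg.predict(np.atleast_2d(e12)))[0]

    print(extend_to_hires(lib_12band[0]).shape)   # (417,)
    ```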

  9. Spares Management : Optimizing Hardware Usage for the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Gulbrandsen, K. A.

    1999-01-01

    The complexity of the Space Shuttle Main Engine (SSME), combined with mounting requirements to reduce operations costs have increased demands for accurate tracking, maintenance, and projections of SSME assets. The SSME Logistics Team is developing an integrated asset management process. This PC-based tool provides a user-friendly asset database for daily decision making, plus a variable-input hardware usage simulation with complex logic yielding output that addresses essential asset management issues. Cycle times on critical tasks are significantly reduced. Associated costs have decreased as asset data quality and decision-making capability has increased.

  10. An object-oriented, knowledge-based system for cardiovascular rehabilitation--phase II.

    PubMed Central

    Ryder, R. M.; Inamdar, B.

    1995-01-01

    The Heart Monitor is an object-oriented, knowledge-based system designed to support the clinical activities of cardiovascular (CV) rehabilitation. The original concept was developed as part of graduate research completed in 1992. This paper describes the second generation system which is being implemented in collaboration with a local heart rehabilitation program. The PC UNIX-based system supports an extensive patient database organized by clinical areas. In addition, a knowledge base is employed to monitor patient status. Rule-based automated reasoning is employed to assess risk factors contraindicative to exercise therapy and to monitor administrative and statutory requirements. PMID:8563285
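
    The rule-based monitoring idea can be pictured as a list of small rule functions applied to a patient record, as in the Python sketch below. The thresholds and field names are purely illustrative assumptions, not the clinical criteria encoded in the Heart Monitor knowledge base.

    ```python
    # Each rule returns a warning string or None; thresholds are illustrative only.
    RULES = [
        lambda p: "resting heart rate too high" if p["resting_hr"] > 120 else None,
        lambda p: "systolic BP too high" if p["systolic_bp"] > 200 else None,
        lambda p: "recent chest pain reported" if p["chest_pain"] else None,
    ]

    def contraindications(patient: dict) -> list:
        """Apply every rule to the patient record and collect the warnings."""
        return [msg for rule in RULES if (msg := rule(patient)) is not None]

    print(contraindications({"resting_hr": 130, "systolic_bp": 150, "chest_pain": False}))
    ```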

  11. A SPDS Node to Support the Systematic Interpretation of Cosmic Ray Data

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The purpose of this project was to establish and maintain a Space Physics Data System (SPDS) node that supports the analysis and interpretation of current and future galactic cosmic ray (GCR) measurements by (1) providing on-line databases relevant to GCR propagation studies; (2) providing other on-line services, such as anonymous FTP access, mail list service and pointers to e-mail address books, to support the cosmic ray community; (3) providing a mechanism for those in the community who might wish to submit similar contributions for public access; (4) maintaining the node to assure that the databases remain current; and (5) investigating other possibilities, such as CD-ROM, for public dissemination of the data products. Shortly after the original grant to support these activities was established at Louisiana State University, a detailed study of alternate choices for the node hardware was initiated. The chosen hardware was an Apple Workgroup Server 9150/120 consisting of a 120 MHz PowerPC 601 processor, 32 MB of memory, two 1 GB disks and one 2 GB disk. This hardware was ordered and installed and has been operating reliably ever since. A preliminary version of the database server was available during the first year effort and was used as part of the very successful SPDS demonstration at the International Cosmic Ray Conference in Rome, Italy. For this server version we were able to establish the HTML and anonymous FTP server software, develop a Web page structure which can be easily modified to include new items, provide an on-line database of charge changing total cross sections, include the cross section prediction software of Silberberg & Tsao as well as Webber, Kish and Schrier for download access, and provide an on-line bibliography of the cross section measurement references by the Transport Collaboration. The preliminary version of this SPDS Cosmic Ray node was examined by members of the C&H SPDS committee and the returned comments were used to refine the implementation.

  12. The integrated proactive surveillance system for prostate cancer.

    PubMed

    Wang, Haibin; Yatawara, Mahendra; Huang, Shao-Chi; Dudley, Kevin; Szekely, Christine; Holden, Stuart; Piantadosi, Steven

    2012-01-01

    In this paper, we present the design and implementation of the integrated proactive surveillance system for prostate cancer (PASS-PC). The integrated PASS-PC is a multi-institutional web-based system aimed at collecting a variety of data on prostate cancer patients in a standardized and efficient way. The integrated PASS-PC was commissioned by the Prostate Cancer Foundation (PCF) and built through the joint efforts of a group of experts in medical oncology, genetics, pathology, nutrition, and cancer research informatics. Their main goal is facilitating the efficient and uniform collection of critical demographic, lifestyle, nutritional, dietary and clinical information to be used in developing new strategies in diagnosing, preventing and treating prostate cancer. The integrated PASS-PC is designed based on common industry standards - a three-tiered architecture and a Service-Oriented Architecture (SOA). It utilizes open source software and programming languages such as HTML, PHP, CSS, JQuery, Drupal and MySQL. We also use a commercial database management system - Oracle 11g. The integrated PASS-PC project uses a "confederation model" that encourages participation of any interested center, irrespective of its size or location. The integrated PASS-PC utilizes a standardized approach to data collection and reporting, and uses extensive validation procedures to prevent entering erroneous data. The integrated PASS-PC controlled vocabulary is harmonized with the National Cancer Institute (NCI) Thesaurus. Currently, two cancer centers in the USA are participating in the integrated PASS-PC project. The final system has three main components: 1. National Prostate Surveillance Network (NPSN) website; 2. NPSN myConnect portal; 3. Proactive Surveillance System for Prostate Cancer (PASS-PC). PASS-PC is a cancer Biomedical Informatics Grid (caBIG) compatible product. The integrated PASS-PC provides a foundation for collaborative prostate cancer research. It has been built to meet the short-term goal of gathering prostate cancer related data, but also with the prerequisites in place for future evolution into a cancer research informatics platform. In the future this will be vital for successful prostate cancer studies, care and treatment.

  13. The Integrated Proactive Surveillance System for Prostate Cancer

    PubMed Central

    Wang, Haibin; Yatawara, Mahendra; Huang, Shao-Chi; Dudley, Kevin; Szekely, Christine; Holden, Stuart; Piantadosi, Steven

    2012-01-01

    In this paper, we present the design and implementation of the integrated proactive surveillance system for prostate cancer (PASS-PC). The integrated PASS-PC is a multi-institutional web-based system aimed at collecting a variety of data on prostate cancer patients in a standardized and efficient way. The integrated PASS-PC was commissioned by the Prostate Cancer Foundation (PCF) and built through the joint efforts of a group of experts in medical oncology, genetics, pathology, nutrition, and cancer research informatics. Their main goal is facilitating the efficient and uniform collection of critical demographic, lifestyle, nutritional, dietary and clinical information to be used in developing new strategies in diagnosing, preventing and treating prostate cancer. The integrated PASS-PC is designed based on common industry standards – a three-tiered architecture and a Service-Oriented Architecture (SOA). It utilizes open source software and programming languages such as HTML, PHP, CSS, JQuery, Drupal and MySQL. We also use a commercial database management system – Oracle 11g. The integrated PASS-PC project uses a “confederation model” that encourages participation of any interested center, irrespective of its size or location. The integrated PASS-PC utilizes a standardized approach to data collection and reporting, and uses extensive validation procedures to prevent entering erroneous data. The integrated PASS-PC controlled vocabulary is harmonized with the National Cancer Institute (NCI) Thesaurus. Currently, two cancer centers in the USA are participating in the integrated PASS-PC project. The final system has three main components: 1. National Prostate Surveillance Network (NPSN) website; 2. NPSN myConnect portal; 3. Proactive Surveillance System for Prostate Cancer (PASS-PC). PASS-PC is a cancer Biomedical Informatics Grid (caBIG) compatible product. The integrated PASS-PC provides a foundation for collaborative prostate cancer research. It has been built to meet the short-term goal of gathering prostate cancer related data, but also with the prerequisites in place for future evolution into a cancer research informatics platform. In the future this will be vital for successful prostate cancer studies, care and treatment. PMID:22505956

  14. Lower NIH stroke scale scores are required to accurately predict a good prognosis in posterior circulation stroke.

    PubMed

    Inoa, Violiza; Aron, Abraham W; Staff, Ilene; Fortunato, Gilbert; Sansing, Lauren H

    2014-01-01

    The NIH stroke scale (NIHSS) is an indispensable tool that aids in the determination of acute stroke prognosis and decision making. Patients with posterior circulation (PC) strokes often present with lower NIHSS scores, which may result in the withholding of thrombolytic treatment from these patients. However, whether these lower initial NIHSS scores predict better long-term prognoses is uncertain. We aimed to assess the utility of the NIHSS at presentation for predicting the functional outcome at 3 months in anterior circulation (AC) versus PC strokes. This was a retrospective analysis of a large prospectively collected database of adults with acute ischemic stroke. Univariate and multivariate analyses were conducted to identify factors associated with outcome. Additional analyses were performed to determine the receiver operating characteristic (ROC) curves for NIHSS scores and outcomes in AC and PC infarctions. Both the optimal cutoffs for maximal diagnostic accuracy and the cutoffs to obtain >80% sensitivity for poor outcomes were determined in AC and PC strokes. The analysis included 1,197 patients with AC stroke and 372 with PC stroke. The median initial NIHSS score for patients with AC strokes was 7 and for PC strokes it was 2. The majority (71%) of PC stroke patients had baseline NIHSS scores ≤4, and 15% of these 'minor' stroke patients had a poor outcome at 3 months. ROC analysis identified that the optimal NIHSS cutoff for outcome prediction after infarction in the AC was 8 and for infarction in the PC it was 4. To achieve >80% sensitivity for detecting patients with a subsequent poor outcome, the NIHSS cutoff for infarctions in the AC was 4 and for infarctions in the PC it was 2. The NIHSS cutoff that most accurately predicts outcomes is 4 points higher in AC compared to PC infarctions. There is potential for poor outcomes in patients with PC strokes and low NIHSS scores, suggesting that thrombolytic treatment should not be withheld from these patients based solely on the NIHSS. © 2014 S. Karger AG, Basel.
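
    The ">80% sensitivity" cutoffs quoted above amount to picking the largest score threshold whose rule "NIHSS ≥ cutoff predicts a poor outcome" still reaches the target sensitivity. The sketch below does this on synthetic scores and outcomes; it is not the study's cohort or its exact ROC procedure.

    ```python
    import numpy as np

    def cutoff_for_sensitivity(nihss, poor_outcome, target=0.80):
        """Largest NIHSS cutoff whose rule 'score >= cutoff' exceeds the target
        sensitivity for poor outcome (returns (cutoff, sensitivity))."""
        nihss = np.asarray(nihss)
        poor = np.asarray(poor_outcome, dtype=bool)
        for c in sorted(set(nihss.tolist()), reverse=True):
            sens = (nihss[poor] >= c).mean()
            if sens > target:
                return c, round(float(sens), 3)
        return None, None

    rng = np.random.default_rng(0)
    scores = rng.integers(0, 20, size=200)            # synthetic NIHSS scores
    outcome = rng.random(200) < scores / 25           # higher score -> more poor outcomes
    print(cutoff_for_sensitivity(scores, outcome))
    ```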

  15. The Parallel Implementation of Algorithms for Finding the Reflection Symmetry of the Binary Images

    NASA Astrophysics Data System (ADS)

    Fedotova, S.; Seredin, O.; Kushnir, O.

    2017-05-01

    In this paper, we investigate an exact method of searching for the reflection symmetry axis of a binary image, based on a brute-force search among all potential symmetry axes. As a measure of symmetry, we use the set-theoretic Jaccard similarity applied to the two subsets of image pixels produced by dividing the image along a candidate axis. The brute-force search is guaranteed to find the axis of approximate symmetry, which can be considered ground truth, but it requires quite a lot of time to process each image. As a first step of our contribution we develop a parallel version of the brute-force algorithm. It allows us to process large image databases and obtain the desired axis of approximate symmetry for each shape in a database. Experimental studies on the "Butterflies" and "Flavia" datasets have shown that the proposed algorithm takes several minutes per image to find a symmetry axis. However, real-world applications need a computational efficiency that allows the symmetry-axis search to run in real or quasi-real time. So, for fast shape-symmetry calculation on a common multicore PC we elaborated another parallel program, based on the procedure suggested before in (Fedotova, 2016). That method takes as an initial axis the axis obtained by superfast comparison of two skeleton primitive sub-chains. This process takes about 0.5 s on a common PC, which is considerably faster than any of the optimized brute-force methods, including those implemented on a supercomputer. In our experiments, for 70 percent of cases the found axis coincides exactly with the ground-truth one, and for the rest of the cases it is very close to the ground truth.
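
    To make the Jaccard symmetry measure concrete, the sketch below scores vertical reflection axes of a small binary image and picks the best column by brute force. It is a simplified Python illustration restricted to vertical axes; the paper's method handles arbitrary axis orientations, and the parallel search is omitted.

    ```python
    import numpy as np

    def jaccard_vertical_axis(img: np.ndarray, col: int) -> float:
        """Jaccard similarity between the foreground and its mirror image about the
        vertical axis at column `col`; pixels reflected outside the image count
        only toward the union."""
        h, w = img.shape
        half = min(col, w - col)
        left = img[:, col - half:col]
        right = img[:, col:col + half][:, ::-1]
        inter = np.logical_and(left, right).sum()
        union = (np.logical_or(left, right).sum()
                 + img[:, :col - half].sum() + img[:, col + half:].sum())
        return inter / union if union else 0.0

    img = np.zeros((8, 8), dtype=bool)
    img[2:6, 2:6] = True                                   # a symmetric square
    best = max(range(1, 8), key=lambda c: jaccard_vertical_axis(img, c))
    print(best, jaccard_vertical_axis(img, best))          # axis at column 4, score 1.0
    ```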

  16. Distribution and pathological features of pancreatic, ampullary, biliary and duodenal cancers resected with pancreaticoduodenectomy.

    PubMed

    Chandrasegaram, Manju D; Chiam, Su C; Chen, John W; Khalid, Aisha; Mittinty, Murthy L; Neo, Eu L; Tan, Chuan P; Dolan, Paul M; Brooke-Smith, Mark E; Kanhere, Harsh; Worthley, Chris S

    2015-02-28

    Pancreatic cancer (PC) has the worst survival of all periampullary cancers. This may relate to histopathological differences between pancreatic cancers and other periampullary cancers. Our aim was to examine the distribution and histopathologic features of pancreatic, ampullary, biliary and duodenal cancers resected with a pancreaticoduodenectomy (PD) and to examine local trends of periampullary cancers resected with a PD. A retrospective review of PDs recorded in a public metropolitan database between January 2000 and December 2012 was performed. The institutional ethics committee approved this study. There were 142 PDs during the study period, of which 70 cases were pre-2010 and 72 post-2010, corresponding to a recent increase in the number of cases. Of the 142 cases, 116 were for periampullary cancers. There were proportionately more PDs for PC (26/60, 43% pre-2010 vs 39/56, 70% post-2010, P = 0.005). There were 65/116 (56%) pancreatic, 29/116 (25%) ampullary, 17/116 (15%) biliary and 5/116 (4%) duodenal cancers. Nodal involvement occurred more frequently in PC (78%) compared to ampullary (59%), biliary (47%) and duodenal cancers (20%), P = 0.002. Perineural invasion was also more frequent in PC (74%) compared to ampullary (34%), biliary (59%) and duodenal cancers (20%), P = 0.002. Microvascular invasion was seen in 57% pancreatic, 38% ampullary, 41% biliary and 20% duodenal cancers, P = 0.222. Overall, clear margins (R0) were achieved in fewer PC 41/65 (63%) compared to ampullary 27/29 (93%; P = 0.003) and biliary cancers 16/17 (94%; P = 0.014). This study highlights that almost half of PDs were performed for cancers other than PC, mainly ampullary and biliary cancers. The volume of PDs has increased in recent years with an increased proportion being for PC. PC had higher rates of nodal and perineural invasion compared to ampullary, biliary and duodenal cancers.

  17. Iodine-based contrast media, multiple myeloma and monoclonal gammopathies: literature review and ESUR Contrast Media Safety Committee guidelines.

    PubMed

    Stacul, Fulvio; Bertolotto, Michele; Thomsen, Henrik S; Pozzato, Gabriele; Ugolini, Donatella; Bellin, Marie-France; Bongartz, Georg; Clement, Olivier; Heinz-Peer, Gertraud; van der Molen, Aart; Reimer, Peter; Webb, Judith A W

    2018-02-01

    Many radiologists and clinicians still consider multiple myeloma (MM) and monoclonal gammopathies (MG) a contraindication for using iodine-based contrast media. The ESUR Contrast Media Safety Committee performed a systematic review of the incidence of post-contrast acute kidney injury (PC-AKI) in these patients. A systematic search in Medline and Scopus databases was performed for renal function deterioration studies in patients with MM or MG following administration of iodine-based contrast media. Data collection and analysis were performed according to the PRISMA statement 2009. Eligibility criteria and methods of analysis were specified in advance. Cohort and case-control studies reporting changes in renal function were included. Thirteen studies were selected that reported 824 iodine-based contrast medium administrations in 642 patients with MM or MG, in which 12 unconfounded cases of PC-AKI were found (1.6 %). The majority of patients had intravenous urography with high osmolality ionic contrast media after preparatory dehydration and purgation. MM and MG alone are not risk factors for PC-AKI. However, the risk of PC-AKI may become significant in dehydrated patients with impaired renal function. Hypercalcaemia may increase the risk of kidney damage, and should be corrected before contrast medium administration. Assessment for Bence-Jones proteinuria is not necessary. • Monoclonal gammopathies including multiple myeloma are a large spectrum of disorders. • In monoclonal gammopathy with normal renal function, PC-AKI risk is not increased. • Renal function is often reduced in myeloma, increasing the risk of PC-AKI. • Correction of hypercalcaemia is necessary in myeloma before iodine-based contrast medium administration. • Bence-Jones proteinuria assessment in myeloma is unnecessary before iodine-based contrast medium administration.

  18. Health reform and shifts in funding for sexually transmitted infection services.

    PubMed

    Drainoni, Mari-Lynn; Sullivan, Meg; Sequeira, Shwetha; Bacic, Janine; Hsu, Katherine

    2014-07-01

    In the Affordable Care Act era, no-cost-to-patient publicly funded sexually transmitted infection (STI) clinics have been challenged as the standard STI care delivery model. This study examined the impact of removing public funding and instituting a flat fee within an STI clinic under state-mandated insurance coverage. Cross-sectional database analysis examined changes in visit volumes, demographics, and payer mix for 4 locations in Massachusetts' largest safety net hospital (STI clinic, primary care [PC], emergency department [ED], obstetrics/gynecology [OB/GYN]) for 3 periods: early health reform implementation, reform fully implemented but public STI clinic funding retained, termination of public funding and institution of a US$75 fee in STI clinic for those not using insurance. Sexually transmitted infection visits decreased 20% in STI clinic (P < 0.001), increased 107% in PC (P < 0.001), slightly decreased in ED, and did not change in OB/GYN. The only large demographic shift observed was in the sex of PC patients: women comprised 51% of PC patients seen for STI care in the first time period, but rose sharply to 70% in the third time period (P < 0.0001). After termination of public funding, 50% of STI clinic patients paid the flat fee, 35% used public insurance, and 15% used private insurance. Mandatory insurance, public funding loss, and institution of a flat STI clinic fee were associated with overall decreases in STI visit volume, with significant STI clinic visit decreases and PC STI visit increases. This may indicate partial shifting of STI services into PC. Half of STI clinic patients chose to pay the flat fee even after reform was fully implemented.

  19. Primary Care–Mental Health Integration Programs in the Veterans Affairs Health System Serve a Different Patient Population Than Specialty Mental Health Clinics

    PubMed Central

    Szymanski, Benjamin R.; Zivin, Kara; McCarthy, John F.; Valenstein, Marcia; Pfeiffer, Paul N.

    2012-01-01

    Objective: To assess whether Primary Care–Mental Health Integration (PC-MHI) programs within the Veterans Affairs (VA) health system provide services to patient subgroups that may be underrepresented in specialty mental health care, including older patients and women, and to explore whether PC-MHI served individuals with less severe mental health disorders compared to specialty mental health clinics. Method: Data were obtained from the VA National Patient Care Database for a random sample of VA patients, and primary care patients with an ICD-9-CM mental health diagnosis (N = 243,806) in 2009 were identified. Demographic and clinical characteristics between patients who received mental health treatment exclusively in a specialty mental health clinic (n = 128,248) or exclusively in a PC-MHI setting (n = 8,485) were then compared. Characteristics of patients who used both types of services were also explored. Results: Compared to patients treated in specialty mental health clinics, PC-MHI service users were more likely to be aged 65 years or older (26.4% vs 17.9%, P < .001) and female (8.6% vs 7.7%, P = .003). PC-MHI patients were more likely than specialty mental health clinic patients to be diagnosed with a depressive disorder other than major depression, an unspecified anxiety disorder, or an adjustment disorder (P < .001) and less likely to be diagnosed with more severe disorders, including bipolar disorder, posttraumatic stress disorder, psychotic disorders, and alcohol or substance dependence (P < .001). Conclusions: Primary Care–Mental Health Integration within the VA health system reaches demographic subgroups that are traditionally less likely to use specialty mental health care. By treating patients with less severe mental health disorders, PC-MHI appears to expand upon, rather than duplicate, specialty care services. PMID:23106026

  20. PHYSICAL PROPERTIES OF THE CURRENT CENSUS OF NORTHERN WHITE DWARFS WITHIN 40 pc OF THE SUN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Limoges, M.-M.; Bergeron, P.; Lépine, S., E-mail: limoges@astro.umontreal.ca, E-mail: bergeron@astro.umontreal.ca, E-mail: slepine@chara.gsu.edu

    We present a detailed description of the physical properties of our current census of white dwarfs within 40 pc of the Sun, based on an exhaustive spectroscopic survey of northern hemisphere candidates from the SUPERBLINK proper motion database. Our method for selecting white dwarf candidates is based on a combination of theoretical color–magnitude relations and reduced proper motion diagrams. We reported in an earlier publication the discovery of nearly 200 new white dwarfs, and we present here the discovery of an additional 133 new white dwarfs, among which we identify 96 DA, 3 DB, 24 DC, 3 DQ, and 7 DZ stars. We further identify 178 white dwarfs that lie within 40 pc of the Sun, representing a 40% increase of the current census, which now includes 492 objects. We estimate the completeness of our survey at between 66% and 78%, allowing for uncertainties in the distance estimates. We also perform a homogeneous model atmosphere analysis of this 40 pc sample and find a large fraction of massive white dwarfs, indicating that we are successfully recovering the more massive, and less luminous objects often missed in other surveys. We also show that the 40 pc sample is dominated by cool and old white dwarfs, which populate the faint end of the luminosity function, although trigonometric parallaxes will be needed to shape this part of the luminosity function more accurately. Finally, we identify 4 probable members of the 20 pc sample, 4 suspected double degenerate binaries, and we also report the discovery of two new ZZ Ceti pulsators.

  1. Increased prostate cancer specific mortality following radical prostatectomy in men presenting with voiding symptoms-A whole of population study.

    PubMed

    Ta, Anthony D; Papa, Nathan P; Lawrentschuk, Nathan; Millar, Jeremy L; Syme, Rodney; Giles, Graham G; Bolton, Damien M

    2015-09-01

    Whole of population studies reporting long-term outcomes following radical prostatectomy (RP) are scarce. We aimed to evaluate the long-term outcomes in men with prostate cancer (PC) treated with RP in a whole of population cohort. A secondary objective was to evaluate the influence of mode of presentation on PC specific mortality (PCSM). A prospective database of all cases of RP performed in Victoria, Australia between 1995 and 2000 was established within the Victorian Cancer Registry. Specimen histopathology reports and prostate-specific antigen (PSA) values were obtained by record linkage to pathology laboratories. Mode of presentation was recorded as either PSA screened (PSA testing offered in absence of voiding symptoms) or symptomatic (diagnosis of PC following presentation with voiding symptoms). Multivariate Cox and competing risk regression models were fitted to analyze all-cause mortality, biochemical recurrence, and PCSM. Between 1995 and 2000, 2,154 men underwent RP in Victoria. During median follow up of 10.2 years (range 0.26-13.5 years), 74 men died from PC. In addition to Gleason score and pathological stage, symptomatic presentation was associated with PCSM. After adjusting for stage and PSA, no difference in PCSM was found between men with Gleason score ≤ 6 and Gleason score 3 + 4 = 7. Men with Gleason score 4 + 3 had significantly greater cumulative incidence of PCSM compared with men with Gleason score 3 + 4. Primary Gleason pattern in Gleason 7 PC is an important prognosticator of survival. Our findings suggest that concomitant voiding symptoms should be considered in the work-up and treatment of PC.

  2. Assessment of the health effects of chemicals in humans: II. Construction of an adverse effects database for QSAR modeling.

    PubMed

    Matthews, Edwin J; Kruhlak, Naomi L; Weaver, James L; Benz, R Daniel; Contrera, Joseph F

    2004-12-01

    The FDA's Spontaneous Reporting System (SRS) database contains over 1.5 million adverse drug reaction (ADR) reports for 8620 drugs/biologics that are listed for 1191 Coding Symbols for Thesaurus of Adverse Reaction (COSTAR) terms of adverse effects. We have linked the trade names of the drugs to 1861 generic names and retrieved molecular structures for each chemical to obtain a set of 1515 organic chemicals that are suitable for modeling with commercially available QSAR software packages. ADR report data for 631 of these compounds were extracted and pooled for the first five years that each drug was marketed. Patient exposure was estimated during this period using pharmaceutical shipping units obtained from IMS Health. Significant drug effects were identified using a Reporting Index (RI), where RI = (# ADR reports / # shipping units) × 1,000,000. MCASE/MC4PC software was used to identify the optimal conditions for defining a significant adverse effect finding. Results suggest that a significant effect in our database is characterized by ≥4 ADR reports and ≥20,000 shipping units during five years of marketing, and an RI ≥4.0. Furthermore, for a test chemical to be evaluated as active it must contain a statistically significant molecular structural alert, called a decision alert, in two or more toxicologically related endpoints. We also report the use of a composite module, which pools observations from two or more toxicologically related COSTAR term endpoints to provide signal enhancement for detecting adverse effects.
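
    For clarity, the screening rule summarized above can be written out directly; the drug counts below are invented, while the thresholds (≥4 reports, ≥20,000 shipping units, RI ≥ 4.0) are the ones stated in the abstract.

    ```python
    # Minimal sketch of the Reporting Index (RI) screen described above.
    ADR_MIN, UNITS_MIN, RI_MIN = 4, 20_000, 4.0  # significance criteria from the abstract

    def reporting_index(adr_reports: int, shipping_units: int) -> float:
        # RI = (# ADR reports / # shipping units) * 1,000,000
        return adr_reports / shipping_units * 1_000_000

    def significant_effect(adr_reports: int, shipping_units: int) -> bool:
        return (adr_reports >= ADR_MIN
                and shipping_units >= UNITS_MIN
                and reporting_index(adr_reports, shipping_units) >= RI_MIN)

    # Hypothetical drug: 12 reports against 1.5 million shipping units in five years.
    print(reporting_index(12, 1_500_000))      # 8.0
    print(significant_effect(12, 1_500_000))   # True
    ```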

  3. Signal enhancement, not active suppression, follows the contingent capture of visual attention.

    PubMed

    Livingstone, Ashley C; Christie, Gregory J; Wright, Richard D; McDonald, John J

    2017-02-01

    Irrelevant visual cues capture attention when they possess a task-relevant feature. Electrophysiologically, this contingent capture of attention is evidenced by the N2pc component of the visual event-related potential (ERP) and an enlarged ERP positivity over the occipital hemisphere contralateral to the cued location. The N2pc reflects an early stage of attentional selection, but presently it is unclear what the contralateral ERP positivity reflects. One hypothesis is that it reflects the perceptual enhancement of the cued search-array item; another hypothesis is that it is time-locked to the preceding cue display and reflects active suppression of the cue itself. Here, we varied the time interval between a cue display and a subsequent target display to evaluate these competing hypotheses. The results demonstrated that the contralateral ERP positivity is tightly time-locked to the appearance of the search display rather than the cue display, thereby supporting the perceptual enhancement hypothesis and disconfirming the cue-suppression hypothesis. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Materials accounting system for an IBM PC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearse, R.C.; Thomas, R.J.; Henslee, S.P.

    1986-01-01

    We have adapted the Los Alamos MASS accounting system for use on an IBM PC/AT at the Fuels Manufacturing Facility (FMF) at Argonne National Laboratory-West (ANL-WEST) in Idaho Falls, Idaho. Cost of hardware and proprietary software was less than $10,000 per station. The system consists of three stations between which accounting information is transferred using floppy disks accompanying special nuclear material shipments. The programs were implemented in dBASEIII and were compiled using the proprietary software CLIPPER. Modifications to the inventory can be posted in just a few minutes, and operator/computer interaction is nearly instantaneous. After the records are built by the user, it takes 4 to 5 seconds to post the results to the database files. A version of this system was specially adapted and is currently in use at the FMF facility at Argonne National Laboratory in Idaho Falls. Initial satisfaction is adequate and software and hardware problems are minimal.

  5. Palliative Care and Hospice Interventions in Decompensated Cirrhosis and Hepatocellular Carcinoma: A Rapid Review of Literature.

    PubMed

    Mudumbi, Sandhya K; Bourgeois, Claire E; Hoppman, Nicholas A; Smith, Catherine H; Verma, Manisha; Bakitas, Marie A; Brown, Cynthia J; Markland, Alayne D

    2018-04-26

    Patients with decompensated cirrhosis (DC) and/or hepatocellular carcinoma (HCC) have a high symptom burden and mortality and may benefit from palliative care (PC) and hospice interventions. Our aim was to search published literature to determine the impact of PC and hospice interventions for patients with DC/HCC. We searched electronic databases for adults with DC/HCC who received PC, using a rapid review methodology. Data were extracted for study design, participant and intervention characteristics, and three main groups of outcomes: healthcare resource utilization (HRU), end-of-life care (EOLC), and patient-reported outcomes. Of 2,466 results, eight studies were included in the final analysis. There were six retrospective cohort studies, one prospective cohort, and one quality improvement study. Five of eight studies had a high risk of bias and seven studied patients with HCC. A majority found a reduction in HRU (total cost of hospitalization, number of emergency department visits, hospital, and critical care admissions). Some studies found an impact on EOLC, including location of death (less likely to die in the hospital) and resuscitation (less likely to have resuscitation). One study evaluated survival and found that hospice had no impact, and another showed improvement in symptom burden. The studies included suggest that PC and hospice interventions in patients with DC/HCC reduce HRU, impact EOLC, and improve symptoms. Given the small number of studies, heterogeneity of interventions and outcomes, and high risk of bias, further high-quality research is needed on PC and hospice interventions with a greater focus on DC.

  6. The effect on cardiopulmonary function after thoracoplasty in pectus carinatum: a systematic literature review.

    PubMed

    Sigl, Stephan; Del Frari, Barbara; Harasser, Carina; Schwabegger, Anton H

    2018-03-01

    Creating an aesthetically appealing result with thoracoplasty while causing only low morbidity is challenging, especially when correcting extensive deformities. The frequency of thoracoplasties in cases of pectus carinatum (PC) has increased due to improved experience and modified surgical techniques, resulting in low morbidity and low complication rates. The indications for surgical treatment are still controversial and, in most cases, remain aesthetic or psychological rather than physiological. However, whether cardiopulmonary function changes after surgical repair remains a matter of controversy. We sought to investigate and shed light on published knowledge regarding this question. We searched MEDLINE and PubMed databases, using various defined search phrases and inclusion criteria, to identify articles on pre- and postoperative cardiopulmonary evaluation and outcomes. Six studies met the inclusion criteria: 5 studies evaluated patients with PC for cardiopulmonary outcomes after chest wall surgery and 1 did so following conservative compression treatment. In these studies, surgical and conservative correction of PC did not reduce absolute lung volumes and spirometric measurements and consequently had no pathogenic effect on cardiopulmonary function. The results of this systematic review suggest that surgical correction of PC has no symptomatic pathogenic effect on cardiopulmonary function. The results, however, revealed both heterogeneity in the examinations used and inconsistent methods within each study. Further prospective trials with a stronger methodological design are necessary to objectively confirm that surgical correction of PC does not impair cardiopulmonary function. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brickstad, B.; Bergman, M.

    A computerized procedure has been developed that predicts the growth of an initial circumferential surface crack through a pipe and further on to failure. The crack growth mechanism can be either fatigue or stress corrosion. Consideration is given to complex crack shapes, and for through-wall cracks, crack opening areas and leak rates are also calculated. The procedure is based on a large number of three-dimensional finite element calculations of cracked pipes. The results from these calculations are stored in a database from which the PC program, denoted LBBPIPE, reads all necessary information. In this paper, a sensitivity analysis is presented for cracked pipes subjected to both stress corrosion and vibration fatigue.

  8. Identification and characterization of low-mass stars and brown dwarfs using Virtual Observatory tools.

    NASA Astrophysics Data System (ADS)

    Aberasturi, M.; Solano, E.; Martín, E.

    2015-05-01

    Low-mass stars and brown dwarfs (with spectral types M, L, T and Y) are the most common objects in the Milky Way. A complete census of these objects is necessary to understand the theories about their complex structure and formation processes. In order to increase the number of known objects in the Solar neighborhood (d<30 pc), we have made use of the Virtual Observatory, which allows efficient handling of the huge amount of information available in astronomical databases. We also used the WFC3 installed on the Hubble Space Telescope to look for T5+ dwarf binaries.

  9. The use of inexpensive computer-based scanning survey technology to perform medical practice satisfaction surveys.

    PubMed

    Shumaker, L; Fetterolf, D E; Suhrie, J

    1998-01-01

    The recent availability of inexpensive document scanners and optical character recognition technology has created the ability to process surveys in large numbers with a minimum of operator time. Programs that allow computer entry of scanned questionnaire results directly into PC-based relational databases have further made it possible to quickly collect and analyze significant amounts of information. We have created an internal capability to easily generate survey data and conduct surveillance across a number of medical practice sites within a managed care/practice management organization. Patient satisfaction surveys, referring physician surveys and a variety of other evidence-gathering tools have been deployed.

  10. Earth Global Reference Atmospheric Model (GRAM99): Short Course

    NASA Technical Reports Server (NTRS)

    Leslie, Fred W.; Justus, C. G.

    2007-01-01

    Earth-GRAM is a FORTRAN software package that can run on a variety of platforms including PCs. For any time and location in the Earth's atmosphere, Earth-GRAM provides values of atmospheric quantities such as temperature, pressure, density, winds, and constituents. Dispersions (perturbations) of these parameters are also provided and have realistic correlations, means, and variances, which is useful for Monte Carlo analysis. Earth-GRAM is driven by observations including a tropospheric database available from the National Climatic Data Center. Although Earth-GRAM can be run in a "stand-alone" mode, many users incorporate it into their trajectory codes. The source code is distributed free of charge to eligible recipients.
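
    As a sketch of how such correlated dispersions might feed a Monte Carlo trajectory analysis, the snippet below draws perturbed atmosphere states from a multivariate normal model. The mean vector, standard deviations, and correlation matrix are invented placeholders, not values from Earth-GRAM.

    ```python
    import numpy as np

    mean = np.array([250.0, 5.0e-4, 30.0])          # temperature [K], density [kg/m^3], wind [m/s]
    corr = np.array([[1.0, -0.7, 0.1],
                     [-0.7, 1.0, 0.0],
                     [0.1,  0.0, 1.0]])             # assumed correlation structure
    stdev = np.array([5.0, 2.0e-5, 4.0])            # assumed dispersions
    cov = corr * np.outer(stdev, stdev)

    # Draw 1000 correlated perturbation states for a Monte Carlo run.
    rng = np.random.default_rng(42)
    samples = rng.multivariate_normal(mean, cov, size=1000)
    print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
    ```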

  11. Interpretation of Blazar Flux Variations as Music

    NASA Astrophysics Data System (ADS)

    Webb, J. R.

    2003-12-01

    Blazars are believed to be distant galaxies in the process of formation. They emit electromagnetic radiation (light) over the entire electromagnetic spectrum from radio waves to gamma-rays. The emission varies with time in most frequency ranges and the causes of the variation are yet to be adequately explained. Astronomers have been monitoring these objects with optical telescopes for over 50 years, and a large database of brightness measurements has been collected over that time. This paper presents some of these light curves, and adopts a computational method to translate the brightness fluctuations into musical tones. These tones are then converted to sound using a MIDI synthesizer on a PC.
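
    The translation step can be illustrated with a toy mapping from magnitude to MIDI note number. The light-curve values below are invented, and this linear brightness-to-pitch rule is just one plausible choice, not necessarily the scheme used in the paper.

    ```python
    import numpy as np

    magnitudes = np.array([15.8, 15.6, 15.9, 15.2, 14.9, 15.4, 15.7])  # hypothetical optical data
    low_note, high_note = 48, 84  # MIDI range C3..C6

    # Normalize to [0, 1], invert so that brighter (smaller magnitude) maps to a
    # higher pitch, then scale to MIDI note numbers.
    norm = (magnitudes - magnitudes.min()) / (magnitudes.max() - magnitudes.min())
    notes = np.round(low_note + (1.0 - norm) * (high_note - low_note)).astype(int)
    print(notes)  # one note per observation, ready to send to a MIDI synthesizer
    ```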

  12. Evaluation of several state-of-charge algorithms

    NASA Astrophysics Data System (ADS)

    Espinosa, J. M.; Martin, M. E.; Burke, A. F.

    1988-09-01

    One of the important needs in marketing an electric vehicle is a device which reliably indicates battery state-of-charge for all types of driving. The purpose of the state-of-charge indicator is analogous to that of a gas gauge in a vehicle powered by an internal combustion engine. Many different approaches have been tried to accurately predict battery state-of-charge. This report evaluates several of these approaches. Four different algorithms were implemented in software on an IBM PC and tested using a battery test database for ALCO 2200 lead-acid batteries generated at the INEL. The database was obtained under controlled conditions chosen to be comparable to battery response in real EV use. Each algorithm is described in detail with respect to its theory and operational functionality. Also discussed are the hardware and data requirements particular to implementing the individual algorithms. The algorithms were evaluated for accuracy using constant power, stepped power, and simulated vehicle (SFUDS79) discharge profiles. Attempts were made to explain the cause of differences between the predicted and actual state-of-charge and to provide possible remedies to correct them. Recommendations for future work on battery state-of-charge indicators are presented that utilize the hardware and software now in place in the INEL Battery Laboratory.
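
    One of the simplest state-of-charge approaches, amp-hour (coulomb) counting, is sketched below with made-up numbers; it is shown only as a generic illustration of what such an algorithm computes, not as one of the four algorithms evaluated in the report.

    ```python
    def soc_coulomb_counting(initial_soc, capacity_ah, currents_a, dt_s):
        """Integrate discharge current (A, positive = discharge) over steps of dt_s seconds."""
        soc = initial_soc
        history = []
        for current in currents_a:
            soc -= current * dt_s / 3600.0 / capacity_ah  # fraction of capacity removed this step
            history.append(max(0.0, min(1.0, soc)))
        return history

    # Hypothetical profile: 30 minutes at a constant 50 A draw from a 200 Ah pack, 60 s steps.
    profile = [50.0] * 30
    print(soc_coulomb_counting(1.0, 200.0, profile, 60.0)[-1])  # ~0.875
    ```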

  13. Three-dimensional object recognition based on planar images

    NASA Astrophysics Data System (ADS)

    Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.

    1993-01-01

    This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object from among a group of known regular convex polyhedron objects that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the model in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM-PC AT machine executing at 8 MHz without the 80287 Maths Co-processor. In our overall performance evaluation based on a test of 600 recognition cycles, the system demonstrated an accuracy of above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions which must be clinically controlled as in any industrial robotic vision system.

  14. [Review of digital ground object spectral library].

    PubMed

    Zhou, Xiao-Hu; Zhou, Ding-Wu

    2009-06-01

    Higher spectral resolution is a main direction in the development of remote sensing technology, and establishing digital ground object reflectance spectral database libraries, one of the fundamental research areas in remote sensing applications, is quite important. Remote sensing applications have relied increasingly on ground object spectral characteristics, and quantitative analysis has developed to a new stage. The present article summarizes and systematically introduces the research status and development trends of digital ground object reflectance spectral libraries at home and abroad in recent years. The spectral libraries that have been established are introduced, including libraries for desertification, plants, geology, soils, minerals, clouds, snow, the atmosphere, rocks, water, meteorites, moon rocks, man-made materials, mixtures, volatile compounds, and liquids. In the process of establishing these spectral libraries, some problems have arisen, such as the lack of a uniform national spectral database standard, the lack of uniform standards for ground object features, and limited comparability between different databases; in addition, data-sharing mechanisms have not yet been implemented. The article also puts forward some suggestions for addressing these problems.

  15. Survey of Pc3-5 ULF velocity oscillations in SuperDARN THEMIS-mode data: Occurrence statistics and driving mechanisms

    NASA Astrophysics Data System (ADS)

    Shi, X.; Ruohoniemi, J. M.; Baker, J. B.; Lin, D.; Bland, E. C.; Hartinger, M.; Scales, W.

    2017-12-01

    Ultra-low frequency (ULF: 1 mHz-10 Hz) waves are believed to play an important role in the energization and transport of plasma within the magnetosphere-ionosphere system, as well as the transfer of energy from the solar wind. Most previous statistical studies of ionospheric ULF waves using Super Dual Auroral Radar Network (SuperDARN) data have been constrained to the Pc5 band (~1-7 mHz) and/or one or two radars covering a limited range of latitudes. This is partially due to the lack of a database cataloging high-time-resolution data and of an efficient way to identify wave events. In this study, we conducted a comprehensive survey of ULF wave signatures in the Pc3-5 band using 6 s resolution data from all SuperDARN radars in the northern hemisphere operating in THEMIS-mode from 2010 to 2016. Numerical experiments were conducted to derive dynamic thresholds for automated detection of ULF waves at different frequencies using the Lomb-Scargle periodogram technique. The spatial occurrence distribution, frequency characteristics, seasonal effects, and dependence on solar wind conditions and geomagnetic activity level have been studied. We found that Pc5 events dominate at high latitudes with a most probable frequency of 2 mHz, while Pc3-4 events are relatively more common at mid-latitudes on the nightside with a most probable frequency of 11 mHz. At high latitudes the occurrence rate of poloidal Pc3-5 peaks in the dusk sector and in winter, while at mid-latitudes the poloidal Pc3-4 occurrence rate peaks at pre-midnight. This pre-midnight occurrence peak becomes more prominent with increasing AE index value, in equinox and during southward IMF, which suggests many of these events are most likely Pi2 pulsations associated with magnetotail dynamics during active geomagnetic intervals.
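
    A minimal sketch of the detection idea follows: an unevenly sampled velocity series is scanned with SciPy's Lomb-Scargle periodogram and a peak above a threshold is flagged. The synthetic data, frequency grid, and constant threshold are placeholders; the study derives dynamic, frequency-dependent thresholds from numerical experiments.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0, 1800, 250))                  # ~30 min of irregular sampling [s]
    signal = 80 * np.sin(2 * np.pi * 0.002 * t)             # synthetic 2 mHz (Pc5) oscillation
    velocity = signal + 30 * rng.standard_normal(t.size)    # add noise [m/s]

    freqs_mhz = np.linspace(1.0, 10.0, 200)                 # scan roughly the Pc4-5 range
    ang_freqs = 2 * np.pi * freqs_mhz * 1e-3                # lombscargle expects rad/s
    power = lombscargle(t, velocity - velocity.mean(), ang_freqs)

    peak = power.argmax()
    if power[peak] > 5 * power.mean():                      # placeholder detection rule
        print(f"ULF wave candidate near {freqs_mhz[peak]:.1f} mHz")
    ```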

  16. Neutrophil, lymphocyte and platelet counts, and risk of prostate cancer outcomes in white and black men: results from the SEARCH database.

    PubMed

    Vidal, Adriana C; Howard, Lauren E; de Hoedt, Amanda; Cooperberg, Matthew R; Kane, Christopher J; Aronson, William J; Terris, Martha K; Amling, Christopher L; Taioli, Emanuela; Fowke, Jay H; Freedland, Stephen J

    2018-06-01

    Systemic inflammation, as measured by C-reactive protein, has been linked with poor prostate cancer (PC) outcomes, predominantly in white men. Whether other immune measures like white blood cell counts are correlated with PC progression and whether results vary by race is unknown. We examined whether complete blood count (CBC) parameters were associated with PC outcomes and whether these associations varied by race. Analyses included 1,826 radical prostatectomy patients from six VA hospitals followed through medical record review for biochemical recurrence (BCR). Secondary outcomes included castration-resistant PC (CRPC), metastasis, all-cause mortality (ACM), and PC-specific mortality (PCSM). Cox proportional hazards models were used to assess the associations between pre-operative neutrophils, lymphocytes, platelets, neutrophil-lymphocyte ratio (NLR), and platelet-lymphocyte ratio (PLR) with each outcome. We used a Bonferroni-corrected p-value of 0.05/5 = 0.01 as the threshold for statistical significance. Of 1,826 men, 794 (43%) were black and 1,032 (57%) white. Neutrophil count (p < 0.001), NLR (p < 0.001), and PLR (p < 0.001) were significantly lower, while lymphocyte count (p < 0.001) was significantly higher in black versus white men. After adjusting for clinicopathological features, no CBC measures were significantly associated with BCR. There were no interactions between CBC and race in predicting BCR. Similarly, no CBC values were significantly associated with CRPC, metastases, or PCSM either among all men or when stratified by race. However, higher neutrophil count was associated with higher ACM risk in white men (p = 0.004). Pre-operative CBC measures were not associated with PC outcomes in black or white men undergoing radical prostatectomy, except for the positive association of neutrophil count with risk of ACM in white men. Whether circulating immune cell markers provide insight into the pathophysiology of PC progression or adverse treatment outcomes requires further study.
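
    The derived ratios and the multiple-comparison threshold used above are simple to state in code; the counts in the example are invented, while the 0.05/5 Bonferroni correction mirrors the abstract.

    ```python
    def blood_count_ratios(neutrophils: float, lymphocytes: float, platelets: float):
        """Return the neutrophil-lymphocyte and platelet-lymphocyte ratios."""
        nlr = neutrophils / lymphocytes
        plr = platelets / lymphocytes
        return nlr, plr

    # Five outcomes were tested (BCR, CRPC, metastasis, ACM, PCSM), so the
    # Bonferroni-corrected significance threshold is 0.05 / 5 = 0.01.
    ALPHA, N_OUTCOMES = 0.05, 5
    significance_threshold = ALPHA / N_OUTCOMES

    print(blood_count_ratios(4.2, 1.8, 230.0))   # hypothetical counts in 10^3/uL
    print(significance_threshold)                # 0.01
    ```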

  17. Palliative care education in Latin America: A systematic review of training programs for healthcare professionals.

    PubMed

    Vindrola-Padros, Cecilia; Mertnoff, Rosa; Lasmarias, Cristina; Gómez-Batiste, Xavier

    2018-02-01

    The integration of palliative care (PC) education into medical and nursing curricula has been identified as an international priority. PC education has undergone significant development in Latin America, but gaps in the integration of PC courses into undergraduate and postgraduate curricula remain. The aim of our review was to systematically examine the delivery of PC education in Latin America in order to explore the content and method of delivery of current PC programs, identify gaps in the availability of education opportunities, and document common barriers encountered in the course of their implementation. We carried out a systematic review of peer-reviewed academic articles and grey literature. Peer-reviewed articles were obtained from the following databases: CINAHL Plus, Embase, the Web of Science, and Medline. Grey literature was obtained from the following directories: the International Association for Hospice and Palliative Care's Global Directory of Education in Palliative Care, the Worldwide Hospice Palliative Care Alliance's lists of palliative care resources, the Latin American Association for Palliative Care's training resources, and the Latin American Atlas of Palliative Care. The inclusion criteria were that the work: (1) focused on describing PC courses; (2) was aimed at healthcare professionals; and (3) was implemented in Latin America. The PRISMA checklist was employed to guide the reporting of methods and findings. We found 36 programs that were delivered in 8 countries. Most of the programs were composed of interdisciplinary teams, taught at a postgraduate level, focused on pain and symptom management, and utilized classroom-based methods. The tools for evaluating the courses were rarely reported. The main barriers during implementation included: a lack of recognition of the importance of PC education, a lack of funding, and the unavailability of trained teaching staff. Considerable work needs to be done to improve the delivery of PC education programs in Latin American countries. Practice-based methods and exposure to clinical settings should be integrated into ongoing courses to facilitate learning. A regional platform needs to be created to share experiences of successful training programs and foster the development of PC education throughout Latin America.

  18. Current evidence demonstrates similar effects of kilohertz-frequency and low-frequency current on quadriceps evoked torque and discomfort in healthy individuals: a systematic review with meta-analysis.

    PubMed

    da Silva, Vinicius Zacarias Maldaner; Durigan, João Luiz Quaglioti; Arena, Ross; de Noronha, Marcos; Gurney, Burke; Cipriano, Gerson

    2015-01-01

    Neuromuscular electrical stimulation (NMES) is widely utilized to enhance muscle performance. However, the optimal NMES waveform with respect to treatment effect has not been established. To investigate the effects of kilohertz-frequency alternating current (KFAC) and low-frequency pulsed current (PC) on quadriceps evoked torque and self-reported discomfort. PubMed, The Cochrane Library, EMBASE, MEDLINE, Physiotherapy Evidence Database (PEDro), SinoMed, ISI Web of Knowledge, and CINAHL were searched for randomized controlled trials (RCTs) and quasi-randomized controlled trials (QRCTs). Two reviewers independently selected potential studies according to the inclusion criteria, extracted data, and assessed methodological quality. Studies were eligible if they compared KFAC versus PC interventions. Studies that included outcome measures for percentage of maximal isometric voluntary contraction (%MIVC) torque and self-reported discomfort level were eligible for evaluation. Seven studies involving 127 individuals were included. The methodological quality of eligible trials was moderate, with a mean of 5 on the 10-point PEDro scale. Overall, PC was no better than KFAC in terms of evoked torque and there was no difference in self-reported discomfort level. KFAC and PC have similar effects on quadriceps evoked torque and self-reported discomfort level in healthy individuals. The small number and overall methodological quality of currently available studies included in this meta-analysis indicate that new RCTs are needed to better determine optimal NMES treatment parameters.

  19. 47 CFR 15.713 - TV bands database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database. 15.713 Section 15.713... TV bands database. (a) Purpose. The TV bands database serves the following functions: (1) To... databases. (b) Information in the TV bands database. (1) Facilities already recorded in Commission databases...

  20. Could texture features from preoperative CT image be used for predicting occult peritoneal carcinomatosis in patients with advanced gastric cancer?

    PubMed

    Kim, Hae Young; Kim, Young Hoon; Yun, Gabin; Chang, Won; Lee, Yoon Jin; Kim, Bohyoung

    2018-01-01

    To retrospectively investigate whether texture features obtained from preoperative CT images of advanced gastric cancer (AGC) patients could be used for the prediction of occult peritoneal carcinomatosis (PC) detected during operation. 51 AGC patients with occult PC detected during operation from January 2009 to December 2012 were included as the occult PC group. For the control group, another 51 AGC patients without evidence of distant metastasis including PC, and whose clinical T and N stage could be matched to those of the occult PC group, were selected from the period of January 2011 to July 2012. Each group was divided into a test cohort (n = 41) and a validation cohort (n = 10). Demographic and clinical data of these patients were acquired from the hospital database. Texture features including average, standard deviation, kurtosis, skewness, entropy, correlation, and contrast were obtained from a manually drawn region of interest (ROI) over the omentum on the axial CT image showing the omentum at its largest cross-sectional area. After using Fisher's exact and Wilcoxon signed-rank test for comparison of the clinical and texture features between the two groups of the test cohort, conditional logistic regression analysis was performed to determine significant independent predictors for occult PC. Using the optimal cut-off value from receiver operating characteristic (ROC) analysis for the significant variables, diagnostic sensitivity and specificity were determined in the test cohort. The cut-off value of the significant variables obtained from the test cohort was then applied to the validation cohort. Bonferroni correction was used to adjust P values for multiple comparisons. Between the two groups, there was no significant difference in the clinical features. Regarding the texture features, the occult PC group showed significantly higher average, entropy, standard deviation, and significantly lower correlation (P value < 0.004 for all). Conditional logistic regression analysis demonstrated that entropy was a significant independent predictor for occult PC. When the cut-off value of entropy (> 7.141) was applied to the validation cohort, sensitivity and specificity for the prediction of occult PC were 80% and 90%, respectively. For AGC patients whose PC cannot be detected with routine imaging such as CT, texture analysis may be a useful adjunct for the prediction of occult PC.
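
    The first-order texture features named above can be sketched from a grey-level histogram of the ROI; the ROI values and bin count below are placeholders, and only the entropy cutoff (>7.141) is taken from the abstract.

    ```python
    import numpy as np

    def first_order_features(roi_values: np.ndarray, bins: int = 256):
        """Average, standard deviation, and histogram entropy of the ROI intensities."""
        hist, _ = np.histogram(roi_values, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                                # drop empty bins before taking logs
        entropy = -np.sum(p * np.log2(p))
        return {
            "average": float(roi_values.mean()),
            "std": float(roi_values.std()),
            "entropy": float(entropy),
        }

    rng = np.random.default_rng(7)
    roi = rng.normal(40, 25, size=5000)             # hypothetical HU values inside the omental ROI
    features = first_order_features(roi)
    predicted_occult_pc = features["entropy"] > 7.141   # cutoff reported in the study
    print(features, predicted_occult_pc)
    ```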

  1. Prostate-Specific Antigen (PSA)–Based Population Screening for Prostate Cancer: An Evidence-Based Analysis

    PubMed Central

    Pron, G

    2015-01-01

    Background: Prostate cancer (PC) is the most commonly diagnosed non-cutaneous cancer in men and their second or third leading cause of cancer death. Prostate-specific antigen (PSA) testing for PC has been in common practice for more than 20 years. Objectives: A systematic review of the scientific literature was conducted to determine the effectiveness of PSA-based population screening programs for PC to inform policy decisions in a publicly funded health care system. Data Sources: A systematic review of bibliographic databases was performed for systematic reviews or randomized controlled trials (RCT) of PSA-based population screening programs for PC. Review Methods: A broad search strategy was employed to identify studies reporting on key outcomes of PC mortality and all-cause mortality. Results: The search identified 5 systematic reviews and 6 RCTs. None of the systematic reviews found a statistically significant reduction in relative risk (RR) of PC mortality or overall mortality with PSA-based screening. PC mortality reductions were found to vary by country, by screening program, and by age of men at study entry. The European Randomized Study of Screening for Prostate Cancer found a statistically significant reduction in RR in PC mortality at 11-year follow-up (0.79; 95% CI, 0.67–0.92), although the absolute risk reduction was small (1.0/10,000 person-years). However, the primary treatment for PCs differed significantly between countries and between trial arms. The American Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO) found a statistically non-significant increase in RR for PC mortality with 13-year follow-up (1.09; 95% CI, 0.87–1.36). The degree of opportunistic screening in the control arm of the PLCO trial, however, was high. None of the RCTs found a reduction in all-cause mortality and all found a statistically significant increase in the detection of mainly low-risk, organ-confined PCs in the screening arm. Conclusions: There was no evidence of a PC mortality reduction in the American PLCO trial, which investigated a screening program in a setting where opportunistic screening was already common practice. Given that opportunistic PSA screening practices in Canada are similar, it is unlikely that the introduction of a formal PSA screening program would reduce PC mortality. PMID:26366236

  2. Prostate-Specific Antigen (PSA)-Based Population Screening for Prostate Cancer: An Evidence-Based Analysis.

    PubMed

    Pron, G

    2015-01-01

    Prostate cancer (PC) is the most commonly diagnosed non-cutaneous cancer in men and their second or third leading cause of cancer death. Prostate-specific antigen (PSA) testing for PC has been in common practice for more than 20 years. A systematic review of the scientific literature was conducted to determine the effectiveness of PSA-based population screening programs for PC to inform policy decisions in a publicly funded health care system. A systematic review of bibliographic databases was performed for systematic reviews or randomized controlled trials (RCT) of PSA-based population screening programs for PC. A broad search strategy was employed to identify studies reporting on key outcomes of PC mortality and all-cause mortality. The search identified 5 systematic reviews and 6 RCTs. None of the systematic reviews found a statistically significant reduction in relative risk (RR) of PC mortality or overall mortality with PSA-based screening. PC mortality reductions were found to vary by country, by screening program, and by age of men at study entry. The European Randomized Study of Screening for Prostate Cancer found a statistically significant reduction in RR in PC mortality at 11-year follow-up (0.79; 95% CI, 0.67-0.92), although the absolute risk reduction was small (1.0/10,000 person-years). However, the primary treatment for PCs differed significantly between countries and between trial arms. The American Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO) found a statistically non-significant increase in RR for PC mortality with 13-year follow-up (1.09; 95% CI, 0.87-1.36). The degree of opportunistic screening in the control arm of the PLCO trial, however, was high. None of the RCTs found a reduction in all-cause mortality and all found a statistically significant increase in the detection of mainly low-risk, organ-confined PCs in the screening arm. There was no evidence of a PC mortality reduction in the American PLCO trial, which investigated a screening program in a setting where opportunistic screening was already common practice. Given that opportunistic PSA screening practices in Canada are similar, it is unlikely that the introduction of a formal PSA screening program would reduce PC mortality.

  3. Content based information retrieval in forensic image databases.

    PubMed

    Geradts, Zeno; Bijhold, Jurrien

    2002-03-01

    This paper gives an overview of the various available image databases and ways of searching these databases by image content. Developments in research groups working on searching image databases are evaluated and compared with the forensic databases that exist. Forensic image databases of fingerprints, faces, shoeprints, handwriting, cartridge cases, drug tablets, and tool marks are described. The developments in these fields appear to be valuable for forensic databases, especially the MPEG-7 framework, in which searching in image databases is standardized. In the future, combining these databases (including DNA databases) can result in stronger forensic evidence.

  4. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  5. 78 FR 51809 - Seventeenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-21

    ... Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal... Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases. SUMMARY: The FAA is... Databases being held jointly with EUROCAE WG-44--Aeronautical Databases. DATES: The meeting will be held...

  6. 78 FR 8684 - Fifteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint with EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases AGENCY: Federal... Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases. SUMMARY: The FAA is... Databases being held jointly with EUROCAE WG-44--Aeronautical Databases. DATES: The meeting will be held...

  7. 78 FR 25134 - Sixteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal... Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases. SUMMARY: The FAA is... Databases being held jointly with EUROCAE WG-44--Aeronautical Databases. DATES: The meeting will be held...

  8. 78 FR 66418 - Eighteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-05

    ... Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal... Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases. SUMMARY: The FAA is... Databases being held jointly with EUROCAE WG-44--Aeronautical Databases. DATES: The meeting will be held...

  9. [Case-based interactive PACS learning: introduction of a new concept for radiological education of students].

    PubMed

    Scherer, A; Kröpil, P; Heusch, P; Buchbender, C; Sewerin, P; Blondin, D; Lanzman, R S; Miese, F; Ostendorf, B; Bölke, E; Mödder, U; Antoch, G

    2011-11-01

    Medical curricula are currently being reformed in order to establish superordinated learning objectives, including, e.g., diagnostic, therapeutic and preventive competences. This requires a shifting from traditional teaching methods towards interactive and case-based teaching concepts. Conceptions, initial experiences and student evaluations of a novel radiological course Co-operative Learning In Clinical Radiology (CLICR) are presented in this article. A novel radiological teaching course (CLICR course), which combines different innovative teaching elements, was established and integrated into the medical curriculum. Radiological case vignettes were created for three clinical teaching modules. By using a PC with PACS (Picture Archiving and Communication System) access, web-based databases and the CASUS platform, a problem-oriented, case-based and independent way of learning was supported as an adjunct to the well established radiological courses and lectures. Student evaluations of the novel CLICR course and the radiological block course were compared. Student evaluations of the novel CLICR course were significantly better compared to the conventional radiological block course. Of the participating students 52% gave the highest rating for the novel CLICR course concerning the endpoint overall satisfaction as compared to 3% of students for the conventional block course. The innovative interactive concept of the course and the opportunity to use a web-based database were favorably accepted by the students. Of the students 95% rated the novel course concept as a substantial gain for the medical curriculum and 95% also commented that interactive working with the PACS and a web-based database (82%) promoted learning and understanding. Interactive, case-based teaching concepts such as the presented CLICR course are considered by both students and teachers as useful extensions to the radiological course program. These concepts fit well into competence-oriented curricula.

  10. Prognostic value of PD-L1 overexpression for pancreatic cancer: evidence from a meta-analysis.

    PubMed

    Zhuan-Sun, Yongxun; Huang, Fengting; Feng, Min; Zhao, Xinbao; Chen, Wenying; Zhu, Zhe; Zhang, Shineng

    2017-01-01

    Programmed death-ligand 1 (PD-L1) is an immune checkpoint that is often activated in cancer and plays a pivotal role in the initiation and progression of cancer. However, the clinicopathologic significance and prognostic value of PD-L1 in pancreatic cancer (PC) remains controversial. In this study, we conducted a meta-analysis to retrospectively evaluate the relationship between PD-L1 and PC. PubMed and other databases were searched for the clinical studies published up to March 21, 2017, to be included in the meta-analysis. Hazard ratios and their 95% CIs were calculated. Risk ratios (RRs) were extracted to assess the correlations between the clinicopathologic parameters and PD-L1 expression. Ten studies including 1,058 patients were included in the meta-analysis. The pooled results indicated that positive PD-L1 expression was correlated with a poor overall survival outcome in PC patients (hazard ratio =1.76, 95% CI: 1.43-2.17, P <0.00001). Interestingly, high PD-L1 expression was correlated with poor pathologic differentiation (RR =1.57, 95% CI: 1.25-1.98, P =0.0001) and neural invasion (RR =1.30, 95% CI: 1.03-1.64, P =0.03). However, there were no significant correlations between PD-L1 expression and other clinicopathologic characteristics. In summary, our meta-analysis implied that PD-L1 could serve as a negative predictor for the overall survival of PC patients, and high expression of PD-L1 was correlated with poor differentiation and neural invasion, indicating that anti-PD-L1 treatments should be evaluated in PC patients, especially in those who exhibit these two characteristics.
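
    The pooled hazard ratio reported above comes from standard inverse-variance weighting of log hazard ratios. The sketch below shows the generic calculation with three invented studies, not the ten studies in this meta-analysis.

    ```python
    import math

    studies = [  # (HR, lower 95% CI, upper 95% CI) -- hypothetical values
        (1.9, 1.3, 2.8),
        (1.5, 1.0, 2.2),
        (2.1, 1.4, 3.1),
    ]

    weights, weighted_logs = [], []
    for hr, lo, hi in studies:
        log_hr = math.log(hr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the 95% CI width
        w = 1.0 / se**2                                   # inverse-variance (fixed-effect) weight
        weights.append(w)
        weighted_logs.append(w * log_hr)

    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    print(f"pooled HR = {math.exp(pooled_log):.2f} "
          f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f}-"
          f"{math.exp(pooled_log + 1.96 * pooled_se):.2f})")
    ```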

  11. Prognostic value of PD-L1 overexpression for pancreatic cancer: evidence from a meta-analysis

    PubMed Central

    Feng, Min; Zhao, Xinbao; Chen, Wenying; Zhu, Zhe; Zhang, Shineng

    2017-01-01

    Programmed death-ligand 1 (PD-L1) is an immune checkpoint that is often activated in cancer and plays a pivotal role in the initiation and progression of cancer. However, the clinicopathologic significance and prognostic value of PD-L1 in pancreatic cancer (PC) remains controversial. In this study, we conducted a meta-analysis to retrospectively evaluate the relationship between PD-L1 and PC. PubMed and other databases were searched for the clinical studies published up to March 21, 2017, to be included in the meta-analysis. Hazard ratios and their 95% CIs were calculated. Risk ratios (RRs) were extracted to assess the correlations between the clinicopathologic parameters and PD-L1 expression. Ten studies including 1,058 patients were included in the meta-analysis. The pooled results indicated that positive PD-L1 expression was correlated with a poor overall survival outcome in PC patients (hazard ratio =1.76, 95% CI: 1.43–2.17, P<0.00001). Interestingly, high PD-L1 expression was correlated with poor pathologic differentiation (RR =1.57, 95% CI: 1.25–1.98, P=0.0001) and neural invasion (RR =1.30, 95% CI: 1.03–1.64, P=0.03). However, there were no significant correlations between PD-L1 expression and other clinicopathologic characteristics. In summary, our meta-analysis implied that PD-L1 could serve as a negative predictor for the overall survival of PC patients, and high expression of PD-L1 was correlated with poor differentiation and neural invasion, indicating that anti-PD-L1 treatments should be evaluated in PC patients, especially in those who exhibit these two characteristics. PMID:29081663

  12. Anatomic comparison of traditional and enucleation partial nephrectomy specimens.

    PubMed

    Calaway, Adam C; Gondim, Dibson D; Flack, Chandra K; Jacob, Joseph M; Idrees, Muhammad T; Boris, Ronald S

    2017-05-01

    To compare pseudocapsule (PC) properties of clear cell renal cell carcinoma tumors removed via both traditional partial nephrectomy (PNx) and enucleative techniques as well as quantify the difference in volume of normal renal parenchyma removed between groups. A retrospective review of clear cell PNx specimens between 2011 and 2014 was performed. All patients undergoing tumor enucleation (TE) were included. A single pathologist reviewed the pathological specimens. This cohort was compared with a previously collected clear cell traditional PNx database. A total of 47 clear cell partial nephrectomies were reviewed (34 PNx and 13 TE). Invasion of tumor completely through the PC and positive surgical margins were seen in 2 (5.8%) and 1 (7.7%) of traditional and TE specimens, respectively (P = 0.82). PC mean (0.63 vs. 0.52 mm), maximum (1.39 vs. 1.65 mm), and minimum thickness (0.27 vs. 0.19 mm) were similar between cohorts (P = 0.29, P = 0.36, and P = 0.44). Gross specimen volume varied considerably between the 2 groups (35.6 vs. 17.9 cm³, P ≤ 0.05) although tumor volume did not (12 vs. 14.2 cm³, P = 0.64). The renal tumor consisted of only 37% of the total volume of the traditional PNx specimens compared to 80% of the volume in TEs (P < 0.01). Four TE specimens (31%) were "true" TEs (no additional parenchyma identified outside of the PC). PC properties appear independent of surgical technique. True TEs are uncommon. Regardless, there is considerable volume discrepancy of normal renal parenchyma removed between enucleative and nonenucleative PNx groups. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Molecular and biochemical characterization of caffeine synthase and purine alkaloid concentration in guarana fruit.

    PubMed

    Schimpl, Flávia Camila; Kiyota, Eduardo; Mayer, Juliana Lischka Sampaio; Gonçalves, José Francisco de Carvalho; da Silva, José Ferreira; Mazzafera, Paulo

    2014-09-01

    Guarana seeds have the highest caffeine concentration among plants accumulating purine alkaloids, but in contrast with coffee and tea, practically nothing is known about caffeine metabolism in this Amazonian plant. In this study, the levels of purine alkaloids in tissues of five guarana cultivars were determined. Theobromine was the main alkaloid that accumulated in leaves, stems, inflorescences and pericarps of fruit, while caffeine accumulated in the seeds and reached levels from 3.3% to 5.8%. In all tissues analysed, the alkaloid concentration, whether theobromine or caffeine, was higher in young/immature tissues, then decreasing with plant development/maturation. Caffeine synthase activity was highest in seeds of immature fruit. A nucleotide sequence (PcCS) was assembled with sequences retrieved from the EST database REALGENE using sequences of caffeine synthase from coffee and tea, whose expression was also highest in seeds from immature fruit. The PcCS has 1083bp and the protein sequence has greater similarity and identity with the caffeine synthase from cocoa (BTS1) and tea (TCS1). A recombinant PcCS allowed functional characterization of the enzyme as a bifunctional CS, able to catalyse the methylation of 7-methylxanthine to theobromine (3,7-dimethylxanthine), and theobromine to caffeine (1,3,7-trimethylxanthine), respectively. Among several substrates tested, PcCS showed higher affinity for theobromine, differing from all other caffeine synthases described so far, which have higher affinity for paraxanthine. When compared to previous knowledge on the protein structure of coffee caffeine synthase, the unique substrate affinity of PcCS is probably explained by the amino acid residues found in the active site of the predicted protein. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN, a number of key database applications run on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers users to perform certain actions traditionally done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; at present the open community version of MySQL and a single-instance Oracle database server are offered. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  15. A Molecular Framework for Understanding DCIS

    DTIC Science & Technology

    2016-10-01

    well. Pathologic and Clinical Annotation Database A clinical annotation database titled the Breast Oncology Database has been established to...complement the procured SPORE sample characteristics and annotated pathology data. This Breast Oncology Database is an offsite clinical annotation...database adheres to CSMC Enterprise Information Services (EIS) research database security standards. The Breast Oncology Database consists of: 9 Baseline

  16. Freshwater Biological Traits Database (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Freshwater Biological Traits Database. This report discusses the development of a database of freshwater biological traits. The database combines several existing traits databases into an online format. The database is also...

  17. Database Access Systems.

    ERIC Educational Resources Information Center

    Dalrymple, Prudence W.; Roderer, Nancy K.

    1994-01-01

    Highlights the changes that have occurred from 1987-93 in database access systems. Topics addressed include types of databases, including CD-ROMs; enduser interface; database selection; database access management, including library instruction and use of primary literature; economic issues; database users; the search process; and improving…

  18. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  19. Databases in the Central Government : State-of-the-art and the Future

    NASA Astrophysics Data System (ADS)

    Ohashi, Tomohiro

    In November 1985, the Management and Coordination Agency of the Prime Minister's Office conducted a questionnaire survey of all Japanese Ministries and Agencies on the present status of databases produced, or planned to be produced, by the central government. According to the results, 132 databases had been produced across 19 Ministries and Agencies. Many of these databases are held by the Defence Agency, the Ministry of Construction, the Ministry of Agriculture, Forestry & Fisheries, and the Ministry of International Trade & Industries, and cover fields such as architecture & civil engineering, science & technology, R&D, agriculture, forestry and fishery. However, only 39 percent of the produced databases are available to other Ministries and Agencies, while 60 percent are unavailable to them, largely because they are in-house databases and the like. The survey results are outlined, and the databases produced by the central government are introduced under four headings: (1) databases commonly used by all Ministries and Agencies, (2) integrated databases, (3) statistical databases and (4) bibliographic databases. Future problems are also described from the viewpoints of technology development and the mutual use of databases.

  20. Modeling the Historical Flood Events in France

    NASA Astrophysics Data System (ADS)

    Ali, Hani; Blaquière, Simon

    2017-04-01

    We will present simulation results for different scenarios based on the flood model developed by the AXA Global P&C CAT Modeling team. The model uses a Digital Elevation Model (DEM) with 75 m resolution, a hydrographic system (DB Carthage), daily rainfall data from "Météo France", water levels from "HYDRO Banque", the French hydrological database (www.hydro.eaufrance.fr), for more than 1,500 stations, a hydrological model from IRSTEA, and an in-house hydraulic tool. In particular, the model re-simulates the most important and costly flood events that occurred in France during the past decade: we will present the re-simulated meteorological conditions since 1964 and estimate the insurance losses incurred on the current AXA portfolio of individual risks.

  1. NLM microcomputer-based tutorials (for microcomputers). Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, M.

    1990-04-01

    The package consists of TOXLEARN--a microcomputer-based training package for TOXLINE (Toxicology Information Online), CHEMLEARN-a microcomputer-based training package for CHEMLINE (Chemical Information Online), MEDTUTOR--a microcomputer-based training package for MEDLINE (Medical Information Online), and ELHILL LEARN--a microcomputer-based training package for the ELHILL search and retrieval software that supports the above-mentioned databases...Software Description: The programs were developed under PILOTplus using the NLM LEARN Programmer. They run on IBM-PC, XT, AT, PS/2, and fully compatible computers. The programs require 512K RAM memory, one disk drive, and DOS 2.0 or higher. The software supports most monochrome, color graphics, enhanced color graphics, or visual graphics displays.

  2. Remote sensing information sciences research group: Browse in the EOS era

    NASA Technical Reports Server (NTRS)

    Estes, John E.; Star, Jeffrey L.

    1989-01-01

    The problem of science data browse was examined. Given the tremendous data volumes that are planned for future space missions, particularly the Earth Observing System in the late 1990's, the need for access to large spatial databases must be understood. Work was continued to refine the concept of data browse. Further, software was developed to provide a testbed of the concepts, both to locate possibly interesting data, as well as view a small portion of the data. Build II was placed on a minicomputer and a PC in the laboratory, and provided accounts for use in the testbed. Consideration of the testbed software as an element of in-house data management plans was begun.

  3. AQUIS: A PC-based air inventory and permit manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1992-01-01

    The Air Quality Utility Information System (AQUIS) was developed to calculate and track sources, emissions, stacks, permits, and related information. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at nine US Air Force facilities that have up to 1,000 sources. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to six criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  4. Arduino-based noise robust online heart-rate detection.

    PubMed

    Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda

    2017-04-01

    This paper introduces a noise-robust, real-time heart-rate detection system based on electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using a window-based autocorrelation peak localisation technique. A low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with a PC-based heart-rate detection technique. Accuracy of the system is validated using simulated noisy ECG data with various levels of signal-to-noise ratio (SNR). The mean percentage error of the detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
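
    The abstract names a window-based autocorrelation peak localisation technique. The sketch below illustrates that general idea on a synthetic ECG-like signal; the sampling rate, window length and peak-picking limits are assumptions, and the paper's actual Arduino implementation is not reproduced.

```python
# Minimal sketch of heart-rate estimation by autocorrelation peak localisation.
# The synthetic signal and parameters are illustrative assumptions only.
import numpy as np

FS = 250                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / FS)  # 10-second analysis window
true_bpm = 72
# Crude ECG-like train of narrow pulses plus noise, just for demonstration.
ecg = np.exp(-((t % (60 / true_bpm)) / 0.03) ** 2) + 0.1 * np.random.randn(t.size)

x = ecg - ecg.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]   # one-sided autocorrelation
acf /= acf[0]

min_lag = int(FS * 60 / 220)   # ignore lags corresponding to more than 220 bpm
max_lag = int(FS * 60 / 40)    # ...and to less than 40 bpm
peak_lag = min_lag + int(np.argmax(acf[min_lag:max_lag]))
print(f"estimated heart rate: {60 * FS / peak_lag:.1f} bpm")
```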

  5. An Online Database Producer's Memoirs and Memories of an Online Pioneer and The Database Industry: Looking into the Future.

    ERIC Educational Resources Information Center

    Kollegger, James G.; And Others

    1988-01-01

    In the first of three articles, the producer of Energyline, Energynet, and Tele/Scope recalls the development of the databases and database business strategies. The second describes the development of biomedical online databases, and the third discusses future developments, including full text databases, database producers as online host, and…

  6. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; at present, three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, give some insight into our redesigned software engineering, and outline its near-future evolution.

  7. Potential use of routine databases in health technology assessment.

    PubMed

    Raftery, J; Roderick, P; Stevens, A

    2005-05-01

    To develop criteria for classifying databases in relation to their potential use in health technology (HT) assessment and to apply them to a list of databases of relevance in the UK. To explore the extent to which prioritized databases could pick up those HTs being assessed by the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the extent to which these databases have been used in HT assessment. To explore the validation of the databases and their cost. Electronic databases. Key literature sources. Experienced users of routine databases. A 'first principles' examination of the data necessary for each type of HT assessment was carried out, supplemented by literature searches and a historical review. The principal investigators applied the criteria to the databases. Comments of the 'keepers' of the prioritized databases were incorporated. Details of 161 topics funded by the NHS R&D Health Technology Assessment (HTA) programme were reviewed iteratively by the principal investigators. Uses of databases in HTAs were identified by literature searches, which included the title of each prioritized database as a keyword. Annual reports of databases were examined and 'keepers' queried. The validity of each database was assessed using criteria based on a literature search and involvement by the authors in a national academic network. The costs of databases were established from annual reports, enquiries to 'keepers' of databases and 'guesstimates' based on cost per record. For assessing effectiveness, equity and diffusion, routine databases were classified into three broad groups: (1) group I databases, identifying both HTs and health states, (2) group II databases, identifying the HTs, but not a health state, and (3) group III databases, identifying health states, but not an HT. Group I datasets were disaggregated into clinical registries, clinical administrative databases and population-oriented databases. Group III were disaggregated into adverse event reporting, confidential enquiries, disease-only registers and health surveys. Databases in group I can be used not only to assess effectiveness but also to assess diffusion and equity. Databases in group II can only assess diffusion. Group III has restricted scope for assessing HTs, except for analysis of adverse events. For use in costing, databases need to include unit costs or prices. Some databases included unit cost as well as a specific HT. A list of around 270 databases was identified at the level of UK, England and Wales or England (over 1000 including Scotland, Wales and Northern Ireland). Allocation of these to the above groups identified around 60 databases with some potential for HT assessment, roughly half to group I. Eighteen clinical registers were identified as having the greatest potential although the clinical administrative datasets had potential mainly owing to their inclusion of a wide range of technologies. Only two databases were identified that could directly be used in costing. The review of the potential capture of HTs prioritized by the UK's NHS R&D HTA programme showed that only 10% would be captured in these databases, mainly drugs prescribed in primary care. The review of the use of routine databases in any form of HT assessment indicated that clinical registers were mainly used for national comparative audit. Some databases have only been used in annual reports, usually time trend analysis. A few peer-reviewed papers used a clinical register to assess the effectiveness of a technology. 
Accessibility is suggested as a barrier to using most databases. Clinical administrative databases (group Ib) have mainly been used to build population needs indices and performance indicators. A review of the validity of the databases in use showed that although internal consistency checks were common, relatively few had any form of external audit. Some comparative audit databases have data scrutinised by participating units. Issues around coverage and coding have, in general, received little attention. NHS funding of databases has been mainly for 'Central Returns' for management purposes, which excludes those databases with the greatest potential for HT assessment. Funding arrangements for databases varied, and some are unfunded, relying on goodwill. The total cost of databases in group I, plus selected databases from groups II and III, is estimated at £50 million, or around 0.1% of annual NHS spend. A few databases with limited potential for HT assessment account for the bulk of spending. Suggestions for policy include clarification of responsibility for the strategic development of databases, improved resourcing, and issues around coding, confidentiality, ownership and access, maintenance of clinical support, optimal use of information technology, filling gaps and remedying deficiencies. Recommendations for researchers include closer policy links between routine data and R&D, and selective investment in the more promising databases. Recommended research topics include optimal capture and coding of the range of HTs, international comparisons of the role, funding and use of routine data in healthcare systems, and use of routine databases in trials and in modelling. Independent evaluations are recommended for information strategies (such as those around the National Service Frameworks and various collaborations) and for electronic patient and health records.

  8. The 2014 Nucleic Acids Research Database Issue and an updated NAR online Molecular Biology Database Collection.

    PubMed

    Fernández-Suárez, Xosé M; Rigden, Daniel J; Galperin, Michael Y

    2014-01-01

    The 2014 Nucleic Acids Research Database Issue includes descriptions of 58 new molecular biology databases and recent updates to 123 databases previously featured in NAR or other journals. For convenience, the issue is now divided into eight sections that reflect major subject categories. Among the highlights of this issue are six databases of the transcription factor binding sites in various organisms and updates on such popular databases as CAZy, Database of Genomic Variants (DGV), dbGaP, DrugBank, KEGG, miRBase, Pfam, Reactome, SEED, TCDB and UniProt. There is a strong block of structural databases, which includes, among others, the new RNA Bricks database, updates on PDBe, PDBsum, ArchDB, Gene3D, ModBase, Nucleic Acid Database and the recently revived iPfam database. An update on the NCBI's MMDB describes VAST+, an improved tool for protein structure comparison. Two articles highlight the development of the Structural Classification of Proteins (SCOP) database: one describes SCOPe, which automates assignment of new structures to the existing SCOP hierarchy; the other one describes the first version of SCOP2, with its more flexible approach to classifying protein structures. This issue also includes a collection of articles on bacterial taxonomy and metagenomics, which includes updates on the List of Prokaryotic Names with Standing in Nomenclature (LPSN), Ribosomal Database Project (RDP), the Silva/LTP project and several new metagenomics resources. The NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/c/, has been expanded to 1552 databases. The entire Database Issue is freely available online on the Nucleic Acids Research website (http://nar.oxfordjournals.org/).

  9. 76 FR 11465 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-02

    ... separate systems of records: ``FHFA-OIG Audit Files Database,'' ``FHFA-OIG Investigative & Evaluative Files Database,'' ``FHFA-OIG Investigative & Evaluative MIS Database,'' and ``FHFA-OIG Hotline Database.'' These... Audit Files Database. FHFA-OIG-2: FHFA-OIG Investigative & Evaluative Files Database. FHFA-OIG-3: FHFA...

  10. A review of accessibility of administrative healthcare databases in the Asia-Pacific region.

    PubMed

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3-6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted based on requirements by data custodians. Compared with previous research, this study describes the landscape of databases in the selected countries with more granularity using an assessment tool developed for this purpose. A high number of databases were identified but most had restricted access, preventing their potential use to support research. We hope that this study helps to improve the understanding of the AHDB landscape, increase data sharing and database research in Asia-Pacific countries.

  11. Using linked administrative and disease-specific databases to study end-of-life care on a population level.

    PubMed

    Maetens, Arno; De Schreye, Robrecht; Faes, Kristof; Houttekier, Dirk; Deliens, Luc; Gielen, Birgit; De Gendt, Cindy; Lusyne, Patrick; Annemans, Lieven; Cohen, Joachim

    2016-10-18

    The use of full-population databases is under-explored to study the use, quality and costs of end-of-life care. Using the case of Belgium, we explored: (1) which full-population databases provide valid information about end-of-life care, (2) what procedures are there to use these databases, and (3) what is needed to integrate separate databases. Technical and privacy-related aspects of linking and accessing Belgian administrative databases and disease registries were assessed in cooperation with the database administrators and privacy commission bodies. For all relevant databases, we followed procedures in cooperation with database administrators to link the databases and to access the data. We identified several databases as fitting for end-of-life care research in Belgium: the InterMutualistic Agency's national registry of health care claims data, the Belgian Cancer Registry including data on incidence of cancer, and databases administrated by Statistics Belgium including data from the death certificate database, the socio-economic survey and fiscal data. To obtain access to the data, approval was required from all database administrators, supervisory bodies and two separate national privacy bodies. Two Trusted Third Parties linked the databases via a deterministic matching procedure using multiple encrypted social security numbers. In this article we describe how various routinely collected population-level databases and disease registries can be accessed and linked to study patterns in the use, quality and costs of end-of-life care in the full population and in specific diagnostic groups.
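
    The deterministic matching step can be pictured as joining tables on a pseudonymised identifier. In the sketch below, a salted SHA-256 hash merely stands in for the encrypted social security numbers handled by the Trusted Third Parties, and the record fields are hypothetical.

```python
# Minimal sketch of deterministic record linkage on a pseudonymised identifier.
# The Belgian procedure uses encrypted social security numbers handled by two
# Trusted Third Parties; a salted hash stands in for that encryption step here,
# and the record fields are hypothetical.
import hashlib

SALT = b"project-specific-secret"  # would be held by the Trusted Third Party

def pseudonym(ssn: str) -> str:
    return hashlib.sha256(SALT + ssn.encode()).hexdigest()

# Two source "databases" keyed by the pseudonymised identifier.
claims = {pseudonym("850101-123-45"): {"gp_visits_last_year": 14},
          pseudonym("900202-987-65"): {"gp_visits_last_year": 3}}
cancer_registry = {pseudonym("850101-123-45"): {"diagnosis": "pancreatic cancer"}}

# Deterministic match: identical pseudonyms are taken to be the same person.
linked = {k: {**claims[k], **cancer_registry[k]}
          for k in claims.keys() & cancer_registry.keys()}
print(linked)
```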

  12. Comparison of the NCI open database with seven large chemical structural databases.

    PubMed

    Voigt, J H; Bienfait, B; Wang, S; Nicklaus, M C

    2001-01-01

    Eight large chemical databases have been analyzed and compared to each other. Central to this comparison is the open National Cancer Institute (NCI) database, consisting of approximately 250 000 structures. The other databases analyzed are the Available Chemicals Directory ("ACD," from MDL, release 1.99, 3D-version); the ChemACX ("ACX," from CamSoft, Version 4.5); the Maybridge Catalog and the Asinex database (both as distributed by CamSoft as part of ChemInfo 4.5); the Sigma-Aldrich Catalog (CD-ROM, 1999 Version); the World Drug Index ("WDI," Derwent, version 1999.03); and the organic part of the Cambridge Crystallographic Database ("CSD," from Cambridge Crystallographic Data Center, 1999 Version 5.18). The database properties analyzed are internal duplication rates; compounds unique to each database; cumulative occurrence of compounds in an increasing number of databases; overlap of identical compounds between two databases; similarity overlap; diversity; and others. The crystallographic database CSD and the WDI show somewhat less overlap with the other databases than those with each other. In particular the collections of commercial compounds and compilations of vendor catalogs have a substantial degree of overlap among each other. Still, no database is completely a subset of any other, and each appears to have its own niche and thus "raison d'être". The NCI database has by far the highest number of compounds that are unique to it. Approximately 200 000 of the NCI structures were not found in any of the other analyzed databases.
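
    Overlap statistics of the kind reported here (compounds unique to each database, pairwise overlap of identical compounds) can be computed once each database is reduced to a set of canonical structure identifiers. The identifiers in the sketch below are placeholders, not real compounds.

```python
# Minimal sketch of the overlap statistics described above, once each database
# has been reduced to a set of canonical structure identifiers (the identifier
# strings below are placeholders).
from itertools import combinations

databases = {
    "NCI": {"c1", "c2", "c3", "c7"},
    "ACD": {"c2", "c3", "c4"},
    "WDI": {"c3", "c5"},
}

# Compounds unique to each database.
for name, cmpds in databases.items():
    others = set().union(*(v for k, v in databases.items() if k != name))
    print(f"{name}: {len(cmpds - others)} unique of {len(cmpds)}")

# Pairwise overlap of identical compounds between two databases.
for (a, sa), (b, sb) in combinations(databases.items(), 2):
    print(f"{a} & {b}: {len(sa & sb)} compounds in common")
```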

  13. FRED, a Front End for Databases.

    ERIC Educational Resources Information Center

    Crystal, Maurice I.; Jakobson, Gabriel E.

    1982-01-01

    FRED (a Front End for Databases) was conceived to alleviate data access difficulties posed by the heterogeneous nature of online databases. A hardware/software layer interposed between users and databases, it consists of three subsystems: user-interface, database-interface, and knowledge base. Architectural alternatives for this database machine…

  14. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer a TV bands database. Each database administrator shall: (a) Maintain a database that...

  15. Database Management: Building, Changing and Using Databases. Collected Papers and Abstracts of the Mid-Year Meeting of the American Society for Information Science (15th, Portland, Oregon, May 1986).

    ERIC Educational Resources Information Center

    American Society for Information Science, Washington, DC.

    This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…

  16. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool to aid not only in studying the mitochondrion but also in studying the associated diseases.

  17. Creating a sampling frame for population-based veteran research: representativeness and overlap of VA and Department of Defense databases.

    PubMed

    Washington, Donna L; Sun, Su; Canning, Mark

    2010-01-01

    Most veteran research is conducted in Department of Veterans Affairs (VA) healthcare settings, although most veterans obtain healthcare outside the VA. Our objective was to determine the adequacy and relative contributions of Veterans Health Administration (VHA), Veterans Benefits Administration (VBA), and Department of Defense (DOD) administrative databases for representing the U.S. veteran population, using as an example the creation of a sampling frame for the National Survey of Women Veterans. In 2008, we merged the VHA, VBA, and DOD databases. We identified the number of unique records both overall and from each database. The combined databases yielded 925,946 unique records, representing 51% of the 1,802,000 U.S. women veteran population. The DOD database included 30% of the population (with 8% overlap with other databases). The VHA enrollment database contributed an additional 20% unique women veterans (with 6% overlap with VBA databases). VBA databases contributed an additional 2% unique women veterans (beyond 10% overlap with other databases). Use of VBA and DOD databases substantially expands access to the population of veterans beyond those in VHA databases, regardless of VA use. Adoption of these additional databases would enhance the value and generalizability of a wide range of studies of both male and female veterans.
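
    The incremental contributions reported above follow from merging de-duplicated identifier sets in a fixed order (DOD, then VHA, then VBA). The sketch below shows that tallying on tiny hypothetical identifier sets.

```python
# Minimal sketch of how incremental unique contributions can be tallied when
# merging administrative databases in a fixed order (DOD, then VHA, then VBA).
# The identifier sets below are tiny hypothetical stand-ins for real records.
dod = {"v001", "v002", "v003"}
vha = {"v002", "v004", "v005"}
vba = {"v005", "v006"}

sampling_frame, order = set(), [("DOD", dod), ("VHA", vha), ("VBA", vba)]
for name, records in order:
    new = records - sampling_frame
    print(f"{name}: {len(new)} additional unique records "
          f"({len(records & sampling_frame)} overlapping earlier sources)")
    sampling_frame |= records

print(f"combined sampling frame: {len(sampling_frame)} unique records")
```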

  18. Exercise countermeasure protocol management expert system.

    PubMed

    Webster, L; Chen, J G; Flores, L; Tan, S

    1993-04-01

    Exercise will be used as the primary countermeasure against deconditioning on extended space flights. In this paper we describe the development and evaluation of an expert system for exercise countermeasure protocol management. Currently, the system includes two major subsystems: baseline prescription and prescription adjustment. The baseline prescription subsystem is designed to provide initial exercise prescriptions, while the prescription adjustment subsystem is designed to modify the initial prescription based on exercise progress. The system runs under three different environments: PC, SUN workstation, and Symbolic machine. The inference engine, baseline prescription module, prescription adjustment module and explanation module are developed under the Symbolic environment using the ART (Automated Reasoning Tool) software. The Sun environment handles database management features and interfaces with the PC environment to obtain physical and physiological data from on-board exercise units during the flight. Data from eight subjects were used to evaluate system performance by comparing the prescriptions of nine experienced exercise physiologists with those produced by the expert system. The results of the validation test indicated that the performance of the expert system was acceptable.

  19. Exercise countermeasure protocol management expert system

    NASA Technical Reports Server (NTRS)

    Webster, L.; Chen, J. G.; Flores, L.; Tan, S.

    1993-01-01

    Exercise will be used as the primary countermeasure against deconditioning on extended space flights. In this paper we describe the development and evaluation of an expert system for exercise countermeasure protocol management. Currently, the system includes two major subsystems: baseline prescription and prescription adjustment. The baseline prescription subsystem is designed to provide initial exercise prescriptions, while the prescription adjustment subsystem is designed to modify the initial prescription based on exercise progress. The system runs under three different environments: PC, SUN workstation, and Symbolic machine. The inference engine, baseline prescription module, prescription adjustment module and explanation module are developed under the Symbolic environment using the ART (Automated Reasoning Tool) software. The Sun environment handles database management features and interfaces with the PC environment to obtain physical and physiological data from on-board exercise units during the flight. Data from eight subjects were used to evaluate system performance by comparing the prescriptions of nine experienced exercise physiologists with those produced by the expert system. The results of the validation test indicated that the performance of the expert system was acceptable.
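
    The two subsystems named in these abstracts, baseline prescription and prescription adjustment, can be pictured as a pair of rule-based functions. The sketch below uses invented rules and thresholds purely for illustration; it does not reproduce the original ART rule base.

```python
# Toy sketch of the two subsystems named above: a baseline exercise
# prescription and a rule-based adjustment from exercise progress. The rules
# and thresholds are invented for illustration only.
def baseline_prescription(vo2max_ml_kg_min: float) -> dict:
    """Initial prescription from a simple fitness measure (hypothetical rule)."""
    intensity = 0.6 if vo2max_ml_kg_min < 35 else 0.7   # fraction of max HR
    return {"intensity": intensity, "minutes": 30, "sessions_per_week": 5}

def adjust_prescription(rx: dict, completed_ratio: float, avg_rpe: float) -> dict:
    """Adjust the prescription from session compliance and perceived exertion."""
    rx = dict(rx)
    if completed_ratio > 0.9 and avg_rpe < 13:      # tolerating the load well
        rx["intensity"] = min(rx["intensity"] + 0.05, 0.85)
    elif completed_ratio < 0.6 or avg_rpe > 16:     # struggling with the load
        rx["minutes"] = max(rx["minutes"] - 5, 15)
    return rx

rx = baseline_prescription(vo2max_ml_kg_min=38.0)
print(adjust_prescription(rx, completed_ratio=0.95, avg_rpe=11))
```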

  20. A Computerized Data-Capture System for Animal Biosafety Level 4 Laboratories

    PubMed Central

    Bente, Dennis A; Friesen, Jeremy; White, Kyle; Koll, Jordan; Kobinger, Gary P

    2011-01-01

    The restrictive nature of an Animal Biosafety Level 4 (ABSL4) laboratory complicates even simple clinical evaluation including data capture. Typically, clinical data are recorded on paper during procedures, faxed out of the ABSL4, and subsequently manually entered into a computer. This system has many disadvantages including transcriptional errors. Here, we describe the development of a highly customizable, tablet-PC-based computerized data-capture system, allowing reliable collection of observational and clinical data from experimental animals in a restrictive biocontainment setting. A multidisciplinary team with skills in containment laboratory animal science, database design, and software engineering collaborated on the development of this system. The goals were to design an easy-to-use and flexible user interface on a touch-screen tablet PC with user-supportable processes for recovery, full auditing capabilities, and cost effectiveness. The system simplifies data capture, reduces the necessary time in an ABSL4 environment, offers timely reporting and review of data, facilitates statistical analysis, reduces potential of erroneous data entry, improves quality assurance of animal care, and advances the use and refinement of humane endpoints. PMID:22330712

  1. Functional analysis of mutations in a severe congenital neutropenia syndrome caused by glucose-6-phosphatase-β deficiency

    PubMed Central

    Lin, Su Ru; Pan, Chi-Jiunn; Mansfield, Brian C.; Chou, Janice Yang

    2016-01-01

    Glucose-6-phosphatase-β (G6Pase-β or G6PC3) deficiency is characterized by neutropenia and dysfunction in both neutrophils and macrophages. G6Pase-β is an enzyme embedded in the endoplasmic reticulum membrane that catalyzes the hydrolysis of glucose-6-phosphate (G6P) to glucose and phosphate. To date, 33 separate G6PC3 mutations have been identified in G6Pase-β-deficient patients but only the p.R253H and p.G260R missense mutations have been characterized functionally for pathogenicity. Here we functionally characterize 16 of the 19 known missense mutations using a sensitive assay, based on a recombinant adenoviral vector-mediated expression system, to demonstrate pathogenicity. Fourteen missense mutations completely abolish G6Pase-β enzymatic activity while the p.S139I and p.R189Q mutations retain 49% and 45%, respectively of wild type G6Pase-β activity. A database of residual enzymatic activity retained by the G6Pase-β mutations will serve as a reference for evaluating genotype-phenotype relationships. PMID:25492228

  2. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    PubMed

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics (e.g. name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables) were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases - 20 from Thailand and 20 from Japan - were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA among Asia-Pacific countries is needed.

  3. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database approaches for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606-normalized Electronic Health Record XML extracts, both in isolation and under concurrency. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar relevant results available in the literature were also considered. Relational and non-relational NoSQL database systems both show almost linear query execution times, but with very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends very much on each particular situation and specific problem.
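
    The measurement idea, timing the same query against databases of growing size and inspecting how the response time scales, can be sketched as follows. Only the relational side is shown, with sqlite3 as a self-contained stand-in for the relational, document-based and native XML servers actually evaluated in the study.

```python
# Minimal sketch of the measurement idea: time the same query against
# databases of growing size and inspect how response time scales. sqlite3 is
# used here purely as a self-contained stand-in for the servers in the study.
import sqlite3
import time

def timed_query(n_rows: int) -> float:
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE extracts (id INTEGER PRIMARY KEY, payload TEXT)")
    con.executemany("INSERT INTO extracts (payload) VALUES (?)",
                    ((f"ehr-extract-{i}",) for i in range(n_rows)))
    con.commit()
    start = time.perf_counter()
    con.execute("SELECT COUNT(*) FROM extracts WHERE payload LIKE '%-42%'").fetchone()
    return time.perf_counter() - start

for size in (10_000, 100_000, 1_000_000):
    print(f"{size:>9} rows: {timed_query(size) * 1000:.1f} ms")
```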

  4. Officer Career Development: Longitudinal Sample--Fiscal Year 1982

    DTIC Science & Technology

    1991-10-01

    Those that wish to access the database to conduct additional analyses, link it to or combine it with other databases, enlarge the database for the conduct of trend analyses, etc., will find this data dictionary ...

  5. LexisNexis

    EPA Pesticide Factsheets

    LexisNexis provides access to electronic legal and non-legal research databases to the Agency's attorneys, administrative law judges, law clerks, investigators, and certain non-legal staff (e.g. staff in the Office of Public Affairs). The agency requires access to the following types of electronic databases: Legal databases, Non-legal databases, Public Records databases, and Financial databases.

  6. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    ERIC Educational Resources Information Center

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  7. Pathogen Research Databases

    Science.gov Websites

    Hepatitis C Virus (HCV) database project is funded by the Division of Microbiology and Infectious Diseases of the National Institute of Allergies and Infectious Diseases (NIAID). The HCV database project started as a spin-off from the HIV database project. There are two databases for HCV, a sequence database

  8. 76 FR 41792 - Information Collection Being Submitted for Review and Approval to the Office of Management and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-15

    ... administrator from the private sector to create and operate TV band databases. The TV band database... database administrator will be responsible for operation of their database and coordination of the overall functioning of the database with other administrators, and will provide database access to TVBDs. The...

  9. shRNA target prediction informed by comprehensive enquiry (SPICE): a supporting system for high-throughput screening of shRNA library.

    PubMed

    Kamatuka, Kenta; Hattori, Masahiro; Sugiyama, Tomoyasu

    2016-12-01

    RNA interference (RNAi) screening is extensively used in the field of reverse genetics. RNAi libraries constructed using random oligonucleotides have made this technology affordable. However, the new methodology requires exploration of the RNAi target gene information after screening because the RNAi library includes non-natural sequences that are not found in genes. Here, we developed a web-based tool to support RNAi screening. The system performs short hairpin RNA (shRNA) target prediction that is informed by comprehensive enquiry (SPICE). SPICE automates several tasks that are laborious but indispensable to evaluate the shRNAs obtained by RNAi screening. SPICE has four main functions: (i) sequence identification of shRNA in the input sequence (the sequence might be obtained by sequencing clones in the RNAi library), (ii) searching the target genes in the database, (iii) demonstrating biological information obtained from the database, and (iv) preparation of search result files that can be utilized in a local personal computer (PC). Using this system, we demonstrated that genes targeted by random oligonucleotide-derived shRNAs were not different from those targeted by organism-specific shRNA. The system facilitates RNAi screening, which requires sequence analysis after screening. The SPICE web application is available at http://www.spice.sugysun.org/.

  10. Prairie Resources

    Science.gov Websites

    Search Prairie Resources for Students: Plant Database, Butterfly Info, Insect Database, Frog Info, Bird Database, and Online Prairie Data.

  11. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    PubMed Central

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted based on requirements by data custodians. Conclusions Compared with previous research, this study describes the landscape of databases in the selected countries with more granularity using an assessment tool developed for this purpose. A high number of databases were identified but most had restricted access, preventing their potential use to support research. We hope that this study helps to improve the understanding of the AHDB landscape, increase data sharing and database research in Asia-Pacific countries. PMID:27123180

  12. Construction of databases: advances and significance in clinical research.

    PubMed

    Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian

    2015-12-01

    Widely used in clinical research, the database is a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.

  13. Aquarius's Instrument Science Data System (ISDS) Automated to Acquire, Process, Trend Data and Produce Radiometric System Assessment Reports

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The Aquarius Radiometer, a subsystem of the Aquarius Instrument, required a data acquisition ground system to support calibration and radiometer performance assessment. To support calibration and compose performance assessments, we developed an automated system which uploads raw data to an FTP server and saves raw and processed data to a database. This paper details the overall functionality of the Aquarius Instrument Science Data System (ISDS) and the individual electrical ground support equipment (EGSE) which produced data files that were infused into the ISDS. Real-time EGSEs include an ICDS Simulator, a Calibration GSE, a LabVIEW-controlled power supply, and a chamber data acquisition system. The ICDS Simulator serves as the test conductor's primary workstation, collecting radiometer housekeeping (HK) and science data and passing commands and HK telemetry collection requests to the radiometer. The Calibration GSE (Radiometer Active Test Source) provides a choice of multiple source targets for external calibration of the radiometer. The Power Supply GSE, controlled by LabVIEW, provides real-time voltage and current monitoring of the radiometer. Finally, the chamber data acquisition system produces data reflecting chamber vacuum pressure, thermistor temperatures, AVG and watts. Each GSE system produces text-based data files every two to six minutes and automatically copies the data files to the Central Archiver PC. The Archiver PC stores the data files, schedules automated uploads of these files to an external FTP server, and accepts requests to copy all data files to the ISDS for offline data processing and analysis. The Aquarius Radiometer ISDS contains PHP and MATLAB programs to parse, process and save all data to a MySQL database. Analysis tools (MATLAB programs) in the ISDS are capable of displaying radiometer science, telemetry and auxiliary data in near real time, as well as performing data analysis and producing automated performance assessment reports of the Aquarius Radiometer.
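
    The parse-and-load step described above can be sketched as follows. The real ISDS uses PHP and MATLAB programs writing to MySQL; this sketch uses Python and sqlite3, and the "timestamp,key,value" file format is a hypothetical stand-in for the actual EGSE text file formats.

```python
# Minimal sketch of the parse-and-load step: read a text-based EGSE data file
# and insert its samples into a database. The file format and schema below are
# hypothetical; the real ISDS uses PHP/MATLAB and MySQL.
import sqlite3
from pathlib import Path

def load_egse_file(db: sqlite3.Connection, path: Path, source: str) -> int:
    rows = []
    for line in path.read_text().splitlines():
        if not line.strip() or line.startswith("#"):
            continue                                # skip blanks and comments
        timestamp, key, value = line.split(",", 2)  # assumed "ts,key,value"
        rows.append((source, timestamp, key, float(value)))
    db.executemany(
        "INSERT INTO telemetry (source, ts, item, value) VALUES (?, ?, ?, ?)", rows)
    db.commit()
    return len(rows)

db = sqlite3.connect("isds_demo.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS telemetry "
           "(source TEXT, ts TEXT, item TEXT, value REAL)")
# e.g. load_egse_file(db, Path("power_supply_20080101_1200.txt"), "power_supply_gse")
```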

  14. HST Archival Imaging of the Light Echoes of SN 1987A

    NASA Astrophysics Data System (ADS)

    Lawrence, S. S.; Hayon, M.; Sugerman, B. E. K.; Crotts, A. P. S.

    2002-12-01

    We have undertaken a search for light echo signals from Supernova 1987A that have been serendipitously recorded in images taken near the 30 Doradus region of the Large Magellanic Cloud by HST. We used the MAST interface to create a database of the 1282 WF/PC, WFPC2 and STIS images taken within 15 arcminutes of the supernova, between 1992 April and 2002 June. These 1282 images are grouped into 125 distinct epochs and pointings, with each epoch containing between 1 and 42 separate exposures. Sorting this database with various programs, aided by the STScI Visual Target Tuner, we have identified 63 pairs of WFPC2 imaging epochs that are not centered on the supernova but that have a significant amount of spatial overlap between their fields of view. These image data were downloaded from the public archive, cleaned of cosmic rays, and blinked to search for light echoes at radii larger than 2 arcminutes from the supernova. Our search to date has focused on those pairs of epochs with the largest degree of overlap. Of 16 pairs of epochs scanned to date, we have detected 3 strong light echoes and one faint, tentative echo signal. We will present direct and difference images of these and any further echoes, as well as the 3-D geometric, photometric and color properties of the echoing dust structures. In addition, a set of 20 epochs of WF/PC and WFPC2 imaging centered on SN 1987A remain to be searched for echoes within 2 arcminutes of the supernova. We will discuss our plans to integrate the high spatial-resolution HST snapshots of the echoes with our extensive, well-time-sampled, ground-based imaging data. We gratefully acknowledge the support of this undergraduate research project through an HST Archival Research Grant (HST-AR-09209.01-A).

  15. Uses and limitations of registry and academic databases.

    PubMed

    Williams, William G

    2010-01-01

    A database is simply a structured collection of information. A clinical database may be a Registry (a limited amount of data for every patient undergoing heart surgery) or Academic (an organized and extensive dataset of an inception cohort of carefully selected subset of patients). A registry and an academic database have different purposes and cost. The data to be collected for a database is defined by its purpose and the output reports required for achieving that purpose. A Registry's purpose is to ensure quality care, an Academic Database, to discover new knowledge through research. A database is only as good as the data it contains. Database personnel must be exceptionally committed and supported by clinical faculty. A system to routinely validate and verify data integrity is essential to ensure database utility. Frequent use of the database improves its accuracy. For congenital heart surgeons, routine use of a Registry Database is an essential component of clinical practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  16. The 2015 Nucleic Acids Research Database Issue and molecular biology database collection.

    PubMed

    Galperin, Michael Y; Rigden, Daniel J; Fernández-Suárez, Xosé M

    2015-01-01

    The 2015 Nucleic Acids Research Database Issue contains 172 papers that include descriptions of 56 new molecular biology databases, and updates on 115 databases whose descriptions have been previously published in NAR or other journals. Following the classification that has been introduced last year in order to simplify navigation of the entire issue, these articles are divided into eight subject categories. This year's highlights include RNAcentral, an international community portal to various databases on noncoding RNA; ValidatorDB, a validation database for protein structures and their ligands; SASBDB, a primary repository for small-angle scattering data of various macromolecular complexes; MoonProt, a database of 'moonlighting' proteins, and two new databases of protein-protein and other macromolecular complexes, ComPPI and the Complex Portal. This issue also includes an unusually high number of cancer-related databases and other databases dedicated to genomic basics of disease and potential drugs and drug targets. The size of NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/a/, remained approximately the same, following the addition of 74 new resources and removal of 77 obsolete web sites. The entire Database Issue is freely available online on the Nucleic Acids Research web site (http://nar.oxfordjournals.org/). Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  17. Prototype Food and Nutrient Database for Dietary Studies: Branded Food Products Database for Public Health Proof of Concept

    USDA-ARS?s Scientific Manuscript database

    The Prototype Food and Nutrient Database for Dietary Studies (Prototype FNDDS) Branded Food Products Database for Public Health is a proof of concept database. The database contains a small selection of food products which is being used to exhibit the approach for incorporation of the Branded Food ...

  18. Category-based guidance of spatial attention during visual search for feature conjunctions.

    PubMed

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Comparison of treatment patterns and economic outcomes among metastatic pancreatic cancer patients initiated on nab-paclitaxel plus gemcitabine versus FOLFIRINOX.

    PubMed

    McBride, Ali; Bonafede, Machaon; Cai, Qian; Princic, Nicole; Tran, Oth; Pelletier, Corey; Parisi, Monika; Patel, Manish

    2017-10-01

    The economic burden of metastatic pancreatic cancer (mPC) is substantial while treatment options are limited. Little is known about the treatment patterns and healthcare costs among mPC patients who initiated first-line gemcitabine plus nanoparticle albumin-bound paclitaxel (nab-P + G) and FOLFIRINOX. The MarketScan® claims databases were used to identify adults with ≥2 claims for pancreatic cancer and 1 claim for a secondary malignancy who completed ≥1 cycle of nab-P + G or FOLFIRINOX between 4/1/2013 and 3/31/2015 and had continuous plan enrollment for ≥6 months before and 3 months after the first-line treatment. Duration of therapy, per patient per month (PPPM) costs of total healthcare, mPC-related treatment, and supportive care were measured during first-line therapy. 550 mPC patients met the selection criteria (nab-P + G, n = 294; FOLFIRINOX, n = 256). There was no difference in duration of therapy (p = 0.60) between nab-P + G and FOLFIRINOX. Compared with FOLFIRINOX, patients with nab-P + G had higher chemotherapy drug costs but lower treatment administration costs and supportive care costs (all p < 0.01). Patients treated with nab-P + G (vs FOLFIRINOX) had similar treatment duration but lower costs of outpatient prescriptions, treatment administration and supportive care. Lower supportive care costs in the nab-P + G cohort were mainly driven by lower utilization of pegfilgrastim and anti-emetics.

  20. A mapping review of the literature on UK-focused health and social care databases.

    PubMed

    Cooper, Chris; Rogers, Morwenna; Bethel, Alison; Briscoe, Simon; Lowe, Jenny

    2015-03-01

    Bibliographic databases are a day-to-day tool of the researcher: they offer the researcher easy and organised access to knowledge, but how much is actually known about the databases on offer? The focus of this paper is UK health and social care databases. These databases are often small, specialised by topic, and provide a complementary literature to the large, international databases. There is, however, good evidence that these databases are overlooked in systematic reviews, perhaps because little is known about what they can offer. To systematically locate and map published and unpublished literature on the key UK health and social care bibliographic databases. Systematic searching and mapping. Two hundred and forty-two items were identified which specifically related to 24 of the 34 databases under review. There is little published or unpublished literature specifically analysing the key UK health and social care databases. Since several UK databases have closed, others are at risk, and some are overlooked in reviews, better information is required to enhance our knowledge. Further research on UK health and social care databases is required. This paper suggests the need to develop the evidence base through a series of case studies on each of the databases. © 2014 The authors. Health Information and Libraries Journal © 2014 Health Libraries Journal.

  1. Comparison of Online Agricultural Information Services.

    ERIC Educational Resources Information Center

    Reneau, Fred; Patterson, Richard

    1984-01-01

    Outlines major online agricultural information services--agricultural databases, databases with agricultural services, educational databases in agriculture--noting services provided, access to the database, and costs. Benefits of online agricultural database sources (availability of agricultural marketing, weather, commodity prices, management…

  2. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
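
    As a hedged illustration of the kind of cross-shot SQL query described above, the sketch below uses Python's pyodbc module against an ODBC data source; the DSN, table name, and column names are hypothetical placeholders, not the actual DIII-D schema.

        # Minimal sketch of a cross-shot query over ODBC (hypothetical schema).
        import pyodbc

        # The 'D3D' DSN and the 'summary' table/columns are illustrative assumptions.
        conn = pyodbc.connect("DSN=D3D;UID=reader;PWD=secret")
        cur = conn.cursor()

        # Mine summary physics quantities across a range of shots in one query.
        cur.execute(
            "SELECT shot, ip_max, betan_max FROM summary "
            "WHERE shot BETWEEN ? AND ? AND betan_max > ? "
            "ORDER BY betan_max DESC",
            (100000, 105000, 2.5),
        )
        for shot, ip_max, betan_max in cur.fetchall():
            print(shot, ip_max, betan_max)
        conn.close()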

  3. Building An Integrated Neurodegenerative Disease Database At An Academic Health Center

    PubMed Central

    Xie, Sharon X.; Baek, Young; Grossman, Murray; Arnold, Steven E.; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M.-Y.; Trojanowski, John Q.

    2010-01-01

    Background It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer’s disease (AD), Parkinson’s disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD). These comparative studies rely on powerful database tools to quickly generate data sets which match diverse and complementary criteria set by the studies. Methods In this paper, we present a novel Integrated NeuroDegenerative Disease (INDD) database developed at the University of Pennsylvania (Penn) through a consortium of Penn investigators. Since these investigators work on AD, PD, ALS and FTLD, this allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform with built-in “backwards” functionality to provide Access as a front-end client to interface with the database. We used PHP Hypertext Preprocessor to create the “front end” web interface and then integrated individual neurodegenerative disease databases using a master lookup table. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Results We compare the results of a biomarker study using the INDD database to those using an alternative approach by querying individual databases separately. Conclusions We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies across several neurodegenerative diseases. PMID:21784346
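
    The "master lookup table" integration described above can be illustrated with a small, hedged sketch; SQLite stands in for SQL Server here, and all table and column names (master_lookup, ad_clinical, pd_clinical, mmse, updrs) are hypothetical, not the Penn INDD schema.

        # Sketch of master-lookup-table integration using SQLite as a stand-in
        # for SQL Server; tables and columns are hypothetical.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE master_lookup (global_id INTEGER PRIMARY KEY,
                                    source_db TEXT, local_id TEXT);
        CREATE TABLE ad_clinical (local_id TEXT, mmse INTEGER);
        CREATE TABLE pd_clinical (local_id TEXT, updrs INTEGER);
        INSERT INTO master_lookup VALUES (1, 'AD', 'ad-001'), (2, 'PD', 'pd-042');
        INSERT INTO ad_clinical VALUES ('ad-001', 24);
        INSERT INTO pd_clinical VALUES ('pd-042', 31);
        """)

        # One query console view spanning the disease-specific tables.
        rows = db.execute("""
        SELECT m.global_id, m.source_db, COALESCE(a.mmse, p.updrs) AS score
        FROM master_lookup m
        LEFT JOIN ad_clinical a ON m.source_db = 'AD' AND a.local_id = m.local_id
        LEFT JOIN pd_clinical p ON m.source_db = 'PD' AND p.local_id = m.local_id
        """).fetchall()
        print(rows)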

  4. Who's Gonna Pay the Piper for Free Online Databases?

    ERIC Educational Resources Information Center

    Jacso, Peter

    1996-01-01

    Discusses new pricing models for some online services and considers the possibilities for the traditional online database market. Topics include multimedia music databases, including copyright implications; other retail-oriented databases; and paying for free databases with advertising. (LRW)

  5. Simple re-instantiation of small databases using cloud computing.

    PubMed

    Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M

    2013-01-01

    Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.
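
    The actual BioSlax ".lzm" modules are built with the SLAX tool chain and are not reproduced here; as a loose, hypothetical analogy of depositing a small database together with its dependencies as a single compressed archive, a minimal Python sketch:

        # Hypothetical analogy only: bundle a database directory into one
        # compressed archive for deposit.  The real BioSlax ".lzm" modules are
        # created with the SLAX tooling, not with this script.
        import tarfile
        from pathlib import Path

        def package_database(db_dir: str, archive: str) -> None:
            """Compress db_dir (data files, scripts, dependencies) into archive."""
            with tarfile.open(archive, "w:xz") as tar:
                tar.add(db_dir, arcname=Path(db_dir).name)

        package_database("mydb", "mydb_deposit.tar.xz")  # assumes ./mydb exists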

  6. Simple re-instantiation of small databases using cloud computing

    PubMed Central

    2013-01-01

    Background Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. Results We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Conclusions Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear. PMID:24564380

  7. All set, indeed! N2pc components reveal simultaneous attentional control settings for multiple target colors.

    PubMed

    Grubert, Anna; Eimer, Martin

    2016-08-01

    To study whether top-down attentional control processes can be set simultaneously for different visual features, we employed a spatial cueing procedure to measure behavioral and electrophysiological markers of task-set contingent attentional capture during search for targets defined by 1 or 2 possible colors (one-color and two-color tasks). Search arrays were preceded by spatially nonpredictive color singleton cues. Behavioral spatial cueing effects indicative of attentional capture were elicited only by target-matching but not by distractor-color cues. However, when search displays contained 1 target-color and 1 distractor-color object among gray nontargets, N2pc components were triggered not only by target-color but also by distractor-color cues both in the one-color and two-color task, demonstrating that task-set nonmatching items attracted attention. When search displays contained 6 items in 6 different colors, so that participants had to adopt a fully feature-specific task set, the N2pc to distractor-color cues was eliminated in both tasks, indicating that nonmatching items were now successfully excluded from attentional processing. These results demonstrate that when observers adopt a feature-specific search mode, attentional task sets can be configured flexibly for multiple features within the same dimension, resulting in the rapid allocation of attention to task-set matching objects only. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. The opportunities and obstacles in developing a vascular birthmark database for clinical and research use.

    PubMed

    Sharma, Vishal K; Fraulin, Frankie Og; Harrop, A Robertson; McPhalen, Donald F

    2011-01-01

    Databases are useful tools in clinical settings. The authors review the benefits and challenges associated with the development and implementation of an efficient electronic database for the multidisciplinary Vascular Birthmark Clinic at the Alberta Children's Hospital, Calgary, Alberta. The content and structure of the database were designed using the technical expertise of a data analyst from the Calgary Health Region. Relevant clinical and demographic data fields were included with the goal of documenting ongoing care of individual patients, and facilitating future epidemiological studies of this patient population. After completion of this database, 10 challenges encountered during development were retrospectively identified. Practical solutions for these challenges are presented. The challenges identified during the database development process included: identification of relevant data fields; balancing simplicity and user-friendliness with complexity and comprehensive data storage; database expertise versus clinical expertise; software platform selection; linkage of data from the previous spreadsheet to a new data management system; ethics approval for the development of the database and its utilization for research studies; ensuring privacy and limited access to the database; integration of digital photographs into the database; adoption of the database by support staff in the clinic; and maintaining up-to-date entries in the database. There are several challenges involved in the development of a useful and efficient clinical database. Awareness of these potential obstacles, in advance, may simplify the development of clinical databases by others in various surgical settings.

  9. Food Composition Database Format and Structure: A User Focused Approach

    PubMed Central

    Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine

    2015-01-01

    This study aimed to investigate the needs of Australian food composition database users regarding database format and relate this to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. Twenty-four dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database format should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation and feature user input. However, such databases are limited by data availability and resources. Further exploration of data sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836

  10. Online Databases in Physics.

    ERIC Educational Resources Information Center

    Sievert, MaryEllen C.; Verbeck, Alison F.

    1984-01-01

    This overview of 47 online sources for physics information available in the United States--including sub-field databases, transdisciplinary databases, and multidisciplinary databases--notes content, print source, language, time coverage, and databank. Two discipline-specific databases (SPIN and PHYSICS BRIEFS) are also discussed. (EJS)

  11. 75 FR 65611 - Native American Tribal Insignia Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-26

    ... DEPARTMENT OF COMMERCE Patent and Trademark Office Native American Tribal Insignia Database ACTION... comprehensive database containing the official insignia of all federally- and State- recognized Native American... to create this database. The USPTO database of official tribal insignias assists trademark attorneys...

  12. 16 CFR 1102.6 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions... Product Safety Information Database. (2) Commission or CPSC means the Consumer Product Safety Commission... Information Database, also referred to as the Database, means the database on the safety of consumer products...

  13. MIPS: a database for genomes and protein sequences.

    PubMed Central

    Mewes, H W; Heumann, K; Kaps, A; Mayer, K; Pfeiffer, F; Stocker, S; Frishman, D

    1999-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Martinsried near Munich, Germany, develops and maintains genome oriented databases. It is commonplace that the amount of sequence data available increases rapidly, but not the capacity of qualified manual annotation at the sequence databases. Therefore, our strategy aims to cope with the data stream by the comprehensive application of analysis tools to sequences of complete genomes, the systematic classification of protein sequences and the active support of sequence analysis and functional genomics projects. This report describes the systematic and up-to-date analysis of genomes (PEDANT), a comprehensive database of the yeast genome (MYGD), a database reflecting the progress in sequencing the Arabidopsis thaliana genome (MATD), the database of assembled, annotated human EST clusters (MEST), and the collection of protein sequence data within the framework of the PIR-International Protein Sequence Database (described elsewhere in this volume). MIPS provides access through its WWW server (http://www.mips.biochem.mpg.de) to a spectrum of generic databases, including the above mentioned as well as a database of protein families (PROTFAM), the MITOP database, and the all-against-all FASTA database. PMID:9847138

  14. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
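
    The SQL transaction handling mentioned above can be sketched with a hedged example; SQLite is used here as a stand-in DBMS, and the karst_feature table and its fields are illustrative, not the Minnesota KFD schema.

        # Hedged sketch of transactional inserts into a karst feature table.
        import sqlite3

        db = sqlite3.connect("kfd_demo.sqlite")
        db.execute("""CREATE TABLE IF NOT EXISTS karst_feature (
                          feature_id INTEGER PRIMARY KEY,
                          feature_type TEXT,   -- e.g. sinkhole, spring, stream sink
                          county TEXT,
                          utm_e REAL, utm_n REAL)""")

        try:
            with db:  # one atomic transaction: all inserts commit or none do
                db.executemany(
                    "INSERT INTO karst_feature (feature_type, county, utm_e, utm_n) "
                    "VALUES (?, ?, ?, ?)",
                    [("sinkhole", "Winona", 601234.0, 4873210.0),
                     ("spring", "Fillmore", 589321.0, 4845500.0)])
        except sqlite3.Error as exc:
            print("transaction rolled back:", exc)

        print(db.execute("SELECT county, COUNT(*) FROM karst_feature "
                         "GROUP BY county").fetchall())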

  15. Active in-database processing to support ambient assisted living systems.

    PubMed

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
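
    The active-database idea of reacting to events inside the DBMS can be illustrated with a hedged trigger sketch; SQLite is used here for brevity, and the sensor_event and alert tables are hypothetical, not the schema used in the paper.

        # Minimal sketch of an in-database trigger that flags a possible
        # bed-exit when a bed-pressure reading of 0 is inserted.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE sensor_event (ts TEXT, sensor TEXT, value INTEGER);
        CREATE TABLE alert (ts TEXT, message TEXT);

        CREATE TRIGGER bed_exit AFTER INSERT ON sensor_event
        WHEN NEW.sensor = 'bed_pressure' AND NEW.value = 0
        BEGIN
            INSERT INTO alert VALUES (NEW.ts, 'possible bed-exit detected');
        END;
        """)

        db.execute("INSERT INTO sensor_event VALUES ('02:13', 'bed_pressure', 0)")
        print(db.execute("SELECT * FROM alert").fetchall())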

  16. Improved Information Retrieval Performance on SQL Database Using Data Adapter

    NASA Astrophysics Data System (ADS)

    Husni, M.; Djanali, S.; Ciptaningtyas, H. T.; Wicaksana, I. G. N. A.

    2018-02-01

    The NoSQL databases, short for Not Only SQL, are increasingly being used as the number of big data applications increases. Most systems still use relational databases (RDBs), but as the amount of data increases each year, systems handle big data with NoSQL databases to analyze and access data more quickly. NoSQL emerged as a result of the exponential growth of the internet and the development of web applications. The query syntax in a NoSQL database differs from that of an SQL database, therefore requiring code changes in the application. A data adapter allows applications to keep their SQL query syntax unchanged. Data adapters provide methods that can synchronize SQL databases with NoSQL databases. In addition, the data adapter provides an interface that applications can access to run SQL queries. Hence, this research applied a data adapter system to synchronize data between a MySQL database and Apache HBase using a direct-access query approach, in which the system allows the application to accept queries while the synchronization process is in progress. Tests of the data adapter showed that it can synchronize between the SQL database, MySQL, and the NoSQL database, Apache HBase. The system used memory resources in the range of 40% to 60%, with processor utilization ranging from 10% to 90%. In addition, the NoSQL database showed better performance than the SQL database in these tests.
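
    A data adapter of the kind described above can be sketched, with heavy caveats: this is not the authors' implementation, it assumes the real Python libraries mysql-connector-python and happybase, and the host names, users table, and 'd' column family are hypothetical.

        # Hedged sketch of a data adapter that runs an SQL INSERT unchanged and
        # mirrors the same row into HBase (hypothetical schema).
        import mysql.connector
        import happybase

        class DataAdapter:
            def __init__(self):
                self.sql = mysql.connector.connect(
                    host="localhost", user="app", password="secret", database="appdb")
                self.hbase = happybase.Connection("localhost")

            def insert_user(self, user_id: str, name: str, email: str) -> None:
                # 1) Execute the application's SQL statement as-is.
                cur = self.sql.cursor()
                cur.execute(
                    "INSERT INTO users (id, name, email) VALUES (%s, %s, %s)",
                    (user_id, name, email))
                self.sql.commit()
                # 2) Synchronize the same row into the NoSQL store.
                self.hbase.table("users").put(
                    user_id.encode(),
                    {b"d:name": name.encode(), b"d:email": email.encode()})

        adapter = DataAdapter()
        adapter.insert_user("42", "Ada", "ada@example.org")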

  17. Active In-Database Processing to Support Ambient Assisted Living Systems

    PubMed Central

    de Morais, Wagner O.; Lundström, Jens; Wickström, Nicholas

    2014-01-01

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare. PMID:25120164

  18. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and it seems that a strata objects database is becoming more important for registering the real world as people now own and use multiple levels of space. Furthermore, strata titles are increasingly important and need to be well managed. LADM is a standard model for land administration, and it allows integrated 2D and 3D representation of spatial units. LADM is also known as ISO 19152. The aim of this paper is to develop a strata objects database using LADM. This paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future. It also attempts to develop a strata objects database using a standard data model (LADM) and to analyze the developed strata objects database using the LADM data model. The current cadastre system in Malaysia, including strata titles, is discussed in this paper. The problems in the 2D geospatial database are listed, and the need for a 3D geospatial database in the future is also discussed. The processes used to design a strata objects database are conceptual, logical and physical database design. The strata objects database will allow us to find both non-spatial and spatial strata title information and thus show the location of a strata unit. This development of a strata objects database may help in handling strata titles and related information.
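
    A loose, hedged sketch of an LADM-flavoured strata schema is shown below; it uses SQLite and simplifies the ISO 19152 classes (party, basic administrative unit, right, spatial unit) into illustrative tables whose names and columns are assumptions, not the schema developed in the paper.

        # Simplified, hypothetical LADM-style tables for strata objects.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE la_party       (pid INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE la_baunit      (uid INTEGER PRIMARY KEY, strata_title TEXT);
        CREATE TABLE la_spatialunit (sid INTEGER PRIMARY KEY, uid INTEGER,
                                     level INTEGER, floor_area_m2 REAL,
                                     FOREIGN KEY (uid) REFERENCES la_baunit(uid));
        CREATE TABLE la_rrr         (rid INTEGER PRIMARY KEY, pid INTEGER,
                                     uid INTEGER, rtype TEXT);
        INSERT INTO la_party VALUES (1, 'Ali');
        INSERT INTO la_baunit VALUES (10, 'HSD-12345');
        INSERT INTO la_spatialunit VALUES (100, 10, 7, 92.5);
        INSERT INTO la_rrr VALUES (1000, 1, 10, 'ownership');
        """)

        # Link a party's ownership right to the strata unit's level and area.
        print(db.execute("""
        SELECT p.name, b.strata_title, s.level, s.floor_area_m2
        FROM la_rrr r
        JOIN la_party p ON p.pid = r.pid
        JOIN la_baunit b ON b.uid = r.uid
        JOIN la_spatialunit s ON s.uid = b.uid
        WHERE r.rtype = 'ownership'
        """).fetchall())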

  19. A Model Based Mars Climate Database for the Mission Design

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A viewgraph presentation on a model-based climate database is shown. The topics include: 1) Why a model-based climate database?; 2) Mars Climate Database v3.1: who uses it? (approx. 60 users!); 3) The new Mars Climate Database MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the database MCD 4.0; 9) How to access the database; and 10) New web access.

  20. An Integrated Molecular Database on Indian Insects.

    PubMed

    Pratheepa, Maria; Venkatesan, Thiruvengadam; Gracy, Gandhi; Jalali, Sushil Kumar; Rangheswaran, Rajagopal; Antony, Jomin Cruz; Rai, Anil

    2018-01-01

    MOlecular Database on Indian Insects (MODII) is an online database linking several databases like Insect Pest Info, Insect Barcode Information System (IBIn), Insect Whole Genome sequence, Other Genomic Resources of National Bureau of Agricultural Insect Resources (NBAIR), Whole Genome sequencing of Honey bee viruses, Insecticide resistance gene database and Genomic tools. This database was developed with a holistic approach to collecting phenomic and genomic information on agriculturally important insects. This insect resource database is available online free of charge at http://cib.res.in/.

  1. Hiding in plain sight

    NASA Astrophysics Data System (ADS)

    Riedel, Adric Richard

    2012-05-01

    Since the first successful measurements of stellar trigonometric parallax in the 1830s, the study of nearby stars has focused on the highest proper motion stars (μ > 0.18″ yr⁻¹). Those high proper motion stars have formed the backbone of the last 150 years of study of the Solar Neighborhood and the composition of the Galaxy. Statistically speaking, though, there is a population of stars that will have low proper motions when their space motions have been projected onto the sky. At the same time, over the last twenty years, populations of relatively young stars (less than ˜ 100 Myr), most of them with low proper motions, have been revealed near (< 100 pc) the Sun. This dissertation is the result of two related projects: A photometric search for nearby (< 25 pc) southern-hemisphere M dwarf stars with low proper motions (μ < 0.18″ yr⁻¹), and a search for nearby (< 100 pc) pre-main-sequence (< 125 Myr old) M dwarf systems. The projects rely on a variety of photometric, spectroscopic, and astrometric analyses (including parallaxes from our program) using data from telescopes at CTIO via the SMARTS Consortium and at Lowell Observatory. Within this dissertation, I describe the identification and confirmation of 23 new nearby low proper motion M dwarf systems within 25 pc, 8 of which are within 15 pc (50% of the anticipated low-proper-motion 15 pc sample). I also report photometric, spectroscopic, and astrometric parameters and identifications for a selection of 25 known and new candidate nearby young M dwarfs, including new low-mass members of the TW Hydra, beta Pictoris, Tucana-Horologium, Argus, and AB Doradus associations, following the methods of my Riedel et al. (2011) paper and its discovery of AP Col, the closest pre-main-sequence star to the Solar System. These low proper motion and nearby star discoveries are put into the context of the Solar Neighborhood as a whole by means of the new RECONS 25 pc Database, to which I have now added (including my Riedel et al. (2010) paper) 81 star systems (4% of the total). INDEX WORDS: Astronomy, Astrometry, Photometry, Spectroscopy, Kinematics, Proper motion, Parallax, Nearby stars, Low-mass stars, Young stars, Pre-main-sequence stars.

  2. The Danish Testicular Cancer database.

    PubMed

    Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob

    2016-01-01

    The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse and treatment-related toxicity, and by focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive in October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Samples related to both the DaTeCa and DMCG DaTeCa databases are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.

  3. Building an integrated neurodegenerative disease database at an academic health center.

    PubMed

    Xie, Sharon X; Baek, Young; Grossman, Murray; Arnold, Steven E; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M-Y; Trojanowski, John Q

    2011-07-01

    It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration. These comparative studies rely on powerful database tools to quickly generate data sets that match diverse and complementary criteria set by these studies. In this article, we present a novel integrated neurodegenerative disease (INDD) database, which was developed at the University of Pennsylvania (Penn) with the help of a consortium of Penn investigators. Because the work of these investigators is based on Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and frontotemporal lobar degeneration, it allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used the Microsoft SQL server as a platform, with built-in "backwards" functionality to provide Access as a frontend client to interface with the database. We used PHP Hypertext Preprocessor to create the "frontend" web interface and then used a master lookup table to integrate individual neurodegenerative disease databases. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Using the INDD database, we compared the results of a biomarker study with those using an alternative approach by querying individual databases separately. We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies on several neurodegenerative diseases. Copyright © 2011 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  4. Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana

    2002-01-01

    Computer-readable databases have become an integral part of chemical research, from the planning of data acquisition to the interpretation of the information generated. The databases available today are numerical, spectral and bibliographic. Data representation by different schemes--relational, hierarchical and object-based--is demonstrated. A quality index (QI) throws light on the quality of the data. The objective, prospects and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond a manageable number, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.

  5. Creating Your Own Database.

    ERIC Educational Resources Information Center

    Blair, John C., Jr.

    1982-01-01

    Outlines the important factors to be considered in selecting a database management system for use with a microcomputer and presents a series of guidelines for developing a database. General procedures, report generation, data manipulation, information storage, word processing, data entry, database indexes, and relational databases are among the…

  6. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  7. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  8. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  9. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  10. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  11. Generalized Database Management System Support for Numeric Database Environments.

    ERIC Educational Resources Information Center

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  12. A Summary of the Naval Postgraduate School Research Program

    DTIC Science & Technology

    1989-08-30

    Table-of-contents excerpt: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to Software Engineering Environments (SEE's); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; The Multi-lingual, Multi-Model, Multi-Backend Database.

  13. [National Database of Genotypes--ethical and legal issues].

    PubMed

    Franková, Vera; Tesínová, Jolana; Brdicka, Radim

    2011-01-01

    The aim of the project National Database of Genotypes is to outline the structure and rules for operating a database collecting information about the genotypes of individual persons. The database should be used entirely for health care. Its purpose is to enable physicians to gain quick and easy access to information about persons requiring specialized care due to their genetic constitution. In the future, the introduction of further genetic tests into clinical practice can be expected; the database of genotypes would thus facilitate substantial financial savings by avoiding duplication of expensive genetic testing. Ethical questions connected with the creation and operation of such a database concern mainly privacy protection, confidentiality of personal sensitive data, protection of the database from misuse, consent to participation, and public interests. Due to the necessity of correct interpretation by a qualified professional (a clinical geneticist), a particular categorization of genetic data within the database is discussed. The operation of the proposed database has to be governed in accordance with Czech legislation, together with resolution of the ethical problems.

  14. Lessons Learned From Developing Reactor Pressure Vessel Steel Embrittlement Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John

    Materials behaviors caused by neutron irradiation under fission and/or fusion environments cannot be well understood without practical examination. An easily accessible materials information system with a large material database using effective computers is necessary for the design of nuclear materials and for analyses or simulations of these phenomena. The Embrittlement Data Base (EDB) developed at ORNL is such a comprehensive collection of data. The EDB contains power reactor pressure vessel surveillance data, material test reactor data, foreign reactor data (through bilateral agreements authorized by NRC), and fracture toughness data. The lessons learned from building the EDB program and the associated database management activity regarding Material Database Design Methodology, Architecture and the Embedded QA Protocol are described in this report. The development of the IAEA International Database on Reactor Pressure Vessel Materials (IDRPVM) and a comparison of the EDB and IAEA IDRPVM databases are provided in the report. The recommended database QA protocol and database infrastructure are also described.

  15. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the Filemaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well. Further, the data can be exported to more powerful relational databases. This summary describes the capabilities and use of the LDEF databases and how to get copies of the databases for your own research.

  16. Database constraints applied to metabolic pathway reconstruction tools.

    PubMed

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented in a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for small or medium-sized databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes.
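
    The server-parameter tuning referred to above can be illustrated with a small, hedged snippet that inspects and adjusts two well-known MySQL settings through mysql-connector-python; the chosen values are arbitrary examples, not the tuned configuration used in the study, and both settings are dynamic only in reasonably recent MySQL versions.

        # Hedged sketch: inspect and adjust two common MySQL server parameters
        # (arbitrary example values; requires sufficient privileges).
        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="root",
                                       password="secret")
        cur = conn.cursor()

        cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")
        print(cur.fetchone())

        cur.execute("SET GLOBAL max_connections = 500")
        cur.execute("SET GLOBAL innodb_buffer_pool_size = 1073741824")  # 1 GiB
        conn.close()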

  17. Advanced life support study

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Summary reports on each of the eight tasks undertaken by this contract are given. Discussed here is an evaluation of a Closed Ecological Life Support System (CELSS), including modeling and analysis of Physical/Chemical Closed Loop Life Support (P/C CLLS); the Environmental Control and Life Support Systems (ECLSS) evolution - Intermodule Ventilation study; advanced technologies interface requirements relative to ECLSS; an ECLSS resupply analysis; the ECLSS module addition relocation systems engineering analysis; an ECLSS cost/benefit analysis to identify rack-level interface requirements of the alternate technologies evaluated in the ventilation study, with a comparison of these with the rack level interface requirements for the baseline technologies; advanced instrumentation - technology database enhancement; and a clean room survey and assessment of various ECLSS evaluation options for different growth scenarios.

  18. PC based temporary shielding administrative procedure (TSAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, D.E.; Pederson, G.E.; Hamby, P.N.

    1995-03-01

    A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison's six nuclear stations. This procedure promotes the use of shielding, and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software combines the usability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user-friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met.

  19. AQUIS: A PC-based source information manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1993-05-01

    The Air Quality Utility Information System (AQUIS) was developed to calculate emissions and track them along with related information about sources, stacks, controls, and permits. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at 11 US Air Force facilities, which have up to 1,000 sources, and two headquarters. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to the criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  20. AQUIS: A PC-based source information manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Huber, C.C.; Tschanz, J.

    1993-01-01

    The Air Quality Utility Information System (AQUIS) was developed to calculate emissions and track them along with related information about sources, stacks, controls, and permits. The system runs on IBM-compatible personal computers with dBASE IV and tracks more than 1,200 data items distributed among various source categories. AQUIS is currently operating at 11 US Air Force facilities, which have up to 1,000 sources, and two headquarters. The system provides a flexible reporting capability that permits users who are unfamiliar with database structure to design and prepare reports containing user-specified information. In addition to the criteria pollutants, AQUIS calculates compound-specific emissions and allows users to enter their own emission estimates.

  1. A Proposed Collaborative Framework for Prefabricated Housing Construction Using RFID Technology

    NASA Astrophysics Data System (ADS)

    Charnwasununth, Phatsaphan; Yabuki, Nobuyoshi; Tongthong, Tanit

    Despite the popularity of prefabricated housing construction in Thailand and many other countries, the lack of collaboration in current practice leads to undesirably low productivity and a number of mistakes. This research proposes a framework to raise the level of collaboration in order to improve productivity and reduce the occurrence of mistakes at sites. In this framework, the RFID system bridges the gap between the real situation and the design, and the proposed system can cope with unexpected construction conditions by generating proper alternatives. The system is composed of PDAs, RFID readers, laptop PCs, and a desktop PC. Six main modules and a database system are implemented on the laptop PCs for recording actual site conditions, generating working alternatives, providing related information, and evaluating the work.
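
    As a hypothetical illustration (not the authors' system) of how scanned RFID tags can be checked against the designed component database, a small sketch:

        # Hypothetical sketch: compare on-site RFID scans with the design database.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE design (tag_id TEXT PRIMARY KEY, panel TEXT, "
                   "planned_location TEXT)")
        db.executemany("INSERT INTO design VALUES (?, ?, ?)",
                       [("E2001A", "wall-panel-A", "unit-3F-01"),
                        ("E2001B", "wall-panel-B", "unit-3F-02")])

        def check_scan(tag_id: str, actual_location: str) -> str:
            row = db.execute("SELECT panel, planned_location FROM design "
                             "WHERE tag_id = ?", (tag_id,)).fetchone()
            if row is None:
                return "unknown component - flag for review"
            panel, planned = row
            if planned == actual_location:
                return "OK"
            return f"{panel} scanned at {actual_location}, planned for {planned}"

        print(check_scan("E2001A", "unit-3F-01"))  # matches the design
        print(check_scan("E2001B", "unit-3F-05"))  # mismatch: generate alternative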

  2. Research and realization of key technology in HILS interactive system

    NASA Astrophysics Data System (ADS)

    Liu, Che; Lu, Huiming; Wang, Fankai

    2018-03-01

    This paper presents the design of an HILS (Hardware-In-the-Loop Simulation) interactive system based on the xPC platform. Through the interface between C++ and the MATLAB engine, a seamless data connection between Simulink and the interactive system is established, covering data interaction between the system and Simulink and supporting model configuration, parameter modification, and offline simulation. Data communication between the host and target machines is established over the TCP/IP protocol to realize model download and real-time simulation. A database is used to store simulation data and to implement real-time simulation monitoring and simulation data management. System functions are integrated using the Qt graphical interface library and dynamic link libraries. Finally, a typical control system is taken as an example to verify the feasibility of the HILS interactive system.
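
    The host-target data link over TCP/IP can be illustrated with a minimal, hypothetical socket exchange; the simple "name=value" message format below is an assumption for illustration and is unrelated to the actual xPC target protocol.

        # Hypothetical host-to-target parameter update over TCP/IP.
        import socket
        import threading
        import time

        def target_stub(port: int = 5005) -> None:
            """Stand-in for the real-time target: accept one parameter update."""
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.bind(("127.0.0.1", port))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn:
                    print("target received:", conn.recv(1024).decode())
                    conn.sendall(b"ack")

        threading.Thread(target=target_stub, daemon=True).start()
        time.sleep(0.2)  # give the stub time to start listening

        # Host side: push a modified model parameter to the target.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as host:
            host.connect(("127.0.0.1", 5005))
            host.sendall(b"controller.Kp=2.5")
            print("host got:", host.recv(16).decode())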

  3. Identifying work-related motor vehicle crashes in multiple databases.

    PubMed

    Thomas, Andrea M; Thygerson, Steven M; Merrill, Ray M; Cook, Lawrence J

    2012-01-01

    To compare and estimate the magnitude of work-related motor vehicle crashes in Utah using 2 probabilistically linked statewide databases. Data from 2006 and 2007 motor vehicle crash and hospital databases were joined through probabilistic linkage. Summary statistics and capture-recapture were used to describe occupants injured in work-related motor vehicle crashes and estimate the size of this population. There were 1597 occupants in the motor vehicle crash database and 1673 patients in the hospital database identified as being in a work-related motor vehicle crash. We identified 1443 occupants with at least one record from either the motor vehicle crash or hospital database indicating work-relatedness that linked to any record in the opposing database. We found that 38.7 percent of occupants injured in work-related motor vehicle crashes identified in the motor vehicle crash database did not have a primary payer code of workers' compensation in the hospital database and 40.0 percent of patients injured in work-related motor vehicle crashes identified in the hospital database did not meet our definition of a work-related motor vehicle crash in the motor vehicle crash database. Depending on how occupants injured in work-related motor crashes are identified, we estimate the population to be between 1852 and 8492 in Utah for the years 2006 and 2007. Research on single databases may lead to biased interpretations of work-related motor vehicle crashes. Combining 2 population based databases may still result in an underestimate of the magnitude of work-related motor vehicle crashes. Improved coding of work-related incidents is needed in current databases.
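
    The lower bound of the population estimate quoted above is consistent with a simple two-source capture-recapture (Lincoln-Petersen) calculation using the counts reported in the abstract; the sketch below illustrates the method only and is not the authors' full analysis.

        # Two-source capture-recapture (Lincoln-Petersen) with the reported counts.
        n_crash = 1597     # occupants flagged work-related in the crash database
        n_hospital = 1673  # patients flagged work-related in the hospital database
        n_both = 1443      # records linked across both databases

        estimate = n_crash * n_hospital / n_both
        print(round(estimate))  # ~1852, matching the lower bound in the abstract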

  4. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  5. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Background Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Method Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available in public sources. Conclusion Our findings have shown that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA among Asia-Pacific countries is needed. PMID:26560127

  6. Utilisation of a thoracic oncology database to capture radiological and pathological images for evaluation of response to chemotherapy in patients with malignant pleural mesothelioma

    PubMed Central

    Carey, George B; Kazantsev, Stephanie; Surati, Mosmi; Rolle, Cleo E; Kanteti, Archana; Sadiq, Ahad; Bahroos, Neil; Raumann, Brigitte; Madduri, Ravi; Dave, Paul; Starkey, Adam; Hensing, Thomas; Husain, Aliya N; Vokes, Everett E; Vigneswaran, Wickii; Armato, Samuel G; Kindler, Hedy L; Salgia, Ravi

    2012-01-01

    Objective An area of need in cancer informatics is the ability to store images in a comprehensive database as part of translational cancer research. To meet this need, we have implemented a novel tandem database infrastructure that facilitates image storage and utilisation. Background We had previously implemented the Thoracic Oncology Program Database Project (TOPDP) database for our translational cancer research needs. While useful for many research endeavours, it is unable to store images, hence the need for an imaging database that could communicate easily with the TOPDP database. Methods The Thoracic Oncology Research Program (TORP) imaging database was designed using the Research Electronic Data Capture (REDCap) platform, which was developed by Vanderbilt University. To demonstrate proof of principle and evaluate utility, we performed a retrospective investigation into tumour response for malignant pleural mesothelioma (MPM) patients treated at the University of Chicago Medical Center with either of two analogous chemotherapy regimens and who consented to at least one of two UCMC IRB protocols, 9571 and 13473A. Results A cohort of 22 MPM patients was identified using clinical data in the TOPDP database. After measurements were acquired, two representative CT images and 0–35 histological images per patient were successfully stored in the TORP database, along with clinical and demographic data. Discussion We implemented the TORP imaging database to be used in conjunction with our comprehensive TOPDP database. While using two databases requires additional effort, our database infrastructure facilitates more comprehensive translational research. Conclusions The investigation described herein demonstrates the successful implementation of this novel tandem imaging database infrastructure, as well as the potential utility of investigations enabled by it. The data model presented here can be utilised as the basis for further development of other larger, more streamlined databases in the future. PMID:23103606

  7. The landslide database for Germany: Closing the gap at national level

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2015-11-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventorying of landslide data nonetheless has a long research history in Germany, albeit one focused on the development of databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at the national level. The present paper reports on this project, which is based on a landslide database that evolved over the last 15 years into a database covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with broad application potential. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this paper, the goals and the research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, which is followed by the results of three case studies in the German Central Uplands. The case study results exemplify database application in the analysis of landslide frequency and causes, impact statistics, and landslide susceptibility modeling. Using the example of these case studies, the strengths and weaknesses of the database are discussed in detail. The paper concludes with a summary of the database project with regard to previous achievements and the strategic roadmap.
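
    As a rough illustration of the kind of spatial retrieval a PostgreSQL/PostGIS-backed landslide inventory could serve once the migration described above is complete, the sketch below selects records inside a geographic bounding box. The table and column names (landslides, geom, event_date, landslide_type) and the connection string are assumptions for illustration, not the project's actual schema, and running it requires access to a PostGIS instance.

    ```python
    # Minimal sketch of a spatial query against a PostGIS-backed landslide
    # inventory. Table/column names and the DSN are hypothetical.
    import psycopg2

    QUERY = """
    SELECT id, event_date, landslide_type
    FROM landslides
    WHERE ST_Within(
        geom,
        ST_MakeEnvelope(%s, %s, %s, %s, 4326)  -- lon/lat bounding box, WGS 84
    );
    """

    def landslides_in_bbox(conn, min_lon, min_lat, max_lon, max_lat):
        """Return landslide records whose geometry lies inside a bounding box."""
        with conn.cursor() as cur:
            cur.execute(QUERY, (min_lon, min_lat, max_lon, max_lat))
            return cur.fetchall()

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=landslide_db user=reader")  # assumed DSN
        for row in landslides_in_bbox(conn, 9.0, 51.0, 10.5, 52.0):
            print(row)
    ```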

  8. Applications of Database Machines in Library Systems.

    ERIC Educational Resources Information Center

    Salmon, Stephen R.

    1984-01-01

    Characteristics and advantages of database machines are summarized and their applications to library functions are described. The ability to attach multiple hosts to the same database and flexibility in choosing operating and database management systems for different functions without loss of access to a common database are noted. (EJS)

  9. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This paper offers an elementary description of database characteristics and then provides a survey of databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  10. Using a Semi-Realistic Database to Support a Database Course

    ERIC Educational Resources Information Center

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is to construct effective databases for instructions and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  11. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/.This searchable database is a tool that may be used to identify existing contracts and other...

  12. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/. This searchable database is a tool that may be used to identify existing contracts and other...

  13. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/ .This searchable database is a tool that may be used to identify existing contracts and other...

  14. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/.This searchable database is a tool that may be used to identify existing contracts and other...

  15. 78 FR 65644 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-01

    ... Files Database. FHFA-OIG-2: FHFA-OIG Investigative & Evaluative Files Database. FHFA-OIG-3: FHFA-OIG Investigative & Evaluative MIS Database. FHFA-OIG-4: FHFA-OIG Hotline Database. FHFA-OIG-5: FHFA-OIG... & Evaluative Files Database, published at 76 FR 11465 (March 2, 2011), is being amended to eliminate all...

  16. JICST Factual Database JICST DNA Database

    NASA Astrophysics Data System (ADS)

    Shirokizawa, Yoshiko; Abe, Atsushi

    The Japan Information Center of Science and Technology (JICST) started the online service of its DNA database in October 1988. The database is composed of the EMBL Nucleotide Sequence Library and the Genetic Sequence Data Bank. The authors outline the database system, data items, and search commands. Examples of retrieval sessions are presented.

  17. Online Petroleum Industry Bibliographic Databases: A Review.

    ERIC Educational Resources Information Center

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  18. Building Databases for Education. ERIC Digest.

    ERIC Educational Resources Information Center

    Klausmeier, Jane A.

    This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…

  19. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  20. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false TRS User Registration Database and... Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall validate... Database on a per-call basis. Emergency 911 calls are excepted from this requirement. (i) Validation shall...

  1. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  2. 47 CFR 15.713 - TV bands database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database. 15.713 Section 15.713... TV bands database. (a) Purpose. The TV bands database serves the following functions: (1) To... channels are determined based on the interference protection requirements in § 15.712. A database must...

  3. 47 CFR 15.713 - TV bands database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database. 15.713 Section 15.713... TV bands database. (a) Purpose. The TV bands database serves the following functions: (1) To... channels are determined based on the interference protection requirements in § 15.712. A database must...

  4. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  5. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  6. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  7. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false TRS User Registration Database and... Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall validate... Database on a per-call basis. Emergency 911 calls are excepted from this requirement. (i) Validation shall...

  8. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  9. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  10. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  11. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  12. PrimateLit Database

    Science.gov Websites

    PrimateLit is a bibliographic database for primatology, a collaborative project of the Wisconsin Primate Research Center supported by the National Center for Research Resources (NCRR), National Institutes of Health. The PrimateLit database is no longer being updated.

  13. [1012.5676] The Exoplanet Orbit Database

    Science.gov Websites

    The Exoplanet Orbit Database. Authors: Jason T. Wright, Onsi Fakhouri, Geoffrey W. Marcy, Eunkyu Han, et al. The authors present a database of well-determined orbital parameters of exoplanets, comprising these parameters and the method used for each planet's discovery. The Exoplanet Orbit Database includes all planets…

  14. 77 FR 24925 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    ... CES Personnel Information System database of NIFA. This database is updated annually from data provided by 1862 and 1890 land-grant universities. This database is maintained by the Agricultural Research... reviewer. NIFA maintains a database of potential reviewers. Information in the database is used to match...

  15. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  16. Statewide Education Databases: Policy Issues. Discussion Draft.

    ERIC Educational Resources Information Center

    Hansen, Kenneth H.

    This essay reviews current policy issues regarding statewide educational databases. It begins by defining the major characteristics of a database and raising two questions: (1) Is it really necessary to have a statewide educational database? (2) What is the primary rationale for creating one? The limitations of databases in formulating educational…

  17. A Relational Database System for Student Use.

    ERIC Educational Resources Information Center

    Fertuck, Len

    1982-01-01

    Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)

  18. A Brief Review of RNA–Protein Interaction Database Resources

    PubMed Central

    Yi, Ying; Zhao, Yue; Huang, Yan; Wang, Dong

    2017-01-01

    RNA–Protein interactions play critical roles in various biological processes. By collecting and analyzing RNA–Protein interactions and binding sites from experiments and predictions, RNA–Protein interaction databases have become an essential resource for exploring the transcriptional and post-transcriptional regulatory network. Here, we briefly review several widely used RNA–Protein interaction database resources developed in recent years to provide a guide to these databases. The content and major functions of each database are presented. The brief descriptions help users quickly choose the database containing the information they are interested in. In short, these RNA–Protein interaction database resources are continually updated, and their current state reflects ongoing efforts to identify and analyze the large number of RNA–Protein interactions. PMID:29657278

  19. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is providing scientists with efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching to perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases that are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
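
    The pointer-caching idea described above can be illustrated with a toy sketch: a cross-database search collects and caches pointers to matching records at remote sites, and actual data are fetched only when the cached pointers are later materialized. This is not the VIEWCACHE implementation; the in-memory "sites", record structure, and names are invented.

    ```python
    # Toy illustration of deferring data movement by caching pointers.
    from collections import namedtuple

    Pointer = namedtuple("Pointer", "site record_id")

    SITES = {  # stand-ins for physically distributed database sites
        "archive_a": {1: {"object": "M31", "band": "X-ray"},
                      2: {"object": "M87", "band": "optical"}},
        "archive_b": {7: {"object": "M31", "band": "radio"}},
    }

    def search(predicate):
        """Cross-database search: return cached pointers, move no data."""
        return [Pointer(site, rid)
                for site, records in SITES.items()
                for rid, rec in records.items() if predicate(rec)]

    def materialize(pointers):
        """Fetch the actual records only for the cached pointers."""
        return [SITES[p.site][p.record_id] for p in pointers]

    cache = search(lambda rec: rec["object"] == "M31")
    print(cache)                 # pointers only
    print(materialize(cache))    # data movement deferred until now
    ```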

  20. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information from the bench chemist to biologist, medical practitioner to pharmaceutical scientist in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching, and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference centric or compound centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  1. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches (NoSQL, XML-enabled and native XML) are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
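
    The contrast drawn above between document-centric and relational storage can be seen in a small sketch: the same hierarchical clinical encounter kept as one nested document versus decomposed into flattened relational rows. All field names and values are invented for the example, and it is only meant to show why nesting maps naturally onto a document model.

    ```python
    # One hierarchical clinical encounter, stored two ways.
    import json

    encounter_document = {                      # document / NoSQL-style record
        "patient_id": "P001",
        "visit_date": "2013-01-15",
        "diagnoses": ["I10", "E11.9"],
        "labs": [
            {"test": "HbA1c", "value": 7.2, "unit": "%"},
            {"test": "LDL", "value": 3.1, "unit": "mmol/L"},
        ],
        "notes": "free-text clinical narrative ...",
    }

    # Relational equivalent: the nesting is decomposed into child tables keyed
    # by (patient_id, visit_date), with free text left in a single column.
    visits = [("P001", "2013-01-15", "free-text clinical narrative ...")]
    diagnoses = [("P001", "2013-01-15", "I10"), ("P001", "2013-01-15", "E11.9")]
    labs = [("P001", "2013-01-15", "HbA1c", 7.2, "%"),
            ("P001", "2013-01-15", "LDL", 3.1, "mmol/L")]

    print(json.dumps(encounter_document, indent=2))
    ```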

  2. PylotDB - A Database Management, Graphing, and Analysis Tool Written in Python

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnette, Daniel W.

    2012-01-04

    PylotDB, written completely in Python, provides a user interface (UI) with which to interact with, analyze, graph data from, and manage open source databases such as MySQL. The UI spares the user from needing in-depth knowledge of the database application programming interface (API). PylotDB allows the user to generate various kinds of plots from user-selected data; generate statistical information on text as well as numerical fields; back up and restore databases; compare database tables across different databases as well as across different servers; extract information from any field to create new fields; generate, edit, and delete databases, tables, and fields; generate or read CSV data into a table; and perform similar operations. Since much of the database information is brought under control of the Python computer language, PylotDB is not intended for huge databases, for which MySQL and Oracle, for example, are better suited. PylotDB is better suited for smaller databases that might typically be needed in a small research group situation. PylotDB can also be used as a learning tool for database applications in general.
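
    This is not PylotDB's API, but a minimal sketch of the workflow the record describes: pull user-selected data out of a SQL database with Python, compute simple field statistics, and plot the result. An in-memory sqlite3 table stands in for a MySQL server, and the table and column names are invented.

    ```python
    # Query a SQL table, summarize a numeric field, and plot it.
    import sqlite3
    import statistics
    import matplotlib.pyplot as plt

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE runs (case_name TEXT, wall_time REAL);
    INSERT INTO runs VALUES ('a', 12.0), ('b', 15.5), ('c', 11.2), ('d', 14.1);
    """)

    rows = conn.execute("SELECT case_name, wall_time FROM runs").fetchall()
    names = [r[0] for r in rows]
    times = [r[1] for r in rows]

    print("mean wall time:", statistics.mean(times))   # simple field statistics

    plt.bar(names, times)                               # user-selected plot
    plt.ylabel("wall time (s)")
    plt.savefig("runs.png")
    ```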

  3. A computational platform to maintain and migrate manual functional annotations for BioCyc databases.

    PubMed

    Walsh, Jesse R; Sen, Taner Z; Dickerson, Julie A

    2014-10-12

    BioCyc databases are an important resource for information on biological pathways and genomic data. Such databases represent the accumulation of biological data, some of which has been manually curated from literature. An essential feature of these databases is the continuing data integration as new knowledge is discovered. As functional annotations are improved, scalable methods are needed for curators to manage annotations without detailed knowledge of the specific design of the BioCyc database. We have developed CycTools, a software tool which allows curators to maintain functional annotations in a model organism database. This tool builds on existing software to improve and simplify annotation data imports of user provided data into BioCyc databases. Additionally, CycTools automatically resolves synonyms and alternate identifiers contained within the database into the appropriate internal identifiers. Automating steps in the manual data entry process can improve curation efforts for major biological databases. The functionality of CycTools is demonstrated by transferring GO term annotations from MaizeCyc to matching proteins in CornCyc, both maize metabolic pathway databases available at MaizeGDB, and by creating strain specific databases for metabolic engineering.

  4. Multiple imputation as one tool to provide longitudinal databases for modelling human height and weight development.

    PubMed

    Aßmann, C

    2016-06-01

    Besides large efforts regarding field work, provision of valid databases requires statistical and informational infrastructure to enable long-term access to longitudinal data sets on height, weight and related issues. To foster use of longitudinal data sets within the scientific community, provision of valid databases has to address data-protection regulations. It is, therefore, of major importance to hinder identifiability of individuals from publicly available databases. To reach this goal, one possible strategy is to provide a synthetic database to the public allowing for pretesting strategies for data analysis. The synthetic databases can be established using multiple imputation tools. Given the approval of the strategy, verification is based on the original data. Multiple imputation by chained equations is illustrated to facilitate provision of synthetic databases as it allows for capturing a wide range of statistical interdependencies. Also missing values, typically occurring within longitudinal databases for reasons of item non-response, can be addressed via multiple imputation when providing databases. The provision of synthetic databases using multiple imputation techniques is one possible strategy to ensure data protection, increase visibility of longitudinal databases and enhance the analytical potential.
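
    The chained-equations idea mentioned above can be sketched in a few lines: each variable with missing values is regressed on the others, its missing entries are replaced by predictions plus noise, and the cycle is repeated; running the procedure several times yields multiple imputed copies. This is a bare-bones numeric illustration under assumed data, not the author's method, and real applications would use dedicated multiple-imputation software.

    ```python
    # Bare-bones chained-equation imputation on a numeric array.
    import numpy as np

    rng = np.random.default_rng(0)

    def chained_imputation(data, n_iter=10, noise_scale=0.1):
        x = np.array(data, dtype=float)
        missing = np.isnan(x)
        col_means = np.nanmean(x, axis=0)
        x[missing] = np.take(col_means, np.where(missing)[1])  # initial fill
        for _ in range(n_iter):
            for j in range(x.shape[1]):
                if not missing[:, j].any():
                    continue
                others = np.delete(x, j, axis=1)
                design = np.column_stack([np.ones(len(x)), others])
                obs = ~missing[:, j]
                beta, *_ = np.linalg.lstsq(design[obs], x[obs, j], rcond=None)
                pred = design[~obs] @ beta
                # Added noise makes repeated runs produce distinct imputations.
                x[~obs, j] = pred + rng.normal(0, noise_scale, pred.shape)
        return x

    # Toy height/weight rows with missing entries (invented values).
    raw = np.array([[170.0, 65.0], [180.0, np.nan], [np.nan, 80.0], [165.0, 60.0]])
    print(chained_imputation(raw))   # one of several possible imputed data sets
    ```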

  5. Performance assessment of EMR systems based on post-relational database.

    PubMed

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.

  6. Overlap and diversity in antimicrobial peptide databases: compiling a non-redundant set of sequences.

    PubMed

    Aguilera-Mendoza, Longendri; Marrero-Ponce, Yovani; Tellez-Ibarra, Roberto; Llorente-Quesada, Monica T; Salgado, Jesús; Barigye, Stephen J; Liu, Jun

    2015-08-01

    The large variety of antimicrobial peptide (AMP) databases developed to date are characterized by a substantial overlap of data and similarity of sequences. Our goals are to analyze the levels of redundancy for all available AMP databases and use this information to build a new non-redundant sequence database. For this purpose, a new software tool is introduced. A comparative study of 25 AMP databases reveals the overlap and diversity among them and the internal diversity within each database. The overlap analysis shows that only one database (Peptaibol) contains exclusive data, not present in any other, whereas all sequences in the LAMP_Patent database are included in CAMP_Patent. However, the majority of databases have their own set of unique sequences, as well as some overlap with other databases. The complete set of non-duplicate sequences comprises 16 990 cases, which is almost half of the total number of reported peptides. On the other hand, the diversity analysis identifies the most and least diverse databases and proves that all databases exhibit some level of redundancy. Finally, we present a new parallel-free software, named Dover Analyzer, developed to compute the overlap and diversity between any number of databases and compile a set of non-redundant sequences. These results are useful for selecting or building a suitable representative set of AMPs, according to specific needs. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
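
    The overlap and non-redundancy bookkeeping described above amounts to set operations over sequence collections: pairwise intersections, database-exclusive sequences, and the non-redundant union. The sketch below shows the pattern on toy data; the database names and sequences are fabricated and this is not the Dover Analyzer software.

    ```python
    # Pairwise overlap, exclusive sequences, and the non-redundant union.
    from itertools import combinations

    databases = {
        "db_a": {"GIGKFLHSAK", "KWKLFKKIEK", "FLPIIAKLLG"},
        "db_b": {"KWKLFKKIEK", "GLLDIVKKVV"},
        "db_c": {"FLPIIAKLLG", "GLLDIVKKVV", "ILPWKWPWWP"},
    }

    for (name1, seqs1), (name2, seqs2) in combinations(databases.items(), 2):
        print(f"overlap {name1}/{name2}: {len(seqs1 & seqs2)}")

    for name, seqs in databases.items():
        others = set().union(*(s for n, s in databases.items() if n != name))
        print(f"exclusive to {name}: {sorted(seqs - others)}")

    non_redundant = set().union(*databases.values())
    print("non-redundant total:", len(non_redundant))
    ```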

  7. Melanoma of the Skin in the Danish Cancer Registry and the Danish Melanoma Database: A Validation Study.

    PubMed

    Pedersen, Sidsel Arnspang; Schmidt, Sigrun Alba Johannesdottir; Klausen, Siri; Pottegård, Anton; Friis, Søren; Hölmich, Lisbet Rosenkrantz; Gaist, David

    2018-05-01

    The nationwide Danish Cancer Registry and the Danish Melanoma Database both record data on melanoma for purposes of monitoring, quality assurance, and research. However, the data quality of the Cancer Registry and the Melanoma Database has not been formally evaluated. We estimated the positive predictive value (PPV) of melanoma diagnosis for random samples of 200 patients from the Cancer Registry (n = 200) and the Melanoma Database (n = 200) during 2004-2014, using the Danish Pathology Registry as "gold standard" reference. We further validated tumor characteristics in the Cancer Registry and the Melanoma Database. Additionally, we estimated the PPV of in situ melanoma diagnoses in the Melanoma Database, and the sensitivity of melanoma diagnoses in 2004-2014. The PPVs of melanoma in the Cancer Registry and the Melanoma Database were 97% (95% CI = 94, 99) and 100%. The sensitivity was 90% in the Cancer Registry and 77% in the Melanoma Database. The PPV of in situ melanomas in the Melanoma Database was 97% and the sensitivity was 56%. In the Melanoma Database, we observed PPVs of ulceration of 75% and Breslow thickness of 96%. The PPV of histologic subtypes varied between 87% and 100% in the Cancer Registry and 93% and 100% in the Melanoma Database. The PPVs for anatomical localization were 83%-95% in the Cancer Registry and 93%-100% in the Melanoma Database. The data quality in both the Cancer Registry and the Melanoma Database is high, supporting their use in epidemiologic studies.
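
    The two validation metrics used above are standard and worth writing out: the positive predictive value is the share of registered diagnoses confirmed by the gold standard, and the sensitivity is the share of gold-standard cases the registry captured. The counts in the example are invented, not the study's data.

    ```python
    # PPV and sensitivity from confusion-matrix counts.
    def ppv(true_pos, false_pos):
        """Positive predictive value: confirmed cases among registered cases."""
        return true_pos / (true_pos + false_pos)

    def sensitivity(true_pos, false_neg):
        """Share of gold-standard cases that the registry actually captured."""
        return true_pos / (true_pos + false_neg)

    # e.g. 194 of 200 sampled registry diagnoses confirmed by pathology, and
    # 10 gold-standard cases missed by the registry (invented numbers).
    print(f"PPV = {ppv(194, 6):.1%}")
    print(f"sensitivity = {sensitivity(194, 10):.1%}")
    ```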

  8. The 24th annual Nucleic Acids Research database issue: a look back and upcoming changes

    PubMed Central

    Rigden, Daniel J

    2017-01-01

    Abstract This year's Database Issue of Nucleic Acids Research contains 152 papers that include descriptions of 54 new databases and update papers on 98 databases, of which 16 have not been previously featured in NAR. As always, these databases cover a broad range of molecular biology subjects, including genome structure, gene expression and its regulation, proteins, protein domains, and protein–protein interactions. Following the recent trend, an increasing number of new and established databases deal with the issues of human health, from cancer-causing mutations to drugs and drug targets. In accordance with this trend, three recently compiled databases that have been selected by NAR reviewers and editors as ‘breakthrough’ contributions, denovo-db, the Monarch Initiative, and Open Targets, cover human de novo gene variants, disease-related phenotypes in model organisms, and a bioinformatics platform for therapeutic target identification and validation, respectively. We expect these databases to attract the attention of numerous researchers working in various areas of genetics and genomics. Looking back at the past 12 years, we present here the ‘golden set’ of databases that have consistently served as authoritative, comprehensive, and convenient data resources widely used by the entire community and offer some lessons on what makes a successful database. The Database Issue is freely available online at the https://academic.oup.com/nar web site. An updated version of the NAR Molecular Biology Database Collection is available at http://www.oxfordjournals.org/nar/database/a/. PMID:28053160

  9. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground- motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
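
    The table layout named above (earthquake, station, component, record, and references) can be sketched as a toy relational schema in which the "flat file" is simply a denormalizing join. Only the table names come from the record; every column name here is an assumption for illustration, and the references table is renamed to avoid the SQL keyword.

    ```python
    # Toy relational schema and flat-file extraction query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE earthquake (eq_id INTEGER PRIMARY KEY, origin_time TEXT, magnitude REAL);
    CREATE TABLE station    (sta_id INTEGER PRIMARY KEY, code TEXT, vs30 REAL);
    CREATE TABLE record     (rec_id INTEGER PRIMARY KEY, eq_id INTEGER, sta_id INTEGER,
                             epicentral_dist_km REAL);
    CREATE TABLE component  (rec_id INTEGER, orientation TEXT, pga_g REAL);
    CREATE TABLE ref_source (ref_id INTEGER PRIMARY KEY, citation TEXT);  -- 'references' table
    """)

    # Flat-file extraction: one row per component with key source/site fields.
    FLATFILE_SQL = """
    SELECT e.origin_time, e.magnitude, s.code, s.vs30,
           r.epicentral_dist_km, c.orientation, c.pga_g
    FROM component c
    JOIN record     r ON r.rec_id = c.rec_id
    JOIN earthquake e ON e.eq_id = r.eq_id
    JOIN station    s ON s.sta_id = r.sta_id
    """
    print(conn.execute(FLATFILE_SQL).fetchall())   # empty until populated
    ```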

  10. Healthcare databases in Europe for studying medicine use and safety during pregnancy.

    PubMed

    Charlton, Rachel A; Neville, Amanda J; Jordan, Sue; Pierini, Anna; Damase-Michel, Christine; Klungsøyr, Kari; Andersen, Anne-Marie Nybo; Hansen, Anne Vinkel; Gini, Rosa; Bos, Jens H J; Puccini, Aurora; Hurault-Delarue, Caroline; Brooks, Caroline J; de Jong-van den Berg, Lolkje T W; de Vries, Corinne S

    2014-06-01

    The aim of this study was to describe a number of electronic healthcare databases in Europe in terms of the population covered, the source of the data captured and the availability of data on key variables required for evaluating medicine use and medicine safety during pregnancy. A sample of electronic healthcare databases that captured pregnancies and prescription data was selected on the basis of contacts within the EUROCAT network. For each participating database, a database inventory was completed. Eight databases were included, and the total population covered was 25 million. All databases recorded live births, seven captured stillbirths and five had full data available on spontaneous pregnancy losses and induced terminations. In six databases, data were usually available to determine the date of the woman's last menstrual period, whereas in the remainder, algorithms were needed to establish a best estimate for at least some pregnancies. In seven databases, it was possible to use data recorded in the databases to identify pregnancies where the offspring had a congenital anomaly. Information on confounding variables was more commonly available in databases capturing data recorded by primary-care practitioners. All databases captured maternal co-prescribing and a measure of socioeconomic status. This study suggests that within Europe, electronic healthcare databases may be valuable sources of data for evaluating medicine use and safety during pregnancy. The suitability of a particular database, however, will depend on the research question, the type of medicine to be evaluated, the prevalence of its use and any adverse outcomes of interest. © 2014 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd.

  11. Variability in Standard Outcomes of Posterior Lumbar Fusion Determined by National Databases.

    PubMed

    Joseph, Jacob R; Smith, Brandon W; Park, Paul

    2017-01-01

    National databases are used with increasing frequency in spine surgery literature to evaluate patient outcomes. The differences between individual databases in relationship to outcomes of lumbar fusion are not known. We evaluated the variability in standard outcomes of posterior lumbar fusion between the University HealthSystem Consortium (UHC) database and the Healthcare Cost and Utilization Project National Inpatient Sample (NIS). NIS and UHC databases were queried for all posterior lumbar fusions (International Classification of Diseases, Ninth Revision code 81.07) performed in 2012. Patient demographics, comorbidities (including obesity), length of stay (LOS), in-hospital mortality, and complications such as urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, durotomy, and surgical site infection were collected using specific International Classification of Diseases, Ninth Revision codes. Analysis included 21,470 patients from the NIS database and 14,898 patients from the UHC database. Demographic data were not significantly different between databases. Obesity was more prevalent in UHC (P = 0.001). Mean LOS was 3.8 days in NIS and 4.55 in UHC (P < 0.0001). Complications were significantly higher in UHC, including urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, surgical site infection, and durotomy. In-hospital mortality was similar between databases. NIS and UHC databases had similar demographic patient populations undergoing posterior lumbar fusion. However, the UHC database reported a significantly higher complication rate and longer LOS. This difference may reflect academic institutions treating higher-risk patients; however, a definitive reason for the variability between databases is unknown. The inability to precisely determine the basis of the variability between databases highlights the limitations of using administrative databases for spinal outcome analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
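
    The kind of aggregation behind such comparisons reduces to filtering coded discharge records by a procedure code and tabulating outcomes per database. The sketch below shows the pattern only; the records are fabricated and bear no relation to the NIS or UHC data.

    ```python
    # Filter by procedure code, then summarize LOS and complications per database.
    records = [
        {"db": "NIS", "proc": "81.07", "los": 3, "complication": False},
        {"db": "NIS", "proc": "81.07", "los": 5, "complication": True},
        {"db": "UHC", "proc": "81.07", "los": 4, "complication": False},
        {"db": "UHC", "proc": "81.07", "los": 6, "complication": True},
    ]

    for db in ("NIS", "UHC"):
        cohort = [r for r in records if r["db"] == db and r["proc"] == "81.07"]
        mean_los = sum(r["los"] for r in cohort) / len(cohort)
        rate = sum(r["complication"] for r in cohort) / len(cohort)
        print(f"{db}: n={len(cohort)}, mean LOS={mean_los:.1f} d, complications={rate:.0%}")
    ```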

  12. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315

  13. BioWarehouse: a bioinformatics database warehouse toolkit.

    PubMed

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
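
    The warehouse idea described in both BioWarehouse records is that once several source databases are loaded into one relational schema, a single SQL query can span them. The sketch below mimics the article's example question (enzyme activities with no sequence) on an invented two-table schema; it is not the actual BioWarehouse schema, and sqlite3 stands in for MySQL/Oracle.

    ```python
    # A cross-source query inside a single warehouse schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE enzyme_activity (ec_number TEXT, activity_name TEXT);  -- e.g. loaded from ENZYME
    CREATE TABLE protein_seq     (ec_number TEXT, accession TEXT);      -- e.g. loaded from UniProt
    INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                                       ('4.2.1.17', 'enoyl-CoA hydratase');
    INSERT INTO protein_seq     VALUES ('1.1.1.1', 'P07327');
    """)

    # Activities with an assigned EC number but no sequence in the warehouse.
    missing = conn.execute("""
        SELECT a.ec_number, a.activity_name
        FROM enzyme_activity a
        LEFT JOIN protein_seq p ON p.ec_number = a.ec_number
        WHERE p.accession IS NULL
    """).fetchall()
    print(missing)   # [('4.2.1.17', 'enoyl-CoA hydratase')]
    ```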

  14. An Investigation of Multidimensional Voice Program Parameters in Three Different Databases for Voice Pathology Detection and Classification.

    PubMed

    Al-Nasheri, Ahmed; Muhammad, Ghulam; Alsulaiman, Mansour; Ali, Zulfiqar; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H; Bencherif, Mohamed A

    2017-01-01

    Automatic voice-pathology detection and classification systems may help clinicians to detect the existence of any voice pathologies and the type of pathology from which patients suffer in the early stages. The main aim of this paper is to investigate Multidimensional Voice Program (MDVP) parameters to automatically detect and classify the voice pathologies in multiple databases, and then to find out which parameters performed well in these two processes. Samples of the sustained vowel /a/ of normal and pathological voices were extracted from three different databases, which have three voice pathologies in common. The selected databases in this study represent three distinct languages: (1) the Arabic voice pathology database; (2) the Massachusetts Eye and Ear Infirmary database (English database); and (3) the Saarbruecken Voice Database (German database). A computerized speech lab program was used to extract MDVP parameters as features, and an acoustical analysis was performed. The Fisher discrimination ratio was applied to rank the parameters. A t test was performed to highlight any significant differences in the means of the normal and pathological samples. The experimental results demonstrate a clear difference in the performance of the MDVP parameters using these databases. The highly ranked parameters also differed from one database to another. The best accuracies were obtained by using the three highest ranked MDVP parameters arranged according to the Fisher discrimination ratio: these accuracies were 99.68%, 88.21%, and 72.53% for the Saarbruecken Voice Database, the Massachusetts Eye and Ear Infirmary database, and the Arabic voice pathology database, respectively. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
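
    A per-parameter ranking like the one described above can be computed by scoring each acoustic feature with a class-separability measure and sorting. The sketch uses one common form of the Fisher discrimination ratio, F = (m1 - m2)^2 / (s1^2 + s2^2); the feature values are made up and this is not the study's computation.

    ```python
    # Rank features by a two-class Fisher discrimination ratio.
    import numpy as np

    def fisher_ratio(normal, pathological):
        normal, pathological = np.asarray(normal), np.asarray(pathological)
        num = (normal.mean() - pathological.mean()) ** 2
        den = normal.var() + pathological.var()
        return num / den

    features = {  # feature name -> (normal samples, pathological samples)
        "jitter":  ([0.4, 0.5, 0.45, 0.5], [1.2, 1.4, 1.1, 1.3]),
        "shimmer": ([2.1, 2.3, 2.0, 2.2], [2.4, 2.6, 2.3, 2.7]),
    }

    ranking = sorted(((fisher_ratio(n, p), name) for name, (n, p) in features.items()),
                     reverse=True)
    for score, name in ranking:
        print(f"{name}: F = {score:.2f}")
    ```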

  15. Molecular formula and METLIN Personal Metabolite Database matching applied to the identification of compounds generated by LC/TOF-MS.

    PubMed

    Sana, Theodore R; Roark, Joseph C; Li, Xiangdong; Waddell, Keith; Fischer, Steven M

    2008-09-01

    In an effort to simplify and streamline compound identification from metabolomics data generated by liquid chromatography time-of-flight mass spectrometry, we have created software for constructing Personalized Metabolite Databases with content from over 15,000 compounds pulled from the public METLIN database (http://metlin.scripps.edu/). Moreover, we have added extra functionalities to the database that (a) permit the addition of user-defined retention times as an orthogonal searchable parameter to complement accurate mass data; and (b) allow interfacing to separate software, a Molecular Formula Generator (MFG), that facilitates reliable interpretation of any database matches from the accurate mass spectral data. To test the utility of this identification strategy, we added retention times to a subset of masses in this database, representing a mixture of 78 synthetic urine standards. The synthetic mixture was analyzed and screened against this METLIN urine database, resulting in 46 accurate mass and retention time matches. Human urine samples were subsequently analyzed under the same analytical conditions and screened against this database. A total of 1387 ions were detected in human urine; 16 of these ions matched both accurate mass and retention time parameters for the 78 urine standards in the database. Another 374 had only an accurate mass match to the database, with 163 of those masses also having the highest MFG score. Furthermore, MFG calculated a formula for a further 849 ions that had no match to the database. Taken together, these results suggest that the METLIN Personal Metabolite database and MFG software offer a robust strategy for confirming the formula of database matches. In the event of no database match, it also suggests possible formulas that may be helpful in interpreting the experimental results.
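
    The two-parameter lookup described above can be sketched as follows: a measured feature matches a database entry when its accurate mass agrees within a ppm tolerance and, where a retention time has been recorded, the retention time agrees within a fixed window. The entries, tolerances, and function names are invented; this is not the METLIN or MFG software.

    ```python
    # Match measured (mass, retention time) pairs against a small database.
    PERSONAL_DB = [
        {"name": "creatinine", "mass": 113.0589, "rt_min": 1.3},
        {"name": "hippuric acid", "mass": 179.0582, "rt_min": 6.8},
        {"name": "unknown amine", "mass": 146.1055, "rt_min": None},  # no RT recorded
    ]

    def match_feature(measured_mass, measured_rt, ppm_tol=10.0, rt_tol=0.5):
        hits = []
        for entry in PERSONAL_DB:
            ppm_error = abs(measured_mass - entry["mass"]) / entry["mass"] * 1e6
            rt_ok = entry["rt_min"] is None or abs(measured_rt - entry["rt_min"]) <= rt_tol
            if ppm_error <= ppm_tol and rt_ok:
                hits.append((entry["name"], round(ppm_error, 1)))
        return hits

    print(match_feature(113.0592, 1.4))   # mass and retention time both match
    print(match_feature(179.0582, 2.0))   # [] -- mass matches, retention time does not
    print(match_feature(146.1050, 3.0))   # accurate-mass-only match (no RT in database)
    ```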

  16. Analysis of Landslide Hazard Impact Using the Landslide Database for Germany

    NASA Astrophysics Data System (ADS)

    Klose, M.; Damm, B.

    2014-12-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventorying of landslide data nonetheless has a comprehensive research history in Germany, albeit one focused on the development of databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at the national level. The present contribution reports on this project, which is based on a landslide database that evolved over the last 15 years into a database covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with broad application potential. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this contribution, the goals and the research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, which is followed by the results of different case studies in the German Central Uplands. The case study results exemplify database application in the analysis of vulnerability to landslides, impact statistics, and hazard or cost modeling.

  17. NLTE4 Plasma Population Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 159 NLTE4 Plasma Population Kinetics Database (Web database for purchase)   This database contains benchmark results for simulation of plasma population kinetics and emission spectra. The data were contributed by the participants of the 4th Non-LTE Code Comparison Workshop who have unrestricted access to the database. The only limitation for other users is in hidden labeling of the output results. Guest users can proceed to the database entry page without entering userid and password.

  18. The MAR databases: development and implementation of databases specific for marine metagenomics

    PubMed Central

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen

    2018-01-01

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, which represents a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of their level of completeness. The last database, MarCat, is a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. PMID:29106641

  19. Evaluating the quality of Marfan genotype-phenotype correlations in existing FBN1 databases.

    PubMed

    Groth, Kristian A; Von Kodolitsch, Yskert; Kutsche, Kerstin; Gaustadnes, Mette; Thorsen, Kasper; Andersen, Niels H; Gravholt, Claus H

    2017-07-01

    Genetic FBN1 testing is pivotal for confirming the clinical diagnosis of Marfan syndrome. In an effort to evaluate variant causality, FBN1 databases are often used. We evaluated the current databases regarding FBN1 variants and validated associated phenotype records with a new Marfan syndrome geno-phenotyping tool called the Marfan score. We evaluated four databases (UMD-FBN1, ClinVar, the Human Gene Mutation Database (HGMD), and Uniprot) containing 2,250 FBN1 variants supported by 4,904 records presented in 307 references. The Marfan score calculated for phenotype data from the records quantified variant associations with the Marfan syndrome phenotype. We calculated a Marfan score for 1,283 variants, of which we confirmed the database diagnosis of Marfan syndrome in 77.1%. This represented only 35.8% of the total registered variants; 18.5-33.3% (UMD-FBN1 versus HGMD) of variants associated with Marfan syndrome in the databases could not be confirmed by the recorded phenotype. FBN1 databases can be imprecise and incomplete. Data should be used with caution when evaluating FBN1 variants. At present, the UMD-FBN1 database seems to be the biggest and best curated; therefore, it is the most comprehensive database. However, the need for better genotype-phenotype curated databases is evident, and we hereby present such a database. Genet Med advance online publication, 01 December 2016.

  20. Database Constraints Applied to Metabolic Pathway Reconstruction Tools

    PubMed Central

    Vilaplana, Jordi; Solsona, Francesc; Teixido, Ivan; Usié, Anabel; Karathia, Hiren; Alves, Rui; Mateo, Jordi

    2014-01-01

    Our group developed two biological applications, Biblio-MetReS and Homol-MetReS, accessing the same database of organisms with annotated genes. Biblio-MetReS is a data-mining application that facilitates the reconstruction of molecular networks based on automated text-mining analysis of published scientific literature. Homol-MetReS allows functional (re)annotation of proteomes, to properly identify both the individual proteins involved in the process(es) of interest and their function. It also enables the sets of proteins involved in the process(es) in different organisms to be compared directly. The efficiency of these biological applications is directly related to the design of the shared database. We classified and analyzed the different kinds of access to the database. Based on this study, we tried to adjust and tune the configurable parameters of the database server to reach the best performance of the communication data link to/from the database system. Different database technologies were analyzed. We started the study with a public relational SQL database, MySQL. Then, the same database was implemented by a MapReduce-based database named HBase. The results indicated that the standard configuration of MySQL gives an acceptable performance for low or medium size databases. Nevertheless, tuning database parameters can greatly improve the performance and lead to very competitive runtimes. PMID:25202745
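
    Comparisons like the MySQL-versus-HBase study described above ultimately come down to timing representative queries against each backend or configuration. The harness below is a generic sketch of that idea, not the authors' benchmark; an in-memory sqlite3 table stands in for the real servers, and all table and column names are invented.

    ```python
    # Generic query-timing harness for comparing database configurations.
    import sqlite3
    import time

    def time_query(run_query, repeats=100):
        """Return mean wall-clock seconds per execution of run_query()."""
        start = time.perf_counter()
        for _ in range(repeats):
            run_query()
        return (time.perf_counter() - start) / repeats

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE genes (organism TEXT, name TEXT)")
    conn.executemany("INSERT INTO genes VALUES (?, ?)",
                     [("E. coli", f"gene{i}") for i in range(10_000)])
    conn.execute("CREATE INDEX idx_name ON genes(name)")

    lookup = lambda: conn.execute(
        "SELECT organism FROM genes WHERE name = ?", ("gene9000",)).fetchall()
    print(f"{time_query(lookup) * 1e6:.1f} microseconds per lookup")
    ```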

  1. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies

    PubMed Central

    Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reach a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431

  2. A Review of Databases Used in Orthopaedic Surgery Research and an Analysis of Database Use in Arthroscopy: The Journal of Arthroscopic and Related Surgery.

    PubMed

    Weinreb, Jeffrey H; Yoshida, Ryu; Cote, Mark P; O'Sullivan, Michael B; Mazzocca, Augustus D

    2017-01-01

    The purpose of this study was to evaluate how database use has changed over time in Arthroscopy: The Journal of Arthroscopic and Related Surgery and to inform readers about available databases used in orthopaedic literature. An extensive literature search was conducted to identify databases used in Arthroscopy and other orthopaedic literature. All articles published in Arthroscopy between January 1, 2006, and December 31, 2015, were reviewed. A database was defined as a national, widely available set of individual patient encounters, applicable to multiple patient populations, used in orthopaedic research in a peer-reviewed journal, not restricted by encounter setting or visit duration, and with information available in English. Databases used in Arthroscopy included PearlDiver, the American College of Surgeons National Surgical Quality Improvement Program, the Danish Common Orthopaedic Database, the Swedish National Knee Ligament Register, the Hospital Episodes Statistics database, and the National Inpatient Sample. Database use increased significantly from 4 articles in 2013 to 11 articles in 2015 (P = .012), with no database use between January 1, 2006, and December 31, 2012. Database use increased significantly between January 1, 2006, and December 31, 2015, in Arthroscopy. Level IV, systematic review of Level II through IV studies. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  3. Patent Databases. . .A Survey of What Is Available from DIALOG, Questel, SDC, Pergamon and INPADOC.

    ERIC Educational Resources Information Center

    Kulp, Carol S.

    1984-01-01

    Presents survey of two groups of databases covering patent literature: patent literature only and general literature that includes patents relevant to subject area of database. Description of databases and comparison tables for patent and general databases (cost, country coverage, years covered, update frequency, file size, and searchable data…

  4. Database Search Strategies & Tips. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 17 articles presenting strategies and tips for searching databases online appear in this collection, which is one in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  5. NASA STI Database, Aerospace Database and ARIN coverage of 'space law'

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1992-01-01

    The space-law coverage provided by the NASA STI Database, the Aerospace Database, and ARIN is briefly described. Particular attention is given to the space law content of the two databases and of ARIN, the NASA Thesaurus space law terminology, space law publication forms, and the availability of the space law literature.

  6. Hardwood log defect photographic database, software and user's guide

    Treesearch

    R. Edward Thomas

    2009-01-01

    Computer software and user's guide for Hardwood Log Defect Photographic Database. The database contains photographs and information on external hardwood log defects and the corresponding internal characteristics. This database allows users to search for specific defect types, sizes, and locations by tree species. For every defect, the database contains photos of...

  7. Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database

    NASA Technical Reports Server (NTRS)

    Levack, Daniel

    1993-01-01

    The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, the parametric propulsion database and the propulsion system database, is described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA-derived nuclear thermal rocket engine.

  8. The Primate Life History Database: A unique shared ecological data resource

    PubMed Central

    Strier, Karen B.; Altmann, Jeanne; Brockman, Diane K.; Bronikowski, Anne M.; Cords, Marina; Fedigan, Linda M.; Lapp, Hilmar; Liu, Xianhua; Morris, William F.; Pusey, Anne E.; Stoinski, Tara S.; Alberts, Susan C.

    2011-01-01

    Summary: The importance of data archiving, data sharing, and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center (NESCent) to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses. PMID:21698066

  9. A novel approach: chemical relational databases, and the role of the ISSCAN database on assessing chemical carcinogenicity.

    PubMed

    Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae

    2008-01-01

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originating from structure-activity relationship science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.
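    The linkage of toxicity records to chemical structures that the authors describe can be illustrated with a toy structure-searchable lookup. The sketch below uses RDKit for the substructure match; the compounds and activity labels are placeholders, not ISSCAN records.

```python
# Toy sketch of a "chemical relational" lookup: activity records linked to structures
# (as SMILES), queried by substructure with RDKit. Compounds and labels are
# placeholders, not data from ISSCAN.
from rdkit import Chem  # pip install rdkit

records = [
    {"name": "compound A", "smiles": "c1ccccc1C(=O)O", "activity": "positive (placeholder)"},
    {"name": "compound B", "smiles": "CCO",            "activity": "negative (placeholder)"},
    {"name": "compound C", "smiles": "c1ccc(cc1)N",    "activity": "equivocal (placeholder)"},
]

# Structure-searchability: retrieve every record whose structure contains an aromatic ring.
query = Chem.MolFromSmarts("c1ccccc1")
for rec in records:
    mol = Chem.MolFromSmiles(rec["smiles"])
    if mol is not None and mol.HasSubstructMatch(query):
        print(rec["name"], "->", rec["activity"])
```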

  10. Production and distribution of scientific and technical databases - Comparison among Japan, US and Europe

    NASA Astrophysics Data System (ADS)

    Onodera, Natsuo; Mizukami, Masayuki

    This paper estimates several quantitative indices of the production and distribution of scientific and technical databases based on various recent publications and attempts to compare these indices internationally. Raw data used for the estimation are drawn mainly from the Database Directory (published by MITI) for database production and from some domestic and foreign study reports for database revenues. The ratios of the indices among Japan, the US and Europe for database usage are similar to those for general scientific and technical activities such as population and R&D expenditures. But Japanese contributions to the production, revenue and cross-country distribution of databases are still lower than those of the US and European countries. An international comparison of relative database activities between the public and private sectors is also discussed.

  11. Large scale database scrubbing using object oriented software components.

    PubMed

    Herting, R L; Barnes, M R

    1998-01-01

    Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
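    DBScrub itself is a Java/JDBC tool and its components are not reproduced here; the sketch below only illustrates the underlying idea with Python's sqlite3: identifiers are replaced through one shared mapping so that the implicit relationship between tables survives scrubbing.

```python
# Minimal sketch of consistent scrubbing across related tables (illustrative only;
# not the DBScrub implementation). One shared mapping keeps the implicit mrn
# relationship between tables intact after identifiers are replaced.
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE patients (mrn TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE visits   (visit_id INTEGER PRIMARY KEY, mrn TEXT, note TEXT);
    INSERT INTO patients VALUES ('MRN001', 'Jane Doe');
    INSERT INTO visits   VALUES (1, 'MRN001', 'Follow-up exam');
""")

pseudonyms = {}
def scrub(mrn):
    """Return a stable pseudonym for a medical record number."""
    if mrn not in pseudonyms:
        pseudonyms[mrn] = "ANON-" + uuid.uuid4().hex[:8]
    return pseudonyms[mrn]

for (mrn,) in cur.execute("SELECT mrn FROM patients").fetchall():
    new_id = scrub(mrn)
    cur.execute("UPDATE patients SET mrn = ?, name = NULL WHERE mrn = ?", (new_id, mrn))
    cur.execute("UPDATE visits   SET mrn = ?              WHERE mrn = ?", (new_id, mrn))
conn.commit()

print(cur.execute("SELECT * FROM visits").fetchall())  # visit rows still join to patients
```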

  12. Interactive Database of Pulsar Flux Density Measurements

    NASA Astrophysics Data System (ADS)

    Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.

    2012-12-01

    The number of astronomical observations is steadily growing, giving rise to the need to catalogue the obtained results. There are many databases, created to store different types of data and serve a variety of purposes, e.g. databases providing basic data for astronomical objects (SIMBAD Astronomical Database), databases devoted to one type of astronomical object (ATNF Pulsar Database) or to a set of values of a specific parameter (Lorimer 1995, a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values and adding new measurements by registered users, could be useful in further studies of pulsar spectra.
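    As a rough illustration of the derived quantities such a database could offer (the abstract mentions means, filtering and spectra), the sketch below averages flux density per frequency and fits a power-law spectral index; the measurement values are invented.

```python
# Sketch: mean flux density per observing frequency and a power-law spectral index
# fit (S ~ nu^alpha), the kind of summary such a database could compute.
# The measurement values below are made up for illustration.
from collections import defaultdict
import math

# (frequency in MHz, flux density in mJy) for one hypothetical pulsar
measurements = [(400, 95.0), (400, 105.0), (600, 60.0), (1400, 20.0), (1400, 24.0)]

by_freq = defaultdict(list)
for freq, flux in measurements:
    by_freq[freq].append(flux)

means = {freq: sum(v) / len(v) for freq, v in sorted(by_freq.items())}
print("mean flux per frequency:", means)

# Least-squares fit of log S = alpha * log nu + c gives the spectral index alpha.
xs = [math.log10(f) for f in means]
ys = [math.log10(s) for s in means.values()]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
alpha = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
print("spectral index alpha ~", round(alpha, 2))
```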

  13. Research on high availability architecture of SQL and NoSQL

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao

    2017-03-01

    With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to develop in performance and scalability, but more and more companies tend to use NoSQL databases, because a NoSQL database has a simpler data model and stronger extension capacity than a SQL database. Almost all database designers, whether of SQL or NoSQL databases, aim to improve performance and ensure availability through reasonable architecture, which can reduce the effects of software failures and hardware failures, so that they can provide better experiences for their customers. In this paper, I mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.
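    The paper's specific hybrid design is not given in the abstract; the sketch below shows one generic SQL-plus-NoSQL pattern, a Redis read-through cache in front of a relational store. It assumes a local Redis server and uses sqlite3 as a stand-in for MySQL.

```python
# Sketch of one common hybrid pattern: a Redis read-through cache in front of a
# relational store (sqlite3 stands in for MySQL; assumes a Redis server on localhost).
# This is a generic illustration, not the specific architecture from the paper.
import json
import sqlite3
import redis  # pip install redis

sql = sqlite3.connect(":memory:")
sql.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
sql.execute("INSERT INTO users VALUES (1, 'alice')")
sql.commit()

cache = redis.Redis(host="localhost", port=6379, db=0)

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                      # cache hit: no SQL round trip
        return json.loads(cached)
    row = sql.execute("SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()
    if row is None:
        return None
    user = {"id": row[0], "name": row[1]}
    cache.set(key, json.dumps(user), ex=60)     # populate cache with a 60 s TTL
    return user

print(get_user(1))  # first call reads SQL; repeat calls within 60 s hit Redis
```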

  14. James Webb Space Telescope XML Database: From the Beginning to Today

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Fatig, Curtis C.

    2005-01-01

    The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission-specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has exchanged the XML database with the Eclipse, EPOCH, ASIST ground systems, Portable spacecraft simulator (PSS), a front-end system, and Integrated Trending and Plotting System (ITPS) successfully. This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
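    A simplified, hypothetical XML telemetry definition (not the actual JWST or XTCE schema) makes the exchange idea concrete: any ground system tool that can parse the common structure can ingest the database.

```python
# Sketch: parsing a simplified, hypothetical telemetry-definition XML (not the real
# JWST or XTCE schema) to show how a common XML structure can be ingested by any tool.
import xml.etree.ElementTree as ET

xml_text = """
<TelemetryDatabase mission="DEMO">
  <Parameter name="BATT_VOLTAGE" type="float" units="V">
    <Limits low="24.0" high="33.0"/>
  </Parameter>
  <Parameter name="MODE" type="enum" units="">
    <Enumeration value="0" label="SAFE"/>
    <Enumeration value="1" label="SCIENCE"/>
  </Parameter>
</TelemetryDatabase>
"""

root = ET.fromstring(xml_text)
for param in root.findall("Parameter"):
    name, ptype = param.get("name"), param.get("type")
    limits = param.find("Limits")
    if limits is not None:
        print(f"{name} ({ptype}): limits {limits.get('low')}..{limits.get('high')}")
    else:
        labels = [e.get("label") for e in param.findall("Enumeration")]
        print(f"{name} ({ptype}): states {labels}")
```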

  15. Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App

    NASA Astrophysics Data System (ADS)

    Nurnawati, E. K.; Ermawati, E.

    2018-02-01

    An integration database is a database which acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile-device-based system platform built around the smart city system. The built-in database can be used by various applications, whether used together or separately. The design and development of the database emphasize flexibility, security, and completeness of attributes that can be used together by the various applications to be built. The method used in this study is to choose the appropriate logical database structure (patterns of data) and to build the relational database models (design databases). The resulting design is tested with some prototype apps, and system performance is analyzed with test data. The integrated database can be utilized by both the admin and the user in an integral and comprehensive platform. This system can help the admin, manager, and operator manage the application easily and efficiently. The Android-based app is built on a dynamic client-server model where data are extracted from an external MySQL database, so if data change in the database, the data in the Android application also change. The Android app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.

  16. The Israeli National Genetic database: a 10-year experience.

    PubMed

    Zlotogora, Joël; Patrinos, George P

    2017-03-16

    The Israeli National and Ethnic Mutation database ( http://server.goldenhelix.org/israeli ) was launched in September 2006 on the ETHNOS software to include clinically relevant genomic variants reported among Jewish and Arab Israeli patients. In 2016, the database was reviewed and corrected according to ClinVar ( https://www.ncbi.nlm.nih.gov/clinvar ) and ExAC ( http://exac.broadinstitute.org ) database entries. The present article summarizes some key aspects from the development and continuous update of the database over a 10-year period, which could serve as a paradigm of successful database curation for other similar resources. In September 2016, there were 2444 entries in the database, 890 among Jews, 1376 among Israeli Arabs, and 178 entries among Palestinian Arabs, corresponding to an ~4× data content increase compared to when originally launched. While the Israeli Arab population is much smaller than the Jewish population, the number of pathogenic variants causing recessive disorders reported in the database is higher among Arabs (934) than among Jews (648). Nevertheless, the number of pathogenic variants classified as founder mutations in the database is smaller among Arabs (175) than among Jews (192). In 2016, the entire database content was compared to that of other databases such as ClinVar and ExAC. We show that a significant difference in the percentage of pathogenic variants from the Israeli genetic database that were present in ExAC was observed between the Jewish population (31.8%) and the Israeli Arab population (20.6%). The Israeli genetic database was launched in 2006 on the ETHNOS software and has been available online ever since. It allows querying the database according to the disorder and the ethnicity; however, many other features are not available, in particular the possibility of searching by gene name. In addition, due to the technical limitations of the previous ETHNOS software, new features and data are not included in the present online version of the database, and an upgrade is currently ongoing.

  17. Surgical research using national databases

    PubMed Central

    Leland, Hyuma; Heckmann, Nathanael

    2016-01-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research. PMID:27867945

  18. Surgical research using national databases.

    PubMed

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  19. The Design and Product of National 1:1000000 Cartographic Data of Topographic Map

    NASA Astrophysics Data System (ADS)

    Wang, Guizhi

    2016-06-01

    The National Administration of Surveying, Mapping and Geoinformation launched the project of dynamic updating of the national fundamental geographic information database in 2012. Within this project, the 1:50000 database was updated once a year, and the 1:250000 database was downsized and linkage-updated on that basis. In 2014, using the latest achievements of the 1:250000 database, the 1:1000000 digital line graph database was comprehensively updated. At the same time, cartographic data of the topographic map and digital elevation model data were generated. This article mainly introduces the national 1:1000000 cartographic data of the topographic map, including feature content, database structure, database-driven mapping technology, workflow and so on.

  20. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    NASA Technical Reports Server (NTRS)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept using eXtensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database, independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for the exporting and importing of data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition or changing of individual items without affecting the entire ground system. Also, using XML should allow for the altering of the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and the merging of the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) software, which is often limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the development and support for custom code over the 19-year mission period was forecast to be higher than the total licensing costs. A group did look at reusing existing database tools and formats. If the JWST database were already in a mature state, the reuse would make sense, but with the database still needing to handle the addition of different types of command and telemetry structures, define new spacecraft systems, and accept input from and export to systems that have not been defined yet, XML provided the flexibility desired. It remains to be determined whether the XML database will reduce the overall cost for the JWST mission.

  1. National Transportation Atlas Databases : 1999

    DOT National Transportation Integrated Search

    1999-01-01

    The National Transportation Atlas Databases -- 1999 (NTAD99) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals, and re...

  2. National Transportation Atlas Databases : 2001

    DOT National Transportation Integrated Search

    2001-01-01

    The National Transportation Atlas Databases-2001 (NTAD-2001) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals and related...

  3. National Transportation Atlas Databases : 1996

    DOT National Transportation Integrated Search

    1996-01-01

    The National Transportation Atlas Databases -- 1996 (NTAD96) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals, and re...

  4. National Transportation Atlas Databases : 2000

    DOT National Transportation Integrated Search

    2000-01-01

    The National Transportation Atlas Databases-2000 (NTAD-2000) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals and related...

  5. National Transportation Atlas Databases : 1997

    DOT National Transportation Integrated Search

    1997-01-01

    The National Transportation Atlas Databases -- 1997 (NTAD97) is a set of national geographic databases of transportation facilities. These databases include geospatial information for transportation modal networks and intermodal terminals, and re...

  6. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...

  7. 9 CFR 55.25 - Animal identification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... CWD National Database or in an approved State database. The second animal identification must be... CWD National Database or in an approved State database. The means of animal identification must be...

  8. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...

  9. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...

  10. 9 CFR 55.25 - Animal identification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CWD National Database or in an approved State database. The second animal identification must be... CWD National Database or in an approved State database. The means of animal identification must be...

  11. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...

  12. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... applicant poses a security threat based on a search of the following databases: (1) Interpol and other international databases, as appropriate. (2) Terrorist watchlists and related databases. (3) Any other databases...

  13. Searching for Controlled Trials of Complementary and Alternative Medicine: A Comparison of 15 Databases

    PubMed Central

    Cogo, Elise; Sampson, Margaret; Ajiferuke, Isola; Manheimer, Eric; Campbell, Kaitryn; Daniel, Raymond; Moher, David

    2011-01-01

    This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE. Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records—these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS. Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases. PMID:19468052

  14. A systematic review of administrative and clinical databases of infants admitted to neonatal units.

    PubMed

    Statnikov, Yevgeniy; Ibrahim, Buthaina; Modi, Neena

    2017-05-01

    High quality information, increasingly captured in clinical databases, is a useful resource for evaluating and improving newborn care. We conducted a systematic review to identify neonatal databases, and define their characteristics. We followed a preregistered protocol using MesH terms to search MEDLINE, EMBASE, CINAHL, Web of Science and OVID Maternity and Infant Care Databases for articles identifying patient level databases covering more than one neonatal unit. Full-text articles were reviewed and information extracted on geographical coverage, criteria for inclusion, data source, and maternal and infant characteristics. We identified 82 databases from 2037 publications. Of the country-specific databases there were 39 regional and 39 national. Sixty databases restricted entries to neonatal unit admissions by birth characteristic or insurance cover; 22 had no restrictions. Data were captured specifically for 53 databases; 21 administrative sources; 8 clinical sources. Two clinical databases hold the largest range of data on patient characteristics, USA's Pediatrix BabySteps Clinical Data Warehouse and UK's National Neonatal Research Database. A number of neonatal databases exist that have potential to contribute to evaluating neonatal care. The majority is created by entering data specifically for the database, duplicating information likely already captured in other administrative and clinical patient records. This repetitive data entry represents an unnecessary burden in an environment where electronic patient records are increasingly used. Standardisation of data items is necessary to facilitate linkage within and between countries. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  15. Searching for controlled trials of complementary and alternative medicine: a comparison of 15 databases.

    PubMed

    Cogo, Elise; Sampson, Margaret; Ajiferuke, Isola; Manheimer, Eric; Campbell, Kaitryn; Daniel, Raymond; Moher, David

    2011-01-01

    This project aims to assess the utility of bibliographic databases beyond the three major ones (MEDLINE, EMBASE and Cochrane CENTRAL) for finding controlled trials of complementary and alternative medicine (CAM). Fifteen databases were searched to identify controlled clinical trials (CCTs) of CAM not also indexed in MEDLINE. Searches were conducted in May 2006 using the revised Cochrane highly sensitive search strategy (HSSS) and the PubMed CAM Subset. Yield of CAM trials per 100 records was determined, and databases were compared over a standardized period (2005). The Acudoc2 RCT, Acubriefs, Index to Chiropractic Literature (ICL) and Hom-Inform databases had the highest concentrations of non-MEDLINE records, with more than 100 non-MEDLINE records per 500. Other productive databases had ratios between 500 and 1500 records to 100 non-MEDLINE records-these were AMED, MANTIS, PsycINFO, CINAHL, Global Health and Alt HealthWatch. Five databases were found to be unproductive: AGRICOLA, CAIRSS, Datadiwan, Herb Research Foundation and IBIDS. Acudoc2 RCT yielded 100 CAM trials in the most recent 100 records screened. Acubriefs, AMED, Hom-Inform, MANTIS, PsycINFO and CINAHL had more than 25 CAM trials per 100 records screened. Global Health, ICL and Alt HealthWatch were below 25 in yield. There were 255 non-MEDLINE trials from eight databases in 2005, with only 10% indexed in more than one database. Yield varied greatly between databases; the most productive databases from both sampling methods were Acubriefs, Acudoc2 RCT, AMED and CINAHL. Low overlap between databases indicates comprehensive CAM literature searches will require multiple databases.

  16. Intelligent communication assistant for databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakobson, G.; Shaked, V.; Rowley, S.

    1983-01-01

    An intelligent communication assistant for databases, called FRED (front end for databases), is explored. FRED is designed to facilitate access to database systems by users of varying levels of experience. FRED is a second-generation natural language front end for databases and is intended to solve two critical interface problems between end users and databases: connectivity and communication. The authors report their experiences in developing software for natural language query processing, dialog control, and knowledge representation, as well as the direction of future work. 10 references.

  17. Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Houlden, Heather P.; Favaregh, Amber L.

    2010-01-01

    A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.
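    If the three error sources are treated as independent, one common way to combine them into a total database uncertainty is a root-sum-square; whether the Ares I report uses exactly this combination is not stated in the abstract.

```latex
% Root-sum-square combination of independent error sources (an assumption,
% not necessarily the report's actual method):
U_{\mathrm{total}} = \sqrt{U_{\mathrm{exp}}^{2} + U_{\mathrm{model}}^{2} + U_{\mathrm{interp}}^{2}}
```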

  18. Human Ageing Genomic Resources: new and updated databases

    PubMed Central

    Tacutu, Robi; Thornton, Daniel; Johnson, Emily; Budovsky, Arie; Barardo, Diogo; Craig, Thomas; Diana, Eugene; Lehmann, Gilad; Toren, Dmitri; Wang, Jingwei; Fraifeld, Vadim E

    2018-01-01

    Abstract In spite of a growing body of research and data, human ageing remains a poorly understood process. Over 10 years ago we developed the Human Ageing Genomic Resources (HAGR), a collection of databases and tools for studying the biology and genetics of ageing. Here, we present HAGR’s main functionalities, highlighting new additions and improvements. HAGR consists of six core databases: (i) the GenAge database of ageing-related genes, in turn composed of a dataset of >300 human ageing-related genes and a dataset with >2000 genes associated with ageing or longevity in model organisms; (ii) the AnAge database of animal ageing and longevity, featuring >4000 species; (iii) the GenDR database with >200 genes associated with the life-extending effects of dietary restriction; (iv) the LongevityMap database of human genetic association studies of longevity with >500 entries; (v) the DrugAge database with >400 ageing or longevity-associated drugs or compounds; (vi) the CellAge database with >200 genes associated with cell senescence. All our databases are manually curated by experts and regularly updated to ensure high-quality data. Cross-links across our databases and to external resources help researchers locate and integrate relevant information. HAGR is freely available online (http://genomics.senescence.info/). PMID:29121237

  19. Mutation databases for inherited renal disease: are they complete, accurate, clinically relevant, and freely available?

    PubMed

    Savige, Judy; Dagher, Hayat; Povey, Sue

    2014-07-01

    This study examined whether gene-specific DNA variant databases for inherited diseases of the kidney fulfilled the Human Variome Project recommendations of being complete, accurate, clinically relevant and freely available. A recent review identified 60 inherited renal diseases caused by mutations in 132 genes. The disease name, MIM number, gene name, together with "mutation" or "database," were used to identify web-based databases. Fifty-nine diseases (98%) due to mutations in 128 genes had a variant database. Altogether there were 349 databases (a median of 3 per gene, range 0-6), but no gene had two databases with the same number of variants, and 165 (50%) databases included fewer than 10 variants. About half the databases (180, 54%) had been updated in the previous year. Few (77, 23%) were curated by "experts" but these included nine of the 11 with the most variants. Even fewer databases (41, 12%) included clinical features apart from the name of the associated disease. Most (223, 67%) could be accessed without charge, including those for 50 genes (40%) with the maximum number of variants. Future efforts should focus on encouraging experts to collaborate on a single database for each gene affected in inherited renal disease, including both unpublished variants, and clinical phenotypes. © 2014 WILEY PERIODICALS, INC.

  20. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion?

    PubMed

    Lawrence, D W

    2008-12-01

    To assess what is lost if only one literature database is searched for articles relevant to injury prevention and safety promotion (IPSP) topics. Serial textword (keyword, free-text) searches using multiple synonym terms for five key IPSP topics (bicycle-related brain injuries, ethanol-impaired driving, house fires, road rage, and suicidal behaviors among adolescents) were conducted in four of the bibliographic databases that are most used by IPSP professionals: EMBASE, MEDLINE, PsycINFO, and Web of Science. Through a systematic procedure, an inventory of articles on each topic in each database was conducted to identify the total unduplicated count of all articles on each topic, the number of articles unique to each database, and the articles available if only one database is searched. No single database included all of the relevant articles on any topic, and the database with the broadest coverage differed by topic. A search of only one literature database will return 16.7-81.5% (median 43.4%) of the available articles on any of five key IPSP topics. Each database contributed unique articles to the total bibliography for each topic. A literature search performed in only one database will, on average, lead to a loss of more than half of the available literature on a topic.

  1. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    PubMed

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web. Also, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into the Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com . We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
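    The pipeline the abstract describes (tabular data converted to RDF and queried with SPARQL) can be sketched with rdflib; the namespace and predicate names below are invented for illustration and are not BioCarian's actual schema.

```python
# Sketch of the pipeline the abstract describes: tabular rows -> RDF triples -> SPARQL.
# The namespace and predicate names are invented; they are not BioCarian's schema.
from rdflib import Graph, Literal, Namespace, URIRef  # pip install rdflib

EX = Namespace("http://example.org/bio/")
rows = [
    {"gene": "TP53", "organism": "Homo sapiens"},
    {"gene": "BRCA1", "organism": "Homo sapiens"},
]

g = Graph()
for i, row in enumerate(rows):
    subject = URIRef(f"http://example.org/record/{i}")
    g.add((subject, EX.gene, Literal(row["gene"])))
    g.add((subject, EX.organism, Literal(row["organism"])))

query = """
PREFIX ex: <http://example.org/bio/>
SELECT ?gene WHERE {
  ?record ex:gene ?gene ;
          ex:organism "Homo sapiens" .
}
"""
for (gene,) in g.query(query):   # a facet-style filter expressed directly in SPARQL
    print(gene)
```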

  2. A Dynamic Approach to Make CDS/ISIS Databases Interoperable over the Internet Using the OAI Protocol

    ERIC Educational Resources Information Center

    Jayakanth, F.; Maly, K.; Zubair, M.; Aswath, L.

    2006-01-01

    Purpose: A dynamic approach to making legacy databases, like CDS/ISIS, interoperable with OAI-compliant digital libraries (DLs). Design/methodology/approach: There are many bibliographic databases that are being maintained using legacy database systems. CDS/ISIS is one such legacy database system. It was designed and developed specifically for…
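    The OAI-PMH request itself is standardized; what the paper adds is the dynamic layer exposing CDS/ISIS records through such an endpoint. The sketch below issues a standard ListRecords request against a placeholder base URL and prints Dublin Core titles.

```python
# Sketch of a standard OAI-PMH ListRecords request. The base URL is a placeholder,
# not a real CDS/ISIS gateway; the paper's contribution is the dynamic layer that
# exposes legacy CDS/ISIS records through such an endpoint.
import requests
import xml.etree.ElementTree as ET

BASE_URL = "http://example.org/oai"   # hypothetical OAI-PMH endpoint
params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()
root = ET.fromstring(response.content)

# Dublin Core titles live under the dc namespace defined by the protocol.
DC = "{http://purl.org/dc/elements/1.1/}"
for title in root.iter(DC + "title"):
    print(title.text)
```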

  3. NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project

    Science.gov Websites

    About the LCI Database Project: The U.S. Life Cycle Inventory (LCI) Database is a publicly available source of life cycle inventory data, supporting consistent and transparent data collection and analysis methods and ongoing maintenance of the database. A 2009 U.S. Life Cycle Inventory (LCI) Data Stakeholder meeting was held.

  4. Short Fiction on Film: A Relational DataBase.

    ERIC Educational Resources Information Center

    May, Charles

    Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…

  5. Multi-Sensor Scene Synthesis and Analysis

    DTIC Science & Technology

    1981-09-01

    Table of contents excerpt: Quad Trees for Image Representation and Processing; Databases (Definitions and Basic Concepts); Use of Databases in Hierarchical Scene Analysis; Use of Relational Tables; Multisensor Image Database Systems (MIDAS); Relational Database System for Pictures; Relational Pictorial Database.

  6. The EpiSLI Database: A Publicly Available Database on Speech and Language

    ERIC Educational Resources Information Center

    Tomblin, J. Bruce

    2010-01-01

    Purpose: This article describes a database that was created in the process of conducting a large-scale epidemiologic study of specific language impairment (SLI). As such, this database will be referred to as the EpiSLI database. Children with SLI have unexpected and unexplained difficulties learning and using spoken language. Although there is no…

  7. Energy Consumption Database

    Science.gov Websites

    The California Energy Commission has created this on-line database for informal reporting of energy consumption across various classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX) format.

  8. Heterogeneous database integration in biomedicine.

    PubMed

    Sujansky, W

    2001-08-01

    The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.
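    A toy sketch of one of the techniques mentioned (a uniform conceptual schema with per-source field and terminology mappings) is given below; all field names and codes are invented for illustration.

```python
# Toy sketch of a uniform conceptual schema over heterogeneous sources: one logical
# field ("diagnosis") maps to different column names and terminologies per database.
# All names and codes here are invented for illustration.
FIELD_MAP = {
    "hospital_a": {"diagnosis": "dx_code"},
    "registry_b": {"diagnosis": "icd10"},
}
TERM_MAP = {
    "hospital_a": {"myocardial infarction": "410.9"},   # legacy ICD-9-style code
    "registry_b": {"myocardial infarction": "I21.9"},   # ICD-10-style code
}

def translate(source, logical_field, concept):
    """Rewrite a uniform (field, concept) query into a source-specific predicate."""
    column = FIELD_MAP[source][logical_field]
    code = TERM_MAP[source][concept]
    return f"{column} = '{code}'"

for src in FIELD_MAP:
    print(src, "->", translate(src, "diagnosis", "myocardial infarction"))
```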

  9. RESIS-II: An Updated Version of the Original Reservoir Sedimentation Survey Information System (RESIS) Database

    USGS Publications Warehouse

    Ackerman, Katherine V.; Mixon, David M.; Sundquist, Eric T.; Stallard, Robert F.; Schwarz, Gregory E.; Stewart, David W.

    2009-01-01

    The Reservoir Sedimentation Survey Information System (RESIS) database, originally compiled by the Soil Conservation Service (now the Natural Resources Conservation Service) in collaboration with the Texas Agricultural Experiment Station, is the most comprehensive compilation of data from reservoir sedimentation surveys throughout the conterminous United States (U.S.). The database is a cumulative historical archive that includes data from as early as 1755 and as late as 1993. The 1,823 reservoirs included in the database range in size from farm ponds to the largest U.S. reservoirs (such as Lake Mead). Results from 6,617 bathymetric surveys are available in the database. This Data Series provides an improved version of the original RESIS database, termed RESIS-II, and a report describing RESIS-II. The RESIS-II relational database is stored in Microsoft Access and includes more precise location coordinates for most of the reservoirs than the original database but excludes information on reservoir ownership. RESIS-II is anticipated to be a template for further improvements in the database.

  10. Relational Database for the Geology of the Northern Rocky Mountains - Idaho, Montana, and Washington

    USGS Publications Warehouse

    Causey, J. Douglas; Zientek, Michael L.; Bookstrom, Arthur A.; Frost, Thomas P.; Evans, Karl V.; Wilson, Anna B.; Van Gosen, Bradley S.; Boleneus, David E.; Pitts, Rebecca A.

    2008-01-01

    A relational database was created to prepare and organize geologic map-unit and lithologic descriptions for input into a spatial database for the geology of the northern Rocky Mountains, a compilation of forty-three geologic maps for parts of Idaho, Montana, and Washington in U.S. Geological Survey Open File Report 2005-1235. Not all of the information was transferred to and incorporated in the spatial database due to physical file limitations. This report releases that part of the relational database that was completed for that earlier product. In addition to descriptive geologic information for the northern Rocky Mountains region, the relational database contains a substantial bibliography of geologic literature for the area. The relational database nrgeo.mdb (linked below) is available in Microsoft Access version 2000, a proprietary database program. The relational database contains data tables and other tables used to define terms, relationships between the data tables, and hierarchical relationships in the data; forms used to enter data; and queries used to extract data.

  11. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics.

    PubMed

    Deutsch, Eric W; Sun, Zhi; Campbell, David S; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S; Moritz, Robert L

    2016-11-04

    The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances-a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted gene and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ∼20,000 primary isoforms plus contaminants to a very large database that includes almost all nonredundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the discovered peptides against a more complex database. We have set up an automated system that downloads all the source databases on the first of each month and automatically generates a new set of search databases and makes them available for download at http://www.peptideatlas.org/thisp/ .

  12. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics

    PubMed Central

    Deutsch, Eric W.; Sun, Zhi; Campbell, David S.; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S.; Moritz, Robert L.

    2016-01-01

    The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances – a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted gene and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ~20,000 primary isoforms plus contaminants to a very large database that includes almost all non-redundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the discovered peptides against a more complex database. We have set up an automated system that downloads all the source databases on the first of each month and automatically generates a new set of search databases and makes them available for download at http://www.peptideatlas.org/thisp/. PMID:27577934
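    The workflow the authors recommend, identifying peptides against a small database and then checking their uniqueness against a larger tier, can be sketched as follows; the FASTA file names and peptides are placeholders, not the actual THISP releases.

```python
# Sketch of the recommended workflow: identify peptides against a small (Tier 1)
# database, then check whether each peptide also occurs in additional sequences of a
# larger tier. File names and peptides are placeholders, not the actual THISP files.
def read_fasta_sequences(path):
    """Yield plain amino-acid sequences from a FASTA file."""
    seq = []
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                if seq:
                    yield "".join(seq)
                    seq = []
            else:
                seq.append(line.strip())
        if seq:
            yield "".join(seq)

def count_matches(peptide, fasta_path):
    return sum(peptide in seq for seq in read_fasta_sequences(fasta_path))

peptides = ["LVNELTEFAK", "AEFVEVTK"]          # example identified peptides
for pep in peptides:
    tier1 = count_matches(pep, "tier1.fasta")   # placeholder paths
    tier4 = count_matches(pep, "tier4.fasta")
    status = ("no additional matches in the larger tier" if tier1 == tier4
              else "matches additional sequences in the larger tier")
    print(pep, "->", status)
```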

  13. A database of new zeolite-like materials.

    PubMed

    Pophale, Ramdas; Cheeseman, Phillip A; Deem, Michael W

    2011-07-21

    We here describe a database of computationally predicted zeolite-like materials. These crystals were discovered by a Monte Carlo search for zeolite-like materials. Positions of Si atoms as well as unit cell, space group, density, and number of crystallographically unique atoms were explored in the construction of this database. The database contains over 2.6 million unique structures. Roughly 15% of these are within +30 kJ mol⁻¹ Si of α-quartz, the band in which most of the known zeolites lie. These structures have topological, geometrical, and diffraction characteristics that are similar to those of known zeolites. The database is the result of refinement by two interatomic potentials that both satisfy the Pauli exclusion principle. The database has been deposited in the publicly available PCOD database and in www.hypotheticalzeolites.net/database/deem/. This journal is © the Owner Societies 2011
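
    As a toy companion to the energy criterion quoted above (structures within +30 kJ mol⁻¹ Si of α-quartz), the short Python sketch below filters a list of hypothetical frameworks by their relative energy per Si. The entries and the zero reference point are invented for illustration and are not records from the deposited database.

      # Keep hypothetical frameworks whose energy relative to the quartz reference
      # lies within the +30 kJ/mol Si band cited for known zeolites. The entries
      # below are invented; real data would come from the deposited database.
      ENERGY_WINDOW = 30.0      # kJ per mol Si above the reference

      def within_band(entries, reference=0.0, window=ENERGY_WINDOW):
          """entries: iterable of (structure_id, energy_per_si_kj_mol) tuples."""
          return [(sid, e) for sid, e in entries if (e - reference) <= window]

      candidates = [("HYP-0001", 12.4), ("HYP-0002", 41.7), ("HYP-0003", 28.9)]
      print(within_band(candidates))    # keeps HYP-0001 and HYP-0003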

  14. The MAR databases: development and implementation of databases specific for marine metagenomics.

    PubMed

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database for completely sequenced marine prokaryotic genomes, and thus constitutes a marine prokaryote reference genome database, MarDB includes all incompletely sequenced marine prokaryotic genomes regardless of their level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. SORTEZ: a relational translator for NCBI's ASN.1 database.

    PubMed

    Hart, K W; Searls, D B; Overton, G C

    1994-07-01

    The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase), where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
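
    The following Python sketch illustrates, in miniature, the kind of transformation SORTEZ performs: a nested record (here a plain dictionary standing in for a parsed ASN.1 value) is flattened into parent and child relational tables so it can be queried with SQL. The schema, field names and SQLite target are assumptions made for this example; they are not the SORTEZ schema or the Sybase deployment described in the paper.

      # Flatten a nested record into relational tables and query it with SQL.
      # The dictionary stands in for a parsed ASN.1 value; schema and field
      # names are illustrative only.
      import sqlite3

      nested_record = {
          "accession": "X12345",
          "title": "Example sequence entry",
          "references": [                                  # nested repeated structure
              {"authors": "Smith J", "year": 1993},
              {"authors": "Lee K", "year": 1994},
          ],
      }

      conn = sqlite3.connect(":memory:")
      cur = conn.cursor()
      cur.execute("CREATE TABLE entry (accession TEXT PRIMARY KEY, title TEXT)")
      cur.execute("CREATE TABLE citation (accession TEXT, authors TEXT, year INTEGER, "
                  "FOREIGN KEY(accession) REFERENCES entry(accession))")

      # The top level becomes one parent row; each nested reference becomes a
      # child row keyed back to the parent entry.
      cur.execute("INSERT INTO entry VALUES (?, ?)",
                  (nested_record["accession"], nested_record["title"]))
      cur.executemany("INSERT INTO citation VALUES (?, ?, ?)",
                      [(nested_record["accession"], r["authors"], r["year"])
                       for r in nested_record["references"]])
      conn.commit()

      # An ad hoc SQL query of the kind the relational translation makes possible.
      for row in cur.execute("SELECT e.accession, c.authors, c.year "
                             "FROM entry e JOIN citation c USING (accession)"):
          print(row)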

  16. Screening for genes and subnetworks associated with pancreatic cancer based on the gene expression profile.

    PubMed

    Long, Jin; Liu, Zhe; Wu, Xingda; Xu, Yuanhong; Ge, Chunlin

    2016-05-01

    The present study aimed to screen for potential genes and subnetworks associated with pancreatic cancer (PC) using the gene expression profile. The expression profile GSE16515, which included 36 PC tissue samples and 16 normal samples, was downloaded from the Gene Expression Omnibus database. The limma package in the R language was used to screen differentially expressed genes (DEGs), which were grouped into up- and downregulated genes. PFSNet was then applied to perform subnetwork analysis for all the DEGs. Moreover, Gene Ontology (GO) and REACTOME pathway enrichment analyses of the up- and downregulated genes were performed, followed by protein-protein interaction (PPI) network construction using the Search Tool for the Retrieval of Interacting Genes (STRING). In total, 1,989 DEGs, including 1,461 up- and 528 downregulated genes, were screened out. Subnetworks involving pancreatic cancer in PC tissue samples and intercellular adhesion in normal samples were identified. A total of 8 significant REACTOME pathways for the upregulated DEGs, such as hemostasis and the mitotic cell cycle, were identified. Moreover, 4 significant REACTOME pathways for the downregulated DEGs, including regulation of β-cell development and transmembrane transport of small molecules, were screened out. Additionally, DEGs with high connectivity degrees in the PPI network module, such as CCNA2 (cyclin A2) and PBK (PDZ binding kinase), were mainly enriched in the cell division cycle. CCNA2 and PBK, their related cell division cycle pathway, and the two subnetworks (pancreatic cancer and intercellular adhesion) may be pivotal for further understanding of the molecular mechanism of PC.
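
    As a generic illustration of the DEG-splitting step summarized above, the Python sketch below separates genes into up- and downregulated sets by log2 fold change and adjusted p-value. The thresholds, gene names and values are invented for the example; the study itself performed this step with the limma package in R.

      # Split toy differential-expression results into up- and downregulated genes.
      # Cutoffs and the input rows are assumptions, not values from GSE16515.
      def split_degs(results, lfc_cutoff=1.0, padj_cutoff=0.05):
          """results: iterable of (gene, log2_fold_change, adjusted_p) tuples."""
          up, down = [], []
          for gene, lfc, padj in results:
              if padj >= padj_cutoff or abs(lfc) < lfc_cutoff:
                  continue
              (up if lfc > 0 else down).append(gene)
          return up, down

      toy = [("GENE_A", 2.3, 0.001), ("GENE_B", 1.8, 0.004),
             ("GENE_C", -2.9, 0.0002), ("GENE_D", 0.1, 0.70)]
      up_genes, down_genes = split_degs(toy)
      print(up_genes, down_genes)    # -> ['GENE_A', 'GENE_B'] ['GENE_C']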

  17. Mobile teledermatopathology: using a tablet PC as a novel and cost-efficient method to remotely diagnose dermatopathology cases.

    PubMed

    Speiser, Jodi J; Hughes, Ian; Mehta, Vikas; Wojcik, Eva M; Hutchens, Kelli A

    2014-01-01

    Dermatopathology has relatively few studies regarding teledermatopathology and none have addressed the use of new technologies, such as the tablet PC. We hypothesized that the combination of our existing dynamic nonrobotic system with a tablet PC could provide a novel and cost-efficient method to remotely diagnose dermatopathology cases. A total of 93 cases diagnosed by conventional light microscopy at least 5 months earlier by the participating dermatopathologist were retrieved by an electronic pathology database search. A high-resolution video camera (Nikon DS-L2, version 4.4) mounted on a microscope was used to transmit digital video of a slide to an Apple iPad 2 (Apple Inc., Cupertino, CA) at the pathologist's remote location via live streaming at an interval time of 500 ms and a resolution of 1280/960 pixels. Concordance with the original diagnosis and the time elapsed in reaching the diagnosis were recorded. 24.7% (23/93) of cases were melanocytic, 70.9% (66/93) were nonmelanocytic, and 4.4% (4/93) were inflammatory. Overall, 92.5% (86/93) of cases were diagnosed on immediate viewing (<5 seconds), with the average time to diagnosis at 40.2 seconds (range: 10-218 seconds). Of the cases diagnosed immediately, 98.8% (85/86) of the telediagnoses were concordant with the original. Telepathology performed via a tablet PC may serve as a reliable and rapid technique for the diagnosis of routine cases with some diagnostic caveats in mind. Our study established a novel and cost-efficient solution for those institutions that may not have the capital to purchase either a dynamic robotic system or a virtual slide system.

  18. Object-based target templates guide attention during visual search.

    PubMed

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  19. Mutations in the satellite cell gene MEGF10 cause a recessive congenital myopathy with minicores.

    PubMed

    Boyden, Steven E; Mahoney, Lane J; Kawahara, Genri; Myers, Jennifer A; Mitsuhashi, Satomi; Estrella, Elicia A; Duncan, Anna R; Dey, Friederike; DeChene, Elizabeth T; Blasko-Goehringer, Jessica M; Bönnemann, Carsten G; Darras, Basil T; Mendell, Jerry R; Lidov, Hart G W; Nishino, Ichizo; Beggs, Alan H; Kunkel, Louis M; Kang, Peter B

    2012-05-01

    We ascertained a nuclear family in which three of four siblings were affected with an unclassified autosomal recessive myopathy characterized by severe weakness, respiratory impairment, scoliosis, joint contractures, and an unusual combination of dystrophic and myopathic features on muscle biopsy. Whole genome sequence from one affected subject was filtered using linkage data and variant databases. A single gene, MEGF10, contained nonsynonymous mutations that co-segregated with the phenotype. Affected subjects were compound heterozygous for missense mutations c.976T > C (p.C326R) and c.2320T > C (p.C774R). Screening the MEGF10 open reading frame in 190 patients with genetically unexplained myopathies revealed a heterozygous mutation, c.211C > T (p.R71W), in one additional subject with a similar clinical and histological presentation as the discovery family. All three mutations were absent from at least 645 genotyped unaffected control subjects. MEGF10 contains 17 atypical epidermal growth factor-like domains, each of which contains eight cysteine residues that likely form disulfide bonds. Both the p.C326R and p.C774R mutations alter one of these residues, which are completely conserved in vertebrates. Previous work showed that murine Megf10 is required for preserving the undifferentiated, proliferative potential of satellite cells, myogenic precursors that regenerate skeletal muscle in response to injury or disease. Here, knockdown of megf10 in zebrafish by four different morpholinos resulted in abnormal phenotypes including unhatched eggs, curved tails, impaired motility, and disorganized muscle tissue, corroborating the pathogenicity of the human mutations. Our data establish the importance of MEGF10 in human skeletal muscle and suggest satellite cell dysfunction as a novel myopathic mechanism.

  20. Systematically Studying Kinase Inhibitor Induced Signaling Network Signatures by Integrating Both Therapeutic and Side Effects

    PubMed Central

    Shao, Hongwei; Peng, Tao; Ji, Zhiwei; Su, Jing; Zhou, Xiaobo

    2013-01-01

    Substantial effort in recent years has been devoted to analyzing data-based large-scale biological networks, which provide valuable insight into the topologies of complex biological networks but are rarely context specific and cannot be used to predict the responses of cell signaling proteins to specific ligands or compounds. In this work, we proposed a novel strategy to investigate kinase inhibitor-induced pathway signatures by integrating multiplex data in the Library of Integrated Network-based Cellular Signatures (LINCS), e.g., KINOMEscan data and cell proliferation/mitosis imaging data. Using this strategy, we first established a PC9 cell line-specific pathway model to investigate the pathway signatures in the PC9 cell line when perturbed by the small-molecule kinase inhibitor GW843682. This specific pathway revealed the role of PI3K/AKT in modulating the cell proliferation process and the absence of two anti-proliferation links, which indicated a potential mechanism of abnormal expansion in PC9 cell number. Incorporating a pathway model for side effects on primary human hepatocytes, the approach was used to screen 27 kinase inhibitors in the LINCS database, and PF02341066, known as crizotinib, was finally suggested at an optimal concentration of 4.6 μM to suppress PC9 cancer cell expansion while avoiding severe damage to primary human hepatocytes. Drug combination analysis revealed that the synergistic effect region can be predicted straightforwardly based on a threshold that is an inherent property of each kinase inhibitor. Furthermore, this integration strategy can easily be extended to other specific cell lines, making it a powerful tool for drug screening before clinical trials. PMID:24339888

  1. Characterization of N-Acetylglucosamine Biosynthesis in Pneumocystis species. A New Potential Target for Therapy

    PubMed Central

    Kottom, Theodore J.; Hebrink, Deanne M.; Jenson, Paige E.; Ramirez-Prado, Jorge H.

    2017-01-01

    N-acetylglucosamine (GlcNAc) serves as an essential structural sugar on the cell surface of organisms. For example, GlcNAc is a major component of bacterial peptidoglycan, it is an important building block of fungal cell walls, including a major constituent of chitin and mannoproteins, and it is also required for extracellular matrix generation by animal cells. Herein, we provide evidence for a uridine diphospho (UDP)–GlcNAc pathway in Pneumocystis species. Using an in silico search of the Pneumocystis jirovecii and P. murina (Pm) genomic databases, we determined the presence of at least four proteins implicated in the Saccharomyces cerevisiae UDP-GlcNAc biosynthetic pathway. These genes, termed GFA1, GNA1, AGM1, and UDP-GlcNAc pyrophosphorylase (UAP1), were either confirmed to be present in the Pneumocystis genomes by PCR, or, in the case of Pm uap1 (Pmuap1), functionally confirmed by direct enzymatic activity assay. Expression analysis using quantitative PCR of Pneumocystis pneumonia in mice demonstrated abundant expression of the Pm uap1 transcript. A GlcNAc-binding recombinant protein and a novel GlcNAc-binding immune detection method both verified the presence of GlcNAc in P. carinii (Pc) lysates. Studies of Pc cell wall fractions using high-performance gas chromatography/mass spectrometry documented the presence of GlcNAc glycosyl residues. Pc was shown to synthesize GlcNAc in vitro. The competitive UDP-GlcNAc substrate synthetic inhibitor, nikkomycin Z, suppressed incorporation of GlcNAc by Pc preparations. Finally, treatment of rats with Pneumocystis pneumonia using nikkomycin Z significantly reduced organism burdens. Taken together, these data support an important role for GlcNAc generation in the cell surface of Pneumocystis organisms. PMID:27632412

  2. [Second neoplasm after treatment of localized prostate cancer].

    PubMed

    Arias, E; Astudillo, P; Manterola, C

    2012-01-01

    Prostate cancer (PC) treatment in early stages is radical prostatectomy (RP) or external radiotherapy (ER). There is some uncertainty regarding the development of new ER-induced malignant tumors, or second primary tumors (SPT), a fact influencing the choice of therapy. The purpose of this study was to determine the best therapeutic alternative for localized PC with regard to the incidence and time of development of SPT. A systematic review of the literature was performed by evaluating studies of localized PC treated with RP or ER, published between 1990 and 2010. The metasearch engines used were the Cochrane Library and Trip Database, and the databases used were MEDLINE, OVID, Science Direct, SciELO and LiLACS, using MeSH terms and free words. The studies selected were analyzed using the MINCIR score of methodological quality (MQ) to compare articles with different designs. The variables considered were the number of patients treated, localization of lesions, global incidence of SPT and MQ of the studies. Averages, medians and weighted averages (WA) were calculated. The study groups were compared using the 95% confidence intervals of the medians. Eleven articles fulfilled the screening criteria (retrospective cohorts and case series), providing 13 series for the study. The average MQ was 14.7 points (range, 13 and 16 points). The most frequent localizations of SPT were bladder, rectum and lung. The WA of the global incidence of SPT for the series was 3.6% (4.1% for ER and 2.2% for RP). CONCLUSION: The existing information did not make it possible to demonstrate an association between the appearance of SPT and therapies for localized PC, even though there was a higher tendency in irradiated patients. Copyright © 2011 AEU. Published by Elsevier Espana. All rights reserved.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rayl, K.D.; Gaasterland, T.

    This paper presents an overview of the purpose, content, and design of a subset of the currently available biological databases, with an emphasis on protein databases. Databases included in this summary are 3D-ALI, Berlin RNA databank, Blocks, DSSP, EMBL Nucleotide Database, EMP, ENZYME, FSSP, GDB, GenBank, HSSP, LiMB, PDB, PIR, PKCDD, ProSite, and SWISS-PROT. The goal is to provide a starting point for researchers who wish to take advantage of the myriad available databases. Rather than providing a complete explanation of each database, we present its content and form by explaining the details of typical entries. Pointers to more complete "user guides" are included, along with general information on where to search for a new database.

  4. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Ber, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
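
    In the spirit of the profiling approaches surveyed in the chapter, the Python sketch below learns the frequency of SQL command types per user from a training log and flags commands that are rare for that user. The log format, the threshold and the decision rule are assumptions chosen for illustration, not the method of any particular product or prototype.

      # Toy profiling-based anomaly detector for database activity.
      from collections import Counter, defaultdict

      def build_profiles(training_log):
          """training_log: iterable of (user, command) pairs, e.g. ('alice', 'SELECT')."""
          profiles = defaultdict(Counter)
          for user, command in training_log:
              profiles[user][command] += 1
          return profiles

      def is_anomalous(profiles, user, command, min_fraction=0.01):
          """Flag commands a user issues less often than min_fraction of the time."""
          counts = profiles.get(user)
          if not counts:
              return True                     # unknown user: treat as anomalous
          return counts[command] / sum(counts.values()) < min_fraction

      log = [("alice", "SELECT")] * 198 + [("alice", "UPDATE")] * 2
      profiles = build_profiles(log)
      print(is_anomalous(profiles, "alice", "SELECT"))   # False
      print(is_anomalous(profiles, "alice", "DROP"))     # True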

  5. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

    The paper analyses a Database Management System (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment; it consists of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.

  6. United States Army Medical Materiel Development Activity: 1997 Annual Report.

    DTIC Science & Technology

    1997-01-01

    business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS)... MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing... System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was...

  7. Maintaining Multimedia Data in a Geospatial Database

    DTIC Science & Technology

    2012-09-01

    A different look at PostgreSQL and MySQL as spatial databases was offered. Given their results, as each database produced result sets from zero to 100,000, it was... excelled given multiple conditions...

  8. WMC Database Evaluation. Case Study Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palounek, Andrea P. T

    The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.

  9. The Sequenced Angiosperm Genomes and Genome Databases.

    PubMed

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide the essential resources for human life, such as food, energy, oxygen, and materials. They also promoted the evolution of humans, animals, and the planet Earth. Despite the numerous advances in genome reports and sequencing technologies, no review covers all the released angiosperm genomes and the genome databases for data sharing. Based on the rapid advances and innovations in database construction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases, including databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a timely updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose the construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology.

  10. The Sequenced Angiosperm Genomes and Genome Databases

    PubMed Central

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide the essential resources for human life, such as food, energy, oxygen, and materials. They also promoted the evolution of humans, animals, and the planet Earth. Despite the numerous advances in genome reports and sequencing technologies, no review covers all the released angiosperm genomes and the genome databases for data sharing. Based on the rapid advances and innovations in database construction in the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases, including databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a timely updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose the construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology. PMID:29706973

  11. Navigating through the Jungle of Allergens: Features and Applications of Allergen Databases.

    PubMed

    Radauer, Christian

    2017-01-01

    The increasing number of available data on allergenic proteins demanded the establishment of structured, freely accessible allergen databases. In this review article, features and applications of 6 of the most widely used allergen databases are discussed. The WHO/IUIS Allergen Nomenclature Database is the official resource of allergen designations. Allergome is the most comprehensive collection of data on allergens and allergen sources. AllergenOnline is aimed at providing a peer-reviewed database of allergen sequences for prediction of allergenicity of proteins, such as those planned to be inserted into genetically modified crops. The Structural Database of Allergenic Proteins (SDAP) provides a database of allergen sequences, structures, and epitopes linked to bioinformatics tools for sequence analysis and comparison. The Immune Epitope Database (IEDB) is the largest repository of T-cell, B-cell, and major histocompatibility complex protein epitopes including epitopes of allergens. AllFam classifies allergens into families of evolutionarily related proteins using definitions from the Pfam protein family database. These databases contain mostly overlapping data, but also show differences in terms of their targeted users, the criteria for including allergens, data shown for each allergen, and the availability of bioinformatics tools. © 2017 S. Karger AG, Basel.

  12. The future application of GML database in GIS

    NASA Astrophysics Data System (ADS)

    Deng, Yuejin; Cheng, Yushu; Jing, Lianwen

    2006-10-01

    In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, the problem of how to organize and access large volumes of GML data effectively arises in applications, and research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are commonly distinguished, namely native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.

  13. MIPS: analysis and annotation of proteins from whole genomes

    PubMed Central

    Mewes, H. W.; Amid, C.; Arnold, R.; Frishman, D.; Güldener, U.; Mannhaupt, G.; Münsterkötter, M.; Pagel, P.; Strack, N.; Stümpflen, V.; Warfsmann, J.; Ruepp, A.

    2004-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein–protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de). PMID:14681354

  14. MIPS: analysis and annotation of proteins from whole genomes.

    PubMed

    Mewes, H W; Amid, C; Arnold, R; Frishman, D; Güldener, U; Mannhaupt, G; Münsterkötter, M; Pagel, P; Strack, N; Stümpflen, V; Warfsmann, J; Ruepp, A

    2004-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein-protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de).

  15. Comparison of Ethnic-specific Databases in Heidelberg Retina Tomography-3 to Discriminate Between Early Glaucoma and Normal Chinese Eyes.

    PubMed

    Tan, Xiu Ling; Yap, Sae Cheong; Li, Xiang; Yip, Leonard W

    2017-01-01

    To compare the diagnostic accuracy of the 3 race-specific normative databases in Heidelberg Retina Tomography (HRT)-3, in differentiating between early glaucomatous and healthy normal Chinese eyes. 52 healthy volunteers and 25 glaucoma patients were recruited for this prospective cross-sectional study. All underwent standardized interviews, ophthalmic examination, perimetry and HRT optic disc imaging. Area under the curve (AUC) receiver operating characteristics, sensitivity and specificity were derived to assess the discriminating abilities of the 3 normative databases, for both Moorfields Regression Analysis (MRA) and Glaucoma Probability Score (GPS). A significantly higher percentage (65%) of patients were classified as "within normal limits" using the MRA-Indian database, as compared to the MRA-Caucasian and MRA-African-American databases. However, for GPS, this was observed using the African-American database. For MRA, the highest sensitivity was obtained with both Caucasian and African-American databases (68%), while the highest specificity was from the Indian database (94%). The AUC for discrimination between glaucomatous and normal eyes by MRA-Caucasian, MRA-African-American and MRA-Indian databases were 0.77 (95% CI, 0.67-0.88), 0.79 (0.69-0.89) and 0.73 (0.63-0.84) respectively. For GPS, the highest sensitivity was obtained using either Caucasian or Indian databases (68%). The highest specificity was seen with the African-American database (98%). The AUC for GPS-Caucasian, GPS-African-American and GPS-Indian databases were 0.76 (95% CI, 0.66-0.87), 0.77 (0.67-0.87) and 0.76 (0.66-0.87) respectively. Comparison of the 3 ethnic databases did not reveal significant differences to differentiate early glaucomatous from normal Chinese eyes.

  16. Characterizing the genetic structure of a forensic DNA database using a latent variable approach.

    PubMed

    Kruijver, Maarten

    2016-07-01

    Several problems in forensic genetics require a representative model of a forensic DNA database. Obtaining an accurate representation of the offender database can be difficult, since databases typically contain groups of persons with unregistered ethnic origins in unknown proportions. We propose to estimate the allele frequencies of the subpopulations comprising the offender database and their proportions from the database itself using a latent variable approach. We present a model for which parameters can be estimated using the expectation maximization (EM) algorithm. This approach does not rely on relatively small and possibly unrepresentative population surveys, but is driven by the actual genetic composition of the database only. We fit the model to a snapshot of the Dutch offender database (2014), which contains close to 180,000 profiles, and find that three subpopulations suffice to describe a large fraction of the heterogeneity in the database. We demonstrate the utility and reliability of the approach with three applications. First, we use the model to predict the number of false leads obtained in database searches. We assess how well the model predicts the number of false leads obtained in mock searches in the Dutch offender database, both for the case of familial searching for first degree relatives of a donor and searching for contributors to three-person mixtures. Second, we study the degree of partial matching between all pairs of profiles in the Dutch database and compare this to what is predicted using the latent variable approach. Third, we use the model to provide evidence to support that the Dutch practice of estimating match probabilities using the Balding-Nichols formula with a native Dutch reference database and θ=0.03 is conservative. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
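
    To make the EM idea concrete, the Python sketch below iterates the two EM steps for mixture proportions only, taking as input a toy matrix of per-profile likelihoods under each candidate subpopulation. The full model in the paper also re-estimates subpopulation allele frequencies from the profiles; that step is omitted here, and the numbers are invented for illustration.

      # Simplified EM for mixture proportions given per-profile likelihoods.
      def em_mixture_proportions(likelihoods, n_iter=100):
          """likelihoods[i][k]: likelihood of profile i under subpopulation k."""
          n, k = len(likelihoods), len(likelihoods[0])
          props = [1.0 / k] * k                       # start from equal proportions
          for _ in range(n_iter):
              resp_sums = [0.0] * k
              for row in likelihoods:                 # E-step: responsibilities
                  weighted = [p * l for p, l in zip(props, row)]
                  total = sum(weighted)
                  for j in range(k):
                      resp_sums[j] += weighted[j] / total
              props = [s / n for s in resp_sums]      # M-step: average responsibility
          return props

      toy = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.85, 0.15]]
      print(em_mixture_proportions(toy))   # converges to proportions favouring the first subpopulation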

  17. GlycomeDB – integration of open-access carbohydrate structure databases

    PubMed Central

    Ranzinger, René; Herget, Stephan; Wetter, Thomas; von der Lieth, Claus-Wilhelm

    2008-01-01

    Background Although carbohydrates are the third major class of biological macromolecules, after proteins and DNA, there is neither a comprehensive database for carbohydrate structures nor an established universal structure encoding scheme for computational purposes. Funding for further development of the Complex Carbohydrate Structure Database (CCSD or CarbBank) ceased in 1997, and since then several initiatives have developed independent databases with partially overlapping foci. For each database, different encoding schemes for residues and sequence topology were designed. Therefore, it is virtually impossible to obtain an overview of all deposited structures or to compare the contents of the various databases. Results We have implemented procedures which download the structures contained in the seven major databases, e.g. GLYCOSCIENCES.de, the Consortium for Functional Glycomics (CFG), the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the Bacterial Carbohydrate Structure Database (BCSDB). We have created a new database called GlycomeDB, containing all structures, their taxonomic annotations and references (IDs) for the original databases. More than 100000 datasets were imported, resulting in more than 33000 unique sequences now encoded in GlycomeDB using the universal format GlycoCT. Inconsistencies were found in all public databases, which were discussed and corrected in multiple feedback rounds with the responsible curators. Conclusion GlycomeDB is a new, publicly available database for carbohydrate sequences with a unified, all-encompassing structure encoding format and NCBI taxonomic referencing. The database is updated weekly and can be downloaded free of charge. The JAVA application GlycoUpdateDB is also available for establishing and updating a local installation of GlycomeDB. With the advent of GlycomeDB, the distributed islands of knowledge in glycomics are now bridged to form a single resource. PMID:18803830

  18. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    NASA Technical Reports Server (NTRS)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the time the year 2000 has ended. To realize this goal of a true virtual manufacturing enterprise the initial development of a machinability database and the infrastructure must be completed. This will consist of the containment of the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day to day operations the development of a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. Common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include materials thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, virtual manufacturing paradigm for NASA-JSC, implementation timeline, VNC model of one bridge mill and troubleshoot existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases bringing JSC's EM3 into a position of becoming a clearing house for NASA's digital manufacturing needs creating a true virtual manufacturing enterprise.

  19. Functional integration of automated system databases by means of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Dubovoi, Volodymyr M.; Nikitenko, Olena D.; Kalimoldayev, Maksat; Kotyra, Andrzej; Gromaszek, Konrad; Iskakova, Aigul

    2017-08-01

    The paper presents approaches for the functional integration of automated system databases by means of artificial intelligence. The peculiarities of exploiting databases in systems that use fuzzy implementations of functions were analyzed, and requirements for the normalization of such databases were defined. The question of data equivalence under uncertainty and of collisions arising from the functional integration of databases is considered, and a model to reveal their possible occurrence is devised. The paper also presents an evaluation method for the standardization of integrated database normalization.

  20. The Role of IMAT Solutions for Training Development at the Royal Netherlands Air Force. IMAT Follow-up Research Part 1

    DTIC Science & Technology

    2005-09-01

    e.g. the transformation of a fragment to an instructional fragment. * IMAT Database: A Jasmine® database is used as the central database in IMAT for the... storage of fragments. This is an object-oriented relational database. Jasmine® was, amongst other factors, chosen for its ability to handle multimedia... to the Jasmine® database, which is used in IMAT as the central database. 3.1.1.1 Ontologies: In IMAT, the proposed solution on problems with information...

  1. Partitioning medical image databases for content-based queries on a Grid.

    PubMed

    Montagnat, J; Breton, V; E Magnin, I

    2005-01-01

    In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
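
    As a back-of-the-envelope companion to the partitioning idea above, the Python sketch below splits an image database across grid nodes in proportion to node throughput and estimates the resulting makespan, including a fixed per-node startup overhead. The throughputs and overhead are invented numbers, not the complexity model or measurements reported in the paper.

      # Partition an image database across grid nodes proportionally to throughput.
      def partition(n_images, throughputs, overhead_s):
          """throughputs: images processed per second on each node."""
          total = sum(throughputs)
          shares = [round(n_images * t / total) for t in throughputs]
          shares[-1] += n_images - sum(shares)        # absorb rounding drift
          makespan = max(overhead_s + s / t for s, t in zip(shares, throughputs))
          return shares, makespan

      shares, makespan = partition(10000, throughputs=[5.0, 2.5, 2.5], overhead_s=30.0)
      print(shares)      # e.g. [5000, 2500, 2500]
      print(makespan)    # seconds on the most heavily loaded node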

  2. Barriers and facilitators for the implementation of primary prevention and health promotion activities in primary care: a synthesis through meta-ethnography.

    PubMed

    Rubio-Valera, Maria; Pons-Vigués, Mariona; Martínez-Andrés, María; Moreno-Peral, Patricia; Berenguera, Anna; Fernández, Ana

    2014-01-01

    Evidence supports the implementation of primary prevention and health promotion (PP&HP) activities but primary care (PC) professionals show resistance to implementing these activities. The aim was to synthesize the available qualitative research on barriers and facilitators identified by PC physicians and nurses in the implementation of PP&HP in adults. A systematic search of three databases was conducted and supported by manual searches. The 35 articles included were translated into each other and a new interpretation of the concepts extracted was generated. The factors affecting the implementation of PP&HP activities in PC according to professionals were fitted into a five-level ecological model: intrapersonal factors, interpersonal processes, institutional factors, community factors and public policy. At the intrapersonal level we find professionals' beliefs about PP&HP, experiences, skills and knowledge, and selfconcept. The attitudes and behavior towards PP&HP of patients, specialists, practice managers and colleagues (interpersonal factors) affect the feasibility of implementing PP&HP. Institutional level: PC is perceived as well-placed to implement PP&HP but workload, lack of time and referral resources, and the predominance of the biomedical model (which prioritizes disease treatment) hamper the implementation of PP&HP. The effectiveness of financial incentives and tools such as guidelines and alarms/reminders is conditioned by professionals' attitudes to them. Community factors include patients' social and cultural characteristics (religion, financial resources, etc.), local referral resources, mass-media messages and pharmaceutical industry campaigns, and the importance given to PP&HP in the curriculum in university. Finally, policies affect the distribution of resources, thus affecting the implementation of PP&HP. Research on barriers and facilitators in the implementation of PP&HP activities in multirisk management is scarce. The conceptual overview provided by this synthesis resulted in the development of practical recommendations for the design of PP&HP in PC. However, the effectiveness of these recommendations needs to be demonstrated.

  3. Spirituality as an ethical challenge in Indian palliative care: A systematic review.

    PubMed

    Gielen, Joris; Bhatnagar, Sushma; Chaturvedi, Santosh K

    2016-10-01

    Spiritual care is recognized as an essential component of palliative care (PC). However, patients' experience of spirituality is heavily context dependent. In addition, Western definitions and findings regarding spirituality may not be applicable to patients of non-Western origin, such as Indian PC patients. Given the particular sociocultural, religious, and economic conditions in which PC programs in India operate, we decided to undertake a systematic review of the literature on spirituality among Indian PC patients. We intended to assess how spirituality has been interpreted and operationalized in studies of this population, to determine which dimensions of spirituality are important for patients, and to analyze its ethical implications. In January of 2015, we searched five databases (ATLA, CINAHL, EMBASE, PsycINFO, and PubMed) using a combination of controlled and noncontrolled vocabulary. A content analysis of all selected reports was undertaken to assess the interpretation and dimensions of spirituality. Data extraction from empirical studies was done using a data-extraction sheet. A total of 39 empirical studies (12 qualitative, 21 quantitative, and 6 mixed-methods) and 18 others (10 reviews, 4 opinion articles, and 4 case studies) were retrieved. To date, no systematic review on spirituality in Indian PC has been published. Spirituality was the main focus of only six empirical studies. The content analysis revealed three dimensions of spirituality: (1) the relational dimension, (2) the existential dimension, and (3) the values dimension. Religion is prominent in all these dimensions. Patients' experiences of spirituality are determined by the specifically Indian context, which leads to particular ethical issues. Since spiritual well-being greatly impacts quality of life, and because of the substantial presence of people of Indian origin living outside the subcontinent, the findings of our review have international relevance. Moreover, our review illustrates that spirituality can be an ethical challenge and that more ethical reflection on provision of spiritual care is needed.

  4. Barriers and Facilitators for the Implementation of Primary Prevention and Health Promotion Activities in Primary Care: A Synthesis through Meta-Ethnography

    PubMed Central

    Rubio-Valera, Maria; Pons-Vigués, Mariona; Martínez-Andrés, María; Moreno-Peral, Patricia; Berenguera, Anna; Fernández, Ana

    2014-01-01

    Background Evidence supports the implementation of primary prevention and health promotion (PP&HP) activities but primary care (PC) professionals show resistance to implementing these activities. The aim was to synthesize the available qualitative research on barriers and facilitators identified by PC physicians and nurses in the implementation of PP&HP in adults. Methods and Findings A systematic search of three databases was conducted and supported by manual searches. The 35 articles included were translated into each other and a new interpretation of the concepts extracted was generated. The factors affecting the implementation of PP&HP activities in PC according to professionals were fitted into a five-level ecological model: intrapersonal factors, interpersonal processes, institutional factors, community factors and public policy. At the intrapersonal level we find professionals' beliefs about PP&HP, experiences, skills and knowledge, and selfconcept. The attitudes and behavior towards PP&HP of patients, specialists, practice managers and colleagues (interpersonal factors) affect the feasibility of implementing PP&HP. Institutional level: PC is perceived as well-placed to implement PP&HP but workload, lack of time and referral resources, and the predominance of the biomedical model (which prioritizes disease treatment) hamper the implementation of PP&HP. The effectiveness of financial incentives and tools such as guidelines and alarms/reminders is conditioned by professionals' attitudes to them. Community factors include patients' social and cultural characteristics (religion, financial resources, etc.), local referral resources, mass-media messages and pharmaceutical industry campaigns, and the importance given to PP&HP in the curriculum in university. Finally, policies affect the distribution of resources, thus affecting the implementation of PP&HP. Conclusions Research on barriers and facilitators in the implementation of PP&HP activities in multirisk management is scarce. The conceptual overview provided by this synthesis resulted in the development of practical recommendations for the design of PP&HP in PC. However, the effectiveness of these recommendations needs to be demonstrated. PMID:24586867

  5. Open Microphone Speech Understanding: Correct Discrimination Of In Domain Speech

    NASA Technical Reports Server (NTRS)

    Hieronymus, James; Aist, Greg; Dowding, John

    2006-01-01

    An ideal spoken dialogue system listens continually and determines which utterances were spoken to it, understands them, and responds appropriately while ignoring the rest. This paper outlines a simple method for achieving this goal, which involves trading a slightly higher false rejection rate of in-domain utterances for a higher correct rejection rate of Out of Domain (OOD) utterances. The system recognizes semantic entities specified by a unification grammar which is specialized by Explanation Based Learning (EBL), so that it only uses rules that are seen in the training data. The resulting grammar has probabilities assigned to each construct so that overgeneralizations are not a problem. The resulting system only accepts utterances which reduce to a valid logical form that has meaning for the system and rejects the rest. A class N-gram grammar has been trained on the same training data. This system gives good recognition performance and offers good Out of Domain discrimination when combined with the semantic analysis. The resulting systems were tested on a Space Station Robot Dialogue Speech Database and a subset of the OGI conversational speech database. Both systems run in real time on a PC laptop, and the present performance allows continuous listening with an acceptably low false acceptance rate. This type of open-microphone system has been used in the Clarissa procedure reading and navigation spoken dialogue system, which is being tested on the International Space Station.
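
    A minimal sketch of the accept/reject decision described above, combining the two cues the abstract mentions (whether the utterance reduces to a valid in-domain logical form, and a recognizer confidence score), is given below in Python. The toy grammar lookup, the example phrases and the threshold are placeholders, not the unification grammar or tuning used in the actual system.

      # Accept an utterance only if it parses to an in-domain logical form with
      # sufficient recognition confidence. The lookup table stands in for a grammar.
      def parse_to_logical_form(utterance):
          """Stand-in for the domain grammar: return a logical form or None."""
          domain_commands = {"open the hatch": "(open hatch)",
                             "read step three": "(read (step 3))"}
          return domain_commands.get(utterance.lower())

      def accept_utterance(utterance, confidence, threshold=0.6):
          return confidence >= threshold and parse_to_logical_form(utterance) is not None

      print(accept_utterance("Read step three", 0.82))          # True
      print(accept_utterance("What's for lunch today", 0.90))   # False: out of domain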

  6. Differential receptor dependencies: expression and significance of muscarinic M1 receptors in the biology of prostate cancer.

    PubMed

    Mannan Baig, Abdul; Khan, Naveed A; Effendi, Vardah; Rana, Zohaib; Ahmad, H R; Abbas, Farhat

    2017-01-01

    Recent reports on the acetylcholine muscarinic receptor subtype 3 (CHRM3) have shown its growth-promoting role in prostate cancer. Additional studies report the proliferative effect of the cholinergic agonist carbachol on prostate cancer through its agonistic action on CHRM3. This study shows that the type 1 acetylcholine muscarinic receptor (CHRM1) contributes toward the proliferation and growth of prostate cancer. We used growth and cytotoxicity assays, the prostate cancer microarray database and the downstream-pathway homology of CHRM subtypes to uncover multiple signals leading to the growth of prostate cancer. Growth assays showed that pilocarpine stimulates the proliferation of prostate cancer cells. Moreover, carbachol exerts an additional agonistic action on the nicotinic cholinergic receptor of prostate cancer cells that can be blocked by tubocurarine. With the use of selective CHRM1 antagonists such as pirenzepine and dicyclomine, a considerable inhibition of proliferation of prostate cancer cell lines was observed at doses ranging from 15 to 60 µg/ml of dicyclomine. The microarray database of prostate cancer shows a dominant expression of CHRM1 in prostate cancer compared with other cholinergic receptor subtypes. The bioinformatics of prostate cancer and CHRM pathways show that the downstream signalling includes PIP3-AKT-CaM-mediated growth in LNCaP and PC3 cells. Our study suggests that antagonism of CHRM1 may be a potential therapeutic target against prostate cancer.

  7. An interactive program for computer-aided map design, display, and query: EMAPKGS2

    USGS Publications Warehouse

    Pouch, G.W.

    1997-01-01

    EMAPKGS2 is a user-friendly, PC-based electronic mapping tool for use in hydrogeologic exploration and appraisal. EMAPKGS2 allows the analyst to construct maps interactively from data stored in a relational database, perform point-oriented spatial queries such as locating all wells within a specified radius, perform geographic overlays, and export the data to other programs for further analysis. EMAPKGS2 runs under Microsoft Windows 3.1 and compatible operating systems. EMAPKGS2 is a public domain program available from the Kansas Geological Survey. EMAPKGS2 is the centerpiece of WHEAT, the Windows-based Hydrogeologic Exploration and Appraisal Toolkit, a suite of user-friendly Microsoft Windows programs for natural resource exploration and management. The principal goals in the development of WHEAT have been ease of use, hardware independence, low cost, and end-user extensibility. WHEAT's native data format is a Microsoft Access database. WHEAT stores a feature's geographic coordinates as attributes so they can be accessed easily by the user. The WHEAT programs are designed to be used in conjunction with other Microsoft Windows software to allow the natural resource scientist to perform work easily and effectively. WHEAT and EMAPKGS have been used at several of Kansas' Groundwater Management Districts and the Kansas Geological Survey on groundwater management operations, groundwater modeling projects, and geologic exploration projects. © 1997 Elsevier Science Ltd.
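
    The point-oriented spatial query mentioned above (locating all wells within a specified radius) reduces to a simple distance test in planar coordinates, as the Python sketch below shows. The well list and coordinates are invented, and a real implementation would also handle map projections and units.

      # Toy radius query over well locations in planar coordinates.
      from math import hypot

      def wells_within_radius(wells, x0, y0, radius):
          """wells: iterable of (well_id, x, y) tuples in the same planar units."""
          return [wid for wid, x, y in wells if hypot(x - x0, y - y0) <= radius]

      wells = [("W1", 100.0, 200.0), ("W2", 180.0, 260.0), ("W3", 900.0, 900.0)]
      print(wells_within_radius(wells, 120.0, 220.0, 100.0))    # -> ['W1', 'W2']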

  8. A multimedia comprehensive informatics system with decision support tools for a multi-site collaboration research of stroke rehabilitation

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Documet, Jorge; Garrison, Kathleen A.; Winstein, Carolee J.; Liu, Brent

    2012-02-01

    Stroke is a major cause of adult disability. The Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (I-CARE) clinical trial aims to evaluate a therapy for arm rehabilitation after stroke. A primary outcome measure is correlative analysis between stroke lesion characteristics and standard measures of rehabilitation progress, from data collected at seven research facilities across the country. Sharing and communication of brain imaging and behavioral data is thus a challenge for collaboration. A solution is proposed as a web-based system with tools supporting imaging and informatics related data. In this system, users may upload anonymized brain images through a secure internet connection and the system will sort the imaging data for storage in a centralized database. Users may utilize an annotation tool to mark up images. In addition to imaging informatics, electronic data forms, for example, clinical data forms, are also integrated. Clinical information is processed and stored in the database to enable future data mining related development. Tele-consultation is facilitated through the development of a thin-client image viewing application. For convenience, the system supports access through desktop PC, laptops, and iPAD. Thus, clinicians may enter data directly into the system via iPAD while working with participants in the study. Overall, this comprehensive imaging informatics system enables users to collect, organize and analyze stroke cases efficiently.

  9. Teleeducation and telepathology for open and distance education.

    PubMed

    Szymas, J

    2000-01-01

    We describe our experience in creating and using a telepathology system and a multimedia database for education. The program package currently runs in the Department of Pathology of the University Medical School in Poznan. It is used for self-education, tests, services, and examinations in pathology for dental and medical students. The system is implemented on IBM PC-compatible microcomputers and runs on the NetWare 5.1 network system. Some modules are available through the Internet. The program package comprises the TELEMIC telepathology system, ASSISTANT, which administers the databases, and EXAMINATOR, the executive program. The multi-user module allows students to work simultaneously at several workstations, each with a randomly chosen set of problems. The exercise mode, which combines image files and questions, is an attractive form of self-education. The standard format of the result files allows the results to be processed with commercial statistics packages in order to estimate the distribution of answers and to find correlations between the obtained results. The multi-criterion grading method excludes unlimited mutual compensation between criteria, differentiates the importance of particular courses, and introduces quality criteria. The package is part of the integrated management information system of the department of pathology. Applications for other telepathology systems are presented.

  10. The Computerized Laboratory Notebook concept for genetic toxicology experimentation and testing.

    PubMed

    Strauss, G H; Stanford, W L; Berkowitz, S J

    1989-03-01

    We describe a microcomputer system utilizing the Computerized Laboratory Notebook (CLN) concept developed in our laboratory for the purpose of automating the Battery of Leukocyte Tests (BLT). The BLT was designed to evaluate blood specimens for toxic, immunotoxic, and genotoxic effects after in vivo exposure to putative mutagens. A system was developed with the advantages of low cost, limited spatial requirements, ease of use for personnel inexperienced with computers, and applicability to specific testing yet flexibility for experimentation. This system eliminates cumbersome record keeping and repetitive analysis inherent in genetic toxicology bioassays. Statistical analysis of the vast quantity of data produced by the BLT would not be feasible without a central database. Our central database is maintained by an integrated package which we have adapted to develop the CLN. The clonal assay of lymphocyte mutagenesis (CALM) section of the CLN is demonstrated. PC-Slaves expand the microcomputer to multiple workstations so that our computerized notebook can be used next to a hood while other work is done in an office and instrument room simultaneously. Communication with peripheral instruments is an indispensable part of many laboratory operations, and we present a representative program, written to acquire and analyze CALM data, for communicating with both a liquid scintillation counter and an ELISA plate reader. In conclusion we discuss how our computer system could easily be adapted to the needs of other laboratories.

  11. CycADS: an annotation database system to ease the development and update of BioCyc databases

    PubMed Central

    Vellozo, Augusto F.; Véron, Amélie S.; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E.; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano

    2011-01-01

    In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we call the Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including, for example, KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum), whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other BioCyc metabolic reconstruction databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Owing to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited to metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms. Database URL: http://www.cycadsys.org PMID:21474551

  12. FreeSolv: A database of experimental and calculated hydration free energies, with input files

    PubMed Central

    Mobley, David L.; Guthrie, J. Peter

    2014-01-01

    This work provides a curated database of experimental and calculated hydration free energies for small neutral molecules in water, along with molecular structures, input files, references, and annotations. We call this the Free Solvation Database, or FreeSolv. Experimental values were taken from prior literature and will continue to be curated, with updated experimental references and data added as they become available. Calculated values are based on alchemical free energy calculations using molecular dynamics simulations. These used the GAFF small molecule force field in TIP3P water with AM1-BCC charges. Values were calculated with the GROMACS simulation package, with full details given in references cited within the database itself. This database builds in part on a previous, 504-molecule database containing similar information. However, additional curation of both experimental data and calculated values has been done here, and the total number of molecules is now up to 643. Additional information is now included in the database, such as SMILES strings, PubChem compound IDs, accurate reference DOIs, and others. One version of the database is provided in the Supporting Information of this article, but as ongoing updates are envisioned, the database is now versioned and hosted online. In addition to providing the database, this work describes its construction process. The database is available free-of-charge via http://www.escholarship.org/uc/item/6sd403pz. PMID:24928188

  13. The 2018 Nucleic Acids Research database issue and the online molecular biology database collection.

    PubMed

    Rigden, Daniel J; Fernández, Xosé M

    2018-01-04

    The 2018 Nucleic Acids Research Database Issue contains 181 papers spanning molecular biology. Among them, 82 are new and 84 are updates describing resources that appeared in the Issue previously. The remaining 15 cover databases most recently published elsewhere. Databases in the area of nucleic acids include 3DIV for visualisation of data on genome 3D structure and RNArchitecture, a hierarchical classification of RNA families. Protein databases include the established SMART, ELM and MEROPS while GPCRdb and the newcomer STCRDab cover families of biomedical interest. In the area of metabolism, HMDB and Reactome both report new features while PULDB appears in NAR for the first time. This issue also contains reports on genomics resources including Ensembl, the UCSC Genome Browser and ENCODE. Update papers from the IUPHAR/BPS Guide to Pharmacology and DrugBank are highlights of the drug and drug target section while a number of proteomics databases including proteomicsDB are also covered. The entire Database Issue is freely available online on the Nucleic Acids Research website (https://academic.oup.com/nar). The NAR online Molecular Biology Database Collection has been updated, reviewing 138 entries, adding 88 new resources and eliminating 47 discontinued URLs, bringing the current total to 1737 databases. It is available at http://www.oxfordjournals.org/nar/database/c/. © The Author(s) 2018. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Migration from relational to NoSQL database

    NASA Astrophysics Data System (ADS)

    Ghotiya, Sunita; Mandal, Juhi; Kandasamy, Saravanakumar

    2017-11-01

    Data generated by real-time applications, social networking sites and sensor devices is enormous in volume and largely unstructured, which makes it difficult for relational database management systems to handle. Data is a very precious component of any application and needs to be analysed after being arranged in some structure. Relational databases can only deal with structured data, so there is a need for NoSQL database management systems that can also deal with semi-structured data. Relational databases provide the easiest way to manage data, but as the use of NoSQL increases it is becoming necessary to migrate data from relational to NoSQL databases. Various frameworks have been proposed previously that provide mechanisms for migrating data stored in SQL warehouses, as well as middle-layer solutions that allow unstructured data to be stored in NoSQL databases. This paper provides a literature review of some of the recent approaches proposed by various researchers to migrate data from relational to NoSQL databases. Some researchers have also proposed mechanisms for the co-existence of NoSQL and relational databases. The paper summarises mechanisms for mapping data stored in relational databases to NoSQL databases, along with various techniques for data transformation and middle-layer solutions.
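
    A minimal sketch of one mapping strategy such reviews cover, embedding child rows into parent documents suitable for a document-oriented store; the table and column names are invented for illustration, and real migrations also handle types, indexes and constraints:

    ```python
    import json

    # Hypothetical relational rows: a parent table and a child table joined by a foreign key.
    customers = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}]
    orders = [{"id": 10, "customer_id": 1, "total": 25.0},
              {"id": 11, "customer_id": 1, "total": 12.5},
              {"id": 12, "customer_id": 2, "total": 99.0}]

    def to_documents(parents, children, fk):
        """Embed each parent's child rows, emulating a relational-to-document migration."""
        docs = []
        for p in parents:
            doc = dict(p)
            doc["orders"] = [c for c in children if c[fk] == p["id"]]
            docs.append(doc)
        return docs

    for doc in to_documents(customers, orders, "customer_id"):
        print(json.dumps(doc))  # each line is one document for a NoSQL collection
    ```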

  15. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies.

    PubMed

    Cserhati, Matyas F; Pandey, Sanjit; Beaudoin, James J; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33,017,407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. © The Author(s) 2015. Published by Oxford University Press.

  16. Metagenomic Taxonomy-Guided Database-Searching Strategy for Improving Metaproteomic Analysis.

    PubMed

    Xiao, Jinqiu; Tanca, Alessandro; Jia, Ben; Yang, Runqing; Wang, Bo; Zhang, Yu; Li, Jing

    2018-04-06

    Metaproteomics provides a direct measure of the functional information by investigating all proteins expressed by a microbiota. However, due to the complexity and heterogeneity of microbial communities, it is very hard to construct a sequence database suitable for a metaproteomic study. Using a public database, researchers might not be able to identify proteins from poorly characterized microbial species, while a sequencing-based metagenomic database may not provide adequate coverage for all potentially expressed protein sequences. To address this challenge, we propose a metagenomic taxonomy-guided database-search strategy (MT), in which a merged database is employed, consisting of both taxonomy-guided reference protein sequences from public databases and proteins from metagenome assembly. By applying our MT strategy to a mock microbial mixture, about two times as many peptides were detected as with the metagenomic database only. According to the evaluation of the reliability of taxonomic attribution, the rate of misassignments was comparable to that obtained using an a priori matched database. We also evaluated the MT strategy with a human gut microbial sample, and we found 1.7 times as many peptides as using a standard metagenomic database. In conclusion, our MT strategy allows the construction of databases able to provide high sensitivity and precision in peptide identification in metaproteomic studies, enabling the detection of proteins from poorly characterized species within the microbiota.
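
    The merged-database idea can be illustrated with a short sketch (not the authors' implementation; the file names are hypothetical) that concatenates taxonomy-guided reference proteins with proteins predicted from the metagenome assembly, dropping exact duplicate sequences:

    ```python
    # Illustrative sketch: build a merged protein search database from plain FASTA inputs.
    def read_fasta(path):
        header, seq = None, []
        with open(path) as fh:
            for line in fh:
                line = line.rstrip()
                if line.startswith(">"):
                    if header is not None:
                        yield header, "".join(seq)
                    header, seq = line[1:], []
                else:
                    seq.append(line)
            if header is not None:
                yield header, "".join(seq)

    def merge_databases(paths, out_path):
        seen = set()
        with open(out_path, "w") as out:
            for path in paths:
                for header, seq in read_fasta(path):
                    if seq not in seen:          # keep one entry per unique sequence
                        seen.add(seq)
                        out.write(f">{header}\n{seq}\n")

    # Hypothetical file names for the two sources of the merged database.
    merge_databases(["taxonomy_guided_reference.faa", "metagenome_assembly.faa"],
                    "merged_search_db.faa")
    ```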

  17. USDA food and nutrient databases provide the infrastructure for food and nutrition research, policy, and practice.

    PubMed

    Ahuja, Jaspreet K C; Moshfegh, Alanna J; Holden, Joanne M; Harris, Ellen

    2013-02-01

    The USDA food and nutrient databases provide the basic infrastructure for food and nutrition research, nutrition monitoring, policy, and dietary practice. They have had a long history that goes back to 1892 and are unique, as they are the only databases available in the public domain that perform these functions. There are 4 major food and nutrient databases released by the Beltsville Human Nutrition Research Center (BHNRC), part of the USDA's Agricultural Research Service. These include the USDA National Nutrient Database for Standard Reference, the Dietary Supplement Ingredient Database, the Food and Nutrient Database for Dietary Studies, and the USDA Food Patterns Equivalents Database. The users of the databases are diverse and include federal agencies, the food industry, health professionals, restaurants, software application developers, academia and research organizations, international organizations, and foreign governments, among others. Many of these users have partnered with BHNRC to leverage funds and/or scientific expertise to work toward common goals. The use of the databases has increased tremendously in the past few years, especially the breadth of uses. These new uses of the data are bound to increase with the increased availability of technology and public health emphasis on diet-related measures such as sodium and energy reduction. Hence, continued improvement of the databases is important, so that they can better address these challenges and provide reliable and accurate data.

  18. Using Geocoded Databases in Teaching Urban Historical Geography.

    ERIC Educational Resources Information Center

    Miller, Roger P.

    1986-01-01

    Provides information regarding hardware and software requirements for using geocoded databases in urban historical geography. Reviews 11 IBM and Apple Macintosh database programs and describes the pen plotter and digitizing table interface used with the databases. (JDH)

  19. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  20. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    NASA Astrophysics Data System (ADS)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal of this investigation is to increase scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system and a scalable analytics engine.
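
    A minimal sketch of the offloading pattern described above, assuming Spark with an Oracle JDBC driver on the classpath; the connection URL, table name and credentials are placeholders, and this is not CERN's actual pipeline:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("oracle-offload").getOrCreate()

    # Read a (hypothetical) Oracle table through Spark's JDBC reader.
    controls_log = (spark.read.format("jdbc")
                    .option("url", "jdbc:oracle:thin:@//dbhost:1521/service")
                    .option("dbtable", "CONTROLS.LOGGING_DATA")
                    .option("user", "reporting_user")
                    .option("password", "secret")
                    .load())

    # Write a partitioned offline copy to HDFS; reports then query this copy
    # instead of the production database.
    (controls_log.write.mode("overwrite")
         .partitionBy("LOG_DATE")
         .parquet("hdfs:///offload/controls/logging_data"))
    ```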

  1. TabSQL: a MySQL tool to facilitate mapping user data to public databases.

    PubMed

    Xia, Xiao-Qin; McClelland, Michael; Wang, Yipeng

    2010-06-23

    With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data.
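
    The kind of query TabSQL enables can be sketched as follows (this is not TabSQL itself; sqlite3 stands in for MySQL, and the tab-delimited file names and columns are hypothetical):

    ```python
    import csv, sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE go_annot (gene TEXT, go_id TEXT, term TEXT)")
    conn.execute("CREATE TABLE my_genes (gene TEXT, fold_change REAL)")

    # Load a tab-delimited annotation table and a user's gene list (hypothetical files).
    with open("go_annotations.tsv") as fh:                 # gene<TAB>go_id<TAB>term
        conn.executemany("INSERT INTO go_annot VALUES (?, ?, ?)",
                         csv.reader(fh, delimiter="\t"))
    with open("my_gene_list.tsv") as fh:                   # gene<TAB>fold_change
        conn.executemany("INSERT INTO my_genes VALUES (?, ?)",
                         csv.reader(fh, delimiter="\t"))

    # Join the user's data with the public annotation table without programming
    # against either source's native interface.
    rows = conn.execute("""
        SELECT m.gene, m.fold_change, a.go_id, a.term
        FROM my_genes m JOIN go_annot a ON a.gene = m.gene
        ORDER BY m.fold_change DESC
    """).fetchall()
    for row in rows:
        print(row)
    ```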

  2. TabSQL: a MySQL tool to facilitate mapping user data to public databases

    PubMed Central

    2010-01-01

    Background With advances in high-throughput genomics and proteomics, it is challenging for biologists to deal with large data files and to map their data to annotations in public databases. Results We developed TabSQL, a MySQL-based application tool, for viewing, filtering and querying data files with large numbers of rows. TabSQL provides functions for downloading and installing table files from public databases including the Gene Ontology database (GO), the Ensembl databases, and genome databases from the UCSC genome bioinformatics site. Any other database that provides tab-delimited flat files can also be imported. The downloaded gene annotation tables can be queried together with users' data in TabSQL using either a graphic interface or command line. Conclusions TabSQL allows queries across the user's data and public databases without programming. It is a convenient tool for biologists to annotate and enrich their data. PMID:20573251

  3. Choosing an Optimal Database for Protein Identification from Tandem Mass Spectrometry Data.

    PubMed

    Kumar, Dhirendra; Yadav, Amit Kumar; Dash, Debasis

    2017-01-01

    Database searching is the preferred method for protein identification from digital spectra of mass to charge ratios (m/z) detected for protein samples through mass spectrometers. The search database is one of the major influencing factors in discovering proteins present in the sample and thus in deriving biological conclusions. In most cases the choice of search database is arbitrary. Here we describe common search databases used in proteomic studies and their impact on final list of identified proteins. We also elaborate upon factors like composition and size of the search database that can influence the protein identification process. In conclusion, we suggest that choice of the database depends on the type of inferences to be derived from proteomics data. However, making additional efforts to build a compact and concise database for a targeted question should generally be rewarding in achieving confident protein identifications.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, Jennifer; Sandberg, Tami

    The Wind-Wildlife Impacts Literature Database (WILD), formerly known as the Avian Literature Database, was created in 1997. The goal of the database was to begin tracking the research that detailed the potential impact of wind energy development on birds. The Avian Literature Database was originally housed on a proprietary platform called Livelink ECM from OpenText and maintained by in-house technical staff. The initial set of records was added by library staff. A vital part of the newly launched Drupal-based WILD database is the Bibliography module. Many of the resources included in the database have digital object identifiers (DOI). The bibliographic information for any item that has a DOI can be imported into the database using this module. This greatly reduces the amount of manual data entry required to add records to the database. The content available in WILD is international in scope, which can be easily discerned by looking at the tags available in the browse menu.
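
    The DOI-based import idea can be illustrated with a short sketch (not the Drupal Bibliography module itself) that resolves a DOI to bibliographic metadata via the public Crossref REST API; the DOI shown is only a placeholder:

    ```python
    import requests

    def fetch_citation(doi):
        """Resolve a DOI to basic bibliographic metadata using the Crossref API."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
        resp.raise_for_status()
        msg = resp.json()["message"]
        return {
            "title": (msg.get("title") or [""])[0],
            "authors": [f"{a.get('given', '')} {a.get('family', '')}".strip()
                        for a in msg.get("author", [])],
            "journal": (msg.get("container-title") or [""])[0],
            "year": msg.get("issued", {}).get("date-parts", [[None]])[0][0],
            "doi": doi,
        }

    record = fetch_citation("10.1000/example-doi")   # placeholder DOI
    print(record)
    ```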

  5. A World Wide Web (WWW) server database engine for an organelle database, MitoDat.

    PubMed

    Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S

    1996-03-01

    We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
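
    A generic sketch of dbEngine's core capability, combining search terms with AND or OR and returning hits as hypertext links (this is not the original CGI code; the records and URLs are invented for illustration):

    ```python
    # Toy record set standing in for database entries.
    records = [
        {"id": "M001", "text": "ATP synthase subunit, mitochondrial inner membrane"},
        {"id": "M002", "text": "cytochrome c oxidase assembly factor"},
        {"id": "M003", "text": "mitochondrial ribosomal protein, small subunit"},
    ]

    def search(terms, mode="AND"):
        """Return records matching all terms (AND) or any term (OR)."""
        combine = all if mode == "AND" else any
        return [r for r in records
                if combine(t.lower() in r["text"].lower() for t in terms)]

    def as_links(hits, base_url="https://example.org/mitodat?id="):
        """Format hits as hypertext links, as a CGI search engine would."""
        return [f'<a href="{base_url}{r["id"]}">{r["id"]}</a>' for r in hits]

    print(as_links(search(["mitochondrial", "subunit"], mode="AND")))  # M001, M003
    print(as_links(search(["oxidase", "ribosomal"], mode="OR")))       # M002, M003
    ```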

  6. Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms

    NASA Astrophysics Data System (ADS)

    Kumar, K.; Ledoux, H.; Stoter, J.

    2016-06-01

    Point cloud data are an important source of 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of point clouds exceeding the billion mark even for a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
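
    The contrast between face-based and star-based storage can be sketched on a toy TIN (this is not the paper's implementation; a real star-based structure keeps each star ordered counter-clockwise so triangles can be reconstructed, whereas the sketch only keeps adjacency sets):

    ```python
    # Four vertices of a toy TIN: id -> (x, y, z).
    vertices = {1: (0.0, 0.0, 5.0), 2: (1.0, 0.0, 6.0),
                3: (1.0, 1.0, 7.0), 4: (0.0, 1.0, 4.5)}

    # Face-based representation: explicit triangle records, one row per triangle,
    # as they might sit in a DBMS table.
    triangles = [(1, 2, 3), (1, 3, 4)]

    # Star-based representation: for every vertex, the set of vertices it is linked to.
    def build_stars(tris):
        stars = {}
        for a, b, c in tris:
            for v, w in ((a, b), (b, c), (c, a), (b, a), (c, b), (a, c)):
                stars.setdefault(v, set()).add(w)
        return stars

    stars = build_stars(triangles)
    print(stars[1])        # vertex 1 is linked to vertices 2, 3 and 4
    print(len(triangles))  # 2 triangles stored explicitly in the face-based form
    ```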

  7. Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Burnett, R.A.; Carter, R.J.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC clients and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertain to an individual EOC's jurisdiction are stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.

  8. Hydrogen Leak Detection Sensor Database

    NASA Technical Reports Server (NTRS)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  9. Computer Science Research in Europe.

    DTIC Science & Technology

    1984-08-29

    Topics receiving most attention include multi-databases and their structure, the dependencies between databases and multi-databases, and distributed systems, with active work at Newcastle University, UK, and at INRIA. Having completed a multi-database system for distributed data management, INRIA is now working on the communications requirements of distributed database systems and on protocols for checking... A project called SIRIUS was established at INRIA in 1977.

  10. Going public: accessing urban data and producing population estimates using the urban FIA database

    Treesearch

    Chris Edgar; Mark Hatfield

    2015-01-01

    In this presentation we describe the urban forest inventory database (U-FIADB) and demonstrate how to use the database to produce population estimates. Examples from the recently completed City of Austin inventory will be used to demonstrate the capabilities of the database. We will identify several features of U-FIADB that are different from the FIA database (FIADB)...

  11. SolCyc: a database hub at the Sol Genomics Network (SGN) for the manual curation of metabolic networks in Solanum and Nicotiana specific databases

    PubMed Central

    Foerster, Hartmut; Bombarely, Aureliano; Battey, James N D; Sierro, Nicolas; Ivanov, Nikolai V; Mueller, Lukas A

    2018-01-01

    SolCyc is the entry portal to pathway/genome databases (PGDBs) for major species of the Solanaceae family hosted at the Sol Genomics Network. Currently, SolCyc comprises six organism-specific PGDBs for tomato, potato, pepper, petunia, tobacco and one Rubiaceae, coffee. The metabolic networks of those PGDBs have been computationally predicted by the PathoLogic component of the Pathway Tools software using the manually curated multi-domain database MetaCyc (http://www.metacyc.org/) as reference. SolCyc has recently been extended by taxon-specific databases, i.e. the family-specific SolanaCyc database, containing only curated data pertinent to species of the nightshade family, and NicotianaCyc, a genus-specific database that stores all relevant metabolic data of the Nicotiana genus. Through manual curation of the published literature, new metabolic pathways have been created in those databases, which are complemented by the continuously updated, relevant species-specific pathways from MetaCyc. At present, SolanaCyc comprises 199 pathways and 29 superpathways, and NicotianaCyc accounts for 72 pathways and 13 superpathways. Curator-maintained, taxon-specific databases such as SolanaCyc and NicotianaCyc are characterized by an enrichment of data specific to these taxa and are free of falsely predicted pathways. Both databases have been used to update recently created Nicotiana-specific databases for Nicotiana tabacum, Nicotiana benthamiana, Nicotiana sylvestris and Nicotiana tomentosiformis by propagating verifiable data into those PGDBs. In addition, in-depth curation of the pathways in N. tabacum has been carried out, which resulted in the elimination of 156 pathways from the 569 pathways predicted by Pathway Tools. Together, in-depth curation of the predicted pathway network and supplementation with curated data from taxon-specific databases have substantially improved the curation status of the species-specific N. tabacum PGDB. The implementation of this strategy will significantly advance the curation status of all organism-specific databases in SolCyc, resulting in improved database accuracy, data analysis and visualization of biochemical networks in those species. Database URL: https://solgenomics.net/tools/solcyc/ PMID:29762652

  12. Literature searches on Ayurveda: An update.

    PubMed

    Aggithaya, Madhur G; Narahari, Saravu R

    2015-01-01

    The journals that publish on Ayurveda have increasingly been indexed by popular medical databases in recent years. However, many Eastern journals are not indexed in biomedical journal databases such as PubMed. Literature searches for Ayurveda continue to be challenging due to the nonavailability of active, unbiased, dedicated databases for Ayurvedic literature. In 2010, the authors identified 46 databases that can be used for systematic searches of Ayurvedic papers and theses. This update reviews that earlier recommendation and identifies the currently relevant databases, with the aim of updating the Ayurveda literature search strategy so as to retrieve the maximum number of publications. The authors used psoriasis as an example to search the previously listed databases and to identify new ones. The population, intervention, control, and outcome table included keywords related to psoriasis and Ayurvedic terminologies for skin diseases. The current citation update status, search results, and search options of the previous databases were assessed. Eight search strategies were developed. One hundred and five journals, both biomedical and Ayurvedic, that publish on Ayurveda were identified. Variability in the databases was explored to identify bias in journal citation. Five among the 46 databases are now relevant - the AYUSH Research Portal, the Annotated Bibliography of Indian Medicine, the Digital Helpline for Ayurveda Research Articles (DHARA), PubMed, and the Directory of Open Access Journals. Search options in these databases are not uniform, and only PubMed allows a complex search strategy. "The Researches in Ayurveda" and the "Ayurvedic Research Database" (ARD) are important grey resources for hand searching. About 44/105 (41.5%) of the journals publishing Ayurvedic studies are not indexed in any database. Only 11/105 (10.4%) exclusive Ayurveda journals are indexed in PubMed. The AYUSH Research Portal and DHARA are the two major portals launched after 2010. It is mandatory to search PubMed and the four other databases because all five carry citations from different groups of journals. Hand searching is important to identify Ayurveda publications that are not indexed elsewhere. Availability information for citations in Ayurveda libraries from the National Union Catalogue of Scientific Serials in India, if regularly updated, will improve the efficacy of hand searching. A grey database (ARD) contains unpublished PG/PhD theses. The AYUSH portal, DHARA (funded by the Ministry of AYUSH), and ARD should be merged to form a single, larger database for Ayurveda literature searches.

  13. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    PubMed

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differ in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and the Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distribution were clinically similar. Overall, there was variation in the prevalence of comorbidities and the rates of postoperative complications between databases. As an example, NSQIP recorded more than twice the prevalence of obesity of NIS, while HAC and MED recorded more than twice the prevalence of diabetes of NSQIP. Rates of deep infection and stroke 30 days after THA differed by more than 2-fold across the databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Differences in the Reporting of Racial and Socioeconomic Disparities among Three Large National Databases for Breast Reconstruction.

    PubMed

    Kamali, Parisa; Zettervall, Sara L; Wu, Winona; Ibrahim, Ahmed M S; Medin, Caroline; Rakhorst, Hinne A; Schermerhorn, Marc L; Lee, Bernard T; Lin, Samuel J

    2017-04-01

    Research derived from large-volume databases plays an increasing role in the development of clinical guidelines and health policy. In breast cancer research, the Surveillance, Epidemiology and End Results, National Surgical Quality Improvement Program, and Nationwide Inpatient Sample databases are widely used. This study aims to compare the trends in immediate breast reconstruction and identify the drawbacks and benefits of each database. Patients with invasive breast cancer and ductal carcinoma in situ were identified from each database (2005-2012). Trends of immediate breast reconstruction over time were evaluated. Patient demographics and comorbidities were compared. Subgroup analysis of immediate breast reconstruction use per race was conducted. Within the three databases, 1.2 million patients were studied. Immediate breast reconstruction in invasive breast cancer patients increased significantly over time in all databases. A similar significant upward trend was seen in ductal carcinoma in situ patients. Significant differences in immediate breast reconstruction rates were seen among races; and the disparity differed among the three databases. Rates of comorbidities were similar among the three databases. There has been a significant increase in immediate breast reconstruction; however, the extent of the reporting of overall immediate breast reconstruction rates and of racial disparities differs significantly among databases. The Nationwide Inpatient Sample and the National Surgical Quality Improvement Program report similar findings, with the Surveillance, Epidemiology and End Results database reporting results significantly lower in several categories. These findings suggest that use of the Surveillance, Epidemiology and End Results database may not be universally generalizable to the entire U.S.

  15. Can different primary care databases produce comparable estimates of burden of disease: results of a study exploring venous leg ulceration.

    PubMed

    Petherick, Emily S; Pickett, Kate E; Cullum, Nicky A

    2015-08-01

    Primary care databases from the UK have been widely used to produce evidence on the epidemiology and health service usage of a wide range of conditions. To date there have been few evaluations of the comparability of estimates between different sources of these data. The aim was to estimate the comparability of two widely used primary care databases, the Health Improvement Network Database (THIN) and the General Practice Research Database (GPRD), using venous leg ulceration as an exemplar condition, in a cross prospective cohort comparison of the GPRD and THIN databases using data from 1998 to 2006. A data set was extracted from both databases containing all cases of persons aged 20 years or greater with a database diagnosis of venous leg ulceration recorded in the databases for the period 1998-2006. Annual rates of incidence and prevalence of venous leg ulceration were calculated within each database, standardized to the European standard population and compared using standardized rate ratios. Comparable estimates of venous leg ulcer incidence from the GPRD and THIN databases could be obtained using data from 2000 to 2006, and of prevalence using data from 2001 to 2006. Recent data collected by these two databases are more likely to produce comparable results of the burden of venous leg ulceration. These results require confirmation in other disease areas to enable researchers to have confidence in the comparability of findings from these two widely used primary care research resources. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Process of formulating USDA's Expanded Flavonoid Database for the Assessment of Dietary intakes: a new tool for epidemiological research.

    PubMed

    Bhagwat, Seema A; Haytowitz, David B; Wasswa-Kintu, Shirley I; Pehrsson, Pamela R

    2015-08-14

    The scientific community continues to be interested in potential links between flavonoid intakes and beneficial health effects associated with certain chronic diseases such as CVD, some cancers and type 2 diabetes. Three separate flavonoid databases (Flavonoids, Isoflavones and Proanthocyanidins) developed by the USDA Agricultural Research Service since 1999 with frequent updates have been used to estimate dietary flavonoid intakes, and investigate their health effects. However, each of these databases contains only a limited number of foods. The USDA has constructed a new Expanded Flavonoids Database for approximately 2900 commonly consumed foods, using analytical values from their existing flavonoid databases (Flavonoid Release 3.1 and Isoflavone Release 2.0) as the foundation to calculate values for all the twenty-nine flavonoid compounds included in these two databases. Thus, the new database provides full flavonoid profiles for twenty-nine predominant dietary flavonoid compounds for every food in the database. Original analytical values in Flavonoid Release 3.1 and Isoflavone Release 2.0 for corresponding foods were retained in the newly constructed database. Proanthocyanidins are not included in the expanded database. The process of formulating the new database includes various calculation techniques. This article describes the process of populating values for the twenty-nine flavonoid compounds for every food in the dataset, along with challenges encountered and resolutions suggested. The new expanded flavonoid database released on the Nutrient Data Laboratory's website would provide uniformity in estimations of flavonoid content in foods and will be a valuable tool for epidemiological studies to assess dietary intakes.

  17. NeMedPlant: a database of therapeutic applications and chemical constituents of medicinal plants from north-east region of India

    PubMed Central

    Meetei, Potshangbam Angamba; Singh, Pankaj; Nongdam, Potshangbam; Prabhu, N Prakash; Rathore, RS; Vindal, Vaibhav

    2012-01-01

    The North-East region of India is one of the twelve mega-biodiversity regions, containing many rare and endangered species. A curated database of medicinal and aromatic plants from the region, called NeMedPlant, has been developed. The database contains traditional, scientific and medicinal information about plants and their active constituents, obtained from scholarly literature and local sources. The database is cross-linked with major biochemical databases and analytical tools. The integrated database provides a resource for investigations into hitherto unexplored medicinal plants and serves to speed up the discovery of natural products-based drugs. Availability: The database is available for free at http://bif.uohyd.ac.in/nemedplant/ or http://202.41.85.11/nemedplant/ PMID:22419844

  18. The BDNYC database of low-mass stars, brown dwarfs, and planetary mass companions

    NASA Astrophysics Data System (ADS)

    Cruz, Kelle; Rodriguez, David; Filippazzo, Joseph; Gonzales, Eileen; Faherty, Jacqueline K.; Rice, Emily; BDNYC

    2018-01-01

    We present a web-interface to a database of low-mass stars, brown dwarfs, and planetary mass companions. Users can send SELECT SQL queries to the database, perform searches by coordinates or name, check the database inventory on specified objects, and even plot spectra interactively. The initial version of this database contains information for 198 objects and version 2 will contain over 1000 objects. The database currently includes photometric data from 2MASS, WISE, and Spitzer and version 2 will include a significant portion of the publicly available optical and NIR spectra for brown dwarfs. The database is maintained and curated by the BDNYC research group and we welcome contributions from other researchers via GitHub.
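
    The kind of SELECT query a user might send through such an interface can be sketched against a local stand-in database (the table and column names are hypothetical, not the BDNYC schema):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sources (id INTEGER, shortname TEXT, ra REAL, dec REAL)")
    conn.executemany("INSERT INTO sources VALUES (?, ?, ?, ?)",
                     [(1, "0355+1133", 58.93, 11.55),
                      (2, "2208+2921", 332.05, 29.36)])

    # A simple box search around a coordinate, similar to a search-by-coordinates request.
    query = """
        SELECT shortname, ra, dec FROM sources
        WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?
    """
    print(conn.execute(query, (58.0, 60.0, 11.0, 12.0)).fetchall())
    ```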

  19. A reservoir morphology database for the conterminous United States

    USGS Publications Warehouse

    Rodgers, Kirk D.

    2017-09-13

    The U.S. Geological Survey, in cooperation with the Reservoir Fisheries Habitat Partnership, combined multiple national databases to create one comprehensive national reservoir database and to calculate new morphological metrics for 3,828 reservoirs. These new metrics include, but are not limited to, shoreline development index, index of basin permanence, development of volume, and other descriptive metrics based on established morphometric formulas. The new database also contains modeled chemical and physical metrics. Because of the nature of the existing databases used to compile the Reservoir Morphology Database and the inherent missing data, some metrics were not populated. One comprehensive database will assist water-resource managers in their understanding of local reservoir morphology and water chemistry characteristics throughout the continental United States.

  20. Microbial properties database editor tutorial

    USDA-ARS?s Scientific Manuscript database

    A Microbial Properties Database Editor (MPDBE) has been developed to help consolidate microbialrelevant data to populate a microbial database and support a database editor by which an authorized user can modify physico-microbial properties related to microbial indicators and pathogens. Physical prop...

  1. NATIVE HEALTH DATABASES: NATIVE HEALTH RESEARCH DATABASE (NHRD)

    EPA Science Inventory

    The Native Health Databases contain bibliographic information and abstracts of health-related articles, reports, surveys, and other resource documents pertaining to the health and health care of American Indians, Alaska Natives, and Canadian First Nations. The databases provide i...

  2. Freshwater Biological Traits Database (Data Sources)

    EPA Science Inventory

    When EPA release the final report, Freshwater Biological Traits Database, it referenced numerous data sources that are included below. The Traits Database report covers the development of a database of freshwater biological traits with additional traits that are relevan...

  3. Microbial Properties Database Editor Tutorial

    EPA Science Inventory

    A Microbial Properties Database Editor (MPDBE) has been developed to help consolidate microbial-relevant data to populate a microbial database and support a database editor by which an authorized user can modify physico-microbial properties related to microbial indicators and pat...

  4. TS-SRP/PACK - COMPUTER PROGRAMS TO CHARACTERIZE ALLOYS AND PREDICT CYCLIC LIFE USING THE TOTAL STRAIN VERSION OF STRAINRANGE PARTITIONING

    NASA Technical Reports Server (NTRS)

    Saltsman, J. F.

    1994-01-01

    TS-SRP/PACK is a set of computer programs for characterizing and predicting fatigue and creep-fatigue resistance of metallic materials in the high-temperature, long-life regime for isothermal and nonisothermal fatigue. The programs use the total strain version of the Strainrange Partitioning (TS-SRP). The user should be thoroughly familiar with the TS-SRP method before attempting to use any of these programs. The document for this program includes a theory manual as well as a detailed user's manual with a tutorial to guide the user in the proper use of TS-SRP. An extensive database has also been developed in a parallel effort. This database is an excellent source of high-temperature, creep-fatigue test data and can be used with other life-prediction methods as well. Five programs are included in TS-SRP/PACK along with the alloy database. The TABLE program is used to print the datasets, which are in NAMELIST format, in a reader friendly format. INDATA is used to create new datasets or add to existing ones. The FAIL program is used to characterize the failure behavior of an alloy as given by the constants in the strainrange-life relations used by the total strain version of SRP (TS-SRP) and the inelastic strainrange-based version of SRP. The program FLOW is used to characterize the flow behavior (the constitutive response) of an alloy as given by the constants in the flow equations used by TS-SRP. Finally, LIFE is used to predict the life of a specified cycle, using the constants characterizing failure and flow behavior determined by FAIL and FLOW. LIFE is written in interpretive BASIC to avoid compiling and linking every time the equation constants are changed. Four out of five programs in this package are written in FORTRAN 77 for IBM PC series and compatible computers running MS-DOS and are designed to read data using the NAMELIST format statement. The fifth is written in BASIC version 3.0 for IBM PC series and compatible computers running MS-DOS version 3.10. The executables require at least 239K of memory and DOS 3.1 or higher. To compile the source, a Lahey FORTRAN compiler is required. Source code modifications will be necessary if the compiler to be used does not support NAMELIST input. Probably the easiest revision to make is to use a list-directed READ statement. The standard distribution medium for this program is a set of two 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. TS-SRP/PACK was developed in 1992.

  5. Clinical decision support tools: personal digital assistant versus online dietary supplement databases.

    PubMed

    Clauson, Kevin A; Polen, Hyla H; Peak, Amy S; Marsh, Wallace A; DiScala, Sandra L

    2008-11-01

    Clinical decision support tools (CDSTs) on personal digital assistants (PDAs) and online databases assist healthcare practitioners who make decisions about dietary supplements. To assess and compare the content of PDA dietary supplement databases and their online counterparts used as CDSTs. A total of 102 question-and-answer pairs were developed within 10 weighted categories of the most clinically relevant aspects of dietary supplement therapy. PDA versions of AltMedDex, Lexi-Natural, Natural Medicines Comprehensive Database, and Natural Standard and their online counterparts were assessed by scope (percent of correct answers present), completeness (3-point scale), ease of use, and a composite score integrating all 3 criteria. Descriptive statistics and inferential statistics, including a chi(2) test, Scheffé's multiple comparison test, McNemar's test, and the Wilcoxon signed rank test were used to analyze data. The scope scores for PDA databases were: Natural Medicines Comprehensive Database 84.3%, Natural Standard 58.8%, Lexi-Natural 50.0%, and AltMedDex 36.3%, with Natural Medicines Comprehensive Database statistically superior (p < 0.01). Completeness scores were: Natural Medicines Comprehensive Database 78.4%, Natural Standard 51.0%, Lexi-Natural 43.5%, and AltMedDex 29.7%. Lexi-Natural was superior in ease of use (p < 0.01). Composite scores for PDA databases were: Natural Medicines Comprehensive Database 79.3, Natural Standard 53.0, Lexi-Natural 48.0, and AltMedDex 32.5, with Natural Medicines Comprehensive Database superior (p < 0.01). There was no difference between the scope for PDA and online database pairs with Lexi-Natural (50.0% and 53.9%, respectively) or Natural Medicines Comprehensive Database (84.3% and 84.3%, respectively) (p > 0.05), whereas differences existed for AltMedDex (36.3% vs 74.5%, respectively) and Natural Standard (58.8% vs 80.4%, respectively) (p < 0.01). For composite scores, AltMedDex and Natural Standard online were better than their PDA counterparts (p < 0.01). Natural Medicines Comprehensive Database achieved significantly higher scope, completeness, and composite scores compared with other dietary supplement PDA CDSTs in this study. There was no difference between the PDA and online databases for Lexi-Natural and Natural Medicines Comprehensive Database, whereas online versions of AltMedDex and Natural Standard were significantly better than their PDA counterparts.

  6. Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences

    PubMed Central

    Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi

    2006-01-01

    Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384
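
    The same merge-and-query pattern can be sketched with rdflib in Python (this is not the paper's D2RQ/Racer/nRQL pipeline; the file names and property IRIs are hypothetical):

    ```python
    from rdflib import Graph

    # Merge two OWL/RDF graphs that were translated from separate databases
    # (hypothetical export files).
    merged = Graph()
    merged.parse("neurondb.owl", format="xml")
    merged.parse("cocodat.owl", format="xml")

    # One query spanning facts that originate in different source databases.
    results = merged.query("""
        PREFIX ex: <http://example.org/neuro#>
        SELECT ?neuron ?receptor ?current
        WHERE {
            ?neuron ex:expressesReceptor  ?receptor .   # fact from database 1
            ?neuron ex:hasMembraneCurrent ?current  .   # fact from database 2
        }
    """)
    for row in results:
        print(row.neuron, row.receptor, row.current)
    ```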

  7. BDVC (Bimodal Database of Violent Content): A database of violent audio and video

    NASA Astrophysics Data System (ADS)

    Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro

    2017-09-01

    Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications that handle a single type of content, such as text, voice or images; bimodal databases, by contrast, allow two different types of content, such as audio-video or image-text, to be associated semantically. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia content through the semantic relation that associates the actions in both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing allows an increase in semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a total duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool for the generation of applications for the semantic web.

  8. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules—Search Options and Applications in Food Science

    PubMed Central

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-01-01

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also being introduced into chemistry and enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or for the structure, annotated with the help of chemical codes or drawn using molecule-editing software. Data mining options may be enhanced by navigating a network of links and cross-links between databases. The databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used to calculate the daily intake of bioactive compounds, to predict the metabolism and biological activity of food components, and to predict interactions between food components and drugs. PMID:27929431
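
    As an illustration of the name-based retrieval described above, the sketch below queries PubChem's PUG REST service, chosen here as one example of a general small-molecule database (the review covers many others); the particular compound and the properties requested are illustrative assumptions.

```python
# Sketch of name-based compound lookup against PubChem's PUG REST API,
# as one example of the general small-molecule databases discussed above.
# The compound and property list are illustrative choices.
import requests

def lookup_compound(name):
    url = (
        "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
        f"{name}/property/MolecularFormula,MolecularWeight/JSON"
    )
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()["PropertyTable"]["Properties"][0]

print(lookup_compound("quercetin"))  # a food-relevant flavonoid
```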

  9. Internet Databases of the Properties, Enzymatic Reactions, and Metabolism of Small Molecules-Search Options and Applications in Food Science.

    PubMed

    Minkiewicz, Piotr; Darewicz, Małgorzata; Iwaniak, Anna; Bucholska, Justyna; Starowicz, Piotr; Czyrko, Emilia

    2016-12-06

    Internet databases of small molecules, their enzymatic reactions, and metabolism have emerged as useful tools in food science. Database searching is also being introduced into chemistry and enzymology courses for food technology students. Such resources support the search for information about single compounds and facilitate secondary analyses of large datasets. Information can be retrieved from databases by searching for the compound name or for the structure, annotated with the help of chemical codes or drawn using molecule-editing software. Data mining options may be enhanced by navigating a network of links and cross-links between databases. The databases reviewed in this article belong to two classes: tools concerning small molecules (including general and specialized databases annotating food components) and tools annotating enzymes and metabolism. Some problems associated with database application are also discussed. Data summarized in computer databases may be used to calculate the daily intake of bioactive compounds, to predict the metabolism and biological activity of food components, and to predict interactions between food components and drugs.

  10. [Establishment of a regional pelvic trauma database in Hunan Province].

    PubMed

    Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua

    2017-04-28

    To establish a database for pelvic trauma in Hunan Province and to begin multicenter pelvic trauma registration.
 Methods: To establish the database, the literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was reviewed, and the actual conditions of pelvic trauma rescue in Hunan Province were taken into account. The database was built on PostgreSQL using the Java 1.6 programming language.
 Results: The complex procedure of pelvic trauma rescue was described in a structured form. The database covers general patient information, injury condition, prehospital rescue, condition on admission, in-hospital treatment, status at discharge, diagnosis, classification, complications, trauma scoring, and therapeutic effect. The database can be accessed over the internet through a browser/server architecture. Its functions include patient information management, data export, history query, progress reporting, video and image management, and personal information management.
 Conclusion: A whole-life-cycle pelvic trauma database has been successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.
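
    As a rough illustration of the registry structure described above, the sketch below creates a small relational table covering a few of the listed data fields. It uses SQLite so the example is self-contained rather than the PostgreSQL/Java stack named in the abstract, and the table and column names are assumptions.

```python
# Minimal sketch of a pelvic-trauma registry table covering a few of the
# fields listed above. SQLite is used for a self-contained example; the
# system in the abstract is built on PostgreSQL and Java, and the column
# names here are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE pelvic_trauma_case (
        case_id          INTEGER PRIMARY KEY,
        patient_name     TEXT,
        injury_condition TEXT,
        prehospital_care TEXT,
        admission_status TEXT,
        treatment        TEXT,
        discharge_status TEXT,
        trauma_score     REAL
    )
""")
conn.execute(
    "INSERT INTO pelvic_trauma_case (patient_name, injury_condition, trauma_score) "
    "VALUES (?, ?, ?)",
    ("Example Patient", "pelvic ring fracture", 25.0),
)
for row in conn.execute(
    "SELECT case_id, patient_name, trauma_score FROM pelvic_trauma_case"
):
    print(row)
conn.close()
```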

  11. Creation of the NaSCoRD Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William

    This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, in order to improve the design and operation of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and the types of information that can be retrieved from them.

  12. Toward unification of taxonomy databases in a distributed computer environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    The taxonomy databases constructed alongside the DNA databases of the international DNA data banks are powerful electronic dictionaries that aid computer-based biological research. These taxonomy databases are, however, not consistently unified in a relational format. Consistent unification would be useful for comparing research results and for identifying future research directions from existing results, in particular for comparing phylogenetic trees inferred from molecular data with those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate the inconsistencies (errors) present in them. Inconsistencies arise particularly when the existing taxonomy databases are restructured, since the classification rules used to construct the taxonomy have changed rapidly with biological advances. A repair system is needed to remove inconsistencies within each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
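
    To make the idea of cross-bank mismatch detection concrete, the sketch below compares the parent assignments of taxa shared by two hypothetical taxonomy tables; the records and field layout are invented for illustration and are not drawn from the data banks mentioned above.

```python
# Sketch of detecting mismatches between two taxonomy tables: taxa that
# appear in both banks but are assigned different parents. The records
# below are invented for illustration.
bank_a = {
    "Homo sapiens": "Homo",
    "Homo": "Hominidae",
    "Hominidae": "Primates",
}
bank_b = {
    "Homo sapiens": "Homo",
    "Homo": "Homininae",     # disagrees with bank_a on the parent of "Homo"
    "Hominidae": "Primates",
}

mismatches = {
    taxon: (bank_a[taxon], bank_b[taxon])
    for taxon in bank_a.keys() & bank_b.keys()
    if bank_a[taxon] != bank_b[taxon]
}
print(mismatches)  # {'Homo': ('Hominidae', 'Homininae')}
```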

  13. [Role and management of cancer clinical database in the application of gastric cancer precision medicine].

    PubMed

    Li, Yuanfang; Zhou, Zhiwei

    2016-02-01

    Precision medicine is a new medical concept and model based on personalized medicine, rapid progress in genome sequencing technology, and the cross-application of bioinformatics and big-data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analyses of its characteristics, pathogenesis, and other core issues. A cancer clinical database is important for promoting the development of precision medicine, so close attention must be paid to its construction and management. The clinical database of Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank, and a medical imaging database. To ensure data quality, the design and management of the database follow a strict standard operating procedure (SOP) model. Data sharing is an important way to advance medical research in the era of medical big data, and the construction and management of clinical databases must be continually strengthened and innovated.

  14. The Moroccan Genetic Disease Database (MGDD): a database for DNA variations related to inherited disorders and disease susceptibility.

    PubMed

    Charoute, Hicham; Nahili, Halima; Abidi, Omar; Gabi, Khalid; Rouba, Hassan; Fakiri, Malika; Barakat, Abdelhamid

    2014-03-01

    National and ethnic mutation databases provide comprehensive information about genetic variations reported in a population or an ethnic group. In this paper, we present the Moroccan Genetic Disease Database (MGDD), a catalogue of genetic data related to diseases identified in the Moroccan population. We used the PubMed, Web of Science, and Google Scholar databases to identify articles published up to April 2013. The database is designed and implemented on a three-tier model using the MySQL relational database and the PHP programming language. To date, the database contains 425 mutations and 208 polymorphisms found in 301 genes and 259 diseases. Most Mendelian diseases in the Moroccan population follow an autosomal recessive mode of inheritance (74.17%) and affect endocrine, nutritional, and metabolic physiology. The MGDD database provides reference information for researchers, clinicians, and health professionals through a user-friendly web interface. Its content should be useful for improving research in human molecular genetics, disease diagnosis, and the design of association studies. MGDD can be publicly accessed at http://mgdd.pasteur.ma.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, William M. J.; Liu, Michael C.; Magnier, Eugene A.

    We present initial results from a wide-field (30,000 deg²) search for L/T transition brown dwarfs within 25 pc using the Pan-STARRS1 and Wide-field Infrared Survey Explorer (WISE) surveys. Previous large-area searches have been incomplete for L/T transition dwarfs, because these objects are faint in optical bands and have near-infrared (near-IR) colors that are difficult to distinguish from background stars. To overcome these obstacles, we have cross-matched the Pan-STARRS1 (optical) and WISE (mid-IR) catalogs to produce a unique multi-wavelength database for finding ultracool dwarfs. As part of our initial discoveries, we have identified seven brown dwarfs in the L/T transition within 9-15 pc of the Sun. The L9.5 dwarf PSO J140.2308+45.6487 and the T1.5 dwarf PSO J307.6784+07.8263 (both independently discovered by Mace et al.) show possible spectroscopic variability at the Y and J bands. Two more objects in our sample show evidence of photometric J-band variability, and two others are candidate unresolved binaries based on their spectra. We expect our full search to yield a well-defined, volume-limited sample of L/T transition dwarfs that will include many new targets for study of this complex regime. PSO J307.6784+07.8263 in particular may be an excellent candidate for in-depth study of variability, given its brightness (J = 14.2 mag) and proximity (11 pc).
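
    Cross-matching an optical and a mid-IR catalog, as described above, is typically done by pairing each source with its nearest neighbor on the sky within some radius. The sketch below shows this with astropy on two tiny, invented coordinate lists; the 2-arcsecond match radius and the mid-IR positions are assumptions for illustration, not the survey's actual matching criteria.

```python
# Sketch of a positional cross-match between an optical and a mid-IR
# catalog using astropy's nearest-neighbor matching. The coordinates and
# the 2" match radius are illustrative assumptions.
import astropy.units as u
from astropy.coordinates import SkyCoord

optical = SkyCoord(ra=[140.2308, 307.6784] * u.deg, dec=[45.6487, 7.8263] * u.deg)
mid_ir = SkyCoord(ra=[140.2310, 200.0000] * u.deg, dec=[45.6489, -10.0000] * u.deg)

# For each optical source, find the nearest mid-IR source on the sky.
idx, sep2d, _ = optical.match_to_catalog_sky(mid_ir)
matched = sep2d < 2 * u.arcsec   # keep only close pairs

for i, (j, sep, ok) in enumerate(zip(idx, sep2d, matched)):
    status = "match" if ok else "no match"
    print(f"optical {i} -> mid-IR {j}: {sep.to(u.arcsec).value:.2f} arcsec ({status})")
```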

  16. RP-HPLC method using 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate incorporated with normalization technique in principal component analysis to differentiate the bovine, porcine and fish gelatins.

    PubMed

    Azilawati, M I; Hashim, D M; Jamilah, B; Amin, I

    2015-04-01

    The amino acid compositions of bovine, porcine, and fish gelatin were determined by amino acid analysis using 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate as the derivatization reagent. Sixteen amino acids were identified with similar spectral chromatograms. Data pre-treatment by centering and transformation by normalization were performed to provide data more suitable for analysis and easier to interpret. Principal component analysis (PCA) transformed the original data matrix into a number of principal components (PCs): three PCs described 96.5% of the total variance, and the first two PCs explained 91%. The PCA model related the amino acids in the correlation loadings plot to the gelatin groups in the scores plot. Fish gelatin was correlated with threonine, serine, and methionine on the positive side of PC1; bovine gelatin was correlated with the amino acids having non-polar side chains (proline, hydroxyproline, leucine, isoleucine, and valine) on the negative side of PC1; and porcine gelatin was correlated with the amino acids having polar side chains (aspartate, glutamic acid, lysine, and tyrosine) on the negative side of PC2. Verification against the database using 12 samples from commercial gelatin-based products confirmed the grouping patterns and the variable correlations. This quantitative method is therefore very useful as a screening method for determining gelatin from various sources.
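
    The workflow above (normalize and center the composition matrix, extract principal components, then inspect scores and loadings) can be sketched in a few lines; the miniature amino acid composition matrix below is invented for illustration and is not data from the study.

```python
# Sketch of the normalize -> center -> PCA workflow described above,
# applied to a tiny, invented amino-acid composition matrix
# (rows = gelatin samples, columns = amino acids).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

amino_acids = ["Gly", "Pro", "Hyp", "Ser", "Thr"]
X = np.array([
    [33.0, 13.2, 11.0, 2.9, 1.7],   # hypothetical bovine sample
    [33.5, 13.8, 11.5, 2.8, 1.6],   # hypothetical porcine sample
    [34.2, 10.5,  8.0, 5.1, 2.9],   # hypothetical fish sample
])

X_scaled = StandardScaler().fit_transform(X)   # center and scale each column
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # coordinates for the scores plot
loadings = pca.components_                     # variable loadings per PC

print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Scores:\n", scores)
print("Loadings (rows = PCs, columns = amino acids):\n", loadings)
```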

  17. 9 CFR 81.2 - Identification of deer, elk, and moose in interstate commerce.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... is linked to that animal in the CWD National Database or in an approved State database. The second... that animal and herd in the CWD National Database or in an approved State database. (Approved by the...

  18. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  19. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  20. 9 CFR 81.2 - Identification of deer, elk, and moose in interstate commerce.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... is linked to that animal in the CWD National Database or in an approved State database. The second... that animal and herd in the CWD National Database or in an approved State database. (Approved by the...
