Sample records for key electronic databases

  1. Electron Effective-Attenuation-Length Database

    National Institute of Standards and Technology Data Gateway

    SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge)   This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).
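
    A common use of an EAL is to estimate overlayer thickness from the attenuation of a substrate photoelectron peak via the standard exponential model I = I0·exp(−t / (L·sinθ)). The sketch below illustrates that model only; the function name and all numbers are illustrative, not values from the database.

```python
import math

def overlayer_thickness(I_sub, I_sub_clean, eal_nm, theta_deg):
    """Estimate overlayer thickness t (nm) from substrate-peak attenuation
    using I = I0 * exp(-t / (L * sin(theta))), where L is the effective
    attenuation length and theta is the emission angle measured from the
    surface plane (90 degrees = normal emission)."""
    L = eal_nm * math.sin(math.radians(theta_deg))
    return L * math.log(I_sub_clean / I_sub)

# Illustrative numbers (not from the database): EAL = 2.0 nm, normal emission,
# substrate intensity halved by the overlayer.
t = overlayer_thickness(I_sub=0.5, I_sub_clean=1.0, eal_nm=2.0, theta_deg=90.0)
```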

  2. Electron Inelastic-Mean-Free-Path Database

    National Institute of Standards and Technology Data Gateway

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  3. GuiaTreeKey, a multi-access electronic key to identify tree genera in French Guiana.

    PubMed

    Engel, Julien; Brousseau, Louise; Baraloto, Christopher

    2016-01-01

    The tropical rainforest of Amazonia is one of the most species-rich ecosystems on earth, with an estimated 16,000 tree species. Due to this high diversity, botanical identification of trees in the Amazon is difficult, even to genus, often requiring the assistance of parataxonomists or taxonomic specialists. Advances in informatics tools offer a promising opportunity to develop user-friendly electronic keys to improve Amazonian tree identification. Here, we introduce an original multi-access electronic key for the identification of 389 tree genera occurring in French Guiana terra-firme forests, based on a set of 79 morphological characters related to vegetative, floral and fruit characters. Its purpose is to help Amazonian tree identification and to support the dissemination of botanical knowledge to non-specialists, including forest workers, students and researchers from other scientific disciplines. The electronic key is accessible with the free access software Xper², and the database is publicly available on figshare: https://figshare.com/s/75d890b7d707e0ffc9bf (doi: 10.6084/m9.figshare.2682550).

  4. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    National Institute of Standards and Technology Data Gateway

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase)   This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  5. Electron-Impact Ionization Cross Section Database

    National Institute of Standards and Technology Data Gateway

    SRD 107 Electron-Impact Ionization Cross Section Database (Web, free access)   This is a database primarily of total ionization cross sections of molecules by electron impact. The database also includes cross sections for a small number of atoms and energy distributions of ejected electrons for H, He, and H2. The cross sections were calculated using the Binary-Encounter-Bethe (BEB) model, which combines the Mott cross section with the high-incident energy behavior of the Bethe cross section. Selected experimental data are included.
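
    The BEB model admits a compact closed form per molecular orbital. The sketch below implements the standard single-orbital BEB expression σ = S/(t+u+1) · [ln(t)/2 · (1 − 1/t²) + 1 − 1/t − ln(t)/(t+1)] with t = T/B, u = U/B, S = 4πa₀²N(R/B)²; totals for molecules sum over orbitals. This is a minimal sketch of the published formula, not code from the database itself.

```python
import math

A0 = 0.529177e-10   # Bohr radius, m
RYD = 13.6057       # Rydberg energy, eV

def beb_orbital_cross_section(T, B, U, N):
    """BEB ionization cross section for one orbital, in m^2.
    T: incident electron energy (eV); B: orbital binding energy (eV);
    U: orbital kinetic energy (eV); N: occupation number.
    Returns zero below the ionization threshold (T <= B)."""
    if T <= B:
        return 0.0
    t, u = T / B, U / B
    S = 4.0 * math.pi * A0**2 * N * (RYD / B)**2
    return (S / (t + u + 1.0)) * (
        0.5 * math.log(t) * (1.0 - 1.0 / t**2)
        + 1.0 - 1.0 / t
        - math.log(t) / (t + 1.0)
    )

# Atomic hydrogen 1s as a check case (B = U = 13.6057 eV, N = 1):
# at 100 eV this gives a cross section of roughly 0.6 square angstroms.
sigma = beb_orbital_cross_section(T=100.0, B=RYD, U=RYD, N=1)
```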

  6. Successful Keyword Searching: Initiating Research on Popular Topics Using Electronic Databases.

    ERIC Educational Resources Information Center

    MacDonald, Randall M.; MacDonald, Susan Priest

    Students are using electronic resources more than ever before to locate information for assignments. Without the proper search terms, results are incomplete, and students are frustrated. Using the keywords, key people, organizations, and Web sites provided in this book and compiled from the most commonly used databases, students will be able to…

  7. Private database queries based on counterfactual quantum key distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Li; Guo, Fen-Zhuo; Gao, Fei; Liu, Bin; Wen, Qiao-Yan

    2013-08-01

    Based on the fundamental concept of quantum counterfactuality, we propose a protocol to achieve quantum private database queries, which is a theoretical study of how counterfactuality can be employed beyond counterfactual quantum key distribution (QKD). By adding crucial detecting apparatus to the device of QKD, the privacy of both the distrustful user and the database owner can be guaranteed. Furthermore, the proposed private-database-query protocol makes full use of the low efficiency in the counterfactual QKD, and by adjusting the relevant parameters, the protocol obtains excellent flexibility and extensibility.

  8. Electronic Strategies To Manage Key Relationships.

    ERIC Educational Resources Information Center

    Carr, Nora

    2003-01-01

    Describes how Charlotte-Mecklenburg (North Carolina) district used a relational database, e-mail, electronic newsletters, cable television, telecommunications, and the Internet to enhance communications with their constituencies. (PKP)

  9. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  10. Practical private database queries based on a quantum-key-distribution protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakobi, Markus; Humboldt-Universitaet zu Berlin, D-10117 Berlin; Simon, Christoph

    2011-02-15

    Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions inmore » order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.« less
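
    One classical postprocessing step used in schemes of this kind is key "folding": the raw oblivious key (fully known to Bob, partially known to Alice) is cut into substrings that are XOR-added bitwise, shrinking Alice's expected knowledge to roughly one final key bit. The sketch below shows only that folding idea; the function and parameters are illustrative, and the full protocol's security machinery is omitted.

```python
import secrets

def fold_key(raw_key, k):
    """XOR-fold a raw key of length k*n into a final key of length n.
    Bob knows every raw bit; a user who knows each raw bit only with
    some probability knows a folded bit only if she knows all k bits
    that were XORed into it, so folding dilutes her knowledge."""
    n = len(raw_key) // k
    final = [0] * n
    for i in range(k):
        for j in range(n):
            final[j] ^= raw_key[i * n + j]
    return final

# Toy run: fold a random 20-bit raw key four-fold into a 5-bit final key.
raw = [secrets.randbelow(2) for _ in range(20)]
final = fold_key(raw, 4)
```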

  11. SPOT 5/HRS: A Key Source for Navigation Database

    DTIC Science & Technology

    2003-09-02

    Report documentation fragment (only form-field residue survives in the source record): subtitle "SPOT 5/HRS: A Key Source for Navigation Database"; presentation slides ("Filière SPOT", Marc Bernard) noting production from HRS in partnership with IGN (French…

  12. An ab initio electronic transport database for inorganic materials.

    PubMed

    Ricci, Francesco; Chen, Wei; Aydemir, Umut; Snyder, G Jeffrey; Rignanese, Gian-Marco; Jain, Anubhav; Hautier, Geoffroy

    2017-07-04

    Electronic transport in materials is governed by a series of tensorial properties such as conductivity, Seebeck coefficient, and effective mass. These quantities are paramount to the understanding of materials in many fields from thermoelectrics to electronics and photovoltaics. Transport properties can be calculated from a material's band structure using the Boltzmann transport theory framework. We present here the largest computational database of electronic transport properties, based on a large set of 48,000 materials originating from the Materials Project database. Our results were obtained through the interpolation approach developed in the BoltzTraP software, assuming a constant relaxation time. We present the workflow to generate the data, the data validation procedure, and the database structure. Our aim is to target the large community of scientists developing materials selection strategies and performing studies involving transport properties.
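
    In the constant relaxation-time approximation, transport coefficients reduce to energy integrals of a transport distribution Σ(E) weighted by the Fermi-window factor −df/dE: conductivity ∝ L0 and the Seebeck coefficient ∝ −L1/L0. The sketch below evaluates these moments for a toy 3D parabolic band (Σ ∝ E^{3/2}, arbitrary units); it illustrates the framework only and is not BoltzTraP's implementation.

```python
import numpy as np

def transport_moments(mu, kT, n_pts=20000, E_max=3.0):
    """Energy moments L_n = integral of Sigma(E) (E - mu)^n (-df/dE) dE
    for a toy 3D parabolic band with Sigma(E) ~ E^1.5 (arbitrary units),
    constant relaxation time. Conductivity is proportional to L0; the
    Seebeck coefficient is proportional to -L1/L0 for electrons."""
    E = np.linspace(1e-6, E_max, n_pts)                  # energies, eV
    sigma_E = E**1.5                                     # transport distribution
    x = np.clip((E - mu) / kT, -60.0, 60.0)
    neg_dfdE = np.exp(x) / (kT * (1.0 + np.exp(x))**2)   # -df/dE (Fermi-Dirac)
    dE = E[1] - E[0]
    L0 = np.sum(sigma_E * neg_dfdE) * dE
    L1 = np.sum(sigma_E * (E - mu) * neg_dfdE) * dE
    return L0, L1

# Room temperature (kT ~ 0.025 eV), chemical potential 0.1 eV into the band.
L0, L1 = transport_moments(mu=0.1, kT=0.025)
seebeck_sign = -L1 / L0   # negative: the toy carrier is an electron
```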

  13. Academic impact of a public electronic health database: bibliometric analysis of studies using the general practice research database.

    PubMed

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. A total of 749 studies published between 1995 and 2009 with 'General Practice Research Database' as their topics, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of "Pharmacology and Pharmacy", "General and Internal Medicine", and "Public, Environmental and Occupational Health". The UK and United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. A public electronic health database such as the GPRD will promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and to make data more available for research.

  14. An ab initio electronic transport database for inorganic materials

    DOE PAGES

    Ricci, Francesco; Chen, Wei; Aydemir, Umut; ...

    2017-07-04

    Electronic transport in materials is governed by a series of tensorial properties such as conductivity, Seebeck coefficient, and effective mass. These quantities are paramount to the understanding of materials in many fields from thermoelectrics to electronics and photovoltaics. Transport properties can be calculated from a material’s band structure using the Boltzmann transport theory framework. We present here the largest computational database of electronic transport properties based on a large set of 48,000 materials originating from the Materials Project database. Our results were obtained through the interpolation approach developed in the BoltzTraP software, assuming a constant relaxation time. We present the workflow to generate the data, the data validation procedure, and the database structure. In conclusion, our aim is to target the large community of scientists developing materials selection strategies and performing studies involving transport properties.

  15. An ab initio electronic transport database for inorganic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, Francesco; Chen, Wei; Aydemir, Umut

    Electronic transport in materials is governed by a series of tensorial properties such as conductivity, Seebeck coefficient, and effective mass. These quantities are paramount to the understanding of materials in many fields from thermoelectrics to electronics and photovoltaics. Transport properties can be calculated from a material’s band structure using the Boltzmann transport theory framework. We present here the largest computational database of electronic transport properties based on a large set of 48,000 materials originating from the Materials Project database. Our results were obtained through the interpolation approach developed in the BoltzTraP software, assuming a constant relaxation time. We present the workflow to generate the data, the data validation procedure, and the database structure. In conclusion, our aim is to target the large community of scientists developing materials selection strategies and performing studies involving transport properties.

  16. The Efficacy of Multidimensional Constraint Keys in Database Query Performance

    ERIC Educational Resources Information Center

    Cardwell, Leslie K.

    2012-01-01

    This work is intended to introduce a database design method to resolve the two-dimensional complexities inherent in the relational data model and its resulting performance challenges through abstract multidimensional constructs. A multidimensional constraint is derived and utilized to implement an indexed Multidimensional Key (MK) to abstract a…

  17. Development of an electronic database for Acute Pain Service outcomes

    PubMed Central

    Love, Brandy L; Jensen, Louise A; Schopflocher, Donald; Tsui, Ban CH

    2012-01-01

    BACKGROUND: Quality assurance is increasingly important in the current health care climate. An electronic database can be used for tracking patient information and as a research tool to provide quality assurance for patient care. OBJECTIVE: An electronic database was developed for the Acute Pain Service, University of Alberta Hospital (Edmonton, Alberta) to record patient characteristics, identify at-risk populations, compare treatment efficacies and guide practice decisions. METHOD: Steps in the database development involved identifying the goals for use, relevant variables to include, and a plan for data collection, entry and analysis. Protocols were also created for data cleaning and quality control. The database was evaluated with a pilot test using existing data to assess data collection burden, accuracy and functionality of the database. RESULTS: A literature review resulted in an evidence-based list of demographic, clinical and pain management outcome variables to include. Time to assess patients and collect the data was 20 min to 30 min per patient. Limitations were primarily software related, although initial data collection completion was only 65% and accuracy of data entry was 96%. CONCLUSIONS: The electronic database was found to be relevant and functional for the identified goals of data storage and research. PMID:22518364
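
    The development steps described (goals, an evidence-based variable list, and a plan for collection, entry and analysis) map naturally onto a small relational schema. The sketch below is a hypothetical SQLite layout in that spirit; table and column names are invented for illustration and are not the actual Acute Pain Service database.

```python
import sqlite3

# Hypothetical schema: demographics plus pain-management outcomes,
# with CHECK constraints standing in for drop-down-style validation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    age_years  REAL,
    sex        TEXT CHECK (sex IN ('F', 'M', 'other'))
);
CREATE TABLE pain_assessment (
    assessment_id INTEGER PRIMARY KEY,
    patient_id    INTEGER NOT NULL REFERENCES patient(patient_id),
    assessed_at   TEXT NOT NULL,                        -- ISO-8601 timestamp
    pain_score    INTEGER CHECK (pain_score BETWEEN 0 AND 10),
    analgesic     TEXT
);
""")
conn.execute("INSERT INTO patient VALUES (1, 54.0, 'F')")
conn.execute(
    "INSERT INTO pain_assessment VALUES (1, 1, '2012-01-05T09:30', 4, 'epidural')")
row = conn.execute(
    "SELECT p.patient_id, a.pain_score FROM patient p "
    "JOIN pain_assessment a ON a.patient_id = p.patient_id"
).fetchone()
```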

  18. Academic Impact of a Public Electronic Health Database: Bibliometric Analysis of Studies Using the General Practice Research Database

    PubMed Central

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Background Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. Methodology and Findings A total of 749 studies published between 1995 and 2009 with ‘General Practice Research Database’ as their topics, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly, and it is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of “Pharmacology and Pharmacy”, “General and Internal Medicine”, and “Public, Environmental and Occupational Health”. The UK and United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. Conclusions A public electronic health database such as the GPRD will promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and to make data more available for research. PMID:21731733

  19. ELNET--The Electronic Library Database System.

    ERIC Educational Resources Information Center

    King, Shirley V.

    1991-01-01

    ELNET (Electronic Library Network), a Japanese language database, allows searching of index terms and free text terms from articles and stores the full text of the articles on an optical disc system. Users can order fax copies of the text from the optical disc. This article also explains online searching and discusses machine translation. (LRW)

  20. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single particle analysis, 2-D crystallography, helical reconstruction and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752
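
    The core idea of such an object-oriented lab database is that each record names a "protocol" which decides what parameters it must carry, instead of a fixed relational schema. The toy sketch below illustrates that flexible-record pattern; all class, method, and field names are invented for illustration and are not EMEN2's actual API.

```python
# Toy flexible-record store: protocols define required parameters,
# records may carry arbitrary extra fields, queries match on any field.
class RecordStore:
    def __init__(self):
        self.protocols = {}   # protocol name -> set of required parameter names
        self.records = []

    def define_protocol(self, name, required):
        self.protocols[name] = set(required)

    def add_record(self, protocol, **params):
        missing = self.protocols[protocol] - params.keys()
        if missing:
            raise ValueError(f"missing parameters: {sorted(missing)}")
        rec = {"protocol": protocol, **params}
        self.records.append(rec)
        return rec

    def find(self, **criteria):
        return [r for r in self.records
                if all(r.get(k) == v for k, v in criteria.items())]

store = RecordStore()
store.define_protocol("em_image", ["microscope", "magnification"])
store.add_record("em_image", microscope="JEM-2100", magnification=50000,
                 operator="ir")   # extra fields beyond the protocol are allowed
hits = store.find(microscope="JEM-2100")
```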

  1. NOAA Data Rescue of Key Solar Databases and Digitization of Historical Solar Images

    NASA Astrophysics Data System (ADS)

    Coffey, H. E.

    2006-08-01

    Over a number of years, the staff at NOAA National Geophysical Data Center (NGDC) has worked to rescue key solar databases by converting them to digital format and making them available via the World Wide Web. NOAA has had several data rescue programs where staff compete for funds to rescue important and critical historical data that are languishing in archives and at risk of being lost due to deteriorating condition, loss of any metadata or descriptive text that describe the databases, lack of interest or funding in maintaining databases, etc. The Solar-Terrestrial Physics Division at NGDC was able to obtain funds to key in some critical historical tabular databases. Recently the NOAA Climate Database Modernization Program (CDMP) funded a project to digitize historical solar images, producing a large online database of historical daily full disk solar images. The images include the wavelengths Calcium K, Hydrogen Alpha, and white light photos, as well as sunspot drawings and the comprehensive drawings of a multitude of solar phenomena on one daily map (Fraunhofer maps and Wendelstein drawings). Included in the digitization are high resolution solar H-alpha images taken at the Boulder Solar Observatory 1967-1984. The scanned daily images document many phases of solar activity, from decadal variation to rotational variation to daily changes. Smaller versions are available online. Larger versions are available by request. See http://www.ngdc.noaa.gov/stp/SOLAR/ftpsolarimages.html. The tabular listings and solar imagery will be discussed.

  2. Standardizing terminology and definitions of medication adherence and persistence in research employing electronic databases.

    PubMed

    Raebel, Marsha A; Schmittdiel, Julie; Karter, Andrew J; Konieczny, Jennifer L; Steiner, John F

    2013-08-01

    To propose a unifying set of definitions for prescription adherence research utilizing electronic health record prescribing databases, prescription dispensing databases, and pharmacy claims databases and to provide a conceptual framework to operationalize these definitions consistently across studies. We reviewed recent literature to identify definitions in electronic database studies of prescription-filling patterns for chronic oral medications. We then developed a conceptual model and proposed standardized terminology and definitions to describe prescription-filling behavior from electronic databases. The conceptual model we propose defines 2 separate constructs: medication adherence and persistence. We define primary and secondary adherence as distinct subtypes of adherence. Metrics for estimating secondary adherence are discussed and critiqued, including a newer metric (New Prescription Medication Gap measure) that enables estimation of both primary and secondary adherence. Terminology currently used in prescription adherence research employing electronic databases lacks consistency. We propose a clear, consistent, broadly applicable conceptual model and terminology for such studies. The model and definitions facilitate research utilizing electronic medication prescribing, dispensing, and/or claims databases and encompass the entire continuum of prescription-filling behavior. Employing conceptually clear and consistent terminology to define medication adherence and persistence will facilitate future comparative effectiveness research and meta-analytic studies that utilize electronic prescription and dispensing records.
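
    A widely used secondary-adherence metric computable from dispensing records is the proportion of days covered (PDC). The sketch below is a minimal PDC calculation over (fill date, days supply) pairs; it is a generic illustration of this metric class, not the New Prescription Medication Gap measure proposed in the article, and real analyses also shift overlapping fills forward.

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, window_start, window_end):
    """Proportion of days covered (PDC): the fraction of days in the
    observation window on which the patient had medication on hand,
    given (fill_date, days_supply) dispensing records. Overlapping
    fills simply merge here (days are counted once)."""
    covered = set()
    for fill_date, days_supply in fills:
        for d in range(days_supply):
            day = fill_date + timedelta(days=d)
            if window_start <= day <= window_end:
                covered.add(day)
    total_days = (window_end - window_start).days + 1
    return len(covered) / total_days

# Two 30-day fills with a gap, observed over a 90-day window.
fills = [(date(2013, 1, 1), 30), (date(2013, 2, 15), 30)]
pdc = proportion_of_days_covered(fills, date(2013, 1, 1), date(2013, 3, 31))
```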

  3. An Algorithm for Building an Electronic Database.

    PubMed

    Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P

    2016-01-01

    We propose an algorithm on how to create a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed out form from the Microsoft Access template was given to each surgeon to be completed after each case and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we permitted data collection and transcription to be easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.

  4. Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient.

    PubMed

    Brossier, David; El Taani, Redha; Sauthier, Michael; Roumeliotis, Nadia; Emeriaud, Guillaume; Jouvet, Philippe

    2018-04-01

    Our objective was to construct a prospective high-quality and high-frequency database combining patient therapeutics and clinical variables in real time, automatically fed by the information system and network architecture available through fully electronic charting in our PICU. The purpose of this article is to describe the data acquisition process from bedside to the research electronic database. Descriptive report and analysis of a prospective database. A 24-bed PICU, medical ICU, surgical ICU, and cardiac ICU in a tertiary care free-standing maternal child health center in Canada. All patients less than 18 years old were included at admission to the PICU. None. Between May 21, 2015, and December 31, 2016, 1,386 consecutive PICU stays from 1,194 patients were recorded in the database. Data were prospectively collected from admission to discharge, every 5 seconds from monitors and every 30 seconds from mechanical ventilators and infusion pumps. These data were linked to the patient's electronic medical record. The database total volume was 241 GB. The patients' median age was 2.0 years (interquartile range, 0.0-9.0). Data were available for all mechanically ventilated patients (n = 511; recorded duration, 77,678 hr), and respiratory failure was the most frequent reason for admission (n = 360). The complete pharmacologic profile was synched to database for all PICU stays. Following this implementation, a validation phase is in process and several research projects are ongoing using this high-fidelity database. Using the existing bedside information system and network architecture of our PICU, we implemented an ongoing high-fidelity prospectively collected electronic database, preventing the continuous loss of scientific information. This offers the opportunity to develop research on clinical decision support systems and computational models of cardiorespiratory physiology for example.
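
    The sampling rates reported (every 5 seconds from monitors, every 30 seconds from ventilators and infusion pumps) imply substantial record counts, which is worth a back-of-envelope check against the stated 77,678 hours of recorded ventilation:

```python
# Row counts implied by the stated sampling rates, per signal per patient-day.
SECONDS_PER_DAY = 24 * 60 * 60

monitor_rows_per_day = SECONDS_PER_DAY // 5      # one record every 5 s
ventilator_rows_per_day = SECONDS_PER_DAY // 30  # one record every 30 s

# 77,678 recorded ventilation hours at one record every 30 s works out
# to roughly nine million ventilator records across the 511 patients.
ventilator_records_total = 77_678 * 60 * 60 // 30
```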

  5. Practical Quantum Private Database Queries Based on Passive Round-Robin Differential Phase-shift Quantum Key Distribution.

    PubMed

    Li, Jian; Yang, Yu-Guang; Chen, Xiu-Bo; Zhou, Yi-Hua; Shi, Wei-Min

    2016-08-19

    A novel quantum private database query protocol is proposed, based on passive round-robin differential phase-shift quantum key distribution. Compared with previous quantum private database query protocols, the present protocol has the following unique merits: (i) the user Alice can obtain one and only one key bit, so that both the efficiency and security of the present protocol can be ensured, and (ii) it does not require changing the length difference of the two arms in a Mach-Zehnder interferometer and simply chooses two pulses passively to interfere, so it is much simpler and more practical. The present protocol is also proved to be secure in terms of user security and database security.

  6. Genome databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  7. Practical Quantum Private Database Queries Based on Passive Round-Robin Differential Phase-shift Quantum Key Distribution

    PubMed Central

    Li, Jian; Yang, Yu-Guang; Chen, Xiu-Bo; Zhou, Yi-Hua; Shi, Wei-Min

    2016-01-01

    A novel quantum private database query protocol is proposed, based on passive round-robin differential phase-shift quantum key distribution. Compared with previous quantum private database query protocols, the present protocol has the following unique merits: (i) the user Alice can obtain one and only one key bit, so that both the efficiency and security of the present protocol can be ensured, and (ii) it does not require changing the length difference of the two arms in a Mach-Zehnder interferometer and simply chooses two pulses passively to interfere, so it is much simpler and more practical. The present protocol is also proved to be secure in terms of user security and database security. PMID:27539654

  8. New DMSP database of precipitating auroral electrons and ions

    NASA Astrophysics Data System (ADS)

    Redmon, Robert J.; Denig, William F.; Kilcommons, Liam M.; Knipp, Delores J.

    2017-08-01

    Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so have the measurement capabilities such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) to F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information and the Coordinated Data Analysis Web. We describe how the new database is being applied to high-latitude studies of the colocation of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.

  9. New DMSP Database of Precipitating Auroral Electrons and Ions.

    PubMed

    Redmon, Robert J; Denig, William F; Kilcommons, Liam M; Knipp, Delores J

    2017-08-01

    Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so too have the measurement capabilities, such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) through F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information (NCEI) and the Coordinated Data Analysis Web (CDAWeb). We describe how the new database is being applied to high-latitude studies of the co-location of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors, from single observatory studies to coordinated system science investigations.

  10. Improvement of medication event interventions through use of an electronic database.

    PubMed

    Merandi, Jenna; Morvay, Shelly; Lewe, Dorcas; Stewart, Barb; Catt, Char; Chanthasene, Phillip P; McClead, Richard; Kappeler, Karl; Mirtallo, Jay M

    2013-10-01

    Patient safety enhancements achieved through the use of an electronic Web-based system for responding to adverse drug events (ADEs) are described. A two-phase initiative was carried out at an academic pediatric hospital to improve processes related to "medication event huddles" (interdisciplinary meetings focused on ADE interventions). Phase 1 of the initiative entailed a review of huddles and interventions over a 16-month baseline period during which multiple databases were used to manage the huddle process and staff interventions were assigned via manually generated e-mail reminders. Phase 1 data collection included ADE details (e.g., medications and staff involved, location and date of event) and the types and frequencies of interventions. Based on the phase 1 analysis, an electronic database was created to eliminate the use of multiple systems for huddle scheduling and documentation and to automatically generate e-mail reminders on assigned interventions. In phase 2 of the initiative, the impact of the database during a 5-month period was evaluated; the primary outcome was the percentage of interventions documented as completed after database implementation. During the postimplementation period, 44.7% of assigned interventions were completed, compared with a completion rate of 21% during the preimplementation period, and interventions documented as incomplete decreased from 77% to 43.7% (p < 0.0001). Process changes, education, and medication order improvements were the most frequently documented categories of interventions. Implementation of a user-friendly electronic database improved intervention completion and documentation after medication event huddles.
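    The before/after completion rates above (44.7% vs. 21%) are the kind of comparison a two-proportion test supports. A minimal sketch; the counts below are invented for illustration, since the abstract reports only percentages and a p-value:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test (pooled), as one might compare completion
    rates before and after an intervention. Counts here are invented."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2)) # pooled standard error
    z = (p1 - p2) / se
    pval = math.erfc(abs(z) / math.sqrt(2))         # two-sided normal p-value
    return z, pval

z, pval = two_prop_z(x1=447, n1=1000, x2=210, n2=1000)
print(f"z = {z:.2f}, p = {pval:.2e}")
```

With rates this far apart, the statistic is large and the p-value falls far below 0.0001, in line with the significance the abstract reports.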

  11. Integrated Electronic Health Record Database Management System: A Proposal.

    PubMed

    Schiza, Eirini C; Panos, George; David, Christiana; Petkov, Nicolai; Schizas, Christos N

    2015-01-01

    eHealth has attained significant importance as a new mechanism for health management and medical practice. However, the technological growth of eHealth is still limited by the technical expertise needed to develop appropriate products. Researchers are constantly developing and testing new software for building and handling clinical medical records, now renamed Electronic Health Record (EHR) systems; EHRs take full advantage of technological developments and at the same time provide increased diagnostic and treatment capabilities to doctors. A step to be considered for facilitating this aim is to involve the doctor more actively in building the fundamental steps for creating the EHR system and database. A global clinical patient record database management system can be created electronically by simulating real-life medical practice health record taking and by utilizing and analyzing the recorded parameters. The proposed approach demonstrates the effective implementation of a universal classic medical record in electronic form, a procedure by which clinicians are led to utilize algorithms and intelligent systems for their differential diagnosis, final diagnosis and treatment strategies.

  12. Methods for synchronizing a countdown routine of a timer key and electronic device

    DOEpatents

    Condit, Reston A.; Daniels, Michael A.; Clemens, Gregory P.; Tomberlin, Eric S.; Johnson, Joel A.

    2015-06-02

    A timer key relating to monitoring a countdown time of a countdown routine of an electronic device is disclosed. The timer key comprises a processor configured to respond to a countdown time associated with operation of the electronic device, a display operably coupled with the processor, and a housing configured to house at least the processor. The housing has an associated structure configured to engage with the electronic device to share the countdown time between the electronic device and the timer key. The processor is configured to begin a countdown routine based at least in part on the countdown time, wherein the countdown routine is at least substantially synchronized with a countdown routine of the electronic device when the timer key is removed from the electronic device. A system and method for synchronizing countdown routines of a timer key and an electronic device are also disclosed.

  13. Herpes zoster surveillance using electronic databases in the Valencian Community (Spain)

    PubMed Central

    2013-01-01

    Background Epidemiologic data on Herpes Zoster (HZ) disease in Spain are scarce. The objective of this study was to assess the epidemiology of HZ in the Valencian Community (Spain), using outpatient and hospital electronic health databases. Methods Data from 2007 to 2010 were collected from the computerized health databases of a population of around 5 million inhabitants. Diagnoses were recorded by physicians using the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). A sample of medical records selected under different criteria was reviewed by a general practitioner to assess the reliability of the coding. Results The average annual incidence of HZ was 4.60 per 1000 person-years (PY) for all ages (95% CI: 4.57-4.63); HZ was more frequent in women [5.32/1000 PY (95% CI: 5.28-5.37)] and was strongly age-related, with a peak incidence at 70-79 years. A total of 7.16/1000 HZ cases required hospitalization. Conclusions The electronic health database used in the Valencian Community is a reliable electronic surveillance tool for HZ disease and will be useful for defining trends in disease burden before and after HZ vaccine introduction. PMID:24094135
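    Incidence per 1000 person-years, as reported above, is a simple person-time ratio. A hedged sketch with invented counts (not the study's raw data), including an approximate 95% CI for a Poisson count on the log scale:

```python
import math

def incidence_per_1000_py(cases, person_years):
    """Incidence rate per 1000 person-years with an approximate 95% CI
    (normal approximation on the log scale for a Poisson count)."""
    rate = cases / person_years * 1000.0
    se_log = 1.0 / math.sqrt(cases)        # SE of log(rate) for a Poisson count
    lo = rate * math.exp(-1.96 * se_log)
    hi = rate * math.exp(1.96 * se_log)
    return rate, lo, hi

# Invented example: 9200 cases over 2 million person-years
rate, lo, hi = incidence_per_1000_py(cases=9200, person_years=2_000_000)
print(f"{rate:.2f} per 1000 PY (95% CI {lo:.2f}-{hi:.2f})")
```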

  14. NASA Electronic Library System (NELS) database schema, version 1.2

    NASA Technical Reports Server (NTRS)

    Melebeck, Clovis J.

    1991-01-01

    The database tables used by NELS version 1.2 are discussed. To provide the current functional capability offered by NELS, nineteen tables were created with ORACLE. For each table, the ORACLE table name is listed along with a brief description of the table's intended use or function. The following sections cover four basic categories of tables: NELS object classes, NELS collections, NELS objects, and NELS supplemental tables. Each section also includes a definition and/or relationship of each field to other fields or tables. The primary key(s) for each table are indicated with a single asterisk (*), while foreign keys are indicated with double asterisks (**). The primary key(s) uniquely identify a record in that table; a foreign key is used to locate additional information about that record in other tables. The two appendices contain the commands used to construct the ORACLE tables for NELS: Appendix A contains the commands that create the tables defined in the following sections, and Appendix B contains the commands that build the indices for these tables.
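    The asterisk convention above (primary keys *, foreign keys **) maps directly onto SQL key constraints. A minimal illustration using Python's built-in sqlite3 rather than ORACLE; the table and column names are hypothetical, not the actual NELS schema:

```python
import sqlite3

# Hypothetical two-table sketch of the primary-key (*) / foreign-key (**)
# convention the schema document describes. Names are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""
    CREATE TABLE collection (
        collection_id INTEGER PRIMARY KEY,   -- * uniquely identifies a record
        name TEXT NOT NULL
    )""")
con.execute("""
    CREATE TABLE object (
        object_id INTEGER PRIMARY KEY,       -- *
        collection_id INTEGER                -- ** points to info in another table
            REFERENCES collection(collection_id),
        title TEXT
    )""")
con.execute("INSERT INTO collection VALUES (1, 'reports')")
con.execute("INSERT INTO object VALUES (10, 1, 'schema notes')")
row = con.execute("""
    SELECT o.title, c.name
    FROM object o JOIN collection c USING (collection_id)""").fetchone()
print(row)   # the foreign key links each object back to its collection
```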

  15. Electric vehicle recycling 2020: Key component power electronics.

    PubMed

    Bulach, Winfried; Schüler, Doris; Sellin, Guido; Elwert, Tobias; Schmid, Dieter; Goldmann, Daniel; Buchert, Matthias; Kammer, Ulrich

    2018-04-01

    Electromobility will play a key role in reaching the ambitious greenhouse gas reduction target of 42% between 1990 and 2030 set for the German transport sector. A significant rise in the sale of electric vehicles (EVs) is therefore to be anticipated, and the number of EVs to be recycled will rise correspondingly after a delay. This includes the recyclable power electronics modules incorporated in every EV as an important component for energy management. Current recycling methods using car shredders and subsequent post-shredder technologies achieve high recycling rates for the bulk metals but are still associated with high losses of precious and strategic metals such as gold, silver, platinum, palladium and tantalum. For this reason, the project 'Electric vehicle recycling 2020 - key component power electronics' developed an optimised route for recycling power electronics modules from EVs that is practicable in series production and can be implemented using standardised technology. This 'WEEE recycling route' involves the disassembly of the power electronics from the vehicle and subsequent recycling in an electronic end-of-life equipment recycling plant. The developed recycling process is economical under current conditions and raw material prices, even though it involves considerably higher costs than recycling via the car shredder. The life cycle assessment shows good results for both the traditional car shredder route and the developed WEEE recycling route; the latter provides additional benefits from somewhat higher recovery rates and the corresponding credits.

  16. BAO Plate Archive Project: Digitization, Electronic Database and Research Programmes

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.; Abrahamyan, H. V.; Andreasyan, H. R.; Azatyan, N. M.; Farmanyan, S. V.; Gigoyan, K. S.; Gyulzadyan, M. V.; Khachatryan, K. G.; Knyazyan, A. V.; Kostandyan, G. R.; Mikayelyan, G. A.; Nikoghosyan, E. H.; Paronyan, G. M.; Vardanyan, A. V.

    2016-06-01

    The most important part of the astronomical observational heritage is the astronomical plate archives created on the basis of numerous observations at many observatories. The Byurakan Astrophysical Observatory (BAO) plate archive consists of 37,000 photographic plates and films, obtained with the 2.6m telescope, the 1m and 0.5m Schmidt-type telescopes and other smaller telescopes during 1947-1991. In 2002-2005, the 1874 plates of the famous Markarian Survey (also called the First Byurakan Survey, FBS) were digitized and the Digitized FBS (DFBS) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. A large project on the digitization of the whole BAO Plate Archive, the creation of an electronic database and its scientific usage was started in 2015. A Science Program Board has been created to evaluate the observing material, to investigate new possibilities and to propose new projects based on the combined usage of these observations together with other world databases. The Executing Team consists of 11 astronomers and 2 computer scientists and will use 2 EPSON Perfection V750 Pro scanners for the digitization, as well as the Armenian Virtual Observatory (ArVO) database to accommodate all new data. The project will run for 3 years in 2015-2017, and the final result will be an electronic database and an online interactive sky map to be used for further research projects, mainly including high proper motion stars, variable objects and Solar System bodies.

  17. Computer Cataloging of Electronic Journals in Unstable Aggregator Databases: The Hong Kong Baptist University Library Experience.

    ERIC Educational Resources Information Center

    Li, Yiu-On; Leung, Shirley W.

    2001-01-01

    Discussion of aggregator databases focuses on a project at the Hong Kong Baptist University library to integrate full-text electronic journal titles from three unstable aggregator databases into its online public access catalog (OPAC). Explains the development of the electronic journal computer program (EJCOP) to generate MARC records for…

  18. Searching for disability in electronic databases of published literature.

    PubMed

    Walsh, Emily S; Peterson, Jana J; Judkins, Dolores Z

    2014-01-01

    As researchers in disability and health conduct systematic reviews with greater frequency, the definition of disability used in these reviews gains importance. Translating a comprehensive conceptual definition of "disability" into an operational definition that can be used with electronic databases in the health sciences is a difficult step necessary for performing systematic literature reviews in the field. Consistency of definition across studies will help build a body of evidence that is comparable and amenable to synthesis. The aim was to illustrate a process for operationalizing the World Health Organization's International Classification of Functioning, Disability and Health concept of disability for the MEDLINE, PsycINFO, and CINAHL databases. We created an electronic search strategy in conjunction with a reference librarian and an expert panel. Quality control steps included comparison of search results with the results of a search for a specific disabling condition and with articles nominated by the expert panel. The complete search strategy is presented. Results of the quality control steps indicated that our strategy was sufficiently sensitive and specific. Our search strategy will be valuable to researchers conducting literature reviews on broad populations with disabilities. Copyright © 2014 Elsevier Inc. All rights reserved.
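    Operationalizing a concept for a bibliographic database typically means assembling blocks of synonyms joined by OR. A toy sketch of building such a block; the terms are illustrative examples, not the authors' published strategy:

```python
# Hypothetical synonym block for a "disability" concept search.
# The field tags follow PubMed conventions ([MeSH], [tiab]); the
# specific terms are examples only, not the validated strategy.
concept_terms = [
    '"disabled persons"[MeSH]',
    'disabilit*[tiab]',
    '"activity limitation*"[tiab]',
    '"participation restriction*"[tiab]',
]
block = "(" + " OR ".join(concept_terms) + ")"
print(block)
```

Several such blocks, one per concept, would then be combined with AND to form the full strategy.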

  19. [Discussion on developing a data management plan and its key factors in clinical study based on electronic data capture system].

    PubMed

    Li, Qing-na; Huang, Xiu-ling; Gao, Rui; Lu, Fang

    2012-08-01

    Data management has a significant impact on the quality control of clinical studies. Every clinical study should have a data management plan (DMP) to provide overall work instructions and ensure that all tasks are completed according to the Good Clinical Data Management Practice (GCDMP). The DMP is also an auditable document requested by regulatory inspectors and must be written in a manner that is realistic and of high quality. This paper elaborates the significance of the DMP, the minimum standards and best practices provided by the GCDMP, the main contents of a DMP based on electronic data capture (EDC), and some key factors of the DMP that influence the quality of a clinical study. Specifically, a DMP generally consists of 15 parts, namely: the approval page; the protocol summary; roles and training; timelines; database design, creation, maintenance and security; data entry; data validation; quality control and quality assurance; the management of external data; serious adverse event data reconciliation; coding; database lock; data management reports; the communication plan; and the abbreviated terms. Among them, the following three are regarded as key factors: designing a standardized database for the clinical study, entering data in time and cleansing data efficiently. In the last part of this article, the authors also analyze the problems in clinical research of traditional Chinese medicine using EDC systems and put forward suggestions for improvement.

  20. Use of large electronic health record databases for environmental epidemiology studies.

    EPA Science Inventory

    Background: Electronic health records (EHRs) are a ubiquitous component of the United States healthcare system and capture nearly all data collected in a clinic or hospital setting. EHR databases are attractive for secondary data analysis as they may contain detailed clinical rec...

  1. BAO Plate Archive digitization, creation of electronic database and its scientific usage

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg M.

    2015-08-01

    Astronomical plate archives created on the basis of numerous observations at many observatories are an important part of the astronomical heritage. The Byurakan Astrophysical Observatory (BAO) plate archive consists of 37,500 photographic plates and films, obtained with the 2.6m telescope, the 1m and 0.5m Schmidt telescopes and other smaller ones during 1947-1991. In 2002-2005, the 2000 plates of the famous Markarian Survey (First Byurakan Survey, FBS) were digitized and the Digitized FBS (DFBS, http://www.aras.am/Dfbs/dfbs.html) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. In 2015, we started a project on the digitization of the whole BAO Plate Archive, the creation of an electronic database and its scientific usage. A Science Program Board has been created to evaluate the observing material, to investigate new possibilities and to propose new projects based on the combined usage of these observations together with other world databases. The Executing Team consists of 9 astronomers and 3 computer scientists and will use 2 EPSON Perfection V750 Pro scanners for the digitization, as well as the Armenian Virtual Observatory (ArVO) database to accommodate all new data. The project will run for 3 years in 2015-2017, and the final result will be an electronic database and an online interactive sky map to be used for further research projects.

  2. Database Development for Electrical, Electronic, and Electromechanical (EEE) Parts for the International Space Station Alpha

    NASA Technical Reports Server (NTRS)

    Wassil-Grimm, Andrew D.

    1997-01-01

    More effective electronic communication processes are needed to transfer contractor and international partner data into NASA and prime contractor baseline database systems. It is estimated that the International Space Station Alpha (ISSA) parts database will contain up to one million parts, each of which may require database capabilities for approximately one thousand bytes of data. The resulting gigabyte database must provide easy access to users who will be preparing multiple analyses and reports in order to verify as-designed, as-built, launch, on-orbit, and return configurations for up to 45 missions associated with the construction of the ISSA. Additionally, Internet access to this database is strongly indicated, to allow access by multiple users from clients located in many foreign countries. This summer's project involved familiarization and evaluation of the ISSA Electrical, Electronic, and Electromechanical (EEE) Parts data and the process of electronically managing these data. Particular attention was devoted to improving the interfaces among the many elements of the ISSA information system and its global customers and suppliers. Additionally, prototype queries were developed to facilitate the identification of data changes in the database, verification that the designs used only approved parts, and certification that the flight hardware containing EEE parts was ready for flight. This project also resulted in specific recommendations to NASA for further development in the area of EEE parts database development and usage.

  3. A technique for routinely updating the ITU-R database using radio occultation electron density profiles

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno

    2013-09-01

    Well credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used to down-weight unreliable measurements (occasionally, entire profiles) and to retrieve NmF2 and hmF2 values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps, estimated by the Least Squares technique, is equivalent to about 7% of the estimated F2-peak electron density, and ranges from 2.0 to 5.6 km (about 2%) for the height.
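    A re-weighted least-squares fit of the kind described, which iteratively down-weights unreliable measurements, can be sketched as follows. Synthetic straight-line data with one gross outlier stand in for the paper's electron-density model; the Huber-style weighting is an assumed, generic choice:

```python
import numpy as np

# Iteratively re-weighted least squares: fit, measure residuals,
# down-weight points with large residuals, refit.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.05, x.size)
y[5] += 5.0                        # one grossly unreliable measurement

A = np.column_stack([np.ones_like(x), x])   # design matrix for a line
w = np.ones_like(x)
for _ in range(10):
    Aw = A * w[:, None]                      # apply current weights
    coef, *_ = np.linalg.lstsq(Aw, y * w, rcond=None)
    r = y - A @ coef                         # residuals of all points
    s = 1.4826 * np.median(np.abs(r))        # robust scale estimate (MAD)
    w = 1.0 / np.maximum(1.0, np.abs(r) / (1.345 * s))  # Huber-style weights

print(coef)   # close to the true (2.0, 3.0) despite the outlier
```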

  4. Improving Care And Research Electronic Data Trust Antwerp (iCAREdata): a research database of linked data on out-of-hours primary care.

    PubMed

    Colliers, Annelies; Bartholomeeusen, Stefaan; Remmen, Roy; Coenen, Samuel; Michiels, Barbara; Bastiaens, Hilde; Van Royen, Paul; Verhoeven, Veronique; Holmgren, Philip; De Ruyck, Bernard; Philips, Hilde

    2016-05-04

    Primary out-of-hours care is developing throughout Europe. High-quality databases with linked data from primary health services can help to improve research and future health services. In 2014, a central clinical research database infrastructure was established (iCAREdata: Improving Care And Research Electronic Data Trust Antwerp, www.icaredata.eu ) for primary and interdisciplinary health care at the University of Antwerp, linking data from General Practice Cooperatives, Emergency Departments and Pharmacies during out-of-hours care. Medical data are pseudonymised using the services of a Trusted Third Party, which encodes private information about patients and physicians before data is sent to iCAREdata. iCAREdata provides many new research opportunities in the fields of clinical epidemiology, health care management and quality of care. A key aspect will be to ensure the quality of data registration by all health care providers. This article describes the establishment of a research database and the possibilities of linking data from different primary out-of-hours care providers, with the potential to help to improve research and the quality of health care services.

  5. Under Lock and Key: Preventing Campus Theft of Electronic Equipment.

    ERIC Educational Resources Information Center

    Harrison, J. Phil

    1996-01-01

    A discussion of computer theft prevention on college campuses looks at a variety of elements in electronic equipment security, including the extent of the problem, physical antitheft products, computerized access, control of key access, alarm systems, competent security personnel, lighting, use of layers of protection, and increasing…

  6. Evaluation of Electronic Healthcare Databases for Post-Marketing Drug Safety Surveillance and Pharmacoepidemiology in China.

    PubMed

    Yang, Yu; Zhou, Xiaofeng; Gao, Shuangqing; Lin, Hongbo; Xie, Yanming; Feng, Yuji; Huang, Kui; Zhan, Siyan

    2018-01-01

    Electronic healthcare databases (EHDs) are used increasingly for post-marketing drug safety surveillance and pharmacoepidemiology in Europe and North America. However, few studies have examined the potential of these data sources in China. Three major types of EHDs in China (i.e., a regional community-based database, a national claims database, and an electronic medical records [EMR] database) were selected for evaluation. Forty core variables were derived based on the US Mini-Sentinel (MS) Common Data Model (CDM) as well as the data features in China that would be desirable to support drug safety surveillance. An email survey of these core variables and eight general questions, as well as follow-up inquiries on additional variables, was conducted. The 40 core variables across the three EHDs, and all variables in each EHD along with those in the US MS CDM and the Observational Medical Outcomes Partnership (OMOP) CDM, were compared for availability and labeled based on specific standards. All of the EHDs' custodians confirmed their willingness to share their databases with academic institutions after appropriate approval is obtained. The regional community-based database covered 1.19 million people in 2015 with 85% of the core variables. Resampled annually nationwide, the national claims database included 5.4 million people in 2014 with 55% of the core variables, and the EMR database included 3 million inpatients from 60 hospitals in 2015 with 80% of the core variables. Relative to the MS CDM and the OMOP CDM, the proportions of variables across the three EHDs that are available or can be transformed/derived from the original sources are 24-83% and 45-73%, respectively. These EHDs are of potential value to post-marketing drug safety surveillance and pharmacoepidemiology in China. Future research is warranted to assess the quality and completeness of these EHDs and of additional data sources in China.

  7. Infant feeding practices within a large electronic medical record database.

    PubMed

    Bartsch, Emily; Park, Alison L; Young, Jacqueline; Ray, Joel G; Tu, Karen

    2018-01-02

    The emerging adoption of the electronic medical record (EMR) in primary care enables clinicians and researchers to efficiently examine epidemiological trends in child health, including infant feeding practices. We completed a population-based retrospective cohort study of 8815 singleton infants born at term in Ontario, Canada, April 2002 to March 2013. Newborn records were linked to the Electronic Medical Record Administrative data Linked Database (EMRALD™), which uses patient-level information from participating family practice EMRs across Ontario. We assessed exclusive breastfeeding patterns using an automated electronic search algorithm, with manual review of EMRs when the latter was not possible. We examined the rate of breastfeeding at visits corresponding to 2, 4 and 6 months of age, as well as sociodemographic factors associated with exclusive breastfeeding. Of the 8815 newborns, 1044 (11.8%) lacked breastfeeding information in their EMR. Rates of exclusive breastfeeding were 39.5% at 2 months, 32.4% at 4 months and 25.1% at 6 months. At age 6 months, exclusive breastfeeding rates were highest among mothers aged ≥40 vs. < 20 years (rate ratio [RR] 2.45, 95% confidence interval [CI] 1.62-3.68), urban vs. rural residence (RR 1.35, 95% CI 1.22-1.50), and highest vs. lowest income quintile (RR 1.18, 95% CI 1.02-1.36). Overall, immigrants had similar rates of exclusive breastfeeding as non-immigrants; yet, by age 6 months, among those residing in the lowest income quintile, immigrants were more likely to exclusively breastfeed than their non-immigrant counterparts (RR 1.43, 95% CI 1.12-1.83). We efficiently determined rates and factors associated with exclusive breastfeeding using data from a large EMR database.
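    The rate ratios with 95% confidence intervals quoted above follow the standard log-scale normal approximation for comparing two rates. A minimal sketch with invented counts (not the study's data):

```python
import math

def rate_ratio(c1, py1, c0, py0):
    """Rate ratio with an approximate 95% CI via the usual
    log-scale normal approximation for Poisson counts."""
    rr = (c1 / py1) / (c0 / py0)
    se = math.sqrt(1.0 / c1 + 1.0 / c0)   # SE of log(RR)
    lo = rr * math.exp(-1.96 * se)
    hi = rr * math.exp(1.96 * se)
    return rr, lo, hi

# Invented example: 90 vs. 60 events over equal person-time
rr, lo, hi = rate_ratio(c1=90, py1=1000, c0=60, py0=1000)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```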

  8. Key features for ATA / ATR database design in missile systems

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2017-05-01

    Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having robust detection and recognition algorithms is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the image database in the training and testing steps of the algorithm. This directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easy way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database with a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and difficult way is to design the database using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.

  9. Electron Stark Broadening Database for Atomic N, O, and C Lines

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Yao, Winifred M.; Wray, Alan A.; Carbon, Duane F.

    2012-01-01

    A database for efficiently computing the electron Stark broadening line widths for atomic N, O, and C lines is constructed. The line width is expressed in terms of the electron number density and electron-atom scattering cross sections based on the Baranger impact theory. The state-to-state cross sections are computed using the semiclassical approximation, in which the atom is treated quantum mechanically whereas the motion of the free electron follows a classical trajectory. These state-to-state cross sections are calculated based on newly compiled line lists. Each atomic line list consists of a careful merger of the NIST, Vanderbilt, and TOPbase line datasets from wavelengths of 50 nm to 50 micrometers, covering the VUV to IR spectral regions. There are over 10,000 lines in each atomic line list. The widths for each line are computed at 13 electron temperatures between 1,000 K and 50,000 K. A linear least squares method using a four-term fractional power series is then employed to obtain an analytical fit for each line-width variation as a function of the electron temperature. The maximum L2 error of the analytic fits for all lines in our line lists is about 5%.
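    A linear least-squares fit of a four-term fractional power series, as described above, reduces to solving a small linear system: each term T^p_k contributes one column of the design matrix. A sketch with assumed exponents and synthetic widths, since the actual exponents and data are not given in the abstract:

```python
import numpy as np

# Fit w(T) = sum_k a_k * T**p_k by linear least squares.
# Exponents p and the synthetic "line widths" are illustrative only.
T = np.linspace(1_000.0, 50_000.0, 13)     # 13 temperatures, as in the text
p = np.array([0.0, 0.25, 0.5, 0.75])       # assumed fractional exponents
true_a = np.array([0.1, 2e-3, -3e-4, 5e-5])
w = (T[:, None] ** p) @ true_a             # synthetic widths, no noise

X = T[:, None] ** p                        # design matrix, one column per term
a_fit, *_ = np.linalg.lstsq(X, w, rcond=None)
print(a_fit)                               # recovers the coefficients
```

Evaluating the fitted series is then a four-term dot product per line, which is what makes the tabulated fits cheap to use at run time.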

  10. Assessment of COPD-related outcomes via a national electronic medical record database.

    PubMed

    Asche, Carl; Said, Quayyim; Joish, Vijay; Hall, Charles Oaxaca; Brixner, Diana

    2008-01-01

    The technology and sophistication of healthcare utilization databases have expanded over the last decade to include results of lab tests, vital signs, and other clinical information. This review assesses the methodological and analytical challenges of conducting chronic obstructive pulmonary disease (COPD) outcomes research in a national electronic medical record (EMR) dataset, its potential application to national health policy issues, and the associated limitations. An EMR database and its application to measuring outcomes for COPD are described. The ability to measure, in this database, adherence to the COPD evidence-based practice guidelines generated by the NIH and to HEDIS quality indicators was examined. Case studies, before and after the guidelines' publication, were used to assess adherence to the guidelines and gauge conformity to the quality indicators. The EMR was the only source of information for pulmonary function tests, although infrequent ordering by primary care was an issue. The EMR data can be used to explore the impact of variation in healthcare provision on clinical outcomes, and the EMR database permits access to specific lab data and biometric information. The richness and depth of information on "real-world" use of health services for large population-based analytical studies at relatively low cost render such databases an attractive resource for outcomes research. Various sources of information exist for performing outcomes research; it is important to understand the desired endpoints of such research and to choose the appropriate database source.

  11. Healthcare databases in Europe for studying medicine use and safety during pregnancy.

    PubMed

    Charlton, Rachel A; Neville, Amanda J; Jordan, Sue; Pierini, Anna; Damase-Michel, Christine; Klungsøyr, Kari; Andersen, Anne-Marie Nybo; Hansen, Anne Vinkel; Gini, Rosa; Bos, Jens H J; Puccini, Aurora; Hurault-Delarue, Caroline; Brooks, Caroline J; de Jong-van den Berg, Lolkje T W; de Vries, Corinne S

    2014-06-01

    The aim of this study was to describe a number of electronic healthcare databases in Europe in terms of the population covered, the source of the data captured and the availability of data on key variables required for evaluating medicine use and medicine safety during pregnancy. A sample of electronic healthcare databases that captured pregnancies and prescription data was selected on the basis of contacts within the EUROCAT network. For each participating database, a database inventory was completed. Eight databases were included, and the total population covered was 25 million. All databases recorded live births, seven captured stillbirths and five had full data available on spontaneous pregnancy losses and induced terminations. In six databases, data were usually available to determine the date of the woman's last menstrual period, whereas in the remainder, algorithms were needed to establish a best estimate for at least some pregnancies. In seven databases, it was possible to use data recorded in the databases to identify pregnancies where the offspring had a congenital anomaly. Information on confounding variables was more commonly available in databases capturing data recorded by primary-care practitioners. All databases captured maternal co-prescribing and a measure of socioeconomic status. This study suggests that within Europe, electronic healthcare databases may be valuable sources of data for evaluating medicine use and safety during pregnancy. The suitability of a particular database, however, will depend on the research question, the type of medicine to be evaluated, the prevalence of its use and any adverse outcomes of interest. © 2014 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd.

  12. Bibliographical database of radiation biological dosimetry and risk assessment: Part 1, through June 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straume, T.; Ricker, Y.; Thut, M.

    1988-08-29

    This database was constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and key-worded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on publication number, authors, key words, title, year, and journal name. Photocopies of all publications contained in the database are maintained in a file that is numerically arranged by citation number. This report of the database is provided as a useful reference and overview. It should be emphasized that the database will grow as new citations are added to it. With that in mind, we arranged this report in order of ascending citation number so that follow-up reports will simply extend this document. The database cites 1212 publications. The publications come from 119 different scientific journals, 27 of which are cited at least 5 times. The database also contains references to 42 books and published symposia, and to 129 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed across the scientific literature, although a few journals clearly dominate. The four journals publishing the largest number of relevant papers are Health Physics, Mutation Research, Radiation Research, and International Journal of Radiation Biology. Publications in Health Physics make up almost 10% of the current database.
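    The computerized retrieval the report describes (searches on publication number, authors, key words, title, year, and journal name) can be sketched with an in-memory relational store. The schema and the sample row below are invented for illustration, not taken from the actual citation file.

    ```python
    import sqlite3

    # In-memory stand-in for the citation file; schema and the sample
    # row are invented for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE publication (
        citation_no INTEGER PRIMARY KEY,
        authors TEXT, title TEXT, year INTEGER, journal TEXT
    );
    CREATE TABLE keyword (citation_no INTEGER, term TEXT);
    CREATE INDEX idx_keyword_term ON keyword(term);
    """)
    conn.execute("INSERT INTO publication VALUES "
                 "(1, 'Straume T', 'A hypothetical dosimetry paper', "
                 "1988, 'Health Physics')")
    conn.execute("INSERT INTO keyword VALUES (1, 'biological dosimetry')")

    def search_by_keyword(term):
        """Quick access through a computerized search on a key word."""
        return conn.execute(
            "SELECT p.citation_no, p.title FROM publication p "
            "JOIN keyword k ON k.citation_no = p.citation_no "
            "WHERE k.term = ?", (term,)).fetchall()

    hits = search_by_keyword("biological dosimetry")
    ```

    The separate keyword table mirrors the report's many-to-many relation between citations and key words, so one publication can surface under any number of search terms.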

  13. Stakeholder engagement: a key component of integrating genomic information into electronic health records

    PubMed Central

    Hartzler, Andrea; McCarty, Catherine A.; Rasmussen, Luke V.; Williams, Marc S.; Brilliant, Murray; Bowton, Erica A.; Clayton, Ellen Wright; Faucett, William A.; Ferryman, Kadija; Field, Julie R.; Fullerton, Stephanie M.; Horowitz, Carol R.; Koenig, Barbara A.; McCormick, Jennifer B.; Ralston, James D.; Sanderson, Saskia C.; Smith, Maureen E.; Trinidad, Susan Brown

    2014-01-01

    Integrating genomic information into clinical care and the electronic health record can facilitate personalized medicine through genetically guided clinical decision support. Stakeholder involvement is critical to the success of these implementation efforts. Prior work on implementation of clinical information systems provides broad guidance to inform effective engagement strategies. We add to this evidence-based recommendations that are specific to issues at the intersection of genomics and the electronic health record. We describe stakeholder engagement strategies employed by the Electronic Medical Records and Genomics Network, a national consortium of US research institutions funded by the National Human Genome Research Institute to develop, disseminate, and apply approaches that combine genomic and electronic health record data. Through select examples drawn from sites of the Electronic Medical Records and Genomics Network, we illustrate a continuum of engagement strategies to inform genomic integration into commercial and homegrown electronic health records across a range of health-care settings. We frame engagement as activities to consult, involve, and partner with key stakeholder groups throughout specific phases of health information technology implementation. Our aim is to provide insights into engagement strategies to guide genomic integration based on our unique network experiences and lessons learned within the broader context of implementation research in biomedical informatics. On the basis of our collective experience, we describe key stakeholder practices, challenges, and considerations for successful genomic integration to support personalized medicine. PMID:24030437

  14. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  15. Managing Tradeoffs in the Electronic Age.

    ERIC Educational Resources Information Center

    Wagner, A. Ben

    2003-01-01

    Provides an overview of the development of electronic resources over the past three decades, discussing key features, disadvantages, and benefits of traditional online databases and CD-ROM and Web-based resources. Considers the decision to shift collections and resources toward purely digital formats, ownership of content, licensing, and user…

  16. Optimization of the efficiency of search operations in the relational database of radio electronic systems

    NASA Astrophysics Data System (ADS)

    Wajszczyk, Bronisław; Biernacki, Konrad

    2018-04-01

    The increase of interoperability of radio electronic systems used in the Armed Forces requires the processing of very large amounts of data. Requirements for the integration of information from many systems and sensors, including radar, electronic, and optical reconnaissance, force designers to look for more efficient methods of supporting information retrieval in ever-larger database resources. This paper presents the results of research on methods of improving the efficiency of databases using various types of indexes. Indexing of data structures is a standard technique in relational database management systems (RDBMS). However, analyses of index performance, descriptions of potential applications, and in particular measurements of the scale of performance gains for individual index types are limited to few studies in this field. This paper analyses methods affecting the efficiency of a relational database management system. As a result of the research, a significant increase in the efficiency of operations on data was achieved through a strategy of indexing data structures. The research mainly consists of testing the operation of various indexes against different queries and data structures. The conclusions from the experiments make it possible to assess the effectiveness of the solutions proposed and applied in the research. The results indicate a real increase in the performance of operations on data when data structures are indexed. In addition, the level of this growth is presented, broken down by index type.
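    A minimal sketch of the kind of experiment reported here: the same point query timed before and after creating an index on the filtered column. The table and data are synthetic, not the paper's.

    ```python
    import sqlite3
    import time

    # Synthetic track table; the same point query is timed before and
    # after indexing the filtered column.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE track (id INTEGER, sensor TEXT, azimuth REAL)")
    conn.executemany(
        "INSERT INTO track VALUES (?, ?, ?)",
        ((i, f"radar-{i % 50}", (i * 0.37) % 360.0) for i in range(200_000)))

    def timed_count(sensor):
        t0 = time.perf_counter()
        n = conn.execute("SELECT COUNT(*) FROM track WHERE sensor = ?",
                         (sensor,)).fetchone()[0]
        return n, time.perf_counter() - t0

    n_scan, t_scan = timed_count("radar-7")    # full table scan
    conn.execute("CREATE INDEX idx_sensor ON track(sensor)")
    n_index, t_index = timed_count("radar-7")  # B-tree index lookup
    ```

    The first query must scan all 200,000 rows; after the index is created, the engine can walk the B-tree directly to the matching rows, which is the performance gap the paper quantifies per index type.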

  17. Shared Electronic Health Record Systems: Key Legal and Security Challenges.

    PubMed

    Christiansen, Ellen K; Skipenes, Eva; Hausken, Marie F; Skeie, Svein; Østbye, Truls; Iversen, Marjolein M

    2017-11-01

    Use of shared electronic health records opens a whole range of new possibilities for flexible and fruitful cooperation among health personnel in different health institutions, to the benefit of the patients. There are, however, unsolved legal and security challenges. The overall aim of this article is to highlight legal and security challenges that should be considered before using shared electronic cooperation platforms and health record systems to avoid legal and security "surprises" subsequent to the implementation. Practical lessons learned from the use of a web-based ulcer record system involving patients, community nurses, GPs, and hospital nurses and doctors in specialist health care are used to illustrate challenges we faced. Discussion of possible legal and security challenges is critical for successful implementation of shared electronic collaboration systems. Key challenges include (1) allocation of responsibility, (2) documentation routines, (3) and integrated or federated access control. We discuss and suggest how challenges of legal and security aspects can be handled. This discussion may be useful for both current and future users, as well as policy makers.

  18. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature compare them directly with relational databases for building the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database), each at three different sizes, were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity executed against them. Comparable results available in the literature were also considered. Both relational and non-relational NoSQL database systems show almost linear algorithmic complexity in query execution, but with very different linear slopes, the relational slope being much steeper than that of the two NoSQL systems. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when the database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
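    As a toy illustration of the document/XML side of the comparison, the snippet below queries a miniature XML "EHR extract" the way a native XML database would evaluate a path expression. The element names are invented and do not follow the ISO/EN 13606 reference model.

    ```python
    import xml.etree.ElementTree as ET

    # Toy "EHR extract"; element names are invented and do not follow
    # the ISO/EN 13606 reference model.
    extract = ET.fromstring(
        "<extract>"
        "<entry><code>8480-6</code><value unit='mmHg'>128</value></entry>"
        "<entry><code>8462-4</code><value unit='mmHg'>83</value></entry>"
        "</extract>")

    def values_for_code(root, code):
        """Evaluate the equivalent of //entry[code=$code]/value/text()."""
        return [e.findtext("value")
                for e in root.iter("entry")
                if e.findtext("code") == code]

    systolic = values_for_code(extract, "8480-6")
    ```

    Querying the hierarchical extract in place, rather than shredding it into relational tables first, is what distinguishes the document and native XML approaches the paper benchmarks.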

  19. Database Search Strategies & Tips. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 17 articles presenting strategies and tips for searching databases online appear in this collection, which is one in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  20. CTEPP STANDARD OPERATING PROCEDURE FOR ENTERING OR IMPORTING ELECTRONIC DATA INTO THE CTEPP DATABASE (SOP-4.12)

    EPA Science Inventory

    This SOP described the method used to automatically parse analytical data generated from gas chromatography/mass spectrometry (GC/MS) analyses into CTEPP summary spreadsheets and electronically import the summary spreadsheets into the CTEPP study database.
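    The SOP's two steps, parsing analytical output into a summary table and importing it into the study database, can be sketched as follows. The column names and the sample report line are invented for illustration; CTEPP's real file layout is not reproduced here.

    ```python
    import csv
    import io
    import sqlite3

    # Invented instrument report; CTEPP's real column layout is not given here.
    raw_report = "sample_id,analyte,conc_ng_per_g\nC-101,chlorpyrifos,12.4\n"

    # Step 1: parse the analytical output into summary rows.
    summary = [(r["sample_id"], r["analyte"], float(r["conc_ng_per_g"]))
               for r in csv.DictReader(io.StringIO(raw_report))]

    # Step 2: import the summary rows into the study database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE results (sample_id TEXT, analyte TEXT, conc REAL)")
    conn.executemany("INSERT INTO results VALUES (?, ?, ?)", summary)
    row = conn.execute("SELECT conc FROM results").fetchone()
    ```

    Automating both steps, as the SOP describes, removes the transcription errors that manual re-entry of GC/MS results would introduce.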

  1. The Plant Organelles Database 3 (PODB3) update 2014: integrating electron micrographs and new options for plant organelle research.

    PubMed

    Mano, Shoji; Nakamura, Takanori; Kondo, Maki; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nagatani, Akira; Nishimura, Mikio

    2014-01-01

    The Plant Organelles Database 2 (PODB2), which was first launched in 2006 as PODB, provides static image and movie data of plant organelles, protocols for plant organelle research and external links to relevant websites. PODB2 has facilitated plant organellar research and the understanding of plant organelle dynamics. To provide comprehensive information on plant organelles in more detail, PODB2 was updated to PODB3 (http://podb.nibb.ac.jp/Organellome/). PODB3 contains two additional components: the electron micrograph database and the perceptive organelles database. Through the electron micrograph database, users can examine the subcellular and/or suborganellar structures in various organs of wild-type and mutant plants. The perceptive organelles database provides information on organelle dynamics in response to external stimuli. In addition to the extra components, the user interface for access has been enhanced in PODB3. The data in PODB3 are directly submitted by plant researchers and can be freely downloaded for use in further analysis. PODB3 contains all the information included in PODB2, and the volume of data and protocols deposited in PODB3 continue to grow steadily. We welcome contributions of data from all plant researchers to enhance the utility and comprehensiveness of PODB3.

  2. Principles of visual key construction-with a visual identification key to the Fagaceae of the southeastern United States.

    PubMed

    Kirchoff, Bruce K; Leggett, Roxanne; Her, Va; Moua, Chue; Morrison, Jessica; Poole, Chamika

    2011-01-01

    Advances in digital imaging have made possible the creation of completely visual keys. By a visual key we mean a key based primarily on images, one that contains a minimal amount of text. Characters in visual keys are visually, not verbally, defined. In this paper we create the first primarily visual key to a group of taxa, in this case the Fagaceae of the southeastern USA. We also modify our recently published set of best practices for image use in illustrated keys to make them applicable to visual keys. Photographs of the Fagaceae were obtained from internet and herbarium databases or were taken specifically for this project. The images were printed and then sorted into hierarchical groups. These hierarchical groups of images were used to create the 'couplets' in the key. A reciprocal process of key creation and testing was used to produce the final keys. Four keys were created, one for each of the parts: leaves, buds, fruits and bark. Species description pages consisting of multiple images were also created for each of the species in the key. Creation and testing of the key resulted in a modified list of best practices for image use in visual keys. The inclusion of images in paper and electronic keys has greatly increased their ease of use. However, virtually all of these keys are still based upon verbally defined, atomistic characters. The creation of primarily visual keys allows us to overcome the well-known limitations of linguistic-based characters and create keys that are much easier to use, especially for botanical novices.
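    The hierarchical groups of images behind the 'couplets' can be represented as a nested mapping, where each couplet offers image-defined choices leading to another couplet or to a taxon. The choices and taxa below are invented placeholders, not the paper's actual key.

    ```python
    # Each couplet maps an image-defined choice to another couplet or a
    # taxon; the choices and taxa here are invented placeholders.
    key = {
        "leaf margin lobed?": {
            "yes": {"acorn cup scaly?": {"yes": "Quercus alba",
                                         "no": "Quercus rubra"}},
            "no": "Fagus grandifolia",
        }
    }

    def identify(node, answers):
        """Walk the key, consuming one answer per couplet."""
        while isinstance(node, dict):
            question = next(iter(node))
            node = node[question][answers.pop(0)]
        return node

    taxon = identify(key, ["yes", "no"])
    ```

    In the paper's fully visual key, each choice would be a photograph rather than the text question used here for brevity.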

  3. Cohesive properties of (Cu,Ni)-(In,Sn) intermetallics: Database, electron-density correlations and interpretation of bonding trends

    NASA Astrophysics Data System (ADS)

    Ramos, S. B.; González Lemus, N. V.; Cabeza, G. F.; Fernández Guillermet, A.

    2016-06-01

    This paper presents a systematic and comparative study of the composition and volume dependence of the cohesive properties for a large group of Me-X intermetallic phases (IPs) with Me=Cu,Ni and X=In,Sn, which are of interest for the design of lead-free soldering (LFS) alloys. The work relies upon a database of total-energy versus volume information developed using projector augmented-wave (PAW) calculations. In previous papers by the current authors it was shown that these results account satisfactorily for the direct and indirect experimental data available. In the present work, the database is further expanded to investigate the composition dependence of the volume (V0), and the composition and volume dependence of the bulk modulus (B0) and cohesive energy (Ecoh). On these bases, an analysis is performed of the systematic effects of replacing Cu by Ni in several Me-X phases (Me=Cu,Ni and X=In,Sn) reported as stable and metastable, as well as various hypothetical compounds involved in the thermodynamic modeling of IPs using the Compound-Energy Formalism. Moreover, it is shown that the cohesion-related quantities (B0/V0)^1/2 and Ecoh^1/2/V0 can be correlated with a parameter expressing the number of valence electrons per unit volume. These findings are compared in detail with related relations involving the Miedema empirical electron density at the boundary of the Wigner-Seitz cell. In view of the co-variation of the cohesive properties, Ecoh is selected as a key property and its composition and structure dependence is examined in terms of a theoretical view of the bonding which involves the hybridization of the d-states of Cu or Ni with the s and p-states of In or Sn, for this class of compounds. In particular, a comparative analysis is performed of the DOS of various representative, iso-structural Me-X compounds. Various effects relevant to understanding the consequences of replacing Cu by Ni in LFS alloys are highlighted and explained.
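    The correlation quantities named in the abstract are straightforward to compute once B0, Ecoh and V0 are known. A small sketch with placeholder numbers, not the paper's data:

    ```python
    import math

    def correlation_quantities(B0, Ecoh, V0):
        """(B0/V0)**(1/2) and Ecoh**(1/2)/V0, as defined in the abstract."""
        return math.sqrt(B0 / V0), math.sqrt(Ecoh) / V0

    def valence_electron_density(n_valence_electrons, V0):
        """Valence electrons per unit volume, the correlating parameter."""
        return n_valence_electrons / V0

    # Placeholder values, not the paper's data.
    q1, q2 = correlation_quantities(B0=100.0, Ecoh=4.0, V0=16.0)
    n_val = valence_electron_density(68, 16.0)
    ```

    Plotting q1 and q2 against n_val over the full set of phases is what would reveal the correlation the paper reports.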

  4. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed based on advances in high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform cooperative retrieval among distributed databases. The proposed system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and applies a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval, and retrieval can be performed effectively through cooperative processing among multiple domains. A communication language and protocols are also defined in the system and are used in every communication action within it. A language interpreter in each machine translates the communication language into the internal language used by that machine; through the interpreter, internal modules such as the DBMS and user-interface modules can be selected freely. A concept of 'content-set' is also introduced: a content-set is defined as a package of mutually related contents, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed databases.
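    The 'retrieval manager' idea, presenting several distributed databases as one logical database, can be sketched as a fan-out and merge over domains. The domain contents and the tag-matching rule below are invented for illustration.

    ```python
    # Two invented "domains"; the retrieval manager fans one logical query
    # out over all of them and merges the hits.
    domains = {
        "museum-a": [{"id": 1, "tags": {"ukiyo-e", "print"}}],
        "museum-b": [{"id": 7, "tags": {"print", "etching"}},
                     {"id": 9, "tags": {"sculpture"}}],
    }

    def retrieval_manager(query_tag):
        """Present the distributed domains as one logical database."""
        hits = []
        for domain, contents in domains.items():
            hits += [(domain, c["id"]) for c in contents
                     if query_tag in c["tags"]]
        return sorted(hits)

    results = retrieval_manager("print")
    ```

    In the paper's design the per-domain queries would travel over the defined communication language and be translated by each machine's language interpreter; here the fan-out is simply a local loop.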

  5. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. The Database on Demand (DBoD) service empowers users to perform certain actions traditionally done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, presently the open community version of MySQL and single-instance Oracle database servers. This article describes the technology approach taken to face this challenge, the service-level agreement (SLA) that the project provides, and possible evolution scenarios.

  6. A tuberculosis biomarker database: the key to novel TB diagnostics.

    PubMed

    Yerlikaya, Seda; Broger, Tobias; MacLean, Emily; Pai, Madhukar; Denkinger, Claudia M

    2017-03-01

    New diagnostic innovations for tuberculosis (TB), including point-of-care solutions, are critical to reach the goals of the End TB Strategy. However, despite decades of research, numerous reports on new biomarker candidates, and significant investment, no well-performing, simple and rapid TB diagnostic test is yet available on the market, and the search for accurate, non-DNA biomarkers remains a priority. To help overcome this 'biomarker pipeline problem', FIND and partners are working on the development of a well-curated and user-friendly TB biomarker database. The web-based database will enable the dynamic tracking of evidence surrounding biomarker candidates in relation to target product profiles (TPPs) for needed TB diagnostics. It will be able to accommodate raw datasets and facilitate the verification of promising biomarker candidates and the identification of novel biomarker combinations. As such, the database will simplify data and knowledge sharing, empower collaboration, help in the coordination of efforts and allocation of resources, streamline the verification and validation of biomarker candidates, and ultimately lead to an accelerated translation into clinically useful tools. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  7. Scopus database: a review.

    PubMed

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  8. Scopus database: a review

    PubMed Central

    Burnham, Judy F

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs. PMID:16522216

  9. Mouse Genome Database: From sequence to phenotypes and disease models

    PubMed Central

    Richardson, Joel E.; Kadin, James A.; Smith, Cynthia L.; Blake, Judith A.; Bult, Carol J.

    2015-01-01

    Summary The Mouse Genome Database (MGD, www.informatics.jax.org) is the international scientific database for genetic, genomic, and biological data on the laboratory mouse to support the research requirements of the biomedical community. To accomplish this goal, MGD provides broad data coverage, serves as the authoritative standard for mouse nomenclature for genes, mutants, and strains, and curates and integrates many types of data from literature and electronic sources. Among the key data sets MGD supports are: the complete catalog of mouse genes and genome features, comparative homology data for mouse and vertebrate genes, the authoritative set of Gene Ontology (GO) annotations for mouse gene functions, a comprehensive catalog of mouse mutations and their phenotypes, and a curated compendium of mouse models of human diseases. Here, we describe the data acquisition process, specifics about MGD's key data areas, methods to access and query MGD data, and outreach and user help facilities. genesis 53:458–473, 2015. © 2015 The Authors. Genesis Published by Wiley Periodicals, Inc. PMID:26150326

  10. Computational Thermochemistry: Scale Factor Databases and Scale Factors for Vibrational Frequencies Obtained from Electronic Model Chemistries.

    PubMed

    Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G

    2010-09-14

    Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
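    The use of the universal scale-factor ratios can be illustrated as follows: given one optimized factor (here the ZPE factor) and the two ratios, the other two factors follow, and frequencies are scaled by simple multiplication. The numeric ratio values below are placeholders, not the paper's fitted values.

    ```python
    # Placeholder ratios and factor; the paper's fitted values differ.
    def derive_scale_factors(zpe_factor, ratio_harm_to_zpe, ratio_fund_to_zpe):
        """Derive harmonic and fundamental factors from the ZPE factor."""
        return zpe_factor * ratio_harm_to_zpe, zpe_factor * ratio_fund_to_zpe

    def scale_frequencies(freqs_cm1, factor):
        return [f * factor for f in freqs_cm1]

    harm_sf, fund_sf = derive_scale_factors(0.986, 1.014, 0.974)
    scaled = scale_frequencies([3100.0, 1650.0], fund_sf)
    ```

    This is exactly the economy the abstract highlights: optimizing one scale factor per model chemistry yields all three once the two universal ratios are known.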

  11. X-ray Photoelectron Spectroscopy Database (Version 4.1)

    National Institute of Standards and Technology Data Gateway

    SRD 20 X-ray Photoelectron Spectroscopy Database (Version 4.1) (Web, free access)   The NIST XPS Database gives access to energies of many photoelectron and Auger-electron spectral lines. The database contains over 22,000 line positions, chemical shifts, doublet splittings, and energy separations of photoelectron and Auger-electron lines.

  12. Simple Web-based interactive key development software (WEBiKEY) and an example key for Kuruna (Poaceae: Bambusoideae).

    PubMed

    Attigala, Lakshmi; De Silva, Nuwan I; Clark, Lynn G

    2016-04-01

    Programs that are user-friendly and freely available for developing Web-based interactive keys are scarce and most of the well-structured applications are relatively expensive. WEBiKEY was developed to enable researchers to easily develop their own Web-based interactive keys with fewer resources. A Web-based multiaccess identification tool (WEBiKEY) was developed that uses freely available Microsoft ASP.NET technologies and an SQL Server database for Windows-based hosting environments. WEBiKEY was tested for its usability with a sample data set, the temperate woody bamboo genus Kuruna (Poaceae). WEBiKEY is freely available to the public and can be used to develop Web-based interactive keys for any group of species. The interactive key we developed for Kuruna using WEBiKEY enables users to visually inspect characteristics of Kuruna and identify an unknown specimen as one of seven possible species in the genus.

  13. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Database and interactive monitoring system for the photonics and electronics of RPC Muon Trigger in CMS experiment

    NASA Astrophysics Data System (ADS)

    Wiacek, Daniel; Kudla, Ignacy M.; Pozniak, Krzysztof T.; Bunkowski, Karol

    2005-02-01

    The main tasks of the RPC (Resistive Plate Chamber) Muon Trigger monitoring system designed for the CMS (Compact Muon Solenoid) experiment (at the LHC at CERN, Geneva) are the visualization of data describing the structure of the electronic trigger system (e.g. geometry and imagery) and its processes, and the automatic generation of files with VHDL source code used for programming the FPGA matrices. In the near future, the system will enable the analysis of the condition, operation and efficiency of individual Muon Trigger elements, the registration of information about Muon Trigger devices, and the presentation of previously obtained results in an interactive presentation layer. A broad variety of database and programming concepts for the design of the Muon Trigger monitoring system are presented in this article. The structure and architecture of the system and its principle of operation are described. One of the ideas behind this system is the use of object-oriented programming and design techniques to describe real electronic systems through abstract object models stored in a database, and to implement these models in the Java language.
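    The automatic generation of VHDL from abstract device models stored in a database can be sketched as a simple template render. The model record, its field names, and the entity below are invented for illustration; the article's actual schema and templates are not reproduced here.

    ```python
    # Invented model record and template; the article's actual schema is
    # not reproduced here.
    def model_to_vhdl(model):
        """Render a stored device model into a VHDL entity declaration."""
        ports = ";\n    ".join(f"{name} : {direction} std_logic"
                               for name, direction in model["ports"])
        return (f"entity {model['name']} is\n"
                f"  port (\n    {ports}\n  );\n"
                f"end {model['name']};\n")

    trigger_board = {"name": "rpc_link_board",
                     "ports": [("clk", "in"), ("strobe", "in"), ("hit", "out")]}
    vhdl = model_to_vhdl(trigger_board)
    ```

    Keeping the models in a database and generating the HDL from them, rather than hand-editing VHDL per board, is the design choice the article describes for keeping firmware consistent with the stored system description.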

  15. Seventy Years of RN Effectiveness: A Database Development Project to Inform Best Practice.

    PubMed

    Lulat, Zainab; Blain-McLeod, Julie; Grinspun, Doris; Penney, Tasha; Harripaul-Yhap, Anastasia; Rey, Michelle

    2018-03-23

    The appropriate nursing staff mix is imperative to the provision of quality care. Nurse staffing levels and staff mix vary from country to country, as well as between care settings. Understanding how staffing skill mix impacts patient, organizational, and financial outcomes is critical in order to allow policymakers and clinicians to make evidence-informed staffing decisions. This paper reports on the methodology for creation of an electronic database of studies exploring the effectiveness of Registered Nurses (RNs) on clinical and patient outcomes, organizational and nurse outcomes, and financial outcomes. Comprehensive literature searches were conducted in four electronic databases. Inclusion criteria for the database included studies published from 1946 to 2016, peer-reviewed international literature, and studies focused on RNs in all health-care disciplines, settings, and sectors. Masters-prepared nurse researchers conducted title and abstract screening and relevance review to determine eligibility of studies for the database. High-level analysis was conducted to determine key outcomes and the frequency at which they appeared within the database. Of the initial 90,352 records, a total of 626 abstracts were included within the database. Studies were organized into three groups corresponding to clinical and patient outcomes, organizational and nurse-related outcomes, and financial outcomes. Organizational and nurse-related outcomes represented the largest category in the database with 282 studies, followed by clinical and patient outcomes with 244 studies, and lastly financial outcomes, which included 124 studies. The comprehensive database of evidence for RN effectiveness is freely available at https://rnao.ca/bpg/initiatives/RNEffectiveness. The database will serve as a resource for the Registered Nurses' Association of Ontario, as well as a tool for researchers, clinicians, and policymakers for making evidence-informed staffing decisions. © 2018 The Authors

  16. ePORT, NASA's Computer Database Program for System Safety Risk Management Oversight (Electronic Project Online Risk Tool)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul W.

    2008-01-01

    ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage a program/project's risk management processes. This presentation briefly covers standard risk management procedures, then covers NASA's risk management tool, ePORT, in depth. The electronic Project Online Risk Tool (ePORT) is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program/project's size and budget. In addition to covering the risk management paradigm thoroughly and providing standardized evaluation criteria for common management reporting, ePORT improves Product Line, Center and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.

  17. The ClinicalTrials.gov results database--update and key issues.

    PubMed

    Zarin, Deborah A; Tse, Tony; Williams, Rebecca J; Califf, Robert M; Ide, Nicholas C

    2011-03-03

    The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update of relevant policies, and show how the data can be used to gain insight into the state of clinical research. We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010. As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry record outcome measures, 61% lacked specificity in describing the metric used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25). Of these trials, 24% reported results for 90% or less of their participants. ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data.

  18. Simple Web-based interactive key development software (WEBiKEY) and an example key for Kuruna (Poaceae: Bambusoideae)

    PubMed Central

    Attigala, Lakshmi; De Silva, Nuwan I.; Clark, Lynn G.

    2016-01-01

    Premise of the study: Programs that are user-friendly and freely available for developing Web-based interactive keys are scarce and most of the well-structured applications are relatively expensive. WEBiKEY was developed to enable researchers to easily develop their own Web-based interactive keys with fewer resources. Methods and Results: A Web-based multiaccess identification tool (WEBiKEY) was developed that uses freely available Microsoft ASP.NET technologies and an SQL Server database for Windows-based hosting environments. WEBiKEY was tested for its usability with a sample data set, the temperate woody bamboo genus Kuruna (Poaceae). Conclusions: WEBiKEY is freely available to the public and can be used to develop Web-based interactive keys for any group of species. The interactive key we developed for Kuruna using WEBiKEY enables users to visually inspect characteristics of Kuruna and identify an unknown specimen as one of seven possible species in the genus. PMID:27144109

  19. Relativistic quantum private database queries

    NASA Astrophysics Data System (ADS)

    Sun, Si-Jia; Yang, Yu-Guang; Zhang, Ming-Ou

    2015-04-01

    Recently, Jakobi et al. (Phys Rev A 83, 022301, 2011) suggested the first practical private database query protocol (J-protocol) based on the Scarani et al. (Phys Rev Lett 92, 057901, 2004) quantum key distribution protocol. Unfortunately, the J-protocol is just a cheat-sensitive private database query protocol. In this paper, we present an idealized relativistic quantum private database query protocol based on Minkowski causality and the properties of quantum information. Also, we prove that the protocol is secure in terms of the user security and the database security.

  20. Utilisation and Impact of the Essential Electronic Agricultural Database (TEEAL) on Library Services in a Nigerian University of Agriculture

    ERIC Educational Resources Information Center

    Oduwole, A. A.; Sowole, A. O.

    2006-01-01

    Purpose: This study examined the utilisation of the Essential Electronic Agricultural Library database (TEEAL) at the University of Agriculture Library, Abeokuta, Nigeria. Design/methodology/approach: Data collection was by questionnaire following a purposive sampling technique. A total of 104 out of 150 (69.3 per cent) responses were received and…

  1. Concierge: Personal Database Software for Managing Digital Research Resources

    PubMed Central

    Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro

    2007-01-01

    This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800

  2. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  3. Implementing and maintaining a researchable database from electronic medical records: a perspective from an academic family medicine department.

    PubMed

    Stewart, Moira; Thind, Amardeep; Terry, Amanda L; Chevendra, Vijaya; Marshall, J Neil

    2009-11-01

    Electronic medical records (EMRs) are posited as a tool for improving practice, policy and research in primary healthcare. This paper describes the Deliver Primary Healthcare Information (DELPHI) Project at the Department of Family Medicine at the University of Western Ontario, focusing on its development, current status and research potential in order to share experiences with researchers in similar contexts. The project progressed through four stages: (a) participant recruitment, (b) EMR software modification and implementation, (c) database creation and (d) data quality assessment. Currently, the DELPHI database holds more than two years of high-quality, de-identified data from 10 practices, with 30,000 patients and nearly a quarter of a million encounters.

  4. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

    The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.

  5. Annual Review of Database Developments: 1993.

    ERIC Educational Resources Information Center

    Basch, Reva

    1993-01-01

    Reviews developments in the database industry for 1993. Topics addressed include scientific and technical information; environmental issues; social sciences; legal information; business and marketing; news services; documentation; databases and document delivery; electronic bulletin boards and the Internet; and information industry organizational…

  6. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and the accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  7. The ClinicalTrials.gov Results Database — Update and Key Issues

    PubMed Central

    Zarin, Deborah A.; Tse, Tony; Williams, Rebecca J.; Califf, Robert M.; Ide, Nicholas C.

    2011-01-01

    BACKGROUND The ClinicalTrials.gov trial registry was expanded in 2008 to include a database for reporting summary results. We summarize the structure and contents of the results database, provide an update of relevant policies, and show how the data can be used to gain insight into the state of clinical research. METHODS We analyzed ClinicalTrials.gov data that were publicly available between September 2009 and September 2010. RESULTS As of September 27, 2010, ClinicalTrials.gov received approximately 330 new and 2000 revised registrations each week, along with 30 new and 80 revised results submissions. We characterized the 79,413 registry records and 2178 results records available as of September 2010. From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting. Of results records available publicly, 20% reported more than two primary outcome measures and 5% reported more than five. Of a sample of 100 registry record outcome measures, 61% lacked specificity in describing the metric used in the planned analysis. In a sample of 700 results records, the mean number of different analysis populations per study group was 2.5 (median, 1; range, 1 to 25). Of these trials, 24% reported results for 90% or less of their participants. CONCLUSIONS ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data. PMID:21366476

  8. Developing a Nursing Database System in Kenya

    PubMed Central

    Riley, Patricia L; Vindigni, Stephen M; Arudo, John; Waudo, Agnes N; Kamenju, Andrew; Ngoya, Japheth; Oywer, Elizabeth O; Rakuom, Chris P; Salmon, Marla E; Kelley, Maureen; Rogers, Martha; St Louis, Michael E; Marum, Lawrence H

    2007-01-01

    Objective To describe the development, initial findings, and implications of a national nursing workforce database system in Kenya. Principal Findings Creating a national electronic nursing workforce database provides more reliable information on nurse demographics, migration patterns, and workforce capacity. Data analyses are most useful for human resources for health (HRH) planning when workforce capacity data can be linked to worksite staffing requirements. As a result of establishing this database, the Kenya Ministry of Health has improved capability to assess its nursing workforce and document important workforce trends, such as out-migration. Current data identify the United States as the leading recipient country of Kenyan nurses. The overwhelming majority of Kenyan nurses who elect to out-migrate are among Kenya's most qualified. Conclusions The Kenya nursing database is a first step toward facilitating evidence-based decision making in HRH. This database is unique to developing countries in sub-Saharan Africa. Establishing an electronic workforce database requires long-term investment and sustained support by national and global stakeholders. PMID:17489921

  9. Database resources for the Tuberculosis community

    PubMed Central

    Lew, Jocelyne M.; Mao, Chunhong; Shukla, Maulik; Warren, Andrew; Will, Rebecca; Kuznetsov, Dmitry; Xenarios, Ioannis; Robertson, Brian D.; Gordon, Stephen V.; Schnappinger, Dirk; Cole, Stewart T.; Sobral, Bruno

    2013-01-01

    Summary Access to online repositories for genomic and associated “-omics” datasets is now an essential part of everyday research activity. It is important therefore that the Tuberculosis community is aware of the databases and tools available to them online, as well as for the database hosts to know what the needs of the research community are. One of the goals of the Tuberculosis Annotation Jamboree, held in Washington DC on March 7th–8th 2012, was therefore to provide an overview of the current status of three key Tuberculosis resources, TubercuList (tuberculist.epfl.ch), TB Database (www.tbdb.org), and Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org). Here we summarize some key updates and upcoming features in TubercuList, and provide an overview of the PATRIC site and its online tools for pathogen RNA-Seq analysis. PMID:23332401

  10. Governance and oversight of researcher access to electronic health data: the role of the Independent Scientific Advisory Committee for MHRA database research, 2006-2015.

    PubMed

    Waller, P; Cassell, J A; Saunders, M H; Stevens, R

    2017-03-01

    In order to promote understanding of UK governance and assurance relating to electronic health records research, we present and discuss the role of the Independent Scientific Advisory Committee (ISAC) for MHRA database research in evaluating protocols proposing the use of the Clinical Practice Research Datalink. We describe the development of the Committee's activities between 2006 and 2015, alongside growth in data linkage and wider national electronic health records programmes, including the application and assessment processes, and our approach to undertaking this work. Our model can provide independence, challenge and support to data providers such as the Clinical Practice Research Datalink database which has been used for well over 1,000 medical research projects. ISAC's role in scientific oversight ensures feasible and scientifically acceptable plans are in place, while having both lay and professional membership addresses governance issues in order to protect the integrity of the database and ensure that public confidence is maintained.

  11. Development of the Global Earthquake Model’s neotectonic fault database

    USGS Publications Warehouse

    Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert

    2015-01-01

    The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets and tools for worldwide seismic risk assessment through global collaboration, transparent communication and adapting state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM’s global hazard module projects. This paper describes GFE’s development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing the observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other to calculate potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world’s approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose the database structure to be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.
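
    The two-layer design described above (raw observations in one layer, derived earthquake sources in the other) can be sketched schematically. All field names here are hypothetical, and the scaling law is one published example, not necessarily the one GFE uses.

```python
# Layer 1 records what a geologist observes, tolerating sparse data;
# layer 2 derives an earthquake fault source only when the inputs exist.
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaultObservation:          # layer 1: field observations
    fault_name: str
    slip_rate_mm_per_yr: Optional[float] = None   # may be missing
    dip_deg: Optional[float] = None
    length_km: Optional[float] = None

@dataclass
class FaultSource:               # layer 2: derived earthquake source
    fault_name: str
    magnitude: float

def derive_source(obs: FaultObservation) -> Optional[FaultSource]:
    """Derive a source only when the needed observation is present.
    Uses the Wells & Coppersmith (1994) surface-rupture-length relation
    M = 5.08 + 1.16 * log10(L) as an example scaling law."""
    if obs.length_km is None:
        return None
    m = 5.08 + 1.16 * math.log10(obs.length_km)
    return FaultSource(obs.fault_name, round(m, 2))

full = FaultObservation("Alpine Fault", slip_rate_mm_per_yr=27.0, length_km=600.0)
sparse = FaultObservation("Unnamed scarp")   # sparse record still fits layer 1
print(derive_source(full))    # magnitude ~8.3
print(derive_source(sparse))  # None: not enough data to derive a source
```

    Separating the layers lets sparse observations be captured now and upgraded to earthquake sources later, without forcing every record through the same completeness requirements.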

  12. Library Instruction and Online Database Searching.

    ERIC Educational Resources Information Center

    Mercado, Heidi

    1999-01-01

    Reviews changes in online database searching in academic libraries. Topics include librarians conducting all searches; the advent of end-user searching and the need for user instruction; compact disk technology; online public catalogs; the Internet; full text databases; electronic information literacy; user education and the remote library user;…

  13. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  14. Database resources for the tuberculosis community.

    PubMed

    Lew, Jocelyne M; Mao, Chunhong; Shukla, Maulik; Warren, Andrew; Will, Rebecca; Kuznetsov, Dmitry; Xenarios, Ioannis; Robertson, Brian D; Gordon, Stephen V; Schnappinger, Dirk; Cole, Stewart T; Sobral, Bruno

    2013-01-01

    Access to online repositories for genomic and associated "-omics" datasets is now an essential part of everyday research activity. It is important therefore that the Tuberculosis community is aware of the databases and tools available to them online, as well as for the database hosts to know what the needs of the research community are. One of the goals of the Tuberculosis Annotation Jamboree, held in Washington DC on March 7th-8th 2012, was therefore to provide an overview of the current status of three key Tuberculosis resources, TubercuList (tuberculist.epfl.ch), TB Database (www.tbdb.org), and Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org). Here we summarize some key updates and upcoming features in TubercuList, and provide an overview of the PATRIC site and its online tools for pathogen RNA-Seq analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Normative Databases for Imaging Instrumentation.

    PubMed

    Realini, Tony; Zangwill, Linda M; Flanagan, John G; Garway-Heath, David; Patella, Vincent M; Johnson, Chris A; Artes, Paul H; Gaddie, Ian B; Fingeret, Murray

    2015-08-01

    To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer's database differs in size, eligibility criteria, and ethnic make-up, among other key features. The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments.

  16. Apparatus, system, and method for synchronizing a timer key

    DOEpatents

    Condit, Reston A; Daniels, Michael A; Clemens, Gregory P; Tomberlin, Eric S; Johnson, Joel A

    2014-04-22

    A timer key relating to monitoring a countdown time of a countdown routine of an electronic device is disclosed. The timer key comprises a processor configured to respond to a countdown time associated with operation of the electronic device, a display operably coupled with the processor, and a housing configured to house at least the processor. The housing has an associated structure configured to engage with the electronic device to share the countdown time between the electronic device and the timer key. The processor is configured to begin a countdown routine based at least in part on the countdown time, wherein the countdown routine is at least substantially synchronized with a countdown routine of the electronic device when the timer key is removed from the electronic device. A system and method for synchronizing countdown routines of a timer key and an electronic device are also disclosed.

  17. Heterogeneous database integration in biomedicine.

    PubMed

    Sujansky, W

    2001-08-01

    The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.
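
    A minimal mediator sketch makes the abstract's techniques concrete: a uniform query is answered over two sources with different data models and local terminologies via a shared terminology mapping. All source schemas, codes and names below are illustrative assumptions, not from the paper.

```python
# Source A: relational-style rows using a local code "MI".
source_a = [
    {"pid": 1, "dx_code": "MI"},
    {"pid": 2, "dx_code": "DM2"},
]
# Source B: document-style records using ICD-9 codes.
source_b = [
    {"patient": {"id": 7, "diagnoses": ["410.9"]}},
]

# Terminology mapping: local codes -> shared concept (ontology term).
concept_map = {
    "MI": "myocardial_infarction",
    "410.9": "myocardial_infarction",
    "DM2": "type_2_diabetes",
}

def query_concept(concept):
    """Uniform query against the mediated schema: patient ids having a
    diagnosis that maps to `concept`, aggregated across both sources."""
    hits = [r["pid"] for r in source_a
            if concept_map.get(r["dx_code"]) == concept]
    hits += [r["patient"]["id"] for r in source_b
             if any(concept_map.get(c) == concept
                    for c in r["patient"]["diagnoses"])]
    return sorted(hits)

print(query_concept("myocardial_infarction"))  # [1, 7]
```

    Real systems push the translated sub-queries down to each source rather than scanning them in memory, but the division of labor (schema translation, terminology mapping, result aggregation) is the same.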

  18. Chemical thermodynamic data. 1. The concept of links to the chemical elements and the historical development of key thermodynamic data [plus Supplementary Electronic Annex 2

    DOE PAGES

    Wolery, Thomas J.; Jove Colon, Carlos F.

    2016-09-26

    Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of “links” to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare “key” or “reference” datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). The use of…
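
    The notion of a link through a path of reactions can be illustrated with a short worked example (the numerical values are common literature values, not taken from this paper): the standard Gibbs energy of formation of aqueous CO2 is linked to the elements through the gas-phase formation reaction plus a dissolution reaction.

```latex
% Link to the elements for CO2(aq): form the gas, then dissolve it.
\mathrm{C(graphite)} + \mathrm{O_2(g)} \longrightarrow \mathrm{CO_2(g)},
  \qquad \Delta_f G^\circ = -394.4~\mathrm{kJ\,mol^{-1}}
\\[4pt]
\mathrm{CO_2(g)} \longrightarrow \mathrm{CO_2(aq)},
  \qquad \Delta_r G^\circ = +8.4~\mathrm{kJ\,mol^{-1}}
\\[4pt]
\Delta_f G^\circ\bigl(\mathrm{CO_2,aq}\bigr)
  = -394.4 + 8.4 = -386.0~\mathrm{kJ\,mol^{-1}}
```

    Here the dissolution Gibbs energy is itself derived from a measured Henry's law constant, so the link for CO2(aq) bundles a gas-phase key datum with a solubility measurement; an error in any datum along the path propagates to the final value, which is why the paper argues the full path must be documented.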

  19. Chemical thermodynamic data. 1. The concept of links to the chemical elements and the historical development of key thermodynamic data [plus Supplementary Electronic Annex 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolery, Thomas J.; Jove Colon, Carlos F.

    Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of “links” to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare “key” or “reference” datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). The use of…

  20. Reliability of the Defense Commissary Agency Personal Property Database.

    DTIC Science & Technology

    2000-02-18

    Departments’ personal property databases. The tests were designed to validate the personal property databases. This report is the second in a series of... Some of the Military Departments had problems with the completeness of their data, and key data elements were not reliable for estimating the historical costs of real property for the Military... values of greater than $100,000.

  1. Performance related issues in distributed database systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year-long effort of this project were to: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC towards building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.

  2. Clinical Databases for Chest Physicians.

    PubMed

    Courtwright, Andrew M; Gabriel, Peter E

    2018-04-01

    A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health conditions or exposures. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  3. Towards communication-efficient quantum oblivious key distribution

    NASA Astrophysics Data System (ADS)

    Panduranga Rao, M. V.; Jakobi, M.

    2013-01-01

    Symmetrically private information retrieval, a fundamental problem in the field of secure multiparty computation, is defined as follows: A database D of N bits held by Bob is queried by a user Alice who is interested in the bit Db in such a way that (1) Alice learns Db and only Db and (2) Bob does not learn anything about Alice's choice b. While solutions to this problem in the classical domain rely largely on unproven computational complexity theoretic assumptions, it is also known that perfect solutions that guarantee both database and user privacy are impossible in the quantum domain. Jakobi et al. [Phys. Rev. A 83, 022301 (2011)] proposed a protocol for oblivious transfer using well-known quantum key distribution (QKD) techniques to establish an oblivious key to solve this problem. Their solution provided a good degree of database and user privacy (using physical principles like the impossibility of perfectly distinguishing nonorthogonal quantum states and the impossibility of superluminal communication) while being loss-resistant and implementable with commercial QKD devices (due to the use of the Scarani-Acin-Ribordy-Gisin 2004 protocol). However, their quantum oblivious key distribution (QOKD) protocol requires a communication complexity of O(N log N). Since modern databases can be extremely large, it is important to reduce this communication as much as possible. In this paper, we first suggest a modification of their protocol wherein the number of qubits that need to be exchanged is reduced to O(N). A subsequent generalization reduces the quantum communication complexity even further in such a way that only a few hundred qubits need to be transferred even for very large databases.
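
    The classical post-processing that makes an oblivious key usable for a private query can be illustrated with a toy sketch (this covers only the classical step with hypothetical values; the quantum distribution of the key is the subject of the paper). Bob XOR-encrypts his database with a shifted version of the key he fully knows; Alice, who knows exactly one key bit, announces the shift that aligns her known bit with the record she wants.

```python
import secrets

def encrypt_with_shifted_key(db_bits, key_bits, shift):
    """Bob encrypts each database bit with the key bit `shift` positions ahead."""
    n = len(db_bits)
    return [db_bits[i] ^ key_bits[(i + shift) % n] for i in range(n)]

# Toy setup: Bob knows the whole key; Alice knows only key[j].
n = 16
db = [secrets.randbits(1) for _ in range(n)]
key = [secrets.randbits(1) for _ in range(n)]
j = 5                  # index of the key bit Alice happens to know
b = 11                 # index of the database bit Alice wants
shift = (j - b) % n    # Alice announces this shift; it reveals nothing about b alone
cipher = encrypt_with_shifted_key(db, key, shift)

# cipher[b] = db[b] XOR key[j], so Alice can decrypt only her chosen bit.
recovered = cipher[b] ^ key[j]
assert recovered == db[b]
```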

  4. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated high validity using at least one case definition (PPV >80%). Ten studies used manual validation as the reference standard; each had at least one case definition with a PPV between 63% and 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best-performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining
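
    The validation statistics the review extracts all come from a standard 2×2 table comparing the database case definition against the reference standard. A minimal sketch (the counts below are made up for illustration, not taken from any of the thirteen studies):

```python
def validation_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 validation table.

    tp/fp: database cases confirmed / refuted by the reference standard
    fn/tn: database non-cases that were / were not true cases
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 100 database-flagged cases, 90 confirmed on manual review.
stats = validation_stats(tp=90, fp=10, fn=20, tn=880)
# ppv = 90/100 = 0.9, so this case definition would count as high validity (>80%)
```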

  5. HLLV avionics requirements study and electronic filing system database development

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This final report provides a summary of achievements and activities performed under Contract NAS8-39215. The contract's objective was to explore a new way of delivering, storing, accessing, and archiving study products and information and to define top level system requirements for Heavy Lift Launch Vehicle (HLLV) avionics that incorporate Vehicle Health Management (VHM). This report includes technical objectives, methods, assumptions, recommendations, sample data, and issues as specified by DPD No. 772, DR-3. The report is organized into two major subsections, one specific to each of the two tasks defined in the Statement of Work: the Index Database Task and the HLLV Avionics Requirements Task. The Index Database Task resulted in the selection and modification of a commercial database software tool to contain the data developed during the HLLV Avionics Requirements Task. All summary information is addressed within each task's section.

  6. Normative Databases for Imaging Instrumentation

    PubMed Central

    Realini, Tony; Zangwill, Linda; Flanagan, John; Garway-Heath, David; Patella, Vincent Michael; Johnson, Chris; Artes, Paul; Ben Gaddie, I.; Fingeret, Murray

    2015-01-01

    Purpose To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. Methods A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Results Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer’s database differs in size, eligibility criteria, and ethnic make-up, among other key features. Conclusions The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments. PMID:25265003

  7. Tag Content Access Control with Identity-based Key Exchange

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Rong, Chunming

    2010-09-01

    Radio Frequency Identification (RFID) technology, which is used to identify objects and users, has recently been applied to many applications such as retail and supply chain management. How to prevent tag content from unauthorized readout is a core RFID privacy problem. The hash-lock access control protocol can make a tag release its content only to a reader that knows the secret key shared between them. However, in order to get the shared secret key required by this protocol, the reader needs to communicate with a back-end database. In this paper, we propose to use an identity-based secret key exchange approach to generate the secret key required for the hash-lock access control protocol. With this approach, not only is the back-end database connection no longer needed, but the tag cloning problem can also be eliminated.
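
    The hash-lock mechanism the abstract builds on can be sketched in a few lines (a minimal illustration using SHA-256; real tags use lightweight hash functions, and the names here are hypothetical). The tag stores only the hash of the key (the "metaID") and unlocks when a reader presents the matching key:

```python
import hashlib
import secrets

def h(data: bytes) -> bytes:
    """Stand-in for the tag's one-way hash function."""
    return hashlib.sha256(data).digest()

# Provisioning: the tag stores only metaID = h(key); the shared key
# itself is normally held by a back-end database on the reader's side.
key = secrets.token_bytes(16)
meta_id = h(key)

def tag_unlock(presented_key: bytes, stored_meta_id: bytes) -> bool:
    # The tag releases its content only if the presented key hashes
    # to the stored metaID.
    return h(presented_key) == stored_meta_id

assert tag_unlock(key, meta_id)                       # legitimate reader
assert not tag_unlock(secrets.token_bytes(16), meta_id)  # wrong key
```

The paper's contribution is about how the reader obtains `key` in the first place: via an identity-based key exchange with the tag rather than a lookup in a back-end database.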

  8. Electronic Publishing.

    ERIC Educational Resources Information Center

    Lancaster, F. W.

    1989-01-01

    Describes various stages involved in the applications of electronic media to the publishing industry. Highlights include computer typesetting, or photocomposition; machine-readable databases; the distribution of publications in electronic form; computer conferencing and electronic mail; collaborative authorship; hypertext; hypermedia publications;…

  9. NoSQL technologies for the CMS Conditions Database

    NASA Astrophysics Data System (ADS)

    Sipos, Roland

    2015-12-01

    With the restart of the LHC in 2015, the growth of the CMS Conditions dataset will continue; the need for consistent and highly available access to the Conditions is therefore a strong reason to revisit different aspects of the current data storage solutions. We present a study of alternative data storage backends for the Conditions Databases, evaluating some of the most popular NoSQL databases to support a key-value representation of the CMS Conditions. The definition of the database infrastructure is based on the need to store the conditions as BLOBs. Because of this, each condition can reach a size that may require special treatment (splitting) in these NoSQL databases. As big binary objects may be problematic in several database systems, and also to give an accurate baseline, a testing framework extension was implemented to measure how these databases handle arbitrary binary data. Based on the evaluation, prototypes of a document store, a column-oriented store, and a plain key-value store were deployed. An adaptation layer to access the backends in the CMS Offline software was developed to provide transparent support for these NoSQL databases in the CMS context. Additional data modelling approaches and considerations in the software layer, as well as deployment and automation of the databases, are also covered in the research. In this paper we present the results of the evaluation as well as a performance comparison of the prototypes studied.
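
    The splitting of oversized BLOBs across key-value entries that the study mentions can be sketched as a thin chunking layer over any key-value store (a toy sketch; the key naming and metadata convention here are hypothetical, not CMS's actual scheme):

```python
def split_blob(key: str, blob: bytes, chunk_size: int) -> dict:
    """Split a large BLOB into chunk-sized key-value pairs plus a count entry."""
    n_chunks = (len(blob) + chunk_size - 1) // chunk_size
    entries = {f"{key}:{i}": blob[i * chunk_size:(i + 1) * chunk_size]
               for i in range(n_chunks)}
    entries[f"{key}:meta"] = str(n_chunks).encode()  # how many chunks to reassemble
    return entries

def join_blob(key: str, store: dict) -> bytes:
    """Reassemble the original BLOB from its chunks."""
    n_chunks = int(store[f"{key}:meta"])
    return b"".join(store[f"{key}:{i}"] for i in range(n_chunks))

payload = bytes(range(256)) * 100          # a stand-in 25.6 kB "condition" BLOB
store = split_blob("cond42", payload, chunk_size=4096)
assert join_blob("cond42", store) == payload
```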

  10. Building a recruitment database for asthma trials: a conceptual framework for the creation of the UK Database of Asthma Research Volunteers.

    PubMed

    Nwaru, Bright I; Soyiri, Ireneous N; Simpson, Colin R; Griffiths, Chris; Sheikh, Aziz

    2016-05-26

    Randomised clinical trials are the 'gold standard' for evaluating the effectiveness of healthcare interventions. However, successful recruitment of participants remains a key challenge for many trialists. In this paper, we present a conceptual framework for creating a digital, population-based database for the recruitment of asthma patients into future asthma trials in the UK. Having set up the database, the goal is to then make it available to support investigators planning asthma clinical trials. The UK Database of Asthma Research Volunteers will comprise a web-based front-end that interactively allows participant registration, and a back-end that houses the database containing participants' key relevant data. The database will be hosted and maintained at a secure server at the Asthma UK Centre for Applied Research based at The University of Edinburgh. Using a range of invitation strategies, key demographic and clinical data will be collected from those pre-consenting to consider participation in clinical trials. These data will, with consent, in due course, be linkable to other healthcare, social, economic, and genetic datasets. To use the database, asthma investigators will send their eligibility criteria for participant recruitment; eligible participants will then be informed about the new trial and asked if they wish to participate. A steering committee will oversee the running of the database, including approval of usage access. Novel communication strategies will be utilised to engage participants who are recruited into the database in order to avoid attrition as a result of waiting time to participation in a suitable trial, and to minimise the risk of their being approached when already enrolled in a trial. The value of this database will be whether it proves useful and usable to researchers in facilitating recruitment into clinical trials on asthma and whether patient privacy and data security are protected in meeting this aim. 
Successful recruitment is

  11. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and then provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRL's) for PEM applications. PEM applications contact the server to obtain/store certificates and CRL's. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a.' The X.500 DAP library that is included with the ELROS distribution has to be linked in also, since the server uses the DAP library functions to communicate with directory servers.

  12. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Ber, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.

  13. Experience in running relational databases on clustered storage

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Potocky, Miroslav

    2015-12-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services, such as Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible evolutionary trend that storage for databases could follow.

  14. The Camden & Islington Research Database: Using electronic mental health records for research.

    PubMed

    Werbeloff, Nomi; Osborn, David P J; Patel, Rashmi; Taylor, Matthew; Stewart, Robert; Broadbent, Matthew; Hayes, Joseph F

    2018-01-01

    Electronic health records (EHRs) are widely used in mental health services. Case registers using EHRs from secondary mental healthcare have the potential to deliver large-scale projects evaluating mental health outcomes in real-world clinical populations. We describe the Camden and Islington NHS Foundation Trust (C&I) Research Database which uses the Clinical Record Interactive Search (CRIS) tool to extract and de-identify routinely collected clinical information from a large UK provider of secondary mental healthcare, and demonstrate its capabilities to answer a clinical research question regarding time to diagnosis and treatment of bipolar disorder. The C&I Research Database contains records from 108,168 mental health patients, of which 23,538 were receiving active care. The characteristics of the patient population are compared to those of the catchment area, of London, and of England as a whole. The median time to diagnosis of bipolar disorder was 76 days (interquartile range: 17-391) and median time to treatment was 37 days (interquartile range: 5-194). Compulsory admission under the UK Mental Health Act was associated with shorter intervals to diagnosis and treatment. Prior diagnoses of other psychiatric disorders were associated with longer intervals to diagnosis, though prior diagnoses of schizophrenia and related disorders were associated with decreased time to treatment. The CRIS tool, developed by the South London and Maudsley NHS Foundation Trust (SLaM) Biomedical Research Centre (BRC), functioned very well at C&I. It is reassuring that data from different organizations deliver similar results, and that applications developed in one Trust can then be successfully deployed in another. The information can be retrieved in a quicker and more efficient fashion than more traditional methods of health research. The findings support the secondary use of EHRs for large-scale mental health research in naturalistic samples and settings investigated across large

  15. Key concepts relevant to quality of complex and shared decision-making in health care: a literature review.

    PubMed

    Dy, Sydney M; Purnell, Tanjala S

    2012-02-01

    High-quality provider-patient decision-making is key to quality care for complex conditions. We performed an analysis of key elements relevant to quality and complex, shared medical decision-making. Based on a search of electronic databases, including Medline and the Cochrane Library, as well as relevant articles' reference lists, reviews of tools, and annotated bibliographies, we developed a list of key concepts and applied them to a decision-making example. Key concepts identified included provider competence, trustworthiness, and cultural competence; communication with patients and families; information quality; patient/surrogate competence; and roles and involvement. We applied this concept list to a case example, shared decision-making for live donor kidney transplantation, and identified the likely most important concepts as provider and cultural competence, information quality, and communication with patients and families. This concept list may be useful for conceptualizing the quality of complex shared decision-making and in guiding research in this area. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Database Security: What Students Need to Know

    ERIC Educational Resources Information Center

    Murray, Meg Coffin

    2010-01-01

    Database security is a growing concern evidenced by an increase in the number of reported incidents of loss of or unauthorized exposure to sensitive data. As the amount of data collected, retained and shared electronically expands, so does the need to understand database security. The Defense Information Systems Agency of the US Department of…

  17. Databases and Electronic Resources - Betty Petersen Memorial Library

    Science.gov Websites

    Lists of NOAA-wide and open-access databases are provided on the NOAA Central Library website. The Open Science Directory contains collections of Open Access Journals (e.g. the Directory of Open Access Journals) and journals in special programs (e.g. Hinari).

  18. Transterm—extended search facilities and improved integration with other databases

    PubMed Central

    Jacobs, Grant H.; Stockwell, Peter A.; Tate, Warren P.; Brown, Chris M.

    2006-01-01

    Transterm has now been publicly available for >10 years. Major changes have been made since its last description in this database issue in 2002. The current database provides data for key regions of mRNA sequences, a curated database of mRNA motifs, and tools to allow users to investigate their own motifs or mRNA sequences. The key mRNA regions database is derived computationally from GenBank. It contains 3′ and 5′ flanking regions, the initiation and termination signal context, and the coding sequence for annotated CDS features from GenBank and RefSeq. The database is non-redundant, enabling summary files and statistics to be prepared for each species. Advances include extended search facilities (the database may now be searched by BLAST in addition to regular expressions (patterns), allowing users to search for motifs such as known miRNA sequences) and the inclusion of RefSeq data. The database contains >40 motifs or structural patterns important for translational control. In this release, patterns from UTRsite and Rfam are also incorporated, with cross-referencing. Users may search their sequence data with Transterm or user-defined patterns. The system is accessible at . PMID:16381889
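
    Regular-expression motif searching of the kind Transterm offers can be sketched in a few lines (the patterns and the sequence below are illustrative stand-ins, not entries from the actual motif database):

```python
import re

# Illustrative motif patterns (hypothetical examples, DNA alphabet):
motifs = {
    "initiation context": r"(G|A)CCATGG",   # Kozak-like context around ATG
    "polyadenylation signal": r"AATAAA",
}

mrna = "GGCTTAGCCATGGTTACGTAGCAATAAAGCTT"

# Map each motif name to the 0-based positions where it occurs.
hits = {name: [m.start() for m in re.finditer(pattern, mrna)]
        for name, pattern in motifs.items()}
# → {'initiation context': [6], 'polyadenylation signal': [22]}
```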

  19. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Masuyama, Keiichi

    CD-ROM has rapidly evolved as a new information medium with large capacity. In the U.S. it is predicted that it will become a two-hundred-billion-yen market within three years, and CD-ROM is thus a strategic target of the database industry. Here in Japan, the movement toward its commercialization has been active since this year. Will the CD-ROM business ever conquer the information market as an on-disk database or electronic publication? Referring to some cases of its application in the U.S., the author views the marketability and future trend of this new optical disk medium.

  20. System of end-to-end symmetric database encryption

    NASA Astrophysics Data System (ADS)

    Galushka, V. V.; Aydinyan, A. R.; Tsvetkova, O. L.; Fathi, V. A.; Fathi, D. V.

    2018-05-01

    The article is devoted to the actual problem of protecting databases from information leakage performed while bypassing access control mechanisms. To solve this problem, it is proposed to use end-to-end data encryption, implemented at the end nodes of an interaction of the information system components using one of the symmetric cryptographic algorithms. For this purpose, a key management method designed for use in a multi-user system has been developed and described, based on a distributed key representation model in which part of the key is stored in the database and the other part is obtained by converting the user's password. In this case, the key is calculated immediately before the cryptographic transformations and is not stored in memory after the completion of these transformations. Algorithms for registering and authorizing a user, as well as changing a password, have been described, and methods for calculating the parts of a key when performing these operations have been provided.
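
    One plausible realization of the described key scheme is an XOR secret sharing combined with a password-based KDF (a sketch under stated assumptions; the paper's exact algorithms are not reproduced here). One share lives in the database, the other is derived from the password, the combined key is computed only at the moment of use, and a password change re-encodes the stored share so the data key never changes:

```python
import hashlib
import secrets

def derive_db_key(stored_share: bytes, password: str, salt: bytes) -> bytes:
    """Combine the share kept in the database with a share derived from the
    user's password; computed immediately before encryption, never persisted."""
    pw_share = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return bytes(a ^ b for a, b in zip(stored_share, pw_share))

salt = secrets.token_bytes(16)
stored_share = secrets.token_bytes(32)   # this half lives in the database
key = derive_db_key(stored_share, "correct horse battery staple", salt)

# Password change: re-encode the stored share so the combined key is unchanged
# and no data has to be re-encrypted.
new_pw_share = hashlib.pbkdf2_hmac("sha256", b"new password", salt, 200_000)
new_stored_share = bytes(a ^ b for a, b in zip(key, new_pw_share))
assert derive_db_key(new_stored_share, "new password", salt) == key
```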

  1. Newspapers and Electronic Databases: Present Technology.

    ERIC Educational Resources Information Center

    Newcombe, Barbara; Trivedi, Harish

    1984-01-01

    Discusses technology used to preserve, control, index, and retrieve information in newspapers, highlighting ways to record analyses of news stories, storage/indexing systems based on computers, information as salable commodity, preparation of news for electronic storage, answering in-house queries, questions of copyright and invasion of privacy,…

  2. Food Composition Database Format and Structure: A User Focused Approach

    PubMed Central

    Clancy, Annabel K.; Woods, Kaitlyn; McMahon, Anne; Probst, Yasmine

    2015-01-01

    This study aimed to investigate the needs of Australian food composition database users regarding database format, and to relate these needs to the format of databases available globally. Three semi-structured synchronous online focus groups (M = 3, F = 11) and n = 6 female key informant interviews were recorded. Beliefs surrounding the use, training, understanding, benefits and limitations of food composition data and databases were explored. Verbatim transcriptions underwent preliminary coding followed by thematic analysis with NVivo qualitative analysis software to extract the final themes. Schematic analysis was applied to the final themes related to database format. Desktop analysis also examined the format of six key globally available databases. Twenty-four dominant themes were established, of which five related to format: database use, food classification, framework, accessibility and availability, and data derivation. Desktop analysis revealed that food classification systems varied considerably between databases. Microsoft Excel was a common file format used in all databases, and available software varied between countries. Users also recognised that food composition database formats should ideally be designed specifically for the intended use, have a user-friendly food classification system, incorporate accurate data with clear explanation of data derivation, and feature user input. However, such databases are limited by data availability and resources. Further exploration of data-sharing options should be considered. Furthermore, users' understanding of the limitations of food composition data and databases is inherent to the correct application of non-specific databases. Therefore, further exploration of user FCDB training should also be considered. PMID:26554836

  3. PlantTFDB: a comprehensive plant transcription factor database

    PubMed Central

    Guo, An-Yuan; Chen, Xin; Gao, Ge; Zhang, He; Zhu, Qi-Hui; Liu, Xiao-Chuan; Zhong, Ying-Fu; Gu, Xiaocheng; He, Kun; Luo, Jingchu

    2008-01-01

    Transcription factors (TFs) play key roles in controlling gene expression. Systematic identification and annotation of TFs, followed by construction of TF databases may serve as useful resources for studying the function and evolution of transcription factors. We developed a comprehensive plant transcription factor database PlantTFDB (http://planttfdb.cbi.pku.edu.cn), which contains 26 402 TFs predicted from 22 species, including five model organisms with available whole genome sequence and 17 plants with available EST sequences. To provide comprehensive information for those putative TFs, we made extensive annotation at both family and gene levels. A brief introduction and key references were presented for each family. Functional domain information and cross-references to various well-known public databases were available for each identified TF. In addition, we predicted putative orthologs of those TFs among the 22 species. PlantTFDB has a simple interface to allow users to search the database by IDs or free texts, to make sequence similarity search against TFs of all or individual species, and to download TF sequences for local analysis. PMID:17933783

  4. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is smart: it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.
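
    The storage-location assignment step described above can be sketched as follows (a toy sketch; the actual system's data model is not documented here, so all names are hypothetical):

```python
def assign_boxes(boxes, locations, assignments):
    """Assign each incoming box to a free storage location and record it.

    `assignments` maps box IDs to locations already in use.
    """
    free = [loc for loc in locations if loc not in assignments.values()]
    if len(free) < len(boxes):
        raise RuntimeError("not enough free storage locations")
    for box, loc in zip(boxes, free):
        assignments[box] = loc
    return assignments

# One box is already archived; two new boxes arrive.
assignments = {"BOX-001": "A1"}
assign_boxes(["BOX-002", "BOX-003"], ["A1", "A2", "A3"], assignments)
assert assignments == {"BOX-001": "A1", "BOX-002": "A2", "BOX-003": "A3"}
```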

  5. The EPMI Malay Basin petroleum geology database: Design philosophy and keys to success

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, H.E.; Creaney, S.; Fairchild, L.H.

    1994-07-01

    Esso Production Malaysia Inc. (EPMI) developed and populated a database containing information collected in the areas of basic well data: stratigraphy, lithology, facies; pressure, temperature, column/contacts; geochemistry, shows and stains, migration, fluid properties; maturation; seal; structure. Paradox was used as the database engine and query language, with links to ZYCOR ZMAP+ for mapping and SAS for data analysis. Paradox has a query language that is simple enough for users. The ability to link to good analytical packages was deemed more important than having the capability in the package. Important elements of design philosophy were included: (1) information on data qualitymore » had to be rigorously recorded; (2) raw and interpreted data were kept separate and clearly identified; (3) correlations between rock and chronostratigraphic surfaces were recorded; and (4) queries across technical boundaries had to be seamless.« less

  6. Constructing Effective Search Strategies for Electronic Searching.

    ERIC Educational Resources Information Center

    Flanagan, Lynn; Parente, Sharon Campbell

    Electronic databases have grown tremendously in both number and popularity since their development during the 1960s. Access to electronic databases in academic libraries was originally offered primarily through mediated search services by trained librarians; however, the advent of CD-ROM and end-user interfaces for online databases has shifted the…

  7. Potential use of routine databases in health technology assessment.

    PubMed

    Raftery, J; Roderick, P; Stevens, A

    2005-05-01

    To develop criteria for classifying databases in relation to their potential use in health technology (HT) assessment and to apply them to a list of databases of relevance in the UK. To explore the extent to which prioritized databases could pick up those HTs being assessed by the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the extent to which these databases have been used in HT assessment. To explore the validation of the databases and their cost. Electronic databases. Key literature sources. Experienced users of routine databases. A 'first principles' examination of the data necessary for each type of HT assessment was carried out, supplemented by literature searches and a historical review. The principal investigators applied the criteria to the databases. Comments of the 'keepers' of the prioritized databases were incorporated. Details of 161 topics funded by the NHS R&D Health Technology Assessment (HTA) programme were reviewed iteratively by the principal investigators. Uses of databases in HTAs were identified by literature searches, which included the title of each prioritized database as a keyword. Annual reports of databases were examined and 'keepers' queried. The validity of each database was assessed using criteria based on a literature search and involvement by the authors in a national academic network. The costs of databases were established from annual reports, enquiries to 'keepers' of databases and 'guesstimates' based on cost per record. For assessing effectiveness, equity and diffusion, routine databases were classified into three broad groups: (1) group I databases, identifying both HTs and health states, (2) group II databases, identifying the HTs, but not a health state, and (3) group III databases, identifying health states, but not an HT. Group I datasets were disaggregated into clinical registries, clinical administrative databases and population-oriented databases. 
Group III were disaggregated into adverse

  8. [Electronic poison information management system].

    PubMed

    Kabata, Piotr; Waldman, Wojciech; Kaletha, Krystian; Sein Anand, Jacek

    2013-01-01

    We describe the deployment of an electronic toxicological information database in the poison control center of the Pomeranian Center of Toxicology. The system was based on Google Apps technology (Google Inc.), using electronic, web-based forms and data tables. During the first 6 months after deployment, we used the system to archive 1471 poisoning cases, prepare monthly poisoning reports and facilitate statistical analysis of the data. Use of the electronic database made the Poison Center's work much easier.

  9. Electronic surveillance and using administrative data to identify healthcare associated infections.

    PubMed

    Gastmeier, Petra; Behnke, Michael

    2016-08-01

    Traditional surveillance of healthcare associated infections (HCAI) is time consuming and error-prone. We have analysed literature of the past year to look at new developments in this field. It is divided into three parts: new algorithms for electronic surveillance, the use of administrative data for surveillance of HCAI, and the definition of new endpoints of surveillance, in accordance with an automatic surveillance approach. Most studies investigating electronic surveillance of HCAI have concentrated on bloodstream infection or surgical site infection. However, the lack of important parameters in hospital databases can lead to misleading results. The accuracy of administrative coding data was poor at identifying HCAI. New endpoints should be defined for automatic detection, with the most crucial step being to win clinicians' acceptance. Electronic surveillance with conventional endpoints is a successful method when hospital information systems have implemented key changes and enhancements. One requirement is access to systems for hospital administration and clinical databases. Although the primary source of data for HCAI surveillance is not administrative coding data, these are important components of a hospital-wide programme of automated surveillance. The implementation of new endpoints for surveillance is an approach which needs to be discussed further.

  10. DDD: Dynamic Database for Diatomics

    NASA Technical Reports Server (NTRS)

    Schwenke, David

    2004-01-01

    We have developed a web-based database containing spectra of diatomic molecules. All data are computed from first principles, and if a user requests data for a molecule/ion that is not in the database, new calculations are automatically carried out on that species. Rotational, vibrational, and electronic transitions are included. Different levels of accuracy can be selected, from qualitatively correct to the best calculations that can be carried out. The user can view and modify spectroscopic constants, view potential energy curves, download detailed high-temperature linelists, or view synthetic spectra.
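
    The compute-on-request behavior described above is essentially a cache-miss pattern: look up the species, and if it is absent, trigger a calculation and store the result. A minimal sketch, with a stub standing in for the actual ab initio calculation (all names here are illustrative):

    ```python
    def compute_linelist(species):
        """Stand-in for a first-principles calculation of a diatomic's spectrum."""
        return {"species": species, "lines": [("v0-v1", 2143.3)]}  # placeholder data

    class SpectraDB:
        def __init__(self):
            self._store = {}

        def get(self, species):
            if species not in self._store:       # cache miss: run a new calculation
                self._store[species] = compute_linelist(species)
            return self._store[species]          # cache hit: serve the stored record

    db = SpectraDB()
    first = db.get("CO")    # triggers the calculation
    second = db.get("CO")   # served from the database
    print(first is second)  # True: one stored record per species
    ```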

  11. De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.

    2012-12-01

    Lab managers and users of electron microprobe (EMP) facilities require comprehensive, yet flexible documentation structures, as well as an efficient scheduling mechanism. A single on-line database system for managing reservations, and providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginners to the highly experienced. New users and occasional facility users will find these tools extremely useful in developing and maintaining high quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as a database and PHP/HTML as script language (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. The installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server, and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database anytime through any web browser and on any operating system. The access can be secured using a general password protection (e.g. htaccess). The web interface consists of 6 main menus. (1) "Standards" lists standards defined in the database, and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE can be associated with a standard. (2) "Analyses" lists typical setups to use for quantitative analyses, allows calculation of mineral composition based on a mineral formula, or calculation of mineral formula based on a fixed
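
    The stated configuration limits (up to five EMPs per lab, each with up to five spectrometers) map naturally onto relational constraints. The sketch below is an invented miniature, not the actual De-MA MySQL schema, showing how those limits could be expressed as `CHECK` constraints:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE emp     (emp_id INTEGER PRIMARY KEY
                              CHECK (emp_id BETWEEN 1 AND 5),
                          name TEXT);
    CREATE TABLE spectro (emp_id INTEGER REFERENCES emp,
                          spec_no INTEGER CHECK (spec_no BETWEEN 1 AND 5),
                          crystal TEXT);
    CREATE TABLE agenda  (emp_id INTEGER REFERENCES emp,
                          day TEXT, user_name TEXT);
    """)
    conn.execute("INSERT INTO emp VALUES (1, 'JEOL-8230')")        # hypothetical instrument
    conn.execute("INSERT INTO spectro VALUES (1, 1, 'TAP')")
    conn.execute("INSERT INTO agenda VALUES (1, '2012-12-01', 'new_user')")
    print(conn.execute("SELECT name FROM emp").fetchall())  # [('JEOL-8230',)]
    ```

    A sixth instrument or spectrometer would simply violate the `CHECK` constraint at insert time.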

  12. DIMA quick start, database for inventory, monitoring and assessment

    USDA-ARS?s Scientific Manuscript database

    The Database for Inventory, Monitoring and Assessment (DIMA) is a highly-customized Microsoft Access database for collecting data electronically in the field and for organizing, storing and reporting those data for monitoring and assessment. While DIMA can be used for any number of different monito...

  13. Exposure to benzodiazepines (anxiolytics, hypnotics and related drugs) in seven European electronic healthcare databases: a cross-national descriptive study from the PROTECT-EU Project.

    PubMed

    Huerta, Consuelo; Abbing-Karahagopian, Victoria; Requena, Gema; Oliva, Belén; Alvarez, Yolanda; Gardarsdottir, Helga; Miret, Montserrat; Schneider, Cornelia; Gil, Miguel; Souverein, Patrick C; De Bruin, Marie L; Slattery, Jim; De Groot, Mark C H; Hesse, Ulrik; Rottenkolber, Marietta; Schmiedl, Sven; Montero, Dolores; Bate, Andrew; Ruigomez, Ana; García-Rodríguez, Luis Alberto; Johansson, Saga; de Vries, Frank; Schlienger, Raymond G; Reynolds, Robert F; Klungel, Olaf H; de Abajo, Francisco José

    2016-03-01

    Studies on drug utilization usually do not allow direct cross-national comparisons because of differences in the respective applied methods. This study aimed to compare time trends in benzodiazepine (BZD) prescribing by applying a common protocol and analysis plan in seven European electronic healthcare databases. Crude and standardized prevalence rates of drug prescribing from 2001-2009 were calculated in databases from Spain, United Kingdom (UK), The Netherlands, Germany and Denmark. Prevalence was stratified by age, sex, BZD type [(using ATC codes), i.e. BZD-anxiolytics, BZD-hypnotics, BZD-related drugs and clomethiazole], indication and number of prescriptions. Crude prevalence rates of BZD prescribing ranged from 570 to 1700 per 10,000 person-years over the study period. Standardization by age and sex did not substantially change the differences. Standardized prevalence rates increased in the Spanish (+13%) and UK databases (+2% and +8%) over the study period, while they decreased in the Dutch databases (-4% and -22%) and the German (-12%) and Danish (-26%) databases. Prevalence of anxiolytics outweighed that of hypnotics in the Spanish, Dutch and Bavarian databases, but the reverse was shown in the UK and Danish databases. Prevalence rates consistently increased with age and were two-fold higher in women than in men in all databases. A median of 18% of users received 10 or more prescriptions in 2008. Although similar methods were applied, the prevalence of BZD prescribing varied considerably across different populations. Clinical factors related to BZDs and characteristics of the databases may explain these differences. Copyright © 2015 John Wiley & Sons, Ltd.
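
    The distinction above between crude and age/sex-standardized prevalence is worth making concrete. Crude prevalence reflects each database's own population structure; direct standardization reweights the stratum-specific rates to a common reference population. The numbers below are invented for illustration:

    ```python
    def crude_prevalence(users, person_years):
        """Crude rate per 10,000 person-years across all strata."""
        return 1e4 * sum(users) / sum(person_years)

    def standardized_prevalence(users, person_years, ref_weights):
        """Directly standardized rate: stratum rates weighted by a reference population."""
        rates = [u / py for u, py in zip(users, person_years)]
        return 1e4 * sum(r * w for r, w in zip(rates, ref_weights))

    users        = [50, 400]          # BZD users in two age bands (illustrative)
    person_years = [40_000, 60_000]
    ref_weights  = [0.5, 0.5]         # reference population: equal-sized age bands

    print(round(crude_prevalence(users, person_years), 1))                       # 45.0
    print(round(standardized_prevalence(users, person_years, ref_weights), 1))   # 39.6
    ```

    Two databases with identical age-specific rates but different age pyramids get different crude rates and identical standardized rates, which is why standardization "did not substantially change the differences" is itself an informative finding.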

  14. ClinicalKey: a point-of-care search engine.

    PubMed

    Vardell, Emily

    2013-01-01

    ClinicalKey is a new point-of-care resource for health care professionals. Through controlled vocabulary, ClinicalKey offers a cross section of resources on diseases and procedures, from journals to e-books and practice guidelines to patient education. A sample search was conducted to demonstrate the features of the database, and a comparison with similar tools is presented.

  15. A mobile trauma database with charge capture.

    PubMed

    Moulton, Steve; Myung, Dan; Chary, Aron; Chen, Joshua; Agarwal, Suresh; Emhoff, Tim; Burke, Peter; Hirsch, Erwin

    2005-11-01

    Charge capture plays an important role in every surgical practice. We have developed and merged a custom mobile database (DB) system with our trauma registry (TRACS), to better understand our billing methods, revenue generators, and areas for improved revenue capture. The mobile database runs on handheld devices using the Windows Compact Edition platform. The front end was written in C# and the back end is SQL. The mobile database operates as a thick client; it includes active and inactive patient lists, billing screens, hot pick lists, and Current Procedural Terminology and International Classification of Diseases, Ninth Revision code sets. Microsoft Internet Information Server provides secure data transaction services between the back ends stored on each device. Traditional, handwritten billing information for three of five adult trauma surgeons was averaged over a 5-month period. Electronic billing information was then collected over a 3-month period using handheld devices and the subject software application. One surgeon used the software for all 3 months, and two surgeons used it for the latter 2 months of the electronic data collection period. This electronic billing information was combined with TRACS data to determine the clinical characteristics of the trauma patients who were and were not captured using the mobile database. Total charges increased by 135%, 148%, and 228% for each of the three trauma surgeons who used the mobile DB application. The majority of additional charges were for evaluation and management services. Patients who were captured and billed at the point of care using the mobile DB had higher Injury Severity Scores, were more likely to undergo an operative procedure, and had longer lengths of stay compared with those who were not captured. Total charges more than doubled using a mobile database to bill at the point of care. A subsequent comparison of TRACS data with billing information revealed a large amount of uncaptured patient

  16. The messages presented in online electronic cigarette promotions and discussions: a scoping review protocol

    PubMed Central

    Maycock, Bruce; Jancey, Jonine

    2017-01-01

    Introduction Electronic cigarettes have become increasingly popular over the last 10 years. These devices represent a new paradigm for tobacco control offering smokers an opportunity to inhale nicotine without inhaling tobacco smoke. To date there are no definite conclusions regarding the safety and long-term health effects of electronic cigarettes; however, there is evidence that they are being marketed online as a healthier alternative to traditional cigarettes. This scoping review aims to identify and describe the breadth of messages (eg, health, smoking-cessation and price related claims) presented in online electronic cigarette promotions and discussions. Methods and analysis A scoping review will be undertaken adhering to the methodology outlined in The Joanna Briggs Institute Manual for Scoping Reviews. Six key electronic databases will be searched to identify eligible studies. Studies must be published in English between 2007 and 2017, examine and/or analyse content captured from online electronic cigarette promotions or discussions and report results for electronic cigarettes separately to other forms of tobacco delivery. Studies will be screened initially by title and abstract, followed by full-text review. Results of the search strategy will be reported in a PRISMA flow diagram and presented in tabular form with accompanying narrative summary. Ethics and dissemination The methodology consists of reviewing and collecting data from publicly available studies, and therefore does not require ethics approval. Results will be published in a peer reviewed journal and be presented at national/international conferences. Additionally, findings will be disseminated via social media and online platforms. Advocacy will be key to informing policy makers of regulatory and health issues that need to be addressed. Registration details The review was registered prospectively with The Joanna Briggs Institute Systematic Reviews database. PMID:29122804

  17. The messages presented in online electronic cigarette promotions and discussions: a scoping review protocol.

    PubMed

    McCausland, Kahlia; Maycock, Bruce; Jancey, Jonine

    2017-11-08

    Electronic cigarettes have become increasingly popular over the last 10 years. These devices represent a new paradigm for tobacco control offering smokers an opportunity to inhale nicotine without inhaling tobacco smoke. To date there are no definite conclusions regarding the safety and long-term health effects of electronic cigarettes; however, there is evidence that they are being marketed online as a healthier alternative to traditional cigarettes. This scoping review aims to identify and describe the breadth of messages (eg, health, smoking-cessation and price related claims) presented in online electronic cigarette promotions and discussions. A scoping review will be undertaken adhering to the methodology outlined in The Joanna Briggs Institute Manual for Scoping Reviews. Six key electronic databases will be searched to identify eligible studies. Studies must be published in English between 2007 and 2017, examine and/or analyse content captured from online electronic cigarette promotions or discussions and report results for electronic cigarettes separately to other forms of tobacco delivery. Studies will be screened initially by title and abstract, followed by full-text review. Results of the search strategy will be reported in a PRISMA flow diagram and presented in tabular form with accompanying narrative summary. The methodology consists of reviewing and collecting data from publicly available studies, and therefore does not require ethics approval. Results will be published in a peer reviewed journal and be presented at national/international conferences. Additionally, findings will be disseminated via social media and online platforms. Advocacy will be key to informing policy makers of regulatory and health issues that need to be addressed. The review was registered prospectively with The Joanna Briggs Institute Systematic Reviews database. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights

  18. Real-time database drawn from an electronic health record for a thoracic surgery unit: high-quality clinical data saving time and human resources.

    PubMed

    Salati, Michele; Pompili, Cecilia; Refai, Majed; Xiumè, Francesco; Sabbatini, Armando; Brunelli, Alessandro

    2014-06-01

    The aim of the present study was to verify whether the implementation of an electronic health record (EHR) in our thoracic surgery unit allows creation of a high-quality clinical database saving time and costs. Before August 2011, multiple individuals compiled the on-paper documents/records and a single data manager inputted selected data into the database (traditional database, tDB). Since the adoption of an EHR in August 2011, multiple individuals have been responsible for compiling the EHR, which automatically generates a real-time database (EHR-based database, eDB), without the need for a data manager. During the initial period of implementation of the EHR, periodic meetings were held with all physicians involved in the use of the EHR in order to monitor and standardize the data registration process. Data quality of the first 100 anatomical lung resections recorded in the eDB was assessed by measuring the total number of missing values (MVs: existing non-reported value) and inaccurate values (wrong data) occurring in 95 core variables. The average MV of the eDB was compared with the one occurring in the same variables of the last 100 records registered in the tDB. A learning curve was constructed by plotting the number of MVs in the electronic database and tDB with the patients arranged by the date of registration. The tDB and eDB had similar MVs (0.74 vs 1, P = 0.13). The learning curve showed an initial phase including about 35 records, where MV in the eDB was higher than that in the tDB (1.9 vs 0.74, P = 0.03), and a subsequent phase, where the MV was similar in the two databases (0.7 vs 0.74, P = 0.6). The inaccuracy rate of these two phases in the eDB was stable (0.5 vs 0.3, P = 0.3). Using EHR saved an average of 9 min per patient, totalling 15 h saved for obtaining a dataset of 100 patients with respect to the tDB. The implementation of EHR allowed streamlining the process of clinical data recording. It saved time and human resource costs, without
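
    The quality metric in this study, average missing values (MVs) per record across the 95 core variables, tracked in registration order to build the learning curve, can be sketched as follows. The records and variable names are invented for illustration:

    ```python
    def mean_missing(records, variables):
        """Average count of missing (None) values per record."""
        return sum(sum(r.get(v) is None for v in variables)
                   for r in records) / len(records)

    variables = ["age", "fev1", "stage"]          # stand-ins for the 95 core variables
    records = [
        {"age": 70, "fev1": None, "stage": "Ib"},   # 1 missing value
        {"age": 65, "fev1": 2.1,  "stage": None},   # 1 missing value
        {"age": 58, "fev1": 1.9,  "stage": "IIa"},  # 0 missing values
    ]
    print(mean_missing(records, variables))  # ~0.67 MVs per record
    ```

    Computing this statistic over a sliding window of records, ordered by registration date, yields the learning curve the authors describe (initially worse than the traditional database, then converging).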

  19. The Human Communication Research Centre dialogue database.

    PubMed

    Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J

    1992-10-01

    The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.

  20. The use of Research Electronic Data Capture (REDCap) software to create a database of librarian-mediated literature searches.

    PubMed

    Lyon, Jennifer A; Garcia-Milian, Rolando; Norton, Hannah F; Tennant, Michele R

    2014-01-01

    Expert-mediated literature searching, a keystone service in biomedical librarianship, would benefit significantly from regular methodical review. This article describes the novel use of Research Electronic Data Capture (REDCap) software to create a database of literature searches conducted at a large academic health sciences library. An archive of paper search requests was entered into REDCap, and librarians now prospectively enter records for current searches. Having search data readily available allows librarians to reuse search strategies and track their workload. In aggregate, this data can help guide practice and determine priorities by identifying users' needs, tracking librarian effort, and focusing librarians' continuing education.

  1. Constraints on Biological Mechanism from Disease Comorbidity Using Electronic Medical Records and Database of Genetic Variants

    PubMed Central

    Bagley, Steven C.; Sirota, Marina; Chen, Richard; Butte, Atul J.; Altman, Russ B.

    2016-01-01

    Patterns of disease co-occurrence that deviate from statistical independence may represent important constraints on biological mechanism, which sometimes can be explained by shared genetics. In this work we study the relationship between disease co-occurrence and commonly shared genetic architecture of disease. Records of pairs of diseases were combined from two different electronic medical systems (Columbia, Stanford), and compared to a large database of published disease-associated genetic variants (VARIMED); data on 35 disorders were available across all three sources, which include medical records for over 1.2 million patients and variants from over 17,000 publications. Based on the sources in which they appeared, disease pairs were categorized as having predominant clinical, genetic, or both kinds of manifestations. Confounding effects of age on disease incidence were controlled for by only comparing diseases when they fall in the same cluster of similarly shaped incidence patterns. We find that disease pairs that are overrepresented in both electronic medical record systems and in VARIMED come from two main disease classes, autoimmune and neuropsychiatric. We furthermore identify specific genes that are shared within these disease groups. PMID:27115429

  2. Constraints on Biological Mechanism from Disease Comorbidity Using Electronic Medical Records and Database of Genetic Variants.

    PubMed

    Bagley, Steven C; Sirota, Marina; Chen, Richard; Butte, Atul J; Altman, Russ B

    2016-04-01

    Patterns of disease co-occurrence that deviate from statistical independence may represent important constraints on biological mechanism, which sometimes can be explained by shared genetics. In this work we study the relationship between disease co-occurrence and commonly shared genetic architecture of disease. Records of pairs of diseases were combined from two different electronic medical systems (Columbia, Stanford), and compared to a large database of published disease-associated genetic variants (VARIMED); data on 35 disorders were available across all three sources, which include medical records for over 1.2 million patients and variants from over 17,000 publications. Based on the sources in which they appeared, disease pairs were categorized as having predominant clinical, genetic, or both kinds of manifestations. Confounding effects of age on disease incidence were controlled for by only comparing diseases when they fall in the same cluster of similarly shaped incidence patterns. We find that disease pairs that are overrepresented in both electronic medical record systems and in VARIMED come from two main disease classes, autoimmune and neuropsychiatric. We furthermore identify specific genes that are shared within these disease groups.
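
    "Co-occurrence that deviates from statistical independence" has a simple arithmetic core: compare the observed count of patients with both diseases to the count expected if the two diseases struck independently. The counts below are invented for illustration:

    ```python
    def cooccurrence_ratio(n_both, n_a, n_b, n_total):
        """Observed/expected count of patients with both diseases A and B."""
        expected = (n_a / n_total) * (n_b / n_total) * n_total
        return n_both / expected          # > 1 means the pair is overrepresented

    # 1,000,000 patients; disease A in 10,000, disease B in 5,000; 200 have both.
    ratio = cooccurrence_ratio(200, 10_000, 5_000, 1_000_000)
    print(ratio)  # 4.0: the pair co-occurs four times more often than chance
    ```

    The study's additional step of comparing diseases only within clusters of similarly shaped incidence curves guards against age acting as a confounder in exactly this ratio.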

  3. CEBS: a comprehensive annotated database of toxicological data

    PubMed Central

    Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer

    2017-01-01

    The Chemical Effects in Biological Systems database (CEBS) is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11 000 test articles (exposure agents) and over 8000 studies including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subject to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design that allows the flexibility to capture any type of electronic data (to date). The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660

  4. Enhancing user privacy in SARG04-based private database query protocols

    NASA Astrophysics Data System (ADS)

    Yu, Fang; Qiu, Daowen; Situ, Haozhen; Wang, Xiaoming; Long, Shun

    2015-11-01

    The well-known SARG04 protocol can be used in a private query application to generate an oblivious key. By use of the key, the user can retrieve one out of N items from a database without revealing which one he/she is interested in. However, the existing SARG04-based private query protocols are vulnerable to attacks of faked data from the database, since in its canonical form the SARG04 protocol lacks means for one party to defend against attacks from the other. Because such attacks can cause significant loss of user privacy, a variant of the SARG04 protocol is proposed in this paper with new mechanisms designed to help the user protect its privacy in private query applications. In the protocol, it is the user who starts the session with the database, trying to learn from it bits of a raw key in an oblivious way. An honesty test is used to detect a cheating database that has transmitted faked data. The whole private query protocol has O(N) communication complexity for conveying at least N encrypted items. Compared with the existing SARG04-based protocols, it is efficient in communication for per-bit learning.
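
    The role of the oblivious key in the classical phase of such protocols can be sketched without any of the quantum machinery: the database knows the whole N-bit key, the user knows it at only one (random) position j, the database sends every item masked with its key bit, and the user can unmask only item j, while the database never learns which position that was. This is a generic illustration of the oblivious-key idea, not the specific protocol of this paper:

    ```python
    def masked_items(items, key):
        """Database side: mask each item with the corresponding key bit."""
        return [item ^ k for item, k in zip(items, key)]  # one-bit items for brevity

    items = [1, 0, 1, 1]      # database of N = 4 one-bit items
    key   = [0, 1, 1, 0]      # oblivious key: the database knows all of it...
    j, kj = 2, key[2]         # ...the user knows it only at position j = 2

    sent = masked_items(items, key)   # everything the database transmits: O(N) bits
    print(sent[j] ^ kj)               # 1: the user recovers item j, and only item j
    ```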

  5. The Key Ingredients of the Electronic Structure of FeSe

    NASA Astrophysics Data System (ADS)

    Coldea, Amalia I.; Watson, Matthew D.

    2018-03-01

    FeSe is a fascinating superconducting material at the frontier of research in condensed matter physics. Here, we provide an overview of the current understanding of the electronic structure of FeSe, focusing in particular on its low-energy electronic structure as determined from angle-resolved photoemission spectroscopy, quantum oscillations, and magnetotransport measurements of single-crystal samples. We discuss the unique place of FeSe among iron-based superconductors, as it is a multiband system exhibiting strong orbitally dependent electronic correlations and unusually small Fermi surfaces and is prone to different electronic instabilities. We pay particular attention to the evolution of the electronic structure that accompanies the tetragonal-orthorhombic structural distortion of the lattice around 90 K, which stabilizes a unique nematic electronic state. Finally, we discuss how the multiband multiorbital nematic electronic structure impacts our understanding of the superconductivity, and show that the tunability of the nematic state with chemical and physical pressure helps to disentangle the role of different competing interactions relevant for enhancing superconductivity.

  6. EST databases and web tools for EST projects.

    PubMed

    Shen, Yao-Qing; O'Brien, Emmet; Koski, Liisa; Lang, B Franz; Burger, Gertraud

    2009-01-01

    This chapter outlines key considerations for constructing and implementing an EST database. Instead of showing the technological details step by step, emphasis is put on the design of an EST database suited to the specific needs of EST projects and how to choose the most suitable tools. Using TBestDB as an example, we illustrate the essential factors to be considered for database construction and the steps for data population and annotation. This process employs technologies such as PostgreSQL, Perl, and PHP to build the database and interface, and tools such as AutoFACT for data processing and annotation. We discuss these in comparison to other available technologies and tools, and explain the reasons for our choices.

  7. Brief Report: Databases in the Asia-Pacific Region: The Potential for a Distributed Network Approach.

    PubMed

    Lai, Edward Chia-Cheng; Man, Kenneth K C; Chaiyakunapruk, Nathorn; Cheng, Ching-Lan; Chien, Hsu-Chih; Chui, Celine S L; Dilokthornsakul, Piyameth; Hardy, N Chantelle; Hsieh, Cheng-Yang; Hsu, Chung Y; Kubota, Kiyoshi; Lin, Tzu-Chieh; Liu, Yanfang; Park, Byung Joo; Pratt, Nicole; Roughead, Elizabeth E; Shin, Ju-Young; Watcharathanakij, Sawaeng; Wen, Jin; Wong, Ian C K; Yang, Yea-Huei Kao; Zhang, Yinghong; Setoguchi, Soko

    2015-11-01

    This study describes the availability and characteristics of databases in Asian-Pacific countries and assesses the feasibility of a distributed network approach in the region. A web-based survey was conducted among investigators using healthcare databases in the Asia-Pacific countries. Potential survey participants were identified through the Asian Pharmacoepidemiology Network. Investigators from a total of 11 databases participated in the survey. Database sources included four nationwide claims databases from Japan, South Korea, and Taiwan; two nationwide electronic health records from Hong Kong and Singapore; a regional electronic health record from western China; two electronic health records from Thailand; and cancer and stroke registries from Taiwan. We identified 11 databases with capabilities for distributed network approaches. Many country-specific coding systems and terminologies have been already converted to international coding systems. The harmonization of health expenditure data is a major obstacle for future investigations attempting to evaluate issues related to medical costs.

  8. ATtRACT-a database of RNA-binding proteins and associated motifs.

    PubMed

    Giudice, Girolamo; Sánchez-Cabo, Fátima; Torroja, Carlos; Lara-Pezzi, Enrique

    2016-01-01

    RNA-binding proteins (RBPs) play a crucial role in key cellular processes, including RNA transport, splicing, polyadenylation and stability. Understanding the interaction between RBPs and RNA is key to improving our knowledge of RNA processing, localization and regulation in a global manner. Despite advances in recent years, a unified non-redundant resource that includes information on experimentally validated motifs, RBPs and integrated tools to exploit this information is lacking. Here, we developed a database named ATtRACT (available at http://attract.cnic.es) that compiles information on 370 RBPs and 1583 RBP consensus binding motifs, 192 of which are not present in any other database. To populate ATtRACT we (i) extracted and hand-curated experimentally validated data from the CISBP-RNA, SpliceAid-F and RBPDB databases, (ii) integrated and updated the unavailable ASD database and (iii) extracted information from protein-RNA complexes in the Protein Data Bank through computational analyses. ATtRACT also provides efficient algorithms to search for a specific motif and to scan one or more RNA sequences at a time. It also allows discovering de novo motifs enriched in a set of related sequences and comparing them with the motifs included in the database. Database URL: http://attract.cnic.es. © The Author(s) 2016. Published by Oxford University Press.
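
    The sequence-scanning service described above reduces, at its simplest, to sliding a consensus motif along an RNA sequence and matching each position against IUPAC ambiguity codes. A minimal sketch (only a few ambiguity codes included; ATtRACT's actual algorithms are not reproduced here):

    ```python
    # Partial IUPAC ambiguity table for RNA bases.
    IUPAC = {"A": "A", "C": "C", "G": "G", "U": "U",
             "R": "AG", "Y": "CU", "N": "ACGU"}

    def scan(sequence, motif):
        """Return 0-based start positions where the consensus motif matches."""
        hits = []
        for i in range(len(sequence) - len(motif) + 1):
            window = sequence[i:i + len(motif)]
            if all(base in IUPAC[m] for base, m in zip(window, motif)):
                hits.append(i)
        return hits

    print(scan("UGCAUGGCAUG", "GCAUG"))  # [1, 6]
    ```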

  9. The research of network database security technology based on web service

    NASA Astrophysics Data System (ADS)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies the security technology of the network database, analyzes the sub-key encryption algorithm in detail, and applies this algorithm successfully to a campus one-card system. The realization process of the encryption algorithm is discussed; the method can serve as a reference in many fields, particularly in management information system security and e-commerce.

  10. Risk factors for manipulation after total knee arthroplasty: a pooled electronic health record database study.

    PubMed

    Pfefferle, Kiel J; Shemory, Scott T; Dilisio, Matthew F; Fening, Stephen D; Gradisar, Ian M

    2014-10-01

    A commercially available software platform, Explorys (Explorys, Inc., Cleveland, OH), was used to mine a pooled electronic healthcare database consisting of the medical records of more than 27 million patients. A total of 229,420 patients had undergone a total knee arthroplasty; 3470 (1.51%) patients were identified to have undergone manipulation under anesthesia. Individual risk factors of being female, African American race, age less than 60, BMI >30 and nicotine dependence were determined to have relative risk of 1.25, 2.20, 3.46, 1.33 and 1.32 respectively. Depressive disorder, diabetes mellitus, opioid abuse/dependence and rheumatoid arthritis were not significant risk factors. African Americans under the age of 60 at time of TKA had the greatest incidence of MUA (5.17%) and relative risk of 3.73 (CI: 3.36, 4.13). Copyright © 2014 Elsevier Inc. All rights reserved.
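    The relative risks quoted above compare event incidence between exposed and unexposed groups. A minimal sketch of that arithmetic, using hypothetical counts rather than the study's raw data, with the standard Katz log-method 95% confidence interval:

```python
import math

def rr_ci(events_exposed, n_exposed, events_unexposed, n_unexposed, z=1.96):
    """Relative risk with a Katz log-method 95% confidence interval."""
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    # Standard error of log(RR).
    se_log_rr = math.sqrt(1 / events_exposed - 1 / n_exposed
                          + 1 / events_unexposed - 1 / n_unexposed)
    return rr, rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)

# Hypothetical counts (not the study's raw data): 100 manipulations
# among 2,000 exposed vs 400 among 20,000 unexposed patients.
rr, lower, upper = rr_ci(100, 2000, 400, 20000)
print(round(rr, 2))  # -> 2.5
```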

  11. Databases toward Disseminated Use - Nikkei News Telecom -

    NASA Astrophysics Data System (ADS)

    Kasiwagi, Akira

    The need for “searchers” - adept hands in the art of information retrieval - is increasing nowadays. Searchers have become necessary as the result of the upbeat online database market. The number of database users is rising steeply. There is an urgent need to develop potential users of general information, such as newspaper articles. Simple commands, easy operation, and low prices hold the key to general popularization of databases, and the issue lies in how the industry will go about achieving this task. Nihon Keizai Shimbun has been exploring a wide range of possibilities with Nikkei News Telecom. Although only two years have passed since its start, results of Nikkei’s efforts are summarized below.

  12. The Cambridge Structural Database

    PubMed Central

    Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.

    2016-01-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  13. The Cambridge Structural Database.

    PubMed

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  14. GOVERNING GENETIC DATABASES: COLLECTION, STORAGE AND USE

    PubMed Central

    Gibbons, Susan M.C.; Kaye, Jane

    2008-01-01

    This paper provides an introduction to a collection of five papers, published as a special symposium journal issue, under the title: “Governing Genetic Databases: Collection, Storage and Use”. It begins by setting the scene, to provide a backdrop and context for the papers. It describes the evolving scientific landscape around genetic databases and genomic research, particularly within the biomedical and criminal forensic investigation fields. It notes the lack of any clear, coherent or coordinated legal governance regime, either at the national or international level. It then identifies and reflects on key cross-cutting issues and themes that emerge from the five papers, in particular: terminology and definitions; consent; special concerns around population genetic databases (biobanks) and forensic databases; international harmonisation; data protection; data access; boundary-setting; governance; and issues around balancing individual interests against public good values. PMID:18841252

  15. A mapping review of the literature on UK-focused health and social care databases.

    PubMed

    Cooper, Chris; Rogers, Morwenna; Bethel, Alison; Briscoe, Simon; Lowe, Jenny

    2015-03-01

    Bibliographic databases are a day-to-day tool of the researcher: they offer the researcher easy and organised access to knowledge, but how much is actually known about the databases on offer? The focus of this paper is UK health and social care databases. These databases are often small, specialised by topic, and provide a complementary literature to the large, international databases. There is, however, good evidence that these databases are overlooked in systematic reviews, perhaps because little is known about what they can offer. To systematically locate and map published and unpublished literature on the key UK health and social care bibliographic databases. Systematic searching and mapping. Two hundred and forty-two items were identified which specifically related to 24 of the 34 databases under review. There is little published or unpublished literature specifically analysing the key UK health and social care databases. Since several UK databases have closed, others are at risk, and some are overlooked in reviews, better information is required to enhance our knowledge. Further research on UK health and social care databases is required. This paper suggests the need to develop the evidence base through a series of case studies on each of the databases. © 2014 The authors. Health Information and Libraries Journal © 2014 Health Libraries Journal.

  16. DOE technology information management system database study report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed more detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  17. Using Linked Electronic Health Records to Estimate Healthcare Costs: Key Challenges and Opportunities.

    PubMed

    Asaria, Miqdad; Grasic, Katja; Walker, Simon

    2016-02-01

    This paper discusses key challenges and opportunities that arise when using linked electronic health records (EHR) in health economics and outcomes research (HEOR), with a particular focus on estimating healthcare costs. These challenges and opportunities are framed in the context of a case study modelling the costs of stable coronary artery disease in England. The challenges and opportunities discussed fall broadly into the categories of (1) handling and organising data of this size and sensitivity; (2) extracting clinical endpoints from datasets that have not been designed and collected with such endpoints in mind; and (3) the principles and practice of costing resource use from routinely collected data. We find that there are a number of new challenges and opportunities that arise when working with EHR compared with more traditional sources of data for HEOR. These call for greater clinician involvement and intelligent use of sensitivity analysis.

  18. Use of administrative medical databases in population-based research.

    PubMed

    Gavrielov-Yusim, Natalie; Friger, Michael

    2014-03-01

    Administrative medical databases are massive repositories of data collected in healthcare for various purposes. Such databases are maintained in hospitals, health maintenance organisations and health insurance organisations. Administrative databases may contain medical claims for reimbursement, records of health services, medical procedures, prescriptions, and diagnosis information. It is clear that such systems may provide a valuable variety of clinical and demographic information as well as an on-going process of data collection. In general, information gathering in these databases is not initially intended or planned for research purposes. Nonetheless, administrative databases may be used as a robust research tool. In this article, we address the subject of public health research that employs administrative data. We discuss the biases and the limitations of such research, as well as other key epidemiological and biostatistical points specific to administrative database studies.

  19. [A Terahertz Spectral Database Based on Browser/Server Technique].

    PubMed

    Zhang, Zhuo-yong; Song, Yue

    2015-09-01

    With the solution of key scientific and technical problems and the development of instrumentation, the application of terahertz technology in various fields has been paid more and more attention. Owing to its unique advantages, terahertz technology has been showing a broad future in the fields of fast, non-damaging detection, as well as many other fields. Terahertz technology combined with other complementary methods can be used to cope with many difficult practical problems which could not be solved before. One of the critical points for further development of practical terahertz detection methods is a good and reliable terahertz spectral database. We recently developed a BS (browser/server)-based terahertz spectral database. We designed the main structure and main functions to fulfill practical requirements. The terahertz spectral database now includes more than 240 items, and the spectral information was collected from three sources: (1) collected and cited from other terahertz spectral databases abroad; (2) collected from published literature; and (3) spectral data measured in our laboratory. The present paper introduces the basic structure and fundamental functions of the terahertz spectral database developed in our laboratory. One of the key functions of this THz database is the calculation of optical parameters. Some optical parameters, including the absorption coefficient and refractive index, can be calculated from the input THz time-domain spectra. The other main functions and searching methods of the browser/server-based terahertz spectral database are also discussed. The database search system provides users with convenient functions including user registration, inquiry, displaying spectral figures and molecular structures, spectral matching, etc. The THz database system provides an on-line searching function for registered users. Registered users can compare the input THz spectrum with the spectra in the database, according to
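    The optical-parameter calculation mentioned above typically derives the refractive index and absorption coefficient from the ratio of sample to reference THz spectra. A minimal sketch under the standard thick-slab approximation (the function name and inputs are assumptions, not the database's actual code):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def optical_params(freq_hz, amp_ratio, phase_diff_rad, thickness_m):
    """Refractive index n and absorption coefficient alpha (1/m) from
    THz time-domain sample/reference spectra (thick-slab approximation).
    amp_ratio      = |E_sample(f)| / |E_reference(f)|
    phase_diff_rad = unwrapped phase difference between the two spectra
    """
    omega = 2.0 * math.pi * freq_hz
    n = 1.0 + phase_diff_rad * C / (omega * thickness_m)
    # Divide out the Fresnel transmission 4n/(n+1)^2 before taking the log.
    alpha = -(2.0 / thickness_m) * math.log(amp_ratio * (n + 1.0) ** 2 / (4.0 * n))
    return n, alpha
```

    As a sanity check, for a 1 mm slab at 1 THz a phase difference of about 10.5 rad corresponds to n ≈ 1.5.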

  20. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operation, some insight into our software engineering redesign, and its near-future evolution.

  1. A comparison of the performance of seven key bibliographic databases in identifying all relevant systematic reviews of interventions for hypertension.

    PubMed

    Rathbone, John; Carter, Matt; Hoffmann, Tammy; Glasziou, Paul

    2016-02-09

    Bibliographic databases are the primary resource for identifying systematic reviews of health care interventions. Reliable retrieval of systematic reviews depends on the scope of indexing used by database providers. Therefore, searching one database may be insufficient, but it is unclear how many need to be searched. We sought to evaluate the performance of seven major bibliographic databases for the identification of systematic reviews for hypertension. We searched seven databases (Cochrane library, Database of Abstracts of Reviews of Effects (DARE), Excerpta Medica Database (EMBASE), Epistemonikos, Medical Literature Analysis and Retrieval System Online (MEDLINE), PubMed Health and Turning Research Into Practice (TRIP)) from 2003 to 2015 for systematic reviews of any intervention for hypertension. Citations retrieved were screened for relevance, coded and checked for screening consistency using a fuzzy text matching query. The performance of each database was assessed by calculating its sensitivity, precision, the number of missed reviews and the number of unique records retrieved. Four hundred systematic reviews were identified for inclusion from 11,381 citations retrieved from the seven databases. No single database identified all the retrieved systematic reviews for hypertension. EMBASE identified the most reviews (sensitivity 69 %) but also retrieved the most irrelevant citations, with a precision (Pr) of 7.2 %. The sensitivity of the Cochrane library was 60 %, DARE 57 %, MEDLINE 57 %, PubMed Health 53 %, Epistemonikos 49 % and TRIP 33 %. EMBASE contained the highest number of unique records (n = 43). The Cochrane library identified seven unique records and had the highest precision (Pr = 30 %), followed by Epistemonikos (n = 2, Pr = 19 %). No unique records were found in PubMed Health (Pr = 24 %), DARE (Pr = 21 %), TRIP (Pr = 10 %) or MEDLINE (Pr = 10 %). Searching EMBASE and the Cochrane library identified 88 % of all systematic reviews in the reference set, and
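    The sensitivity and precision figures above follow from comparing each database's retrieved records against the reference set of relevant reviews. A minimal sketch with hypothetical record IDs:

```python
def search_performance(retrieved, relevant):
    """Sensitivity (recall) and precision of one database's search results.
    retrieved: set of record IDs returned by the database
    relevant:  set of record IDs in the reference set of systematic reviews
    """
    hits = retrieved & relevant                 # relevant records actually found
    sensitivity = len(hits) / len(relevant)     # share of the reference set found
    precision = len(hits) / len(retrieved)      # share of results that are relevant
    return sensitivity, precision

# Hypothetical record IDs for illustration:
retrieved = {"r1", "r2", "r3", "x1", "x2"}
relevant = {"r1", "r2", "r3", "r4"}
print(search_performance(retrieved, relevant))  # -> (0.75, 0.6)
```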

  2. Use and satisfaction with key functions of a common commercial electronic health record: a survey of primary care providers.

    PubMed

    Makam, Anil N; Lanham, Holly J; Batchelor, Kim; Samal, Lipika; Moran, Brett; Howell-Stampley, Temple; Kirk, Lynne; Cherukuri, Manjula; Santini, Noel; Leykum, Luci K; Halm, Ethan A

    2013-08-09

    Despite considerable financial incentives for adoption, there is little evidence available about providers' use and satisfaction with key functions of electronic health records (EHRs) that meet "meaningful use" criteria. We surveyed primary care providers (PCPs) in 11 general internal medicine and family medicine practices affiliated with 3 health systems in Texas about their use and satisfaction with performing common tasks (documentation, medication prescribing, preventive services, problem list) in the Epic EHR, a common commercial system. Most practices had greater than 5 years of experience with the Epic EHR. We used multivariate logistic regression to model predictors of being a structured documenter, defined as using electronic templates or prepopulated dot phrases to document at least two of the three note sections (history, physical, assessment and plan). 146 PCPs responded (70%). The majority used free text to document the history (51%) and assessment and plan (54%) and electronic templates to document the physical exam (57%). Half of PCPs were structured documenters (55%) with family medicine specialty (adjusted OR 3.3, 95% CI, 1.4-7.8) and years since graduation (nonlinear relationship with youngest and oldest having lowest probabilities) being significant predictors. Nearly half (43%) reported spending at least one extra hour beyond each scheduled half-day clinic completing EHR documentation. Three-quarters were satisfied with documenting completion of pneumococcal vaccinations and half were satisfied with documenting cancer screening (57% for breast, 45% for colorectal, and 46% for cervical). Fewer were satisfied with reminders for overdue pneumococcal vaccination (48%) and cancer screening (38% for breast, 37% for colorectal, and 31% for cervical). While most believed the problem list was helpful (70%) and kept an up-to-date list for their patients (68%), half thought they were unreliable and inaccurate (51%). 
Dissatisfaction with and suboptimal use

  3. Defining Electron Bifurcation in the Electron-Transferring Flavoprotein Family.

    PubMed

    Garcia Costas, Amaya M; Poudel, Saroj; Miller, Anne-Frances; Schut, Gerrit J; Ledbetter, Rhesa N; Fixen, Kathryn R; Seefeldt, Lance C; Adams, Michael W W; Harwood, Caroline S; Boyd, Eric S; Peters, John W

    2017-11-01

    Electron bifurcation is the coupling of exergonic and endergonic redox reactions to simultaneously generate (or utilize) low- and high-potential electrons. It is the third recognized form of energy conservation in biology and was recently described for select electron-transferring flavoproteins (Etfs). Etfs are flavin-containing heterodimers best known for donating electrons derived from fatty acid and amino acid oxidation to an electron transfer respiratory chain via Etf-quinone oxidoreductase. Canonical examples contain a flavin adenine dinucleotide (FAD) that is involved in electron transfer, as well as a non-redox-active AMP. However, Etfs demonstrated to bifurcate electrons contain a second FAD in place of the AMP. To expand our understanding of the functional variety and metabolic significance of Etfs and to identify amino acid sequence motifs that potentially enable electron bifurcation, we compiled 1,314 Etf protein sequences from genome sequence databases and subjected them to informatic and structural analyses. Etfs were identified in diverse archaea and bacteria, and they clustered into five distinct well-supported groups, based on their amino acid sequences. Gene neighborhood analyses indicated that these Etf group designations largely correspond to putative differences in functionality. Etfs with the demonstrated ability to bifurcate were found to form one group, suggesting that distinct conserved amino acid sequence motifs enable this capability. Indeed, structural modeling and sequence alignments revealed that identifying residues occur in the NADH- and FAD-binding regions of bifurcating Etfs. Collectively, a new classification scheme for Etf proteins that delineates putative bifurcating versus nonbifurcating members is presented and suggests that Etf-mediated bifurcation is associated with surprisingly diverse enzymes. IMPORTANCE Electron bifurcation has recently been recognized as an electron transfer mechanism used by microorganisms to maximize

  4. Defining Electron Bifurcation in the Electron-Transferring Flavoprotein Family

    PubMed Central

    Garcia Costas, Amaya M.; Poudel, Saroj; Miller, Anne-Frances; Schut, Gerrit J.; Ledbetter, Rhesa N.; Seefeldt, Lance C.; Adams, Michael W. W.

    2017-01-01

    ABSTRACT Electron bifurcation is the coupling of exergonic and endergonic redox reactions to simultaneously generate (or utilize) low- and high-potential electrons. It is the third recognized form of energy conservation in biology and was recently described for select electron-transferring flavoproteins (Etfs). Etfs are flavin-containing heterodimers best known for donating electrons derived from fatty acid and amino acid oxidation to an electron transfer respiratory chain via Etf-quinone oxidoreductase. Canonical examples contain a flavin adenine dinucleotide (FAD) that is involved in electron transfer, as well as a non-redox-active AMP. However, Etfs demonstrated to bifurcate electrons contain a second FAD in place of the AMP. To expand our understanding of the functional variety and metabolic significance of Etfs and to identify amino acid sequence motifs that potentially enable electron bifurcation, we compiled 1,314 Etf protein sequences from genome sequence databases and subjected them to informatic and structural analyses. Etfs were identified in diverse archaea and bacteria, and they clustered into five distinct well-supported groups, based on their amino acid sequences. Gene neighborhood analyses indicated that these Etf group designations largely correspond to putative differences in functionality. Etfs with the demonstrated ability to bifurcate were found to form one group, suggesting that distinct conserved amino acid sequence motifs enable this capability. Indeed, structural modeling and sequence alignments revealed that identifying residues occur in the NADH- and FAD-binding regions of bifurcating Etfs. Collectively, a new classification scheme for Etf proteins that delineates putative bifurcating versus nonbifurcating members is presented and suggests that Etf-mediated bifurcation is associated with surprisingly diverse enzymes. IMPORTANCE Electron bifurcation has recently been recognized as an electron transfer mechanism used by microorganisms to

  5. Certifiable database generation for SVS

    NASA Astrophysics Data System (ADS)

    Schiefele, Jens; Damjanovic, Dejan; Kubbat, Wolfgang

    2000-06-01

    In future aircraft cockpits, SVS will be used to display 3D physical and virtual information to pilots. Prototype and production Synthetic Vision Displays (SVDs) from Euro Telematic, UPS Advanced Technologies, Universal Avionics, VDO-Luftfahrtgeratewerk, and NASA are reviewed. As data sources, terrain, obstacle, navigation, and airport data are needed; Jeppesen-Sanderson, Inc. and Darmstadt Univ. of Technology are currently developing certifiable methods for the acquisition, validation, and processing of terrain, obstacle, and airport databases. The acquired data will be integrated into a High-Quality Database (HQ-DB). This database is the master repository; it contains all information relevant to all types of aviation applications. From the HQ-DB, SVS-relevant data is retrieved, converted, decimated, and adapted into an SVS Real-Time Onboard Database (RTO-DB). The process of data acquisition, verification, and data processing will be defined in a way that allows certification within DO-200a and new RTCA/EUROCAE standards for airport and terrain data. The proposed open formats will be established and evaluated for industrial usability. Finally, a NASA-industry cooperation to develop industrial SVS products under the umbrella of the NASA Aviation Safety Program (ASP) is introduced. A key element of the SVS NASA-ASP is the Jeppesen-led task to develop methods for worldwide database generation and certification. Jeppesen will build three airport databases that will be used in flight trials with NASA aircraft.

  6. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. This system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra software cost. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.

  7. Stratospheric emissions effects database development

    NASA Technical Reports Server (NTRS)

    Baughcum, Steven L.; Henderson, Stephen C.; Hertel, Peter S.; Maggiora, Debra R.; Oncina, Carlos A.

    1994-01-01

    This report describes the development of a stratospheric emissions effects database (SEED) of aircraft fuel burn and emissions from projected Year 2015 subsonic aircraft fleets and from projected fleets of high-speed civil transports (HSCT's). This report also describes the development of a similar database of emissions from Year 1990 scheduled commercial passenger airline and air cargo traffic. The objective of this work was to initiate, develop, and maintain an engineering database for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burn and emissions of nitrogen oxides (NO(x) as NO2), carbon monoxide, and hydrocarbons (as CH4) have been calculated on a 1-degree latitude x 1-degree longitude x 1-kilometer altitude grid and delivered to NASA as electronic files. This report describes the assumptions and methodology for the calculations and summarizes the results of these calculations.
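    The gridding described above can be sketched by accumulating per-record fuel burn into cells keyed by floored coordinates; the record layout here is a hypothetical simplification of the actual database:

```python
import math
from collections import defaultdict

def grid_emissions(records):
    """Accumulate fuel burn (kg) onto a 1 deg x 1 deg x 1 km grid.
    Each record is (lat_deg, lon_deg, alt_km, fuel_kg); the cell key is
    the floored coordinate triple.
    """
    grid = defaultdict(float)
    for lat, lon, alt, fuel in records:
        grid[(math.floor(lat), math.floor(lon), math.floor(alt))] += fuel
    return dict(grid)

# Hypothetical flight-segment records for illustration:
records = [(40.2, -75.8, 10.3, 500.0),
           (40.9, -75.1, 10.7, 250.0),
           (41.1, -75.5, 10.2, 100.0)]
print(grid_emissions(records))  # -> {(40, -76, 10): 750.0, (41, -76, 10): 100.0}
```

    Emission masses (e.g. NOx as NO2) would be accumulated the same way, one grid per species.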

  8. 76 FR 5106 - Deposit Requirements for Registration of Automated Databases That Predominantly Consist of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-28

    ... Registration of Automated Databases That Predominantly Consist of Photographs AGENCY: Copyright Office, Library... regarding electronic registration of automated databases that consist predominantly of photographs and group... applications for automated databases that consist predominantly of photographs. The proposed amendments would...

  9. 77 FR 21618 - 60-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... DEPARTMENT OF STATE [Public Notice 7843] 60-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096... Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB Control Number: 1405-0168...

  10. 77 FR 47690 - 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... DEPARTMENT OF STATE [Public Notice 7976] 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096.... Title of Information Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB...

  11. Network-Based Methods for Identifying Key Active Proteins in the Extracellular Electron Transfer Process in Shewanella oneidensis MR-1.

    PubMed

    Ding, Dewu; Sun, Xiao

    2018-01-16

    Shewanella oneidensis MR-1 can transfer electrons from the intracellular environment to the extracellular space of the cells to reduce the extracellular insoluble electron acceptors (Extracellular Electron Transfer, EET). Benefiting from this EET capability, Shewanella has been widely used in different areas, such as energy production, wastewater treatment, and bioremediation. Genome-wide proteomics data was used to determine the active proteins involved in activating the EET process. We identified 1012 proteins with decreased expression and 811 proteins with increased expression when the EET process changed from inactivation to activation. We then networked these proteins to construct the active protein networks, and identified the top 20 key active proteins by network centralization analysis, including metabolism- and energy-related proteins, signal and transcriptional regulatory proteins, translation-related proteins, and the EET-related proteins. We also constructed the integrated protein interaction and transcriptional regulatory networks for the active proteins, then found three exclusive active network motifs involved in activating the EET process: the Bi-feedforward Loop, the Regulatory Cascade with a Feedback, and the Feedback with a Protein-Protein Interaction (PPI). We also identified the active proteins involved in these motifs. Both enrichment analysis and comparative analysis to the whole-genome data implicated the multiheme c-type cytochromes and multiple signal processing proteins in the process. Furthermore, the interactions of these motif-guided active proteins and the involved functional modules are discussed. Collectively, by using network-based methods, this work reports a proteome-wide search for the key active proteins that potentially activate the EET process.
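    The network centralization step can be illustrated with plain degree centrality over an interaction network. The interactions below are hypothetical examples, not the study's data, though the protein names are well-known Shewanella EET components:

```python
from collections import defaultdict

def top_central_proteins(edges, k=3):
    """Rank proteins by degree centrality in an undirected interaction network.
    (Plain degree centrality here; the study combined several centrality measures.)
    edges: iterable of (protein_a, protein_b) interaction pairs.
    """
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    # Sort by descending degree, breaking ties alphabetically.
    return sorted(degree, key=lambda p: (-degree[p], p))[:k]

# Hypothetical interactions for illustration:
edges = [("MtrA", "MtrB"), ("MtrA", "MtrC"), ("MtrA", "CymA"),
         ("MtrB", "MtrC"), ("OmcA", "MtrC")]
print(top_central_proteins(edges, k=2))  # -> ['MtrA', 'MtrC']
```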

  12. Five key recommendations for the implementation of Hospital Electronic Prescribing and Medicines Administration systems in Scotland.

    PubMed

    Cresswell, Kathrin; Slee, Ann; Sheikh, Aziz

    2017-01-24

NHS Scotland is about to embark on the implementation of Hospital Electronic Prescribing and Medicines Administration (HEPMA) systems. There are a number of risks associated with such ventures, so drawing on existing experiences from other settings is crucial in informing deployment. Drawing on our previous and ongoing work in English settings as well as the international literature, we reflect on key lessons that NHS Scotland may wish to consider going forward. These deliberations include recommendations surrounding key aspects of deployment strategy: 1) how central coordination should be conceptualised, 2) how flexibility can be ensured, 3) paying attention to optimising systems from the outset, 4) how expertise should be developed and centrally shared, and 5) ways in which learning from experience can be maximised. Our five recommendations will, we hope, provide a starting point for the strategic deliberations of policy makers. Throughout this journey, it is important to view the deployment of HEPMA as part of a wider strategic goal of creating integrated digital infrastructures across Scotland.

  13. FCDD: A Database for Fruit Crops Diseases.

    PubMed

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

Building the Fruit Crops Diseases Database (FCDD) required a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles detailed information on 162 fruit crop diseases, including disease type, causal organism, images, symptoms and control. The FCDD contains 171 phytochemicals from 25 fruits, their 2D images and their 20 possible sequences. This information has been manually extracted and manually verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing information on fruit crop diseases that will help in the discovery of potential drugs from fruits, one of the common bioresources. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available. http://www.fruitcropsdd.com/

  14. Extending key sharing: how to generate a key tightly coupled to a network security policy

    NASA Astrophysics Data System (ADS)

    Kazantzidis, Matheos

    2006-04-01

Current state-of-the-art security policy technologies, besides the small-scale limitations and largely manual nature of the accompanying management methods, are lacking a) in real-timeliness of policy implementation and b) in robustness, with vulnerabilities and inflexibility stemming from centralized policy decision making; even if, for example, a policy description or access control database is distributed, the actual decision is often a centralized action and forms a single point of failure for the system. In this paper we present a new fundamental concept that allows a security policy to be implemented by a systematic and efficient key distribution procedure. Specifically, we extend polynomial Shamir key splitting, in which a global key is split into n parts, any k of which can reconstruct the original key. We present a method in which, instead of "any k parts" being able to reconstruct the original key, the latter can only be reconstructed if keys are combined as an access control policy describes. This leads to an easily deployable key generation procedure that results in a single key per entity, a key that "knows" its role in the specific access control policy from which it was derived. The system is efficient, as it may be used to avoid expensive PKI operations or pairwise key distributions, and it provides superior security due to its distributed nature, the fact that the key is tightly coupled to the policy, and the fact that policy changes may be implemented more easily and quickly.
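The baseline scheme the paper extends, classic (k, n) Shamir polynomial key splitting, can be sketched in a few lines: the key is the constant term of a random degree-(k-1) polynomial over a prime field, shares are point evaluations, and any k shares recover the key by Lagrange interpolation at zero. This illustrates only the baseline, not the paper's policy-coupled extension.

```python
# Minimal Shamir (k, n) secret splitting over a prime field.
import random

P = 2**127 - 1  # a Mersenne prime large enough for a 16-byte key

def split(secret, k, n):
    """Split secret into n shares; any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 0xDEADBEEF
shares = split(key, k=3, n=5)
assert reconstruct(shares[:3]) == key  # any 3 of the 5 shares suffice
```

The paper's contribution replaces the "any k shares" threshold with share combinations dictated by an access control policy, so the reconstruction structure itself encodes the policy.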

  15. The Barcelona Hospital Clínic therapeutic apheresis database.

    PubMed

    Cid, Joan; Carbassé, Gloria; Cid-Caballero, Marc; López-Púa, Yolanda; Alba, Cristina; Perea, Dolores; Lozano, Miguel

    2017-09-22

A therapeutic apheresis (TA) database helps to increase knowledge about the indications for and types of apheresis procedures that are performed in clinical practice. The objective of the present report was to describe the type and number of TA procedures performed at our institution over a 10-year period, from 2007 to 2016. The TA electronic database was created by transferring patient data from electronic medical records and consultation forms into a Microsoft Access database developed exclusively for this purpose. Since 2007, prospective data from every TA procedure were entered in the database. A total of 5940 TA procedures were performed: 3762 (63.3%) plasma exchange (PE) procedures, 1096 (18.5%) hematopoietic progenitor cell (HPC) collections, and 1082 (18.2%) TA procedures other than PEs and HPC collections. The overall trend for the period was a progressive increase in the total number of TA procedures performed each year (from 483 TA procedures in 2007 to 822 in 2016). The trend for each procedure over the 10-year period differed: the numbers of PE procedures and of other types of TA procedure increased 22% and 2818%, respectively, while the number of HPC collections decreased 28%. The TA database helped us to increase our knowledge about the various indications and types of TA procedures performed in our current practice. We also believe that this database could serve as a model that other institutions can use to track service metrics. © 2017 Wiley Periodicals, Inc.

  16. An international database of radionuclide concentration ratios for wildlife: development and uses.

    PubMed

    Copplestone, D; Beresford, N A; Brown, J E; Yankovich, T

    2013-12-01

    A key element of most systems for assessing the impact of radionuclides on the environment is a means to estimate the transfer of radionuclides to organisms. To facilitate this, an international wildlife transfer database has been developed to provide an online, searchable compilation of transfer parameters in the form of equilibrium-based whole-organism to media concentration ratios. This paper describes the derivation of the wildlife transfer database, the key data sources it contains and highlights the applications for the data. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Performance Evaluation of a Database System in a Multiple Backend Configurations,

    DTIC Science & Technology

    1984-10-01

leaving a system process, the internal performance measurements of MMSD have been carried out. Methodologies for constructing test databases...access directory data via the AT, EDIT, and CDT. In designing the test database, one of the key concepts is the choice of the directory attributes in...internal timing. These requests are selected since they retrieve the smallest portion of the test database and the processing time for each request is

  18. Pressure Injuries in Inpatient Care Facilities in the Czech Republic: Analysis of a National Electronic Database.

    PubMed

Pokorná, Andrea; Benešová, Klára; Jarkovský, Jiří; Mužík, Jan; Beeckman, Dimitri

The purpose of this study was to analyze pressure injury (PI) occurrence upon admission and at any time during the hospital course in inpatient care facilities in the Czech Republic. Secondary aims were to evaluate demographic and clinical data of patients with PI and the impact of a PI on length of stay (LOS) in the hospital. Retrospective, cross-sectional analysis. The sample comprised data of hospitalized patients entered into the National Register of Hospitalized Patients (NRHOSP) database of the Czech Republic between 2007 and 2014 with a diagnosis of L89 (pressure ulcer of unspecified site based on the International Classification of Diseases, Tenth Revision, ICD-10). Electronic records of 17,762,854 hospitalizations were reviewed. Data from the NRHOSP from all acute and non-acute care hospitals in the Czech Republic were analyzed. Specifically, we analyzed patients admitted to acute and non-acute care facilities with a primary or secondary diagnosis of PI. The NRHOSP database included 17,762,854 cases, of which 46,224 (33,342 in acute care hospitals; 12,882 in non-acute care hospitals) had the L89 diagnosis (0.3%). The mean age of patients admitted with a PI was 73.8 ± 15.3 years (mean ± SD), and their average LOS was 33.2 ± 76.9 days. The mean LOS of patients hospitalized with the L89 code as a primary diagnosis (n = 6877) was significantly longer than that of patients for whom L89 was a secondary diagnosis (25.8 vs 20.2 days, P < .001) in acute care facilities. In contrast, we found no difference in mean LOS for patients hospitalized in non-acute care facilities (58.7 vs 65.1 days; P = .146) with ICD code L89. Pressure injuries were associated with significant LOS in both acute and non-acute care settings in the Czech Republic. Despite the valuable insights we obtained from the analysis of NRHOSP data, we advocate creation of a more valid and reliable electronic reporting system that enables policy makers to evaluate the quality and

  19. Teaching Tip: Active Learning via a Sample Database: The Case of Microsoft's Adventure Works

    ERIC Educational Resources Information Center

    Mitri, Michel

    2015-01-01

    This paper describes the use and benefits of Microsoft's Adventure Works (AW) database to teach advanced database skills in a hands-on, realistic environment. Database management and querying skills are a key element of a robust information systems curriculum, and active learning is an important way to develop these skills. To facilitate active…

  20. The European general thoracic surgery database project.

    PubMed

    Falcoz, Pierre Emmanuel; Brunelli, Alessandro

    2014-05-01

    The European Society of Thoracic Surgeons (ESTS) Database is a free registry created by ESTS in 2001. The current online version was launched in 2007. It runs currently on a Dendrite platform with extensive data security and frequent backups. The main features are a specialty-specific, procedure-specific, prospectively maintained, periodically audited and web-based electronic database, designed for quality control and performance monitoring, which allows for the collection of all general thoracic procedures. Data collection is the "backbone" of the ESTS database. It includes many risk factors, processes of care and outcomes, which are specially designed for quality control and performance audit. The user can download and export their own data and use them for internal analyses and quality control audits. The ESTS database represents the gold standard of clinical data collection for European General Thoracic Surgery. Over the past years, the ESTS database has achieved many accomplishments. In particular, the database hit two major milestones: it now includes more than 235 participating centers and 70,000 surgical procedures. The ESTS database is a snapshot of surgical practice that aims at improving patient care. In other words, data capture should become integral to routine patient care, with the final objective of improving quality of care within Europe.

  1. Expanded national database collection and data coverage in the FINDbase worldwide database for clinically relevant genomic variation allele frequencies

    PubMed Central

    Viennas, Emmanouil; Komianou, Angeliki; Mizzi, Clint; Stojiljkovic, Maja; Mitropoulou, Christina; Muilu, Juha; Vihinen, Mauno; Grypioti, Panagiota; Papadaki, Styliani; Pavlidis, Cristiana; Zukic, Branka; Katsila, Theodora; van der Spek, Peter J.; Pavlovic, Sonja; Tzimas, Giannis; Patrinos, George P.

    2017-01-01

FINDbase (http://www.findbase.org) is a comprehensive data repository that records the prevalence of clinically relevant genomic variants in various populations worldwide, such as pathogenic variants leading mostly to monogenic disorders and pharmacogenomics biomarkers. The database also records the incidence of rare genetic diseases in various populations, all in well-distinct data modules. Here, we report extensive data content updates in all data modules, with direct implications for clinical pharmacogenomics. We also report significant new developments in FINDbase, namely (i) the release of a new version of the ETHNOS software that catalyzes the development and curation of national/ethnic genetic databases, (ii) the migration of all FINDbase data content into 90 distinct national/ethnic mutation databases, all built around Microsoft's PivotViewer (http://www.getpivot.com) software, (iii) new data visualization tools and (iv) the interrelation of FINDbase with the DruGeVar database, with direct implications for clinical pharmacogenomics. The abovementioned updates further enhance the impact of FINDbase as a key resource for Genomic Medicine applications. PMID:27924022

  2. Encryption key distribution via chaos synchronization

    NASA Astrophysics Data System (ADS)

    Keuninckx, Lars; Soriano, Miguel C.; Fischer, Ingo; Mirasso, Claudio R.; Nguimdo, Romain M.; van der Sande, Guy

    2017-02-01

    We present a novel encryption scheme, wherein an encryption key is generated by two distant complex nonlinear units, forced into synchronization by a chaotic driver. The concept is sufficiently generic to be implemented on either photonic, optoelectronic or electronic platforms. The method for generating the key bitstream from the chaotic signals is reconfigurable. Although derived from a deterministic process, the obtained bit series fulfill the randomness conditions as defined by the National Institute of Standards test suite. We demonstrate the feasibility of our concept on an electronic delay oscillator circuit and test the robustness against attacks using a state-of-the-art system identification method.
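The underlying mechanism can be sketched numerically: two identical nonlinear "response" units, started from different states, receive the same chaotic drive signal; strong driving contracts both onto the same trajectory, so each end can threshold its local state into an identical bitstream without ever transmitting the key. This toy logistic-map model is my own illustration, not the authors' optoelectronic implementation, and the raw thresholded bits here skip the statistical post-processing a real key generator would need.

```python
# Toy model of key generation via driven synchronization (illustrative only).

def logistic(x):
    return 4.0 * x * (1.0 - x)  # chaotic map on (0, 1)

def response(state, drive, eps=0.9):
    # eps-weighted injection of the common chaotic driver; eps large enough
    # makes the update a contraction, forcing synchronization.
    return (1.0 - eps) * logistic(state) + eps * drive

d, a, b = 0.123, 0.4, 0.8          # driver and two distinct response states
for _ in range(500):               # transient: the responses synchronize
    d = logistic(d)
    a, b = response(a, d), response(b, d)

bits_a, bits_b = [], []
for _ in range(128):               # both ends now extract the same bits
    d = logistic(d)
    a, b = response(a, d), response(b, d)
    bits_a.append(int(a > 0.5))
    bits_b.append(int(b > 0.5))

assert bits_a == bits_b            # identical 128-bit key at both ends
```

An eavesdropper who sees only the driver signal still lacks the response dynamics, which is what makes the scheme interesting; the paper tests exactly this point against state-of-the-art system identification attacks.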

  3. Encryption key distribution via chaos synchronization

    PubMed Central

    Keuninckx, Lars; Soriano, Miguel C.; Fischer, Ingo; Mirasso, Claudio R.; Nguimdo, Romain M.; Van der Sande, Guy

    2017-01-01

    We present a novel encryption scheme, wherein an encryption key is generated by two distant complex nonlinear units, forced into synchronization by a chaotic driver. The concept is sufficiently generic to be implemented on either photonic, optoelectronic or electronic platforms. The method for generating the key bitstream from the chaotic signals is reconfigurable. Although derived from a deterministic process, the obtained bit series fulfill the randomness conditions as defined by the National Institute of Standards test suite. We demonstrate the feasibility of our concept on an electronic delay oscillator circuit and test the robustness against attacks using a state-of-the-art system identification method. PMID:28233876

  4. Irreversible electron attachment--a key to DNA damage by solvated electrons in aqueous solution.

    PubMed

    Westphal, K; Wiczk, J; Miloch, J; Kciuk, G; Bobrowski, K; Rak, J

    2015-11-07

The TYT and TXT trimeric oligonucleotides, where X stands for a native nucleobase, T (thymine), C (cytosine), A (adenine), or G (guanine), and Y indicates a brominated analogue of the former, were irradiated with ionizing radiation generated by a (60)Co source in aqueous solutions containing Tris as a hydroxyl radical scavenger. In the past, these oligomers were bombarded with low-energy electrons under an ultra-high vacuum and significant damage to TXT trimers was observed. However, in aqueous solution, hydrated electrons do not produce serious damage to TXT trimers, although the employed radiation dose exceeded the doses used in radiotherapy many times over. Thus, our studies demonstrate unequivocally that hydrated electrons, which are the major form of electrons generated during radiotherapy, are a negligible factor in damage to native DNA. It was also demonstrated that all the studied brominated nucleobases have the potential to sensitize DNA under hypoxic conditions. Strand breaks, abasic sites and the products of hydroxyl radical attachment to nucleobases have been identified by HPLC and LC-MS methods. Although all the bromonucleobases lead to DNA damage under the experimental conditions of the present work, bromopyrimidines seem to be the radiosensitizers of choice since they lead to more strand breaks than bromopurines.

  5. On-line database of voltammetric data of immobilized particles for identifying pigments and minerals in archaeometry, conservation and restoration (ELCHER database).

    PubMed

    Doménech-Carbó, Antonio; Doménech-Carbó, María Teresa; Valle-Algarra, Francisco Manuel; Gimeno-Adelantado, José Vicente; Osete-Cortina, Laura; Bosch-Reig, Francisco

    2016-07-13

A web-based database of voltammograms is presented for characterizing artists' pigments and corrosion products of ceramic, stone and metal objects by means of the voltammetry of immobilized particles methodology. A description of the website and the database is provided. Voltammograms are, in most cases, accompanied by scanning electron microphotographs, X-ray spectra, infrared spectra acquired in attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR) mode and diffuse reflectance spectra in the UV-Vis region. To illustrate the usefulness of the database, two case studies involving the identification of pigments and a case study describing the deterioration of an archaeological metallic object are presented. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Constructing a Geology Ontology Using a Relational Database

    NASA Astrophysics Data System (ADS)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multi-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction methods, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In the geological ontology, inheritance and union operations between superclasses and subclasses were used to represent the nested relationships in a geochronology and the multiple inheritances
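The core transformation rules can be sketched generically: tables become OWL classes, plain columns become datatype properties, and foreign keys become object properties. This sketch is an illustration of that common mapping pattern, not the authors' rule set, and the schema below is a hypothetical example.

```python
# Illustrative relational-to-OWL mapping (hypothetical geological schema).

schema = {
    "Formation": {"columns": ["name", "age_ma"], "fks": {}},
    "Sample":    {"columns": ["depth_m"], "fks": {"formation_id": "Formation"}},
}

def to_owl(schema):
    """Emit OWL-style triples: table -> Class, column -> DatatypeProperty,
    foreign key -> ObjectProperty with domain/range."""
    triples = []
    for table, spec in schema.items():
        triples.append((table, "rdf:type", "owl:Class"))
        for col in spec["columns"]:
            triples.append((col, "rdf:type", "owl:DatatypeProperty"))
            triples.append((col, "rdfs:domain", table))
        for fk, target in spec["fks"].items():
            prop = f"has{target}"
            triples.append((prop, "rdf:type", "owl:ObjectProperty"))
            triples.append((prop, "rdfs:domain", table))
            triples.append((prop, "rdfs:range", target))
    return triples

owl = to_owl(schema)
assert ("hasFormation", "rdfs:range", "Formation") in owl
```

The harder cases the abstract highlights, multiple inheritance and nested relationships, are exactly what such a simple per-table rule set cannot capture, which is why the authors add dedicated rules and an inverse-mapping check of semantic integrity.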

  7. RNAcentral: A vision for an international database of RNA sequences

    PubMed Central

    Bateman, Alex; Agrawal, Shipra; Birney, Ewan; Bruford, Elspeth A.; Bujnicki, Janusz M.; Cochrane, Guy; Cole, James R.; Dinger, Marcel E.; Enright, Anton J.; Gardner, Paul P.; Gautheret, Daniel; Griffiths-Jones, Sam; Harrow, Jen; Herrero, Javier; Holmes, Ian H.; Huang, Hsien-Da; Kelly, Krystyna A.; Kersey, Paul; Kozomara, Ana; Lowe, Todd M.; Marz, Manja; Moxon, Simon; Pruitt, Kim D.; Samuelsson, Tore; Stadler, Peter F.; Vilella, Albert J.; Vogel, Jan-Hinnerk; Williams, Kelly P.; Wright, Mathew W.; Zwieb, Christian

    2011-01-01

    During the last decade there has been a great increase in the number of noncoding RNA genes identified, including new classes such as microRNAs and piRNAs. There is also a large growth in the amount of experimental characterization of these RNA components. Despite this growth in information, it is still difficult for researchers to access RNA data, because key data resources for noncoding RNAs have not yet been created. The most pressing omission is the lack of a comprehensive RNA sequence database, much like UniProt, which provides a comprehensive set of protein knowledge. In this article we propose the creation of a new open public resource that we term RNAcentral, which will contain a comprehensive collection of RNA sequences and fill an important gap in the provision of biomedical databases. We envision RNA researchers from all over the world joining a federated RNAcentral network, contributing specialized knowledge and databases. RNAcentral would centralize key data that are currently held across a variety of databases, allowing researchers instant access to a single, unified resource. This resource would facilitate the next generation of RNA research and help drive further discoveries, including those that improve food production and human and animal health. We encourage additional RNA database resources and research groups to join this effort. We aim to obtain international network funding to further this endeavor. PMID:21940779

  8. Public Key Infrastructure Study

    DTIC Science & Technology

    1994-04-01

    commerce. This Public Key Infrastructure (PKI) study focuses on the United States Federal Government operations, but also addresses national and global ... issues in order to facilitate the interoperation of protected electronic commerce among the various levels of government in the U.S., private citizens

  9. The UMIST database for astrochemistry 2006

    NASA Astrophysics Data System (ADS)

    Woodall, J.; Agúndez, M.; Markwick-Kemper, A. J.; Millar, T. J.

    2007-05-01

Aims: We present a new version of the UMIST Database for Astrochemistry, the fourth such version to be released to the public. The current version contains some 4573 binary gas-phase reactions, an increase of 10% over the previous (1999) version, among 420 species, of which 23 are new to the database. Methods: Major updates have been made to ion-neutral reactions, neutral-neutral reactions, particularly at low temperature, and dissociative recombination reactions. We have included for the first time the interstellar chemistry of fluorine. In addition to the usual database, we have also released a reaction set in which the effects of dipole-enhanced ion-neutral rate coefficients are included. Results: These two reaction sets have been used in a dark cloud model and the results of these models are presented and discussed briefly. The database and associated software are available on the World Wide Web at www.udfa.net. Tables 1, 2, 4 and 9 are only available in electronic form at http://www.aanda.org
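Reaction sets like this are typically consumed by evaluating a modified-Arrhenius fit per reaction; assuming the standard UDFA parameterization k(T) = alpha * (T/300)^beta * exp(-gamma/T) for two-body reactions, a minimal sketch looks like this (the alpha/beta/gamma values are illustrative placeholders, not database entries):

```python
import math

# Modified-Arrhenius rate coefficient for a two-body gas-phase reaction,
# as parameterized in UDFA-style reaction networks (illustrative values).

def rate_coefficient(alpha, beta, gamma, T):
    """Two-body rate coefficient in cm^3 s^-1 at temperature T (K)."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# A barrierless ion-neutral reaction (beta = gamma = 0) keeps a constant,
# Langevin-like value even at dark-cloud temperatures of ~10 K:
k10 = rate_coefficient(2.0e-9, 0.0, 0.0, 10.0)
assert k10 == 2.0e-9  # temperature-independent when beta = gamma = 0
```

The dipole-enhanced release mentioned above effectively alters the low-temperature behaviour of ion-neutral entries, which is why the two reaction sets give different dark-cloud model results.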

  10. [Benefits of large healthcare databases for drug risk research].

    PubMed

    Garbe, Edeltraut; Pigeot, Iris

    2015-08-01

    Large electronic healthcare databases have become an important worldwide data resource for drug safety research after approval. Signal generation methods and drug safety studies based on these data facilitate the prospective monitoring of drug safety after approval, as has been recently required by EU law and the German Medicines Act. Despite its large size, a single healthcare database may include insufficient patients for the study of a very small number of drug-exposed patients or the investigation of very rare drug risks. For that reason, in the United States, efforts have been made to work on models that provide the linkage of data from different electronic healthcare databases for monitoring the safety of medicines after authorization in (i) the Sentinel Initiative and (ii) the Observational Medical Outcomes Partnership (OMOP). In July 2014, the pilot project Mini-Sentinel included a total of 178 million people from 18 different US databases. The merging of the data is based on a distributed data network with a common data model. In the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance (ENCEPP) there has been no comparable merging of data from different databases; however, first experiences have been gained in various EU drug safety projects. In Germany, the data of the statutory health insurance providers constitute the most important resource for establishing a large healthcare database. Their use for this purpose has so far been severely restricted by the Code of Social Law (Section 75, Book 10). Therefore, a reform of this section is absolutely necessary.
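The distributed data network with a common data model can be illustrated schematically: each site keeps its records locally in the shared schema and answers only aggregate queries, so patient-level data never leave the site. All site names and records below are invented for illustration, and the real Sentinel common data model is far richer than this two-field toy.

```python
# Schematic federated query over site-local databases in a common data model.

site_databases = {  # common data model: list of (drug, had_outcome) records
    "site_a": [("drugX", True), ("drugX", False), ("drugY", False)],
    "site_b": [("drugX", True), ("drugY", True), ("drugX", False)],
}

def local_count(records, drug):
    """Run at the site: return only aggregate (n exposed, n with outcome)."""
    exposed = [outcome for d, outcome in records if d == drug]
    return len(exposed), sum(exposed)

def federated_count(drug):
    """Coordinator: combine the sites' aggregates, never raw records."""
    totals = [local_count(r, drug) for r in site_databases.values()]
    n = sum(t[0] for t in totals)
    events = sum(t[1] for t in totals)
    return n, events

assert federated_count("drugX") == (4, 2)  # 4 exposed, 2 with the outcome
```

Pooling only aggregates is what lets a network such as Mini-Sentinel span 178 million people across 18 databases while each data holder retains control of its own records.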

  11. The Database Business: Managing Today--Planning for Tomorrow. Issues and Futures.

    ERIC Educational Resources Information Center

    Aitchison, T. M.; And Others

    1988-01-01

    Current issues and the future of the database business are discussed in five papers. Topics covered include aspects relating to the quality of database production; international ownership in the U.S. information marketplace; an overview of pricing strategies in the electronic information industry; and pricing issues from the viewpoints of online…

  12. Molecule database framework: a framework for creating database applications with chemical structure search capability

    PubMed Central

    2013-01-01

Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization) For chemical structure searching Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. Conclusions By using a simple web application it was

  13. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    PubMed

    Kiener, Joos

    2013-12-11

Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures), import and export of SD-files, and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application it was shown that Molecule Database Framework
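The framework's central idea is hiding structure storage and search behind plain method calls. The following Python analogy is entirely hypothetical (the actual framework is Java on top of the Bingo/PostgreSQL cartridge, with a different API), and the naive SMILES substring match merely stands in for true graph-based substructure search.

```python
# Hypothetical analogy of the "structure search as a method call" idea.

class MoleculeRepository:
    def __init__(self):
        self._rows = {}   # id -> SMILES string

    def register(self, mol_id, smiles):
        """Store a structure; a real backend would canonicalize and index it."""
        self._rows[mol_id] = smiles

    def substructure_search(self, fragment):
        # Real cartridges perform subgraph matching on the molecular graph;
        # substring matching here is only a placeholder for the interface.
        return [mid for mid, smi in self._rows.items() if fragment in smi]

repo = MoleculeRepository()
repo.register("m1", "CCO")        # ethanol
repo.register("m2", "CC(=O)O")    # acetic acid
assert repo.substructure_search("C(=O)O") == ["m2"]  # carboxyl query
```

The point of such an abstraction is the one the abstract makes: application developers call `substructure_search` without knowing anything about the chemistry cartridge underneath.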

  14. Multicomponent reactions provide key molecules for secret communication.

    PubMed

    Boukis, Andreas C; Reiter, Kevin; Frölich, Maximiliane; Hofheinz, Dennis; Meier, Michael A R

    2018-04-12

    A convenient and inherently more secure communication channel for encoding messages via specifically designed molecular keys is introduced by combining advanced encryption standard cryptography with molecular steganography. The necessary molecular keys require large structural diversity, thus suggesting the application of multicomponent reactions. Herein, the Ugi four-component reaction of perfluorinated acids is utilized to establish an exemplary database consisting of 130 commercially available components. Considering all permutations, this combinatorial approach can unambiguously provide 500,000 molecular keys in only one synthetic procedure per key. The molecular keys are transferred nondigitally and concealed by either adsorption onto paper, coffee, tea or sugar as well as by dissolution in a perfume or in blood. Re-isolation and purification from these disguises is simplified by the perfluorinated sidechains of the molecular keys. High resolution tandem mass spectrometry can unequivocally determine the molecular structure and thus the identity of the key for a subsequent decryption of an encoded message.

  15. Preliminary development of a diabetic foot ulcer database from a wound electronic medical record: a tool to decrease limb amputations.

    PubMed

    Golinko, Michael S; Margolis, David J; Tal, Adit; Hoffstad, Ole; Boulton, Andrew J M; Brem, Harold

    2009-01-01

Our objective was to create a practical standardized database of clinically relevant variables in the care of patients with diabetes and foot ulcers. Numerical clinical variables such as age, baseline laboratory values, and wound area were extracted from the wound electronic medical record (WEMR). A coding system was developed to translate narrative data, culture, and pathology reports into discrete, quantifiable variables. Using data extracted from the WEMR, a diabetic foot ulcer-specific database incorporated the following tables: (1) demographics, medical history, and baseline laboratory values; (2) vascular testing data; (3) radiology data; (4) wound characteristics; and (5) wound debridement data including pathology, culture results, and amputation data. The database contains variables that can be easily exported for analysis. Amputation was studied in 146 patients who had at least two visits (i.e., two entries in the database). Analysis revealed that 19 (13%) patients underwent 32 amputations (nine major and 23 minor) in 23 limbs. In a proportional hazards model, an increased number of visits, and hence of entries in the WEMR, was associated with a decreased risk of amputation, hazard ratio 0.87 (0.78, 1.00). Further analysis revealed no significant difference in age, gender, HbA1c%, cholesterol, white blood cell count, or prealbumin at baseline, whereas hemoglobin and albumin were significantly lower in the amputee group (p < 0.05) than in the nonamputee group. Fifty-nine percent of amputees had histological osteomyelitis based on operating room biopsy vs. 45% of nonamputees. In conclusion, tracking patients with a WEMR is a tool that could potentially increase patient safety and quality of care, allowing clinicians to more easily identify a nonhealing wound and intervene. This report describes a method of capturing data relevant to the clinical care of a patient with a diabetic foot ulcer, and may enable clinicians to adapt such a system to their own patient population.

  16. Locating relevant patient information in electronic health record data using representations of clinical concepts and database structures.

    PubMed

    Pan, Xuequn; Cimino, James J

    2014-01-01

    Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
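
    The two-gap bridging described above, from a clinical concept of interest to every relevant location in the EHR, can be sketched as a traversal over a small hand-built knowledge base. The concept relationships and table names below are hypothetical illustrations, not the authors' actual knowledge structure:

    ```python
    # Minimal sketch of concept-based query expansion over a toy knowledge base.
    # RELATED bridges gap 1 (concept -> concepts instantiated in EHR data);
    # LOCATIONS bridges gap 2 (concept -> database structures holding it).

    RELATED = {
        "diabetes mellitus": ["type 1 diabetes", "type 2 diabetes", "hba1c"],
        "type 2 diabetes": ["metformin"],
    }

    LOCATIONS = {
        "type 1 diabetes": [("diagnoses", "icd_code")],
        "type 2 diabetes": [("diagnoses", "icd_code")],
        "hba1c": [("lab_results", "test_name")],
        "metformin": [("medications", "drug_name")],
    }

    def expand(concept):
        """Traverse the concept graph, collecting every relevant EHR location."""
        seen, stack, hits = set(), [concept], []
        while stack:
            c = stack.pop()
            if c in seen:
                continue
            seen.add(c)
            hits.extend(LOCATIONS.get(c, []))
            stack.extend(RELATED.get(c, []))
        return sorted(set(hits))

    print(expand("diabetes mellitus"))
    ```

    A single query for "diabetes mellitus" thus fans out to diagnosis codes, lab tests, and medications, which is the automated query expansion the paper describes.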

  17. The Latin American Social Medicine database

    PubMed Central

    Eldredge, Jonathan D; Waitzkin, Howard; Buchanan, Holly S; Teal, Janis; Iriart, Celia; Wiley, Kevin; Tregear, Jonathan

    2004-01-01

    Background Public health practitioners and researchers have for many years been attempting to understand more clearly the links between social conditions and the health of populations. Until recently, most public health professionals in English-speaking countries were unaware that their colleagues in Latin America had developed an entire field of inquiry and practice devoted to making these links more clearly understood. The Latin American Social Medicine (LASM) database bridges this gap. Description This public health informatics case study describes the key features of a unique information resource intended to improve access to the LASM literature and to augment understanding of the social determinants of health. The case study includes both quantitative and qualitative evaluation data. The LASM database at The University of New Mexico brings important information, previously known mostly within professional networks located in Latin American countries, to public health professionals worldwide via the Internet. The database uses trilingual (Spanish, Portuguese, and English) structured abstracts to summarize classic and contemporary works. Conclusion This database provides helpful information for public health professionals on the social determinants of health and expands access to LASM. PMID:15627401

  18. Electron and Positron Stopping Powers of Materials

    National Institute of Standards and Technology Data Gateway

    SRD 7 NIST Electron and Positron Stopping Powers of Materials (PC database for purchase)   The EPSTAR database provides rapid calculations of stopping powers (collisional, radiative, and total), CSDA ranges, radiation yields and density effect corrections for incident electrons or positrons with kinetic energies from 1 keV to 10 GeV, and for any chemically defined target material.

  19. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches.

    PubMed

    Sommanustweechai, Angkana; Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-02-01

    To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand, we gathered information on antibiotic distribution from in-depth interviews with 43 key informants from farms, health facilities, the pharmaceutical and animal feed industries, private pharmacies and regulators, and from database and literature searches. In 2016-2017, licensed antibiotic distribution in Thailand involved over 700 importers and about 24 000 distributors, e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as "dangerous drugs", it classified only a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act's regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. In Thailand, most antibiotics are easily and widely available from retail pharmacies without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, certain antibiotics need to be reclassified and access to them restricted, and systems are needed to audit the dispensing of antibiotics in the retail sector and to track the movements of active ingredients.

  20. The network of Shanghai Stroke Service System (4S): A public health-care web-based database using automatic extraction of electronic medical records.

    PubMed

    Dong, Yi; Fang, Kun; Wang, Xin; Chen, Shengdi; Liu, Xueyuan; Zhao, Yuwu; Guan, Yangtai; Cai, Dingfang; Li, Gang; Liu, Jianmin; Liu, Jianren; Zhuang, Jianhua; Wang, Panshi; Chen, Xin; Shen, Haipeng; Wang, David Z; Xian, Ying; Feng, Wuwei; Campbell, Bruce Cv; Parsons, Mark; Dong, Qiang

    2018-07-01

    Background Several stroke outcome and quality-control projects have demonstrated success in improving stroke care quality through structured processes. However, Chinese health-care systems are challenged by overwhelming numbers of patients, limited resources, and large regional disparities. Aim To improve the quality of stroke care and address regional disparities through process improvement. Method and design The Shanghai Stroke Service System (4S) is established as a regional network for stroke care quality improvement in the Shanghai metropolitan area. The 4S registry uses a web-based database that automatically extracts data from structured electronic medical records. Site-specific education and training programs will be designed and administered according to each site's baseline characteristics. Both acute reperfusion therapies, including thrombectomy and thrombolysis in the acute phase, and subsequent care are measured and monitored with feedback. The primary outcome is to evaluate the differences in quality metrics, including the rate of thrombolysis in acute stroke and key performance indicators in secondary prevention, between baseline and post-intervention. Conclusions The 4S system is a regional stroke network that monitors ongoing stroke care quality in Shanghai. This project will provide the opportunity to evaluate the spectrum of acute stroke care and design quality-improvement processes for better stroke care. A regional stroke network model for quality improvement will be explored and might be expanded to other large cities in China. Clinical Trial Registration-URL: http://www.clinicaltrials.gov . Unique identifier: NCT02735226.

  1. Ontology based heterogeneous materials database integration and semantic query

    NASA Astrophysics Data System (ADS)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become increasingly urgent, and have gradually become a hot topic of materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be issued using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and the Materials Project, are used as integration targets, demonstrating the feasibility and effectiveness of the method.
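
    The core idea of lifting relational rows into triples and querying them semantically can be sketched in miniature. This is a hand-rolled illustration with invented table and property names, not the paper's actual pipeline, which extracts schemas semi-automatically and uses a real SPARQL engine:

    ```python
    # Sketch: map relational rows to subject-predicate-object triples, then
    # answer a SPARQL-style triple-pattern query. A real system would use an
    # RDF store; only the core mapping idea is shown here.

    rows = [  # e.g. a "materials" table from one source database (invented)
        {"id": "mat1", "formula": "Fe2O3", "band_gap": 2.0},
        {"id": "mat2", "formula": "SiC", "band_gap": 3.3},
    ]

    triples = []
    for row in rows:
        for prop in ("formula", "band_gap"):
            triples.append((row["id"], prop, row[prop]))

    def query(pattern):
        """Match a (s, p, o) pattern; None acts like a SPARQL variable."""
        s, p, o = pattern
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # "SELECT ?s WHERE { ?s formula 'SiC' }" in miniature:
    print([s for s, _, _ in query((None, "formula", "SiC"))])
    ```

    Once both source databases are expressed as triples against a shared ontology, a single pattern query spans them, which is the deep, semantic-level integration that federated databases and data warehouses lack.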

  2. How we developed and piloted an electronic key features examination for the internal medicine clerkship based on a US national curriculum.

    PubMed

    Bronander, Kirk A; Lang, Valerie J; Nixon, L James; Harrell, Heather E; Kovach, Regina; Hingle, Susan; Berman, Norman

    2015-01-01

    Key features examinations (KFEs) have been used to assess clinical decision making in medical education, yet there are no reports of an online KFE based on a national curriculum for the internal medicine clerkship. What we did: The authors developed and pilot-tested an electronic KFE based on the US Clerkship Directors in Internal Medicine core curriculum. Teams, with expert oversight and peer review, developed key features (KFs) and cases. The exam was pilot-tested at eight medical schools with 162 third- and fourth-year medical students, of whom 96 (59.3%) responded to a survey. While most students reported that the exam was more difficult than a multiple-choice question exam, 61 (83.3%) agreed that it reflected problems seen in clinical practice and 51 (69.9%) reported that it more accurately assessed the ability to make clinical decisions. The development of an electronic KF exam is a time-intensive process, and a team approach offers built-in peer review and accountability. Students, although not familiar with this format in the US, recognized it as authentically assessing clinical decision making for problems commonly seen in the clerkship.

  3. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analysis and in developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for the various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on the steward software interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provides real-time access to the most current data. Use of these services alleviates the need to download the entire database, which can be out of date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view; upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community.
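
    As a rough illustration of the programmatic web-service access described above, the following Python sketch builds request URLs for a hypothetical datasets service. The host, route, and parameter names are placeholders, not Neotoma's actual API; consult the project site for the real routes:

    ```python
    # Sketch of web-service client access: build a request URL for a
    # hypothetical "datasets" service. Host and parameter names are invented.
    from urllib.parse import urlencode

    BASE = "https://example.org/paleo-api"  # placeholder host, not Neotoma's

    def dataset_url(dataset_id=None, **params):
        """Build a request URL for a datasets web service."""
        path = f"{BASE}/datasets"
        if dataset_id is not None:
            path += f"/{dataset_id}"
        return path + ("?" + urlencode(params) if params else "")

    # A client would fetch these URLs (e.g. with urllib.request) and parse the
    # response, getting real-time access to current data without downloading
    # the whole database.
    print(dataset_url(42))
    print(dataset_url(sitename="lake", limit=10))
    ```

    Because each request hits the live database, desktop and web applications alike see the most current data, which is the stated advantage over periodic bulk downloads.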

  4. The Danish Cardiac Rehabilitation Database.

    PubMed

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole

    2016-01-01

    The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). The study population comprises hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising patient-level data from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected relate to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk-factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. The DHRD is an online national quality-improvement database on CR aimed at patients with CHD, with mandatory registration of data at both the patient and program levels. The DHRD aims to systematically monitor the quality of CR over time, in order to improve CR throughout Denmark to the benefit of patients.

  5. The Radiation Belt Electron Scattering by Magnetosonic Wave: Dependence on Key Parameters

    NASA Astrophysics Data System (ADS)

    Lei, Mingda; Xie, Lun; Li, Jinxing; Pu, Zuyin; Fu, Suiyan; Ni, Binbin; Hua, Man; Chen, Lunjin; Li, Wen

    2017-12-01

    Magnetosonic (MS) waves have been found capable of creating radiation belt electron butterfly distributions in the inner magnetosphere. To investigate the physical nature of the interactions between radiation belt electrons and MS waves, and to explore the conditions under which MS waves scatter electrons most efficiently, we performed a comprehensive parametric study of MS wave-electron interactions using test particle simulations. The diffusion coefficients simulated by varying the MS wave frequency show that the scattering effect of MS waves is frequency-insensitive at low harmonics (f < 20 fcp), which has great implications for modeling the electron scattering caused by MS waves with harmonic structures. The electron scattering caused by MS waves is very sensitive to the wave normal angle: MS waves with wave normal angles away from 90° scatter electrons more efficiently. By simulating the diffusion coefficients and the evolution of electron phase space density at different L shells under different plasma environments, we find that MS waves can readily produce electron butterfly distributions in the inner part of the plasmasphere, where the ratio of electron plasma frequency to gyrofrequency (fpe/fce) is large, while they may instead form a two-peak distribution outside the plasmapause and in the inner radiation belt, where fpe/fce is small.

  6. Identification of the Key Fields and Their Key Technical Points of Oncology by Patent Analysis.

    PubMed

    Zhang, Ting; Chen, Juan; Jia, Xiaofeng

    2015-01-01

    This paper aims to identify the key fields of oncology and their key technical points by patent analysis. Patents in oncology applied for from 2006 to 2012 were searched in the Thomson Innovation database. The key fields and their key technical points were determined by analyzing the Derwent Classification (DC) and the International Patent Classification (IPC), respectively. Patent applications in the top ten DC classes accounted for 80% of all patent applications in oncology; these were the ten fields analyzed. The number of patent applications in these ten fields was standardized against all oncology patent applications from 2006 to 2012. For each field, standardization was conducted separately for each of the seven years (2006-2012), and the mean of the seven standardized values was calculated to reflect the relative volume of patent applications in that field; meanwhile, regression analysis of the standardized values against time (year) was conducted to evaluate the trend of patent applications in each field. Two-dimensional quadrant analysis, together with professional knowledge of oncology, was used to determine the key fields: fields located in the quadrants with a high relative volume or an increasing trend of patent applications were identified as key. The same method was used to identify the key technical points in each key field. Altogether, 116,820 oncology patents applied for from 2006 to 2012 were retrieved, and four key fields with twenty-nine key technical points were identified: "natural products and polymers" with nine key technical points, "fermentation industry" with twelve, "electrical medical equipment" with four, and "diagnosis, surgery" with four. The results of this study could provide guidance on the development direction of oncology, and also help researchers broaden innovative ideas and discover new
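
    The ranking logic described above, per-year standardized counts, a mean level, a regression trend, and a quadrant test, can be sketched as follows. The counts are invented for illustration, and for brevity the final selection flags only the trend dimension:

    ```python
    # Sketch of the field-ranking logic: standardize each field's yearly counts
    # against the yearly totals, take the mean level, fit a least-squares slope
    # as the trend, then flag fields. All numbers below are invented.

    years = list(range(2006, 2013))
    counts = {                       # patent applications per field per year
        "natural products": [120, 135, 150, 160, 180, 200, 220],
        "fermentation":     [ 90, 100, 130, 150, 170, 210, 260],
        "surgery":          [ 80,  78,  75,  74,  70,  68,  65],
    }

    def slope(xs, ys):
        """Ordinary least-squares slope of ys against xs."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    results = {}
    for field, ys in counts.items():
        # standardize within each year across fields (relative share that year)
        std = [ys[i] / sum(c[i] for c in counts.values())
               for i in range(len(years))]
        results[field] = {"level": sum(std) / len(std),
                          "trend": slope(years, std)}

    # here a field is flagged "key" when its standardized share is trending up
    key_fields = [f for f, r in results.items() if r["trend"] > 0]
    print(key_fields)
    ```

    In the paper's full two-dimensional version, a field in the high-level quadrant is also kept even when its trend is flat, and the same scan is then repeated over IPC codes within each key field.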

  7. Digitizing Olin Eggen's Card Database

    NASA Astrophysics Data System (ADS)

    Crast, J.; Silvis, G.

    2017-06-01

    The goal of the Eggen Card Database Project is to recover as many of the photometric observations from Olin Eggen's Card Database as possible and preserve these observations, in digital forms that are accessible by anyone. Any observations of interest to the AAVSO will be added to the AAVSO International Database (AID). Given to the AAVSO on long-term loan by the Cerro Tololo Inter-American Observatory, the database is a collection of over 78,000 index cards holding all Eggen's observations made between 1960 and 1990. The cards were electronically scanned and the resulting 108,000 card images have been published as a series of 2,216 PDF files, which are available from the AAVSO web site. The same images are also stored in an AAVSO online database where they are indexed by star name and card content. These images can be viewed using the eggen card portal online tool. Eggen made observations using filter bands from five different photometric systems. He documented these observations using 15 different data recording formats. Each format represents a combination of filter magnitudes and color indexes. These observations are being transcribed onto spreadsheets, from which observations of value to the AAVSO are added to the AID. A total of 506 U, B, V, R, and I observations were added to the AID for the variable stars S Car and l Car. We would like the reader to search through the card database using the eggen card portal for stars of particular interest. If such stars are found and retrieval of the observations is desired, e-mail the authors, and we will be happy to help retrieve those data for the reader.

  8. Performance assessment of EMR systems based on post-relational database.

    PubMed

    Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji

    2012-08-01

    Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical record (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all system users to access data, with a fast response time, anywhere and at any time. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the same database, Caché, and operates efficiently at the Miyazaki University Hospital in Japan. The results showed that the post-relational database Caché works faster than the relational database Oracle and performed well in the real-time EMR system.
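
    The database comparison described above can be sketched as a small timing harness. The two backends here are stand-in functions rather than real database connections (the actual tests ran identical EMR queries against Caché and Oracle):

    ```python
    # Sketch of a response-time comparison harness using the stdlib timeit
    # module. backend_a and backend_b are invented stand-ins for the two
    # database query paths being compared.
    import timeit

    def backend_a():          # stand-in for one database's query path
        return sum(i * i for i in range(1000))

    def backend_b():          # stand-in for the other
        return sum(i * i for i in range(2000))

    # Run each workload many times and compare total elapsed time.
    t_a = timeit.timeit(backend_a, number=200)
    t_b = timeit.timeit(backend_b, number=200)
    print(f"A: {t_a:.4f}s  B: {t_b:.4f}s  ratio: {t_b / t_a:.2f}")
    ```

    Repeating the workload many times, as `timeit` does, averages out scheduling noise, which matters when the per-query difference between two database engines is small.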

  9. NIST Databases on Atomic Spectra

    NASA Astrophysics Data System (ADS)

    Reader, J.; Wiese, W. L.; Martin, W. C.; Musgrove, A.; Fuhr, J. R.

    2002-11-01

    The NIST atomic and molecular spectroscopic databases now available on the World Wide Web through the NIST Physics Laboratory homepage include Atomic Spectra Database, Ground Levels and Ionization Energies for the Neutral Atoms, Spectrum of Platinum Lamp for Ultraviolet Spectrograph Calibration, Bibliographic Database on Atomic Transition Probabilities, Bibliographic Database on Atomic Spectral Line Broadening, and Electron-Impact Ionization Cross Section Database. The Atomic Spectra Database (ASD) [1] offers evaluated data on energy levels, wavelengths, and transition probabilities for atoms and atomic ions. Data are given for some 950 spectra and 70,000 energy levels. About 91,000 spectral lines are included, with transition probabilities for about half of these. Additional data resulting from our ongoing critical compilations will be included in successive new versions of ASD. We plan to include, for example, our recently published data for some 16,000 transitions covering most ions of the iron-group elements, as well as Cu, Kr, and Mo [2]. Our compilations benefit greatly from experimental and theoretical atomic-data research being carried out in the NIST Atomic Physics Division. A new compilation covering spectra of the rare gases in all stages of ionization, for example, revealed a need for improved data in the infrared. We have thus measured these needed data with our high-resolution Fourier transform spectrometer [3]. An upcoming new database will give wavelengths and intensities for the stronger lines of all neutral and singly-ionized atoms, along with energy levels and transition probabilities for the persistent lines [4]. A critical compilation of the transition probabilities of Ba I and Ba II [5] has been completed and several other compilations of atomic transition probabilities are nearing completion. These include data for all spectra of Na, Mg, Al, and Si [6]. 
Newly compiled data for selected ions of Ne, Mg, Si and S, will form the basis for a new

  10. Optical components damage parameters database system

    NASA Astrophysics Data System (ADS)

    Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong

    2012-10-01

    Optical components are key elements of large-scale laser devices; their damage resistance (load capacity) directly determines the device's output capability and depends on many factors. By digitizing the various factors that affect optical component damage parameters, the database provides scientific data support for assessing the load capacity of optical components. Using business-process analysis and a model-driven approach, an information model of component damage parameters and a corresponding database system were established. Application results show that the system meets the business-process and data-management requirements of optical component damage testing; its parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical component damage testing.

  11. DynAstVO : a Europlanet database of NEA orbits

    NASA Astrophysics Data System (ADS)

    Desmars, J.; Thuillot, W.; Hestroffer, D.; David, P.; Le Sidaner, P.

    2017-09-01

    DynAstVO is a new orbital database developed within the Europlanet 2020 RI and Virtual European Solar and Planetary Access (VESPA) frameworks. The database is dedicated to near-Earth asteroids and provides parameters related to orbits: osculating elements, observational information, ephemerides through SPICE kernels and, in particular, orbit uncertainty and the associated covariance matrix. DynAstVO is updated daily by an automatic orbit-determination process based on the Minor Planet Electronic Circulars that report new observations or the discovery of a new asteroid. The database conforms to the EPN-TAP environment and is accessible through VO protocols and the VESPA web portal (http://vespa.obspm.fr/). A comparison with other classical databases such as Astorb, MPCORB, NEODyS and JPL is also presented.

  12. Database and Related Activities in Japan

    NASA Astrophysics Data System (ADS)

    Murakami, Izumi; Kato, Daiji; Kato, Masatoshi; Sakaue, Hiroyuki A.; Kato, Takako; Ding, Xiaobin; Morita, Shigeru; Kitajima, Masashi; Koike, Fumihiro; Nakamura, Nobuyuki; Sakamoto, Naoki; Sasaki, Akira; Skobelev, Igor; Tsuchida, Hidetsugu; Ulantsev, Artemiy; Watanabe, Tetsuya; Yamamoto, Norimasa

    2011-05-01

    We have constructed and made available atomic and molecular (AM) numerical databases on collision processes, such as electron-impact excitation and ionization, recombination and charge transfer of atoms and molecules, relevant to plasma physics, fusion research, astrophysics, applied plasma science, and other related areas. The retrievable data are freely accessible via the internet. We also work on atomic data evaluation and on constructing collisional-radiative models for spectroscopic plasma diagnostics. Recently we have worked on Fe ions and W ions, theoretically and experimentally. The atomic data and collisional-radiative models for these ions are examined and applied to laboratory plasmas. A visible M1 transition of the W26+ ion has been identified at 389.41 nm by EBIT experiments and theoretical calculations. In addition to our main database, we maintain several small non-retrievable databases. Recently we evaluated photo-absorption cross sections for 9 atoms and 23 molecules, which we present as a new database. We have also established a new association, the "Forum of Atomic and Molecular Data and Their Applications", to exchange information among AM data producers, providers and users in Japan, and we hope this will help encourage AM data activities in Japan.

  13. Patterns of Undergraduates' Use of Scholarly Databases in a Large Research University

    ERIC Educational Resources Information Center

    Mbabu, Loyd Gitari; Bertram, Albert; Varnum, Ken

    2013-01-01

    Authentication data was utilized to explore undergraduate usage of subscription electronic databases. These usage patterns were linked to the information literacy curriculum of the library. The data showed that out of the 26,208 enrolled undergraduate students, 42% of them accessed a scholarly database at least once in the course of the entire…

  14. Mining a human transcriptome database for Nrf2 modulators

    EPA Science Inventory

    Nuclear factor erythroid-2 related factor 2 (Nrf2) is a key transcription factor important in the protection against oxidative stress. We developed computational procedures to enable the identification of chemical, genetic and environmental modulators of Nrf2 in a large database ...

  15. Keys and the crisis in taxonomy: extinction or reinvention?

    PubMed

    Walter, David Evans; Winterton, Shaun

    2007-01-01

    Dichotomous keys that follow a single pathway of character state choices to an end point have been the primary tools for the identification of unknown organisms for more than two centuries. However, a revolution in computer diagnostics is now under way that may result in the replacement of traditional keys by matrix-based computer interactive keys that have many paths to a correct identification and make extensive use of hypertext to link to images, glossaries, and other support material. Progress is also being made on replacing keys entirely by optical matching of specimens to digital databases and DNA sequences. These new tools may go some way toward alleviating the taxonomic impediment to biodiversity studies and other ecological and evolutionary research, especially with better coordination between those who produce keys and those who use them and by integrating interactive keys into larger biological Web sites.
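
    The matrix-based multi-access approach described above can be sketched as a simple consistency filter over a character-state matrix. The taxa and characters here are invented for illustration; real keys such as those built with Xper² also attach images and glossary links to each state:

    ```python
    # Minimal sketch of a multi-access (matrix-based) identification key.
    # Unlike a dichotomous key, characters may be entered in any order,
    # and partial observations simply narrow the candidate list.

    MATRIX = {
        # taxon ->   character: state  (all invented examples)
        "Genus A": {"leaves": "opposite",  "latex": True,  "fruit": "capsule"},
        "Genus B": {"leaves": "alternate", "latex": False, "fruit": "berry"},
        "Genus C": {"leaves": "opposite",  "latex": False, "fruit": "berry"},
    }

    def identify(observed):
        """Keep every taxon consistent with the (partial) observations."""
        return sorted(
            taxon for taxon, states in MATRIX.items()
            if all(states.get(ch) == val for ch, val in observed.items())
        )

    print(identify({"leaves": "opposite"}))                  # two candidates
    print(identify({"leaves": "opposite", "latex": False}))  # narrowed to one
    ```

    Because every character combination is a valid path, a user who cannot observe one character (say, fruits out of season) can still reach an identification through the others, which is precisely the advantage over a single-pathway dichotomous key.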

  16. Antibiotic distribution channels in Thailand: results of key-informant interviews, reviews of drug regulations and database searches

    PubMed Central

    Chanvatik, Sunicha; Sermsinsiri, Varavoot; Sivilaikul, Somsajee; Patcharanarumol, Walaiporn; Yeung, Shunmay; Tangcharoensathien, Viroj

    2018-01-01

    Abstract Objective To analyse how antibiotics are imported, manufactured, distributed and regulated in Thailand. Methods We gathered information, on antibiotic distribution in Thailand, in in-depth interviews – with 43 key informants from farms, health facilities, pharmaceutical and animal feed industries, private pharmacies and regulators– and in database and literature searches. Findings In 2016–2017, licensed antibiotic distribution in Thailand involves over 700 importers and about 24 000 distributors – e.g. retail pharmacies and wholesalers. Thailand imports antibiotics and active pharmaceutical ingredients. There is no system for monitoring the distribution of active ingredients, some of which are used directly on farms, without being processed. Most antibiotics can be bought from pharmacies, for home or farm use, without a prescription. Although the 1987 Drug Act classified most antibiotics as “dangerous drugs”, it only classified a few of them as prescription-only medicines and placed no restrictions on the quantities of antibiotics that could be sold to any individual. Pharmacists working in pharmacies are covered by some of the Act’s regulations, but the quality of their dispensing and prescribing appears to be largely reliant on their competences. Conclusion In Thailand, most antibiotics are easily and widely available from retail pharmacies, without a prescription. If the inappropriate use of active pharmaceutical ingredients and antibiotics is to be reduced, we need to reclassify and restrict access to certain antibiotics and to develop systems to audit the dispensing of antibiotics in the retail sector and track the movements of active ingredients. PMID:29403113

  17. PAMDB: a comprehensive Pseudomonas aeruginosa metabolome database.

    PubMed

    Huang, Weiliang; Brewer, Luke K; Jones, Jace W; Nguyen, Angela T; Marcu, Ana; Wishart, David S; Oglesby-Sherrouse, Amanda G; Kane, Maureen A; Wilks, Angela

    2018-01-04

    The Pseudomonas aeruginosa Metabolome Database (PAMDB, http://pseudomonas.umaryland.edu) is a searchable, richly annotated metabolite database specific to P. aeruginosa. P. aeruginosa is a soil organism and significant opportunistic pathogen that adapts to its environment through a versatile energy metabolism network. Furthermore, P. aeruginosa is a model organism for the study of biofilm formation, quorum sensing, and bioremediation processes, each of which depends on unique pathways and metabolites. The PAMDB is modelled on the Escherichia coli (ECMDB), yeast (YMDB) and human (HMDB) metabolome databases and contains >4370 metabolites and 938 pathways with links to over 1260 genes and proteins. The database information was compiled from electronic databases, journal articles and mass spectrometry (MS) metabolomic data obtained in our laboratories. For each metabolite entered, we provide detailed compound descriptions, names and synonyms, structural and physicochemical information, nuclear magnetic resonance (NMR) and MS spectra, enzyme and pathway information, as well as gene and protein sequences. The database allows extensive searching via chemical names, structure and molecular weight, together with gene, protein and pathway relationships. The PAMDB and its future iterations will provide a valuable resource to biologists, natural product chemists and clinicians in identifying active compounds, potential biomarkers and clinical diagnostics. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Analyzing key ecological functions for transboundary subbasin assessments.

    Treesearch

    B.G. Marcot; T.A. O'Neil; J.B. Nyberg; A. MacKinnon; P.J. Paquet; D.H. Johnson

    2007-01-01

    We present an evaluation of the ecological roles ("key ecological functions" or KEFs) of 618 wildlife species as one facet of subbasin assessment in the Columbia River basin (CRB) of the United States and Canada. Using a wildlife-habitat relationships database (IBIS) and geographic information system, we have mapped KEFs as levels of functional redundancy (...

  19. Library Micro-Computing, Vol. 2. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 19 articles pertaining to library microcomputing appear in this collection, the second of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  20. Radiation damage of biomolecules (RADAM) database development: current status

    NASA Astrophysics Data System (ADS)

    Denifl, S.; Garcia, G.; Huber, B. A.; Marinković, B. P.; Mason, N.; Postler, J.; Rabus, H.; Rixon, G.; Solov'yov, A. V.; Suraud, E.; Yakubovich, A. V.

    2013-06-01

    Ion beam therapy offers the possibility of excellent dose localization for treatment of malignant tumours, minimizing radiation damage in normal tissue while maximizing cell killing within the tumour. However, as the underlying physical, chemical and biological processes are too complex to treat on a purely analytical level, most of our current and future understanding will rely on computer simulations, based on mathematical equations, algorithms and, last but not least, the available atomic and molecular data. The viability of the simulated output and the success of any computer simulation will be determined by these data, which are treated as the input variables in each computer simulation performed. The radiation research community lacks a complete database for the cross sections of all the different processes involved in ion-beam-induced damage: ionization and excitation cross sections for ions with liquid water and biological molecules, all the possible electron–medium interactions, dielectric response data, electron attachment to biomolecules, etc. In this paper we discuss current progress in the creation of such a database, outline the roadmap of the project and review plans for the exploitation of such a database in future simulations.

  1. Mining databases for protein aggregation: a review.

    PubMed

    Tsiolaki, Paraskevi L; Nastou, Katerina C; Hamodrakas, Stavros J; Iconomidou, Vassiliki A

    2017-09-01

    Protein aggregation has been an active area of research in recent decades, since it is the most common and troubling indication of protein instability. Understanding the mechanisms governing protein aggregation and amyloidogenesis is a key component of the aetiology and pathogenesis of many devastating disorders, including Alzheimer's disease and type 2 diabetes. Protein aggregation data are currently found "scattered" across an increasing number of repositories, since advances in computational biology greatly influence this field of research. This review surveys the various resources of aggregation data and attempts to distinguish and analyze the biological knowledge they contain, introducing protein-based, fragment-based and disease-based repositories related to aggregation. To give a broad overview of the available repositories, a novel comprehensive network maps and visualizes the current associations between aggregation databases and other important databases and/or tools, and the beneficial role of community annotation is discussed. The need to unify aggregation databases in a common platform is also addressed.

  2. [A web-based integrated clinical database for laryngeal cancer].

    PubMed

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

    To establish an integrated database for laryngeal cancer that provides an information platform for clinical and fundamental research on laryngeal cancer and meets the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards and Apache+PHP+MySQL technology, incorporating laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. The database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with the existing electronic medical records system to avoid information silos. Furthermore, the database forms integrate laryngeal cancer specialist characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma offers comprehensive specialist information, strong expandability and high technical feasibility, and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and rapidly over the Internet.

  3. Open access intrapartum CTG database.

    PubMed

    Chudáček, Václav; Spilka, Jiří; Burša, Miroslav; Janků, Petr; Hruban, Lukáš; Huptych, Michal; Lhotská, Lenka

    2014-01-13

    Cardiotocography (CTG) is the monitoring of fetal heart rate and uterine contractions; since the 1960s it has been routinely used by obstetricians to assess fetal well-being. Many attempts to introduce methods of automatic signal processing and evaluation have appeared during the last 20 years, yet no progress comparable to that in the domain of adult heart rate variability, where open-access databases such as MIT-BIH are available, is visible. Based on a thorough review of the relevant publications, presented in this paper, the shortcomings of the current state are obvious: a lack of common ground for clinicians and technicians in the field hinders clinically usable progress. Our open-access database of digital intrapartum cardiotocographic recordings aims to change that. The intrapartum CTG database consists of 552 intrapartum recordings, acquired between April 2010 and August 2012 at the obstetrics ward of the University Hospital in Brno, Czech Republic. All recordings were stored in electronic form in the OB TraceVue® system. The recordings were selected from 9164 intrapartum recordings with both clinical and technical considerations in mind. All recordings are at most 90 minutes long and start a maximum of 90 minutes before delivery. The time relation of CTG to delivery is known, as is the length of the second stage of labor, which does not exceed 30 minutes. The majority of recordings (all but 46 cesarean sections) are, on purpose, from vaginal deliveries. All recordings have biochemical markers available as well as some more general clinical features. A full description of the database and the reasoning behind the selection of the parameters is presented in the paper. A new open-access CTG database is introduced which should give the research community common ground for comparison of results on a reasonably large database. We anticipate that after reading the paper, the reader will understand the context of the field from clinical and
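    The selection criteria quoted in the abstract (trace at most 90 minutes long, starting no more than 90 minutes before delivery, second stage of labor not exceeding 30 minutes) reduce to a simple filter. The sketch below is illustrative only; the `Recording` field names are hypothetical and do not reflect the database's actual metadata schema:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Recording:
        """Hypothetical per-recording summary; all durations in minutes."""
        length_min: float               # total length of the CTG trace
        end_before_delivery_min: float  # gap between end of trace and delivery
        second_stage_min: float         # length of the second stage of labor
        delivery_mode: str              # "vaginal" or "cesarean"

    def meets_criteria(r: Recording) -> bool:
        # The trace must be at most 90 min long, must start no more than
        # 90 min before delivery, and the second stage of labor must not
        # exceed 30 min.
        start_before_delivery = r.length_min + r.end_before_delivery_min
        return (r.length_min <= 90
                and start_before_delivery <= 90
                and r.second_stage_min <= 30)
    ```

    A selection pipeline would apply such a predicate first and then weigh the remaining clinical and technical considerations the paper describes.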

  4. PMAG: Relational Database Definition

    NASA Astrophysics Data System (ADS)

    Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.

    2002-12-01

    The Scripps Center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts, PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving of and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses on the general workflow that results in the determination of typical paleomagnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition they belong to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data become available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the
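    The specimen–sample–site–locality–expedition chain described above can be sketched as a relational schema in which each level references its parent, so that a raw measurement can always be traced back to its expedition. This is an illustrative reconstruction in SQLite, not the actual Oracle 8.1.5 design; all table and column names are assumptions:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite has FK enforcement off by default
    conn.executescript("""
    CREATE TABLE expedition(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE locality  (id INTEGER PRIMARY KEY, name TEXT,
                            expedition_id INTEGER REFERENCES expedition(id));
    CREATE TABLE site      (id INTEGER PRIMARY KEY, name TEXT,
                            locality_id INTEGER REFERENCES locality(id));
    CREATE TABLE sample    (id INTEGER PRIMARY KEY, name TEXT,
                            site_id INTEGER REFERENCES site(id));
    CREATE TABLE specimen  (id INTEGER PRIMARY KEY, name TEXT,
                            sample_id INTEGER REFERENCES sample(id));
    -- Raw measurements live at the specimen level; sample-level data and
    -- higher (e.g. site means) are derived products.
    CREATE TABLE measurement(id INTEGER PRIMARY KEY,
                             specimen_id INTEGER REFERENCES specimen(id),
                             declination REAL, inclination REAL);
    """)

    def expedition_of(measurement_id):
        """Trace one raw measurement back up the full hierarchy."""
        row = conn.execute("""
            SELECT e.name
            FROM measurement m
            JOIN specimen  sp ON m.specimen_id   = sp.id
            JOIN sample    sa ON sp.sample_id    = sa.id
            JOIN site      si ON sa.site_id      = si.id
            JOIN locality  lo ON si.locality_id  = lo.id
            JOIN expedition e ON lo.expedition_id = e.id
            WHERE m.id = ?""", (measurement_id,)).fetchone()
        return row[0] if row else None
    ```

    The same parent links that make this traceability query possible also let site means be recomputed whenever new specimen-level data arrive for a locality.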

  5. Applying an Object-Oriented Database Model to a Scientific Database Problem: Managing Experimental Data at CEBAF.

    NASA Astrophysics Data System (ADS)

    Ehlmann, Bryon K.

    Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.

  6. Toward unification of taxonomy databases in a distributed computer environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi

    1994-12-31

    All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful for comparing many research results and for identifying future research directions from existing results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existent taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existent taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.

  7. BOK-Printed Electronics

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2013-01-01

    The use of printed electronics technologies (PETs), 2D or 3D printing approaches either by conventional electronic fabrication or by rapid graphic printing of organic or nonorganic electronic devices on various small or large rigid or flexible substrates, is projected to grow exponentially in commercial industry. This has provided an opportunity to determine whether or not PETs could be applicable for low-volume and high-reliability applications. This report presents a summary of the literature surveyed and provides a body of knowledge (BOK) gathered on the current status of organic and printed electronics technologies. It reviews three key industry roadmaps on this subject (OE-A, ITRS, and iNEMI), each with a different name for this emerging technology. This is followed by a brief review of the status of industry standard development for this technology, including IEEE and IPC specifications. The report concludes with key technologies and applications and provides a technology hierarchy similar to those of conventional microelectronics for electronics packaging. Understanding key technology roadmaps, parameters, and applications is important when judiciously selecting and narrowing the follow-up of new and emerging applicable technologies for evaluation, as well as for the low-risk insertion of organic, large-area, and printed electronics.

  8. Library Micro-Computing, Vol. 1. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 18 articles pertaining to library microcomputing appear in this collection, the first of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1) an integrated library…

  9. Database and Related Activities in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murakami, Izumi; Kato, Daiji; Kato, Masatoshi

    2011-05-11

    We have constructed and made available atomic and molecular (AM) numerical databases on collision processes such as electron-impact excitation and ionization, recombination and charge transfer of atoms and molecules relevant for plasma physics, fusion research, astrophysics, applied-science plasma, and other related areas. The retrievable data are freely accessible via the internet. We also work on atomic data evaluation and on constructing collisional-radiative models for spectroscopic plasma diagnostics. Recently we have worked on Fe ions and W ions theoretically and experimentally. The atomic data and collisional-radiative models for these ions are examined and applied to laboratory plasmas. A visible M1 transition of the W26+ ion is identified at 389.41 nm by EBIT experiments and theoretical calculations. We have small non-retrievable databases in addition to our main database. Recently we evaluated photo-absorption cross sections for 9 atoms and 23 molecules, and we present them as a new database. We established a new association, "Forum of Atomic and Molecular Data and Their Applications", to exchange information among AM data producers, data providers and data users in Japan, and we hope this will help to encourage AM data activities in Japan.

  10. KEY COMPARISON: Final report, on-going key comparison BIPM.QM-K1: Ozone at ambient level, comparison with ISCIII, 2007

    NASA Astrophysics Data System (ADS)

    Viallon, Joële; Moussay, Philippe; Wielgosz, Robert; Morillo Gomez, Pilar; Sánchez Blaya, Carmen

    2009-01-01

    As part of the on-going key comparison BIPM.QM-K1, a comparison has been performed between the ozone national standard of the Instituto de Salud Carlos III (ISCIII) and the common reference standard of the key comparison, maintained by the Bureau International des Poids et Mesures (BIPM). The instruments have been compared over a nominal ozone mole fraction range of 0 nmol/mol to 500 nmol/mol. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).

  11. One approach to design of speech emotion database

    NASA Astrophysics Data System (ADS)

    Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav

    2016-05-01

    This article describes a system for evaluating the credibility of recordings with emotional character. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems, which are designed to detect human emotions in the voice. Information about a person's emotional state is useful to the security forces and the emergency call service. People in action (soldiers, police officers and firefighters) are often exposed to stress, and information about their emotional state, carried in the voice, helps the dispatcher adapt control commands for the intervention procedure. Call agents of the emergency call service must recognize the mental state of the caller to adjust the mood of the conversation; in this case, evaluation of the psychological state is the key factor for a successful intervention. A quality database of sound recordings is essential for creating such systems. Quality databases exist, such as the Berlin Database of Emotional Speech or Humaine, but they were created by actors in an audio studio, which means the recordings contain simulated emotions, not real ones. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks; another, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article, and the results describe the advantages and applicability of the developed method.

  12. The Missing Link: Context Loss in Online Databases

    ERIC Educational Resources Information Center

    Mi, Jia; Nesta, Frederick

    2005-01-01

    Full-text databases do not allow for the complexity of the interaction of the human eye and brain with printed matter. As a result, both content and context may be lost. The authors propose additional indexing fields that would maintain the content and context of print in electronic formats.

  13. Trials by Juries: Suggested Practices for Database Trials

    ERIC Educational Resources Information Center

    Ritterbush, Jon

    2012-01-01

    Librarians frequently utilize product trials to assess the content and usability of a database prior to committing funds to a new subscription or purchase. At the 2012 Electronic Resources and Libraries Conference in Austin, Texas, three librarians presented a panel discussion on their institutions' policies and practices regarding database…

  14. Fine-grained Database Field Search Using Attribute-Based Encryption for E-Healthcare Clouds.

    PubMed

    Guo, Cheng; Zhuang, Ruhan; Jie, Yingmo; Ren, Yizhi; Wu, Ting; Choo, Kim-Kwang Raymond

    2016-11-01

    An effectively designed e-healthcare system can significantly enhance the quality of access and experience of healthcare users, including facilitating medical and healthcare providers in ensuring a smooth delivery of services. Ensuring the security of patients' electronic health records (EHRs) in the e-healthcare system is an active research area. EHRs may be outsourced to a third party, such as a community healthcare cloud service provider, for storage due to cost-saving measures. Generally, encrypting the EHRs when they are stored in the system (i.e. data-at-rest) or prior to outsourcing the data is used to ensure data confidentiality. A searchable encryption (SE) scheme is a promising technique that can ensure the protection of private information without compromising on performance. In this paper, we propose a novel framework for controlling access to EHRs stored in semi-trusted cloud servers (e.g. a private cloud or a community cloud). To achieve fine-grained access control for EHRs, we leverage the ciphertext-policy attribute-based encryption (CP-ABE) technique to encrypt tables published by hospitals, including patients' EHRs, and the table is stored in the database with the primary key being the patient's unique identity. Our framework enables different users with different privileges to search on different database fields. Differing from previous attempts to secure the outsourcing of data, we emphasize control of the searches of the fields within the database. We demonstrate the utility of the scheme by evaluating it using datasets from the University of California, Irvine.

  15. An Interactive Iterative Method for Electronic Searching of Large Literature Databases

    ERIC Educational Resources Information Center

    Hernandez, Marco A.

    2013-01-01

    PubMed® is an on-line literature database hosted by the U.S. National Library of Medicine. Containing over 21 million citations for biomedical literature--both abstracts and full text--in the areas of the life sciences, behavioral studies, chemistry, and bioengineering, PubMed® represents an important tool for researchers. PubMed® searches return…

  16. Identification of the Key Fields and Their Key Technical Points of Oncology by Patent Analysis

    PubMed Central

    Zhang, Ting; Chen, Juan; Jia, Xiaofeng

    2015-01-01

    Background This paper aims to identify the key fields and their key technical points of oncology by patent analysis. Methodology/Principal Findings Patents of oncology applied for from 2006 to 2012 were searched in the Thomson Innovation database. The key fields and their key technical points were determined by analyzing the Derwent Classification (DC) and the International Patent Classification (IPC), respectively. Patent applications in the top ten DCs accounted for 80% of all patent applications of oncology; these were the ten fields of oncology to be analyzed. The number of patent applications in these ten fields was standardized based on patent applications of oncology from 2006 to 2012. For each field, standardization was conducted separately for each of the seven years (2006–2012), and the mean of the seven standardized values was calculated to reflect the relative amount of patent applications in that field; meanwhile, regression analysis using time (year) and the standardized values of patent applications over the seven years (2006–2012) was conducted to evaluate the trend of patent applications in each field. Two-dimensional quadrant analysis, together with professional knowledge of oncology, was used to determine the key fields of oncology: fields located in the quadrants with a high relative amount or an increasing trend of patent applications were identified as key ones. By the same method, the key technical points in each key field were identified. Altogether 116,820 patents of oncology applied for from 2006 to 2012 were retrieved, and four key fields with twenty-nine key technical points were identified: “natural products and polymers” with nine key technical points, “fermentation industry” with twelve, “electrical medical equipment” with four, and “diagnosis, surgery” with four. Conclusions/Significance The results of this study could provide guidance on the development
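    The two per-field indicators described above (mean standardized value for relative amount, regression slope for trend) can be sketched as follows. The abstract does not give the exact standardization formula, so this illustration uses the field's share of each year's total oncology applications as a stand-in; `field_indicators` is a hypothetical name:

    ```python
    from statistics import mean, linear_regression  # linear_regression needs Python 3.10+

    def field_indicators(counts_by_year, totals_by_year):
        """For one field, return (relative amount, trend): the mean of the
        yearly standardized values and the slope of a linear regression of
        those values on time. Standardization here is the field's share of
        that year's total applications (an assumption, not the paper's
        stated formula)."""
        years = sorted(counts_by_year)
        z = [counts_by_year[y] / totals_by_year[y] for y in years]
        slope, _intercept = linear_regression(years, z)
        return mean(z), slope
    ```

    In the paper's two-dimensional quadrant analysis, a field with a high first value or a positive second value would land in a "key field" quadrant.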

  17. Database resources of the National Center for Biotechnology

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Federhen, Scott; Lash, Alex E.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Tatusova, Tatiana A.; Wagner, Lukas

    2003-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, PubMed, PubMed Central (PMC), LocusLink, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR (e-PCR), Open Reading Frame (ORF) Finder, Reference Sequence (RefSeq), UniGene, HomoloGene, ProtEST, Database of Single Nucleotide Polymorphisms (dbSNP), Human/Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes and related tools, the Map Viewer, Model Maker (MM), Evidence Viewer (EV), Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), and the Conserved Domain Architecture Retrieval Tool (CDART). Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:12519941

  18. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes

    PubMed Central

    Corcoran, Callan C.; Grady, Cameron R.; Pisitkun, Trairak; Parulekar, Jaya

    2017-01-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database (https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database (Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. PMID:27974320

  19. What is lost when searching only one literature database for articles relevant to injury prevention and safety promotion?

    PubMed

    Lawrence, D W

    2008-12-01

    To assess what is lost if only one literature database is searched for articles relevant to injury prevention and safety promotion (IPSP) topics. Serial textword (keyword, free-text) searches using multiple synonym terms for five key IPSP topics (bicycle-related brain injuries, ethanol-impaired driving, house fires, road rage, and suicidal behaviors among adolescents) were conducted in four of the bibliographic databases that are most used by IPSP professionals: EMBASE, MEDLINE, PsycINFO, and Web of Science. Through a systematic procedure, an inventory of articles on each topic in each database was conducted to identify the total unduplicated count of all articles on each topic, the number of articles unique to each database, and the articles available if only one database is searched. No single database included all of the relevant articles on any topic, and the database with the broadest coverage differed by topic. A search of only one literature database will return 16.7-81.5% (median 43.4%) of the available articles on any of five key IPSP topics. Each database contributed unique articles to the total bibliography for each topic. A literature search performed in only one database will, on average, lead to a loss of more than half of the available literature on a topic.
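    The accounting described above (total unduplicated count per topic, each database's coverage of that total, and the articles unique to each database) reduces to set operations over per-database result sets. A minimal sketch with hypothetical article identifiers:

    ```python
    def coverage_report(hits_by_db):
        """hits_by_db maps database name -> set of article IDs retrieved there
        for one topic. Returns the total unduplicated article count and, per
        database, its coverage of that total and its unique articles."""
        union = set().union(*hits_by_db.values())
        report = {}
        for name, hits in hits_by_db.items():
            others = set().union(*(h for n, h in hits_by_db.items() if n != name))
            report[name] = {
                "coverage_pct": 100 * len(hits) / len(union),
                "unique": hits - others,  # lost if this database is skipped
            }
        return len(union), report
    ```

    A database's `coverage_pct` is what a single-database search would return; its `unique` set is what every search that omits it would lose.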

  20. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Barrett, Tanya; Benson, Dennis A.; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Feolo, Michael; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Khovayko, Oleg; Landsman, David; Lipman, David J.; Madden, Thomas L.; Maglott, Donna R.; Miller, Vadim; Ostell, James; Pruitt, Kim D.; Schuler, Gregory D.; Shumway, Martin; Sequeira, Edwin; Sherry, Steven T.; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusov, Roman L.; Tatusova, Tatiana A.; Wagner, Lukas; Yaschenko, Eugene

    2008-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data available through NCBI's web site. NCBI resources include Entrez, the Entrez Programming Utilities, My NCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link, Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genome, Genome Project and related tools, the Trace, Assembly, and Short Read Archives, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups, Influenza Viral Resources, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Entrez Probe, GENSAT, Database of Genotype and Phenotype, Online Mendelian Inheritance in Man, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool and the PubChem suite of small molecule databases. Augmenting the web applications are custom implementations of the BLAST program optimized to search specialized data sets. These resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:18045790

  1. Image use in field guides and identification keys: review and recommendations.

    PubMed

    Leggett, Roxanne; Kirchoff, Bruce K

    2011-01-01

    Although illustrations have played an important role in identification keys and guides since the 18th century, their use has varied widely. Some keys lack all illustrations, while others are heavily illustrated. Even within illustrated guides, the way in which images are used varies considerably. Here, we review image use in paper and electronic guides, and establish a set of best practices for image use in illustrated keys and guides. Our review covers image use in both paper and electronic guides, though we only briefly cover apps for mobile devices. With this one exception, we cover the full range of guides, from those that consist only of species descriptions with no keys, to lavishly illustrated technical keys. Emphasis is placed on how images are used, not on the operation of the guides and keys, which has been reviewed by others. We only deal with operation when it impacts image use. Few illustrated keys or guides use images in optimal ways. Most include too few images to show taxonomic variation or variation in characters and character states. The use of multiple images allows easier taxon identification and facilitates the understanding of characters. Most images are not standardized, making comparison between images difficult. Although some electronic guides allow images to be enlarged, many do not. The best keys and guides use standardized images, displayed at sizes that are easy to see and arranged in a standardized manner so that similar images can be compared across species. Illustrated keys and glossaries should contain multiple images for each character state so that the user can judge variation in the state. Photographic backgrounds should not distract from the subject and, where possible, should be of a standard colour. When used, drawings should be prepared by professional botanical illustrators, and clearly labelled. Electronic keys and guides should allow images to be enlarged so that their details can be seen.

  2. Image use in field guides and identification keys: review and recommendations

    PubMed Central

    Leggett, Roxanne; Kirchoff, Bruce K.

    2011-01-01

    Background and aims: Although illustrations have played an important role in identification keys and guides since the 18th century, their use has varied widely. Some keys lack all illustrations, while others are heavily illustrated. Even within illustrated guides, the way in which images are used varies considerably. Here, we review image use in paper and electronic guides, and establish a set of best practices for image use in illustrated keys and guides. Scope: Our review covers image use in both paper and electronic guides, though we only briefly cover apps for mobile devices. With this one exception, we cover the full range of guides, from those that consist only of species descriptions with no keys, to lavishly illustrated technical keys. Emphasis is placed on how images are used, not on the operation of the guides and keys, which has been reviewed by others. We only deal with operation when it impacts image use. Main points: Few illustrated keys or guides use images in optimal ways. Most include too few images to show taxonomic variation or variation in characters and character states. The use of multiple images allows easier taxon identification and facilitates the understanding of characters. Most images are not standardized, making comparison between images difficult. Although some electronic guides allow images to be enlarged, many do not. Conclusions: The best keys and guides use standardized images, displayed at sizes that are easy to see and arranged in a standardized manner so that similar images can be compared across species. Illustrated keys and glossaries should contain multiple images for each character state so that the user can judge variation in the state. Photographic backgrounds should not distract from the subject and, where possible, should be of a standard colour. When used, drawings should be prepared by professional botanical illustrators, and clearly labelled. Electronic keys and guides should allow images to be enlarged so that their details can be seen.

  3. KEY COMPARISON: Final report on CCQM-K69 key comparison: Testosterone glucuronide in human urine

    NASA Astrophysics Data System (ADS)

    Liu, Fong-Ha; Mackay, Lindsey; Murby, John

    2010-01-01

    The CCQM-K69 key comparison of testosterone glucuronide in human urine was organized under the auspices of the CCQM Organic Analysis Working Group (OAWG). The National Measurement Institute Australia (NMIA) acted as the coordinating laboratory for the comparison. The samples distributed for the key comparison were prepared at NMIA with funding from the World Anti-Doping Agency (WADA). WADA granted the approval for this material to be used for the intercomparison provided the distribution and handling of the material were strictly controlled. Three national metrology institutes (NMIs)/designated institutes (DIs) developed reference methods and submitted data for the key comparison along with two other laboratories who participated in the parallel pilot study. A good selection of analytical methods and sample workup procedures was displayed in the results submitted considering the complexities of the matrix involved. The comparability of measurement results was successfully demonstrated by the participating NMIs. Only the key comparison data were used to estimate the key comparison reference value (KCRV), using the arithmetic mean approach. The reported expanded uncertainties for results ranged from 3.7% to 6.7% at the 95% level of confidence and all results agreed within the expanded uncertainty of the KCRV. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).
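
    The KCRV estimation described above is straightforward to reproduce in outline. The sketch below uses invented numbers, not the actual CCQM-K69 results, to show the arithmetic-mean KCRV and the check that each result agrees with it within its reported expanded uncertainty:

```python
from statistics import mean

def kcrv_arithmetic_mean(values):
    """KCRV as the plain arithmetic mean of the reported values."""
    return mean(values)

def agrees_with_kcrv(value, expanded_uncertainty, kcrv):
    """A result agrees if the KCRV lies within value +/- U (~95 % level)."""
    return abs(value - kcrv) <= expanded_uncertainty

# Hypothetical results: (reported value, expanded uncertainty U)
reported = [(100.2, 4.0), (98.9, 5.1), (101.3, 6.0)]
kcrv = kcrv_arithmetic_mean([v for v, _ in reported])
all_agree = all(agrees_with_kcrv(v, u, kcrv) for v, u in reported)
```

    In the real comparison the expanded uncertainties ranged from 3.7% to 6.7% relative, and all submitted results passed this agreement check.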

  4. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Barrett, Tanya; Benson, Dennis A.; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Kenton, David L.; Khovayko, Oleg; Lipman, David J.; Madden, Thomas L.; Maglott, Donna R.; Ostell, James; Pruitt, Kim D.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Sherry, Stephen T.; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Suzek, Tugba O.; Tatusov, Roman; Tatusova, Tatiana A.; Wagner, Lukas; Yaschenko, Eugene

    2006-01-01

    In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, SAGEmap, Gene Expression Omnibus, Entrez Probe, GENSAT, Online Mendelian Inheritance in Man, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of the resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:16381840

  5. Database resources of the National Center for Biotechnology Information

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.

    2001-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s Web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, GeneMap’99, Human–Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, Cancer Genome Anatomy Project (CGAP), SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheri­tance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:11125038

  6. A Computational Chemistry Database for Semiconductor Processing

    NASA Technical Reports Server (NTRS)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort will go nowhere, however, if the codes do not come with a reliable database of the chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database, and instead leave to the user the task of finding whatever is needed. While individual investigations of interesting chemical systems continue at universities, there has been no large-scale effort to create a database. In this presentation, we outline our efforts in this area, which focus on five topics: (1) thermal CVD reaction mechanisms and rate constants; (2) thermochemical properties; (3) transport properties; (4) electron-molecule collision cross sections; and (5) gas-surface interactions.

  7. JICST Factual Database JICST DNA Database

    NASA Astrophysics Data System (ADS)

    Shirokizawa, Yoshiko; Abe, Atsushi

    Japan Information Center of Science and Technology (JICST) started the on-line service of its DNA database in October 1988. This database is composed of the EMBL Nucleotide Sequence Library and the Genetic Sequence Data Bank. The authors outline the database system, data items and search commands. Examples of retrieval sessions are presented.

  8. Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0.

    PubMed

    Wang, Shirley V; Schneeweiss, Sebastian; Berger, Marc L; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J; Gini, Rosa; Klungel, Olaf; Mullins, C Daniel; Nguyen, Michael D; Rassen, Jeremy A; Smeeth, Liam; Sturkenboom, Miriam

    2017-09-01

    Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
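
    One of the minimum reporting items named above is an attrition table. As a hedged illustration (the step names and counts below are invented, not from the paper), such a table can be recorded as an ordered list of cohort-definition steps with the patient count remaining after each:

```python
# Hypothetical attrition table: each row is (cohort-definition step,
# patients remaining). Counts are invented for illustration.
attrition = [
    ("source population", 1_000_000),
    ("with >= 1 year continuous enrollment", 640_000),
    ("new users of the study drug", 12_500),
    ("no prior record of the outcome", 11_900),
]

# Sanity check: counts can only shrink as successive criteria are applied.
assert all(a[1] >= b[1] for a, b in zip(attrition, attrition[1:]))
```

    Reporting each step explicitly, alongside a design diagram and operational definitions of the temporal anchors, is what the authors argue makes a database study replicable.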

  9. 49 CFR 234.315 - Electronic recordkeeping.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Electronic recordkeeping. 234.315 Section 234.315 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad adequately limits and controls accessibility to the records retained in its electronic database...

  10. 49 CFR 234.315 - Electronic recordkeeping.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Electronic recordkeeping. 234.315 Section 234.315 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad adequately limits and controls accessibility to the records retained in its electronic database...

  11. 49 CFR 234.315 - Electronic recordkeeping.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Electronic recordkeeping. 234.315 Section 234.315 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad adequately limits and controls accessibility to the records retained in its electronic database...

  12. Introduction of the American Academy of Facial Plastic and Reconstructive Surgery FACE TO FACE Database.

    PubMed

    Abraham, Manoj T; Rousso, Joseph J; Hu, Shirley; Brown, Ryan F; Moscatello, Augustine L; Finn, J Charles; Patel, Neha A; Kadakia, Sameep P; Wood-Smith, Donald

    2017-07-01

    The American Academy of Facial Plastic and Reconstructive Surgery FACE TO FACE database was created to gather and organize patient data primarily from international humanitarian surgical mission trips, as well as local humanitarian initiatives. Similar to cloud-based Electronic Medical Records, this web-based, user-generated database allows for more accurate tracking of provider and patient information and outcomes, regardless of site, and is useful when coordinating follow-up care for patients. The database is particularly useful on international mission trips, as there are often different surgeons who may provide care to patients on subsequent missions, and patients who may visit more than 1 mission site. Ultimately, by pooling data across multiple sites and over time, the database has the potential to be a useful resource for population-based studies and outcome data analysis. The objective of this paper is to delineate the process involved in creating the AAFPRS FACE TO FACE database, to assess its functional utility, to draw comparisons to electronic medical records systems that are now widely implemented, and to explain the specific benefits and disadvantages of the use of the database as it was implemented on recent international surgical mission trips.

  13. PhamDB: a web-based application for building Phamerator databases.

    PubMed

    Lamine, James G; DeJong, Randall J; Nelesen, Serita M

    2016-07-01

    PhamDB is a web application which creates databases of bacteriophage genes, grouped by gene similarity. It is backwards compatible with the existing Phamerator desktop software while providing an improved database creation workflow. Key features include a graphical user interface, validation of uploaded GenBank files, and abilities to import phages from existing databases, modify existing databases and queue multiple jobs. Source code and installation instructions for Linux, Windows and Mac OSX are freely available at https://github.com/jglamine/phage. PhamDB is also distributed as a Docker image which can be managed via Kitematic. This Docker image contains the application and all third-party software dependencies as a pre-configured system, and is freely available via the installation instructions provided. Contact: snelesen@calvin.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Electronic Photography

    NASA Technical Reports Server (NTRS)

    Payne, Meredith Lindsay

    1995-01-01

    The main objective was to assist in the production of electronic images in the Electronic Photography Lab (EPL). The EPL is a new facility serving the electronic photographic needs of the Langley community. The purpose of the Electronic Photography Lab is to provide Langley with access to digital imaging technology. Although the EPL has been in operation for less than one year, almost 1,000 images have been produced. The decision to establish the lab was made after careful determination of the center's needs for electronic photography. The LaRC community requires electronic photography for the production of electronic printing, Web sites, and desktop publications, and for its increased enhancement capabilities. In addition to general use, other considerations went into the planning of the EPL. For example, electronic photography is much less of a burden on the environment compared to conventional photography. Also, the possibilities of an on-line database and retrieval system could make locating past work more efficient. Finally, information in an electronic image is quantified, making measurements and calculations easier for the researcher.

  15. Service Management Database for DSN Equipment

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

    This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
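
    The event-logging pattern described, database-backed records that are also published to subscribed monitoring applications, can be sketched in miniature. The class and field names below are illustrative stand-ins, not SMDB's actual Oracle or JMS interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class EventLog:
    records: list = field(default_factory=list)      # persistent history
    subscribers: list = field(default_factory=list)  # monitoring consumers

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, severity, message):
        event = {"severity": severity, "message": message}
        self.records.append(event)         # archived for later analysis
        for callback in self.subscribers:  # pushed to live monitors
            callback(event)

log = EventLog()
seen = []
log.subscribe(seen.append)
log.publish("INFO", "equipment configuration started")
```

    The same event thus serves both uses named in the abstract: real-time monitoring (the subscriber callbacks) and troubleshooting over historical archives (the stored records).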

  16. Development of a bird banding recapture database

    USGS Publications Warehouse

    Tautin, J.; Doherty, P.F.; Metras, L.

    2001-01-01

    Recaptures (and resightings) constitute the vast majority of post-release data from banded or otherwise marked nongame birds. A powerful suite of contemporary analytical models is available for using recapture data to estimate population size, survival rates and other parameters, and many banders collect recapture data for their project-specific needs. However, despite widely recognized, broader programmatic needs for more and better data, banders' recapture data are not centrally reposited and made available for use by others. To address this need, the US Bird Banding Laboratory, the Canadian Bird Banding Office and the Georgia Cooperative Fish and Wildlife Research Unit are developing a bird banding recapture database. In this poster we discuss the critical steps in developing the database, including: determining exactly which recapture data should be included; developing a standard record format and structure for the database; developing electronic means for collecting, vetting and disseminating the data; and most importantly, developing metadata descriptions and individual data set profiles to facilitate the user's selection of appropriate analytical models. We provide examples of individual data sets to be included in the database, and we assess the feasibility of developing a prescribed program for obtaining recapture data from banders who do not presently collect them. It is expected that the recapture database eventually will contain millions of records made available publicly for a variety of avian research and management purposes.

  17. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

    The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) the Treatment Technology Database, which contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods, with the best literature as viewed by experts highlighted; (2) the Treatability Study Database, which provides performance information, derived from treatability studies, on technologies to remove contaminants from wastewaters and soils, and is available through ATTIC or separately on a disk that can be mailed to you; (3) the Underground Storage Tank Database, which presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions; and (4) the Oil/Chemical Spill Database, which provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). The user may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  18. PACSY, a relational database management system for protein structure and chemical shift analysis.

    PubMed

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
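
    The core design idea, relational tables linked to one another by key identification numbers and queried through an RDBMS, can be illustrated with an in-memory SQLite database. The table and column names here are hypothetical, not PACSY's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE protein (protein_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE chemical_shift (
    shift_id   INTEGER PRIMARY KEY,
    protein_id INTEGER REFERENCES protein(protein_id),
    atom       TEXT,
    shift_ppm  REAL
);
""")
conn.execute("INSERT INTO protein VALUES (1, 'example protein')")
conn.executemany(
    "INSERT INTO chemical_shift VALUES (?, ?, ?, ?)",
    [(1, 1, "CA", 55.2), (2, 1, "CB", 33.1)],
)

# Join the tables on the shared key identification number, combining
# information from what would be different source databases in PACSY.
rows = conn.execute("""
    SELECT p.name, s.atom, s.shift_ppm
    FROM protein p
    JOIN chemical_shift s ON p.protein_id = s.protein_id
    ORDER BY s.shift_id
""").fetchall()
```

    PACSY itself runs on a server RDBMS such as MySQL or PostgreSQL; SQLite is used here only to keep the sketch self-contained.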

  19. PACSY, a relational database management system for protein structure and chemical shift analysis

    PubMed Central

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636

  20. My Favorite Things Electronically Speaking, 1997 Edition.

    ERIC Educational Resources Information Center

    Glantz, Shelley

    1997-01-01

    Responding to an informal survey, 96 media specialists named favorite software, CD-ROMs, and online sites. This article lists automation packages, electronic encyclopedias, CD-ROMs, electronic magazine indexes, CD-ROM and online database services, electronic sources of current events, laser disks for grades 6-12, word processing programs for…

  1. Building structural similarity database for metric learning

    NASA Astrophysics Data System (ADS)

    Jin, Guoxin; Pappas, Thrasyvoulos N.

    2015-03-01

    We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions of "highest," "high," and "good" similarity. We design two subjective tests for data collection: the first relies on ViSiProG to build a database of "identical" clusters, and the second builds a database of image pairs with "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests are generated during the MTC encoding process, and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.
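
    The partition of textures into labelled similarity regions can be sketched as a simple thresholding rule. The numeric cut-offs below are invented for illustration; the paper derives its labels from subjective tests rather than fixed thresholds:

```python
def similarity_label(score, thresholds=(0.95, 0.85, 0.70)):
    """Map a similarity score in [0, 1] to a qualitative region.

    The threshold values are hypothetical, chosen only to illustrate the
    "highest" / "high" / "good" / "bad" partition described in the paper.
    """
    highest, high, good = thresholds
    if score >= highest:
        return "highest"
    if score >= high:
        return "high"
    if score >= good:
        return "good"
    return "bad"
```

    In the paper's setting the labels come from human judgments of block interchangeability, so a trained metric would be evaluated by how well its scores reproduce this partition.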

  2. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  3. Modernization and multiscale databases at the U.S. geological survey

    USGS Publications Warehouse

    Morrison, J.L.

    1992-01-01

    The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. © 1992.

  4. A Taxonomic Search Engine: Federating taxonomic databases using web services

    PubMed Central

    Page, Roderic DM

    2005-01-01

    Background: The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results: The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion: The Taxonomic Search Engine is available at and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names. PMID:15757517
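
    The federated pattern behind TSE, fanning one name query out to several sources and merging the answers into a consistent format, can be sketched as follows. The source functions are stand-ins; the real TSE queries ITIS, IPNI, NCBI and the other databases over the network:

```python
def query_source_a(name):
    # Stand-in for a real taxonomic database query (e.g. over HTTP).
    return [{"source": "A", "name": name, "record_id": "a:1"}]

def query_source_b(name):
    return [{"source": "B", "name": name, "record_id": "b:7"}]

def federated_search(name, sources):
    """Query every source and merge the hits into one consistent format."""
    hits = []
    for query in sources:
        hits.extend(query(name))
    return hits

hits = federated_search("Quercus", [query_source_a, query_source_b])
```

    Normalising every source's answer into the same record shape is what lets a single client compare names, spellings and identifiers across otherwise incompatible databases.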

  5. The Israeli National Genetic database: a 10-year experience.

    PubMed

    Zlotogora, Joël; Patrinos, George P

    2017-03-16

    The Israeli National and Ethnic Mutation database ( http://server.goldenhelix.org/israeli ) was launched in September 2006 on the ETHNOS software to include clinically relevant genomic variants reported among Jewish and Arab Israeli patients. In 2016, the database was reviewed and corrected according to ClinVar ( https://www.ncbi.nlm.nih.gov/clinvar ) and ExAC ( http://exac.broadinstitute.org ) database entries. The present article summarizes some key aspects from the development and continuous update of the database over a 10-year period, which could serve as a paradigm of successful database curation for other similar resources. In September 2016, there were 2444 entries in the database, 890 among Jews, 1376 among Israeli Arabs, and 178 entries among Palestinian Arabs, corresponding to an ~4× increase in data content compared with the original launch. While the Israeli Arab population is much smaller than the Jewish population, the number of pathogenic variants causing recessive disorders reported in the database is higher among Arabs (934) than among Jews (648). Nevertheless, the number of pathogenic variants classified as founder mutations in the database is smaller among Arabs (175) than among Jews (192). In 2016, the entire database content was compared to that of other databases such as ClinVar and ExAC. We show that the percentage of pathogenic variants from the Israeli genetic database that were present in ExAC differed significantly between the Jewish population (31.8%) and the Israeli Arab population (20.6%). The Israeli genetic database was launched in 2006 on the ETHNOS software and has been available online ever since. It allows querying the database by disorder and by ethnicity; however, many other features are not available, in particular the possibility to search by gene name. In addition, due to the technical limitations of the previous ETHNOS software, new features and data are not included in the

  6. Use of Electronic Health Records in sub-Saharan Africa: Progress and challenges

    PubMed Central

    Akanbi, Maxwell O.; Ocheke, Amaka N.; Agaba, Patricia A.; Daniyam, Comfort A.; Agaba, Emmanuel I.; Okeke, Edith N.; Ukoli, Christiana O.

    2012-01-01

    Background The Electronic Health Record (EHR) is a key component of medical informatics that is increasingly being utilized in industrialized nations to improve healthcare. There is limited information on the use of EHRs in sub-Saharan Africa. This paper reviews the availability of EHRs in sub-Saharan Africa. Methods Searches were performed on the PubMed and Google Scholar databases using the terms ‘Electronic Health Records OR Electronic Medical Records OR e-Health and Africa’. References from identified publications were reviewed. The inclusion criterion was documented use of an EHR in Africa. Results The search yielded 147 publications, of which 21 papers from 15 sub-Saharan African countries documented the use of EHRs in Africa and were reviewed. About 91% reported use of open-source healthcare software, with OpenMRS being the most widely used. Most reports were from HIV-related health centers. Barriers to adoption of EHRs include the high cost of procurement and maintenance, poor network infrastructure and lack of comfort among health workers with electronic medical records. Conclusion There has been an increase in the use of EHRs in sub-Saharan Africa, largely driven by HIV treatment programs. However, penetration is still very low. PMID:25243111

  7. Validation of asthma recording in electronic health records: protocol for a systematic review.

    PubMed

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-05-29

    Asthma is a common, heterogeneous disease with significant morbidity and mortality worldwide. It can be difficult to define in epidemiological studies using electronic health records, as the diagnosis is based on non-specific respiratory symptoms and spirometry, neither of which is routinely registered. Electronic health records can nonetheless be valuable for studying the epidemiology, management, healthcare use and control of asthma. For health databases to be useful sources of information, asthma diagnoses should ideally be validated. The primary objectives are to provide an overview of the methods used to validate asthma diagnoses in electronic health records and to summarise the results of the validation studies. EMBASE and MEDLINE will be systematically searched using appropriate search terms. The searches will cover all studies in these databases up to October 2016 with no start date and will yield studies that have validated algorithms or codes for the diagnosis of asthma in electronic health records. At least one test validation measure (sensitivity, specificity, positive predictive value, negative predictive value or other) is necessary for inclusion. In addition, we require the validated algorithms to be compared with an external gold standard, such as a manual review, a questionnaire or an independent second database. We will summarise key data including author, year of publication, country, time period, date, data source, population, case characteristics, clinical events, algorithms, gold standard and validation statistics in a uniform table. This study is a synthesis of previously published studies and, therefore, no ethical approval is required. The results will be submitted to a peer-reviewed journal for publication. Results from this systematic review can inform outcome research on asthma and can be used to identify case definitions for asthma. CRD42016041798.
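    The validation measures that the protocol above uses as inclusion criteria are all simple ratios read off a 2×2 table comparing an algorithm against a gold standard. A minimal sketch, with invented counts purely for illustration:

```python
# Validation measures for a case-finding algorithm vs. a gold standard.
# The 2x2 counts below are hypothetical, for illustration only.
tp, fp, fn, tn = 80, 10, 20, 890   # true/false positives and negatives

sensitivity = tp / (tp + fn)       # share of true cases the algorithm finds
specificity = tn / (tn + fp)       # share of non-cases it correctly excludes
ppv = tp / (tp + fp)               # positive predictive value
npv = tn / (tn + fn)               # negative predictive value

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} npv={npv:.2f}")
# sens=0.80 spec=0.99 ppv=0.89 npv=0.98
```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on how common asthma is in the database population, which is one reason reviews like this one report all four where available.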

  8. EMR Database Upgrade from MUMPS to CACHE: Lessons Learned.

    PubMed

    Alotaibi, Abduallah; Emshary, Mshary; Househ, Mowafa

    2014-01-01

    Over the past few years, Saudi hospitals have been implementing and upgrading Electronic Medical Record systems (EMRs) to ensure secure data transfer and exchange between EMRs. This paper focuses on the process and lessons learned in upgrading a MUMPS database to the newer Caché database to ensure the integrity of electronic data transfer within a local Saudi hospital. The paper examines the steps taken by the departments concerned, their action plans and how the change process was managed. Results show that user satisfaction was achieved after the upgrade was completed. The system was stable and offered better healthcare quality to patients as a result of the data exchange. Hardware infrastructure upgrades improved scalability, and software upgrades to Caché improved stability. Overall performance was enhanced, and new functions such as computerized provider order entry (CPOE) were added during the upgrades. The lessons learned were: 1) involve higher management; 2) research the multiple solutions available in the market; 3) plan for a variety of implementation scenarios.

  9. KEY COMPARISON: CCQM-K21 Key Comparison Determination of pp’-DDT in fish oil

    NASA Astrophysics Data System (ADS)

    Webb, K. S.; Carter, D.; Wolff Briche, C. S. J.

    2003-01-01

    A key comparison on the determination of (pp'-dichlorodiphenyl) trichloroethane (pp'-DDT) in a fish oil matrix has been successfully completed. Nine NMIs participated in this key comparison and used the technique of isotope dilution gas chromatography-mass spectrometry (ID/GC/MS) for the determinations. Two samples (A and B) of fish oil were distributed to participants, each gravimetrically spiked with pp'-DDT. The KCRV for Sample A is 0.0743 +/- 0.0020 µg g⁻¹ and that of Sample B is 0.1655 +/- 0.0014 µg g⁻¹ of pp'-DDT in fish oil. The results for Sample A showed an RSD of 3.5%; the RSD for Sample B was within 1%. These results were an improvement over those of the corresponding pilot study (CCQM-P21), where at a mass fraction of pp'-DDT in fish oil of 0.311 µg g⁻¹ the RSD was 2.6%. The compound pp'-DDT is a typical organochlorine pesticide, and this key comparison has shown that NMIs have the ability to measure such compounds at levels typically found in the environment. The compound (pp'-dichlorodiphenyl) dichloroethylene (pp'-DDE), a metabolite of pp'-DDT, was the subject of a previous key comparison (CCQM-K5). The compound pp'-DDT is technically more challenging to measure than pp'-DDE, since it can decompose during the measurement procedure. Consequently, the success of this key comparison, combined with that of CCQM-K5, demonstrates a broad measurement capability by NMIs for organochlorine compounds in the environment. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the Mutual Recognition Arrangement (MRA).

  10. Evaluation of unique identifiers used as keys to match identical publications in Pure and SciVal - a case study from health science.

    PubMed

    Madsen, Heidi Holst; Madsen, Dicte; Gauffriau, Marianne

    2016-01-01

    Unique identifiers (UID) are seen as an effective key to match identical publications across databases or identify duplicates in a database. The objective of the present study is to investigate how well UIDs work as match keys in the integration between Pure and SciVal, based on a case with publications from the health sciences. We evaluate the matching process based on information about coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match keys. We analyze this information to detect errors, if any, in the matching process. As an example we also briefly discuss how publication sets formed by using UIDs as the match keys may affect the bibliometric indicators number of publications, number of citations, and the average number of citations per publication.  The objective is addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: Duplicate digital object identifiers (DOI), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular the duplicate DOIs constitute a problem for the calculation of bibliometric indicators as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become
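    Matching on UIDs as described above is essentially an index join, and duplicate DOIs are exactly the keys that break its uniqueness assumption. A minimal sketch with invented records (the study's actual sources are Pure and SciVal):

```python
from collections import Counter

# Hypothetical records; the "doi" field is the match key (UID).
pure =   [{"id": "p1", "doi": "10.1/a"}, {"id": "p2", "doi": "10.1/b"},
          {"id": "p3", "doi": None},     {"id": "p4", "doi": "10.1/b"}]
scival = [{"id": "s1", "doi": "10.1/a"}, {"id": "s2", "doi": "10.1/c"}]

def match_on_doi(left, right):
    """Match records across two databases using the DOI as the key.
    Returns (matched pairs, unmatched left ids, duplicate DOIs in left)."""
    dupes = {d for d, n in
             Counter(r["doi"] for r in left if r["doi"]).items() if n > 1}
    index = {r["doi"]: r["id"] for r in right if r["doi"]}
    pairs, unmatched = [], []
    for r in left:
        if r["doi"] in index:
            pairs.append((r["id"], index[r["doi"]]))
        else:
            unmatched.append(r["id"])
    return pairs, unmatched, dupes

pairs, unmatched, dupes = match_on_doi(pure, scival)
print(pairs)      # [('p1', 's1')]
print(unmatched)  # ['p2', 'p3', 'p4']
print(dupes)      # {'10.1/b'}
```

    The duplicate set illustrates the dilemma the authors describe: keeping both `10.1/b` records inflates publication counts, while deleting one distorts citation counts, so either choice biases citations per publication.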

  11. From 20th century metabolic wall charts to 21st century systems biology: database of mammalian metabolic enzymes.

    PubMed

    Corcoran, Callan C; Grady, Cameron R; Pisitkun, Trairak; Parulekar, Jaya; Knepper, Mark A

    2017-03-01

    The organization of the mammalian genome into gene subsets corresponding to specific functional classes has provided key tools for systems biology research. Here, we have created a web-accessible resource called the Mammalian Metabolic Enzyme Database ( https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/MetabolicEnzymeDatabase.html) keyed to the biochemical reactions represented on iconic metabolic pathway wall charts created in the previous century. Overall, we have mapped 1,647 genes to these pathways, representing ~7 percent of the protein-coding genome. To illustrate the use of the database, we apply it to the area of kidney physiology. In so doing, we have created an additional database ( Database of Metabolic Enzymes in Kidney Tubule Segments: https://hpcwebapps.cit.nih.gov/ESBL/Database/MetabolicEnzymes/), mapping mRNA abundance measurements (mined from RNA-Seq studies) for all metabolic enzymes to each of 14 renal tubule segments. We carry out bioinformatics analysis of the enzyme expression pattern among renal tubule segments and mine various data sources to identify vasopressin-regulated metabolic enzymes in the renal collecting duct. Copyright © 2017 the American Physiological Society.

  12. [Discussion of the implementation of MIMIC database in emergency medical study].

    PubMed

    Li, Kaiyuan; Feng, Cong; Jia, Lijing; Chen, Li; Pan, Fei; Li, Tanshi

    2018-05-01

    To introduce the Medical Information Mart for Intensive Care (MIMIC) database and to elaborate approaches to critical emergency research with big data, based on the features of MIMIC and recent studies both domestic and overseas, we put forward the feasibility and necessity of introducing medical big data into emergency research. We then discuss the role of the MIMIC database in emergency clinical studies, as well as the principles and key points of experimental design and implementation in a medical big data setting. The implementation of the MIMIC database in emergency medical research opens a brand new field for the early diagnosis, risk warning and prognosis of critical illness; however, there are also limitations. To meet the era of big data, an emergency medical database in accordance with our national conditions is needed, which will provide new energy for the development of emergency medicine.

  13. The future of medical diagnostics: large digitized databases.

    PubMed

    Kerr, Wesley T; Lau, Edward P; Owens, Gwen E; Trefler, Aaron

    2012-09-01

    The electronic health record mandate within the American Recovery and Reinvestment Act of 2009 will have a far-reaching effect on medicine. In this article, we provide an in-depth analysis of how this mandate is expected to stimulate the production of large-scale, digitized databases of patient information. There is evidence to suggest that millions of patients and the National Institutes of Health will fully support the mining of such databases to better understand the process of diagnosing patients. This data mining likely will reaffirm and quantify known risk factors for many diagnoses. This quantification may be leveraged to further develop computer-aided diagnostic tools that weigh risk factors and provide decision support for health care providers. We expect that creation of these databases will stimulate the development of computer-aided diagnostic support tools that will become an integral part of modern medicine.

  14. Performance of Point and Range Queries for In-memory Databases using Radix Trees on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Maksudul; Yoginath, Srikanth B; Perumalla, Kalyan S

    In in-memory database systems augmented by hardware accelerators, accelerating the index-searching operations can greatly increase the runtime performance of database queries. Recently, adaptive radix trees (ART) have been shown to provide a very fast index search implementation on the CPU. Here, we focus on an accelerator-based implementation of ART. We present a detailed performance study of our GPU-based adaptive radix tree (GRT) implementation over a variety of key distributions, synthetic benchmarks, and actual keys from music and book data sets. The performance is also compared with other index-searching schemes on the GPU. GRT on modern GPUs achieves some of the highest rates of index searches reported in the literature. For point queries, a throughput of up to 106 million and 130 million lookups per second is achieved for sparse and dense keys, respectively. For range queries, GRT yields 600 million and 1000 million lookups per second for sparse and dense keys, respectively, on a large dataset of 64 million 32-bit keys.
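    A radix tree indexes fixed-width keys by consuming a few bits of the key per level. A minimal CPU-side sketch over 32-bit keys (8-bit span, nested dicts as nodes); the paper's GRT is a GPU implementation with adaptive node types and massive parallelism, which this does not attempt to reproduce:

```python
# Minimal fixed-span radix tree over 32-bit keys: span = 8 bits,
# so depth = 4 levels. Nodes are plain dicts; the leaf level maps
# the last key byte to the stored value.
SPAN, LEVELS = 8, 4

def insert(root, key, value):
    node = root
    for level in range(LEVELS - 1):
        byte = (key >> (SPAN * (LEVELS - 1 - level))) & 0xFF
        node = node.setdefault(byte, {})
    node[key & 0xFF] = value            # leaf level stores the value

def lookup(root, key):
    node = root
    for level in range(LEVELS - 1):
        node = node.get((key >> (SPAN * (LEVELS - 1 - level))) & 0xFF)
        if node is None:
            return None                 # point query misses
    return node.get(key & 0xFF)

root = {}
for k in (42, 1_000_000, 0xDEADBEEF):
    insert(root, k, f"val{k}")

print(lookup(root, 1_000_000))  # val1000000
print(lookup(root, 7))          # None
```

    Because children are ordered by key byte, the same structure also supports range queries by in-order traversal between two bounds, which is the second workload the paper benchmarks.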

  15. A Data Analysis Expert System For Large Established Distributed Databases

    NASA Astrophysics Data System (ADS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  16. Planned and ongoing projects (pop) database: development and results.

    PubMed

    Wild, Claudia; Erdös, Judit; Warmuth, Marisa; Hinterreiter, Gerda; Krämer, Peter; Chalon, Patrice

    2014-11-01

    The aim of this study was to present the development, structure and results of a database on planned and ongoing health technology assessment (HTA) projects (POP Database) in Europe. The POP Database (POP DB) was set up in an iterative process from a basic Excel sheet to a multifunctional electronic online database. The functionalities, such as the search terminology, the procedures to fill and update the database, the access rules to enter the database, as well as the maintenance roles, were defined in a multistep participatory feedback loop with EUnetHTA Partners. The POP Database has become an online database that hosts not only the titles and MeSH categorizations, but also some basic information on status and contact details about the listed projects of EUnetHTA Partners. Currently, it stores more than 1,200 planned, ongoing or recently published projects of forty-three EUnetHTA Partners from twenty-four countries. Because the POP Database aims to facilitate collaboration, it also provides a matching system to assist in identifying similar projects. Overall, more than 10 percent of the projects in the database are identical both in terms of pathology (indication or disease) and technology (drug, medical device, intervention). In addition, approximately 30 percent of the projects are similar, meaning that they have at least some overlap in content. Although the POP DB is successful concerning regular updates of most national HTA agencies within EUnetHTA, little is known about its actual effects on collaborations in Europe. Moreover, many non-nationally nominated HTA producing agencies neither have access to the POP DB nor can share their projects.
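    The POP DB's matching system distinguishes "identical" projects (same pathology and same technology) from "similar" ones (some overlap in content). A minimal sketch of such a rule, with invented project records:

```python
# Sketch of a POP-DB-style matching rule. Projects are "identical"
# when both pathology and technology agree, "similar" when either
# overlaps. The project records below are invented for illustration.
projects = [
    {"id": "AT-01", "pathology": {"asthma"}, "technology": {"omalizumab"}},
    {"id": "BE-07", "pathology": {"asthma"}, "technology": {"omalizumab"}},
    {"id": "NL-03", "pathology": {"asthma", "copd"},
     "technology": {"telemonitoring"}},
]

def classify(a, b):
    if (a["pathology"] == b["pathology"]
            and a["technology"] == b["technology"]):
        return "identical"
    if a["pathology"] & b["pathology"] or a["technology"] & b["technology"]:
        return "similar"
    return "unrelated"

print(classify(projects[0], projects[1]))  # identical
print(classify(projects[0], projects[2]))  # similar
```

    In the real database the pathology and technology fields are MeSH categorizations, so set intersection over controlled vocabulary terms is a plausible reading of "at least some overlap in content".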

  17. Freely Accessible Chemical Database Resources of Compounds for in Silico Drug Discovery.

    PubMed

    Yang, JingFang; Wang, Di; Jia, Chenyang; Wang, Mengyao; Hao, GeFei; Yang, GuangFu

    2018-05-07

    In silico drug discovery has proved to be a solidly established key component of early drug discovery. However, this task is hampered by limitations in the quantity and quality of the compound databases available for screening. In order to overcome these obstacles, freely accessible compound database resources have bloomed in recent years. Nevertheless, how to choose appropriate tools to treat these freely accessible databases is crucial. To the best of our knowledge, this is the first systematic review on this issue. The advantages and drawbacks of chemical databases were analyzed and summarized, based on the six categories of freely accessible chemical databases collected from the literature for this review. Suggestions on how and under which conditions the usage of these databases is reasonable were provided. Tools and procedures for building 3D-structure chemical libraries were also introduced. In this review, we describe the freely accessible chemical database resources for in silico drug discovery. In particular, the chemical information available for building chemical databases appears to be an attractive resource for drug design to alleviate experimental pressure. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  18. Databases as policy instruments. About extending networks as evidence-based policy.

    PubMed

    de Bont, Antoinette; Stoevelaar, Herman; Bal, Roland

    2007-12-07

    This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Our results demonstrate that policy makers hardly used the databases, neither for cost control nor for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative for centralized control on the basis of monitoring data.

  19. Building a Patient-Reported Outcome Metric Database: One Hospital's Experience.

    PubMed

    Rana, Adam J

    2016-06-01

    A number of provisions exist within the Patient Protection and Affordable Care Act that focus on improving the delivery of health care in the United States, including quality of care. From a total joint arthroplasty perspective, the issue of quality increasingly refers to quantifying patient-reported outcome metrics (PROMs). This article describes one hospital's experience in building and maintaining an electronic PROM database for a practice of 6 board-certified orthopedic surgeons. The surgeons advocated to and worked with the hospital to contract with a joint registry database company and hire a research assistant. They implemented a standardized process for all surgical patients to fill out patient-reported outcome questionnaires at designated intervals. To date, the group has collected patient-reported outcome metric data for >4500 cases. The data are frequently used in different venues at the hospital including orthopedic quality metric and research meetings. In addition, the results were used to develop an annual outcome report. The annual report is given to patients and primary care providers, and portions of it are being used in discussions with insurance carriers. Building an electronic database to collect PROMs is a group undertaking and requires a physician champion. A considerable amount of work needs to be done up front to make its introduction a success. Once established, a PROM database can provide a significant amount of information and data that can be effectively used in multiple capacities. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Database resources of the National Center for Biotechnology Information.

    PubMed

    Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Feolo, Michael; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Landsman, David; Lipman, David J; Madden, Thomas L; Maglott, Donna R; Miller, Vadim; Mizrachi, Ilene; Ostell, James; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Yaschenko, Eugene; Ye, Jian

    2009-01-01

    In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs), Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART) and the PubChem suite of small molecule databases. Augmenting many of the web applications is custom implementation of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.

  1. NIST Gas Hydrate Research Database and Web Dissemination Channel.

    PubMed

    Kroenlein, K; Muzny, C D; Kazakov, A; Diky, V V; Chirico, R D; Frenkel, M; Sloan, E D

    2010-01-01

    To facilitate advances in application of technologies pertaining to gas hydrates, a freely available data resource containing experimentally derived information about those materials was developed. This work was performed by the Thermodynamic Research Center (TRC) paralleling a highly successful database of thermodynamic and transport properties of molecular pure compounds and their mixtures. Population of the gas-hydrates database required development of guided data capture (GDC) software designed to convert experimental data and metadata into a well organized electronic format, as well as a relational database schema to accommodate all types of numerical and metadata within the scope of the project. To guarantee utility for the broad gas hydrate research community, TRC worked closely with the Committee on Data for Science and Technology (CODATA) task group for Data on Natural Gas Hydrates, an international data sharing effort, in developing a gas hydrate markup language (GHML). The fruits of these efforts are disseminated through the NIST Standard Reference Data Program [1] as the Clathrate Hydrate Physical Property Database (SRD #156). A web-based interface for this database, as well as scientific results from the Mallik 2002 Gas Hydrate Production Research Well Program [2], is deployed at http://gashydrates.nist.gov.

  2. Application of database methods to the prediction of B3LYP-optimized polyhedral water cluster geometries and electronic energies

    NASA Astrophysics Data System (ADS)

    Anick, David J.

    2003-12-01

    A method is described for a rapid prediction of B3LYP-optimized geometries for polyhedral water clusters (PWCs). Starting with a database of 121 B3LYP-optimized PWCs containing 2277 H-bonds, linear regressions yield formulas correlating O-O distances, O-O-O angles, and H-O-H orientation parameters, with local and global cluster descriptors. The formulas predict O-O distances with a rms error of 0.85 pm to 1.29 pm and predict O-O-O angles with a rms error of 0.6° to 2.2°. An algorithm is given which uses the O-O and O-O-O formulas to determine coordinates for the oxygen nuclei of a PWC. The H-O-H formulas then determine positions for two H's at each O. For 15 test clusters, the gap between the electronic energy of the predicted geometry and the true B3LYP optimum ranges from 0.11 to 0.54 kcal/mol or 4 to 18 cal/mol per H-bond. Linear regression also identifies 14 parameters that strongly correlate with PWC electronic energy. These descriptors include the number of H-bonds in which both oxygens carry a non-H-bonding H, the number of quadrilateral faces, the number of symmetric angles in 5- and in 6-sided faces, and the square of the cluster's estimated dipole moment.
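    The O-O distance and angle formulas above come from ordinary least-squares regressions of geometric parameters against cluster descriptors. A minimal single-descriptor sketch with invented data points (the actual study regresses over many local and global descriptors across 2277 H-bonds):

```python
# Closed-form ordinary least squares for y = a*x + b, illustrating the
# kind of fit used to correlate O-O distances with cluster descriptors.
# The descriptor values and distances below are invented.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0, 1, 2, 3]                     # hypothetical descriptor values
ys = [270.0, 272.1, 273.9, 276.0]     # hypothetical O-O distances, pm
a, b = fit_line(xs, ys)
rms = (sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5
print(f"slope={a:.2f} pm/unit, rms error={rms:.2f} pm")
```

    The paper's quoted rms errors (0.85 pm to 1.29 pm for distances) are exactly this kind of residual statistic, evaluated over the whole database of H-bonds.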

  3. Research Electronic Data Capture (REDCap®) used as an audit tool with a built-in database.

    PubMed

    Kragelund, Signe H; Kjærsgaard, Mona; Jensen-Fangel, Søren; Leth, Rita A; Ank, Nina

    2018-05-01

    The aim of this study was to develop an audit tool with a built-in database using Research Electronic Data Capture (REDCap®) as part of an antimicrobial stewardship program at a regional hospital in the Central Denmark Region, and to analyse the need, if any, to involve more than one expert in the evaluation of cases of antimicrobial treatment, and the level of agreement among the experts. Patients treated with systemic antimicrobials in the period from 1 September 2015 to 31 August 2016 were included, in total 722 cases. Data were collected retrospectively and entered manually. The audit was based on seven flow charts regarding: (1) initiation of antimicrobial treatment (2) infection (3) prescription and administration of antimicrobials (4) discontinuation of antimicrobials (5) reassessment within 48 h after the first prescription of antimicrobials (6) microbiological sampling in the period between suspicion of infection and the first administration of antimicrobials (7) microbiological results. The audit was based on automatic calculations drawing on the entered data and on expert assessments. Initially, two experts completed the audit, and in the cases in which they disagreed, a third expert was consulted. In 31.9% of the cases, the two experts agreed on all elements of the audit. In 66.2%, the two experts reached agreement by discussing the cases. Finally, 1.9% of the cases were completed in cooperation with a third expert. The experts assessed 3406 flow charts of which they agreed on 75.8%. We succeeded in creating an audit tool with a built-in database that facilitates independent expert evaluation using REDCap. We found a large inter-observer difference that needs to be considered when constructing a project based on expert judgements. Our two experts agreed on most of the flow charts after discussion, whereas the third expert's intervention did not have any influence on the overall assessment. Copyright © 2018 Elsevier Inc. All rights reserved.
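    The audit workflow above (two independent experts, then escalation to a third expert on persistent disagreement) can be sketched as a small resolution loop. The case names and verdict labels below are invented for illustration:

```python
# Hypothetical two-expert audit with third-expert escalation,
# mirroring the workflow described in the abstract.
cases = {
    "case1": ("adequate", "adequate"),
    "case2": ("adequate", "inadequate"),
    "case3": ("inadequate", "inadequate"),
    "case4": ("adequate", "inadequate"),
}

def resolve(cases, third_expert):
    """Keep agreed verdicts; send disagreements to a third expert."""
    agreed, escalated = {}, []
    for cid, (a, b) in cases.items():
        if a == b:
            agreed[cid] = a
        else:
            escalated.append(cid)
            agreed[cid] = third_expert(cid)
    return agreed, escalated

agreed, escalated = resolve(cases, lambda cid: "adequate")
rate = sum(a == b for a, b in cases.values()) / len(cases)
print(f"initial agreement: {rate:.0%}")  # initial agreement: 50%
print(escalated)                         # ['case2', 'case4']
```

    In the study itself, disagreements were first settled by discussion between the two experts, and only the remaining 1.9% of cases reached the third expert; the loop above collapses the discussion step for brevity.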

  4. Physical key-protected one-time pad

    PubMed Central

    Horstmeyer, Roarke; Judkewitz, Benjamin; Vellekoop, Ivo M.; Assawaworrarit, Sid; Yang, Changhuei

    2013-01-01

    We describe an encrypted communication principle that forms a secure link between two parties without electronically saving either of their keys. Instead, random cryptographic bits are kept safe within the unique mesoscopic randomness of two volumetric scattering materials. We demonstrate how a shared set of patterned optical probes can generate 10 gigabits of statistically verified randomness between a pair of unique 2 mm³ scattering objects. This shared randomness is used to facilitate information-theoretically secure communication following a modified one-time pad protocol. Benefits of volumetric physical storage over electronic memory include the inability to probe, duplicate or selectively reset any bits without fundamentally altering the entire key space. Our ability to securely couple the randomness contained within two unique physical objects can extend to strengthen hardware required by a variety of cryptographic protocols, which is currently a critically weak link in the security pipeline of our increasingly mobile communication culture. PMID:24345925
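    The protocol above is a variant of the classic one-time pad. A minimal sketch of the underlying XOR step, with the pad drawn from the OS CSPRNG rather than from optical measurements of a physical scatterer as in the paper:

```python
import secrets

# Classic one-time pad over bytes. In the paper the pad bits come from
# the mesoscopic randomness of a scattering object; here they come from
# the OS CSPRNG, purely to illustrate the XOR protocol itself.
def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"meet at noon"
pad = secrets.token_bytes(len(message))   # use once, then destroy

ciphertext = otp_xor(message, pad)
recovered = otp_xor(ciphertext, pad)      # XOR with the same pad inverts
assert recovered == message
```

    The information-theoretic security claim rests entirely on the pad being truly random, as long as the message, and never reused, which is why the paper's 10 gigabits of verified physical randomness matter.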

  5. ProCarDB: a database of bacterial carotenoids.

    PubMed

    Nupur, L N U; Vats, Asheema; Dhanda, Sandeep Kumar; Raghava, Gajendra P S; Pinnaka, Anil Kumar; Kumar, Ashwani

    2016-05-26

    Carotenoids have important functions in bacteria, ranging from harvesting light energy to neutralizing oxidants and acting as virulence factors. However, information pertaining to carotenoids is scattered throughout the literature. Furthermore, information about the genes/proteins involved in the biosynthesis of carotenoids has increased tremendously in the post-genomic era. A web server providing information about microbial carotenoids in a structured manner is required and will be a valuable resource for the scientific community working with microbial carotenoids. Here, we have created a manually curated, open-access, comprehensive compilation of bacterial carotenoids named ProCarDB (Prokaryotic Carotenoid Database). ProCarDB includes 304 unique carotenoids arising from 50 biosynthetic pathways distributed among 611 prokaryotes. ProCarDB provides important information on carotenoids, such as 2D and 3D structures, molecular weight, molecular formula, SMILES, InChI, InChIKey, IUPAC name, KEGG Id, PubChem Id, and ChEBI Id. The database also provides NMR data, UV-vis absorption data, IR data, MS data and HPLC data that play key roles in the identification of carotenoids. An important feature of this database is the extension of biosynthetic pathways from the literature and through the presence of the genes/enzymes in different organisms. The information contained in the database was mined from published literature and from databases such as KEGG, PubChem, ChEBI, LipidBank, LPSN, and Uniprot. The database integrates user-friendly browsing and searching with carotenoid analysis tools to help the user. We believe that this database will serve as a major information centre for researchers working on bacterial carotenoids.

  6. USDA National Nutrient Database for Standard Reference, release 28

    USDA-ARS's Scientific Manuscript database

    The USDA National Nutrient Database for Standard Reference, Release 28 contains data for nearly 8,800 food items for up to 150 food components. SR28 replaces the previous release, SR27, originally issued in August 2014. Data in SR28 supersede values in the printed handbooks and previous electronic...

  7. USDA National Nutrient Database for Standard Reference, Release 25

    USDA-ARS's Scientific Manuscript database

    The USDA National Nutrient Database for Standard Reference, Release 25 (SR25) contains data for over 8,100 food items for up to 146 food components. It replaces the previous release, SR24, issued in September 2011. Data in SR25 supersede values in the printed handbooks and previous electronic releas...

  8. USDA National Nutrient Database for Standard Reference, Release 24

    USDA-ARS's Scientific Manuscript database

    The USDA Nutrient Database for Standard Reference, Release 24 contains data for over 7,900 food items for up to 146 food components. It replaces the previous release, SR23, issued in September 2010. Data in SR24 supersede values in the printed Handbooks and previous electronic releases of the databa...

  9. [Construction of chemical information database based on optical structure recognition technique].

    PubMed

    Lv, C Y; Li, M N; Zhang, L R; Liu, Z M

    2018-04-18

    To create a protocol that could be used to construct a chemical information database from scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data by using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references, including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information from figures, tables, and textual paragraphs that could be used to populate a structured chemical database. After this step, detailed manual revision and annotation were conducted in order to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated, including a molecular structure data file, molecular information, bibliographical references, predictable attributes and known bioactivities. With the canonical simplified molecular input line entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecular number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully.
A total of 174 research
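    The described linkage of separate metadata files through common key nodes can be illustrated with a relational join (a hypothetical Python/sqlite3 sketch; table names, column names, and sample rows are invented for illustration and are not taken from the paper):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE molecule_info (smiles TEXT PRIMARY KEY, mol_no INTEGER, name TEXT);
    CREATE TABLE bibliography  (mol_no INTEGER, pdf_no INTEGER, title TEXT);
    CREATE TABLE bioactivity   (smiles TEXT, target TEXT);
    """)
    con.execute("INSERT INTO molecule_info VALUES ('CCO', 1, 'ethanol')")
    con.execute("INSERT INTO bibliography  VALUES (1, 101, 'Some source paper')")
    con.execute("INSERT INTO bioactivity   VALUES ('CCO', 'ADH1B')")

    # Canonical SMILES serves as the primary key; the molecule and PDF
    # numbers act as the common key nodes tying the metadata files together.
    row = con.execute("""
    SELECT m.name, b.title, a.target
    FROM molecule_info m
    JOIN bibliography  b ON b.mol_no = m.mol_no
    JOIN bioactivity   a ON a.smiles = m.smiles
    WHERE m.smiles = 'CCO'
    """).fetchone()
    print(row)  # ('ethanol', 'Some source paper', 'ADH1B')
    ```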

  10. Final Results of Shuttle MMOD Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, J. L.; Christiansen, E. L.; Lear, D. M.

    2015-01-01

    The Shuttle Hypervelocity Impact Database documents damage features on each Orbiter thought to be from micrometeoroids (MM) or orbital debris (OD). Data is divided into tables for crew module windows, payload bay door radiators and thermal protection systems along with other miscellaneous regions. The combined number of records in the database is nearly 3000. Each database record provides impact feature dimensions, location on the vehicle and relevant mission information. Additional detail on the type and size of particle that produced the damage site is provided when sampling data and definitive spectroscopic analysis results are available. Guidelines are described which were used in determining whether impact damage is from micrometeoroid or orbital debris impact based on the findings from scanning electron microscopy chemical analysis. Relationships assumed when converting from observed feature sizes in different shuttle materials to particle sizes will be presented. A small number of significant impacts on the windows, radiators and wing leading edge will be highlighted and discussed in detail, including the hypervelocity impact testing performed to estimate particle sizes that produced the damage.

  11. Evaluation of unique identifiers used as keys to match identical publications in Pure and SciVal – a case study from health science

    PubMed Central

    Madsen, Heidi Holst; Madsen, Dicte; Gauffriau, Marianne

    2016-01-01

    Unique identifiers (UID) are seen as an effective key for matching identical publications across databases or identifying duplicates within a database. The objective of the present study is to investigate how well UIDs work as match keys in the integration between Pure and SciVal, based on a case with publications from the health sciences. We evaluate the matching process based on information about coverage, precision, and characteristics of publications matched versus not matched with UIDs as the match keys. We analyze this information to detect errors, if any, in the matching process. As an example, we also briefly discuss how publication sets formed by using UIDs as the match keys may affect the bibliometric indicators number of publications, number of citations, and average number of citations per publication. The objective is addressed in a literature review and a case study. The literature review shows that only a few studies evaluate how well UIDs work as a match key. From the literature we identify four error types: duplicate digital object identifiers (DOI), incorrect DOIs in reference lists and databases, DOIs not registered by the database where a bibliometric analysis is performed, and erroneous optical or special character recognition. The case study explores the use of UIDs in the integration between the databases Pure and SciVal. Specifically, journal publications in English are matched between the two databases. We find all error types except erroneous optical or special character recognition in our publication sets. In particular, the duplicate DOIs constitute a problem for the calculation of bibliometric indicators, as both keeping the duplicates to improve the reliability of citation counts and deleting them to improve the reliability of publication counts will distort the calculation of the average number of citations per publication. The use of UIDs as a match key in citation linking is implemented in many settings, and the availability of UIDs may become
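    The duplicate-DOI error type the authors flag can be made concrete with a toy matcher (a hypothetical Python sketch; the record structure and function name are illustrative, not the study's actual tooling):

    ```python
    from collections import defaultdict

    def match_by_doi(pure_records, scival_records):
        """Match records across two databases on DOI, flagging duplicate DOIs."""
        index = defaultdict(list)
        for rec in scival_records:
            if rec.get("doi"):
                index[rec["doi"].lower()].append(rec)

        matched, unmatched, duplicates = [], [], []
        for rec in pure_records:
            hits = index.get((rec.get("doi") or "").lower(), [])
            if len(hits) == 1:
                matched.append((rec, hits[0]))
            elif len(hits) > 1:
                duplicates.append((rec, hits))  # same DOI on several records
            else:
                unmatched.append(rec)           # no UID match at all
        return matched, unmatched, duplicates

    pure = [{"doi": "10.1000/x1"}, {"doi": "10.1000/x2"}, {"doi": None}]
    scival = [{"doi": "10.1000/X1"}, {"doi": "10.1000/x2"}, {"doi": "10.1000/x2"}]
    m, u, d = match_by_doi(pure, scival)
    assert len(m) == 1 and len(u) == 1 and len(d) == 1
    ```

    As the abstract notes, neither keeping nor deleting the duplicate bucket is neutral: it inflates citation counts or deflates publication counts, respectively.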

  12. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Nagatsuka, Takashi

    This paper introduces CD-ROM-aided products and their utilization in foreign countries, mainly the U.S.A. CD-ROM has recently come into use in various fields. The author classifies the products into four groups: 1. CD-ROMs that substitute for printed matter such as encyclopedias and dictionaries (e.g., Grolier's Electronic Encyclopedia); 2. CD-ROMs that substitute for online databases (e.g., Disclosure, Medline); 3. CD-ROMs that offer additional functions, such as ordering books, besides information retrieval (e.g., Books in Print Plus); 4. CD-ROMs that contain literature including pictures and figures (e.g., ADONIS). Future trends in CD-ROM utilization are also suggested.

  13. Development of Databases on Iodine in Foods and Dietary Supplements

    PubMed Central

    Ershow, Abby G.; Skeaff, Sheila A.; Merkel, Joyce M.; Pehrsson, Pamela R.

    2018-01-01

    Iodine is an essential micronutrient required for normal growth and neurodevelopment; thus, an adequate intake of iodine is particularly important for pregnant and lactating women, and throughout childhood. Low levels of iodine in the soil and groundwater are common in many parts of the world, often leading to diets that are low in iodine. Widespread salt iodization has eradicated severe iodine deficiency, but mild-to-moderate deficiency is still prevalent even in many developed countries. To understand patterns of iodine intake and to develop strategies for improving intake, it is important to characterize all sources of dietary iodine, and national databases on the iodine content of major dietary contributors (including foods, beverages, water, salts, and supplements) provide a key information resource. This paper discusses the importance of well-constructed databases on the iodine content of foods, beverages, and dietary supplements; the availability of iodine databases worldwide; and factors related to variability in iodine content that should be considered when developing such databases. We also describe current efforts in iodine database development in the United States, the use of iodine composition data to develop food fortification policies in New Zealand, and how iodine content databases might be used when considering the iodine intake and status of individuals and populations. PMID:29342090

  14. The Research Potential of the Electronic OED Database at the University of Waterloo: A Case Study.

    ERIC Educational Resources Information Center

    Berg, Donna Lee

    1991-01-01

    Discusses the history and structure of the online database of the second edition of the Oxford English Dictionary (OED) and the software tools developed at the University of Waterloo to manipulate the unusually complex database. Four sample searches that indicate some types of problems that might be encountered are appended. (DB)

  15. ECG-ViEW II, a freely accessible electrocardiogram database

    PubMed Central

    Park, Man Young; Lee, Sukhoon; Jeon, Min Seok; Yoon, Dukyong; Park, Rae Woong

    2017-01-01

    The Electrocardiogram Vigilance with Electronic data Warehouse II (ECG-ViEW II) is a large, single-center database comprising numeric parameter data of the surface electrocardiograms of all patients who underwent testing from 1 June 1994 to 31 July 2013. The electrocardiographic data include the test date, clinical department, RR interval, PR interval, QRS duration, QT interval, QTc interval, P axis, QRS axis, and T axis. These data are connected with patient age, sex, ethnicity, comorbidities, age-adjusted Charlson comorbidity index, prescribed drugs, and electrolyte levels. This longitudinal observational database contains 979,273 electrocardiograms from 461,178 patients over a 19-year study period. This database can provide an opportunity to study electrocardiographic changes caused by medications, disease, or other demographic variables. ECG-ViEW II is freely available at http://www.ecgview.org. PMID:28437484
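    ECG-ViEW II stores QT, RR and QTc together. As an illustration of how such parameters relate, the widely used Bazett correction can be sketched (a hypothetical Python example; the abstract does not state which correction formula the database applies):

    ```python
    import math

    def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
        """Bazett-corrected QT interval: QTc = QT / sqrt(RR), with RR in seconds.

        qt_ms: measured QT interval in milliseconds
        rr_ms: RR interval in milliseconds
        """
        rr_s = rr_ms / 1000.0
        return qt_ms / math.sqrt(rr_s)

    # At a heart rate of 60 bpm (RR = 1000 ms), QTc equals the raw QT.
    assert qtc_bazett(400.0, 1000.0) == 400.0
    ```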

  16. Towards G2G: Systems of Technology Database Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Bell, David

    2005-01-01

    We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing and modifying components of technology database systems. G2G will interoperate information and knowledge that are distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include the creation of an integrated approach to sustaining effective management of technology investments that supports the ability of various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will use NASA's breakthrough in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database systems.

  17. Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases.

    PubMed

    Seabroke, Suzie; Candore, Gianmario; Juhlin, Kristina; Quarcoo, Naashika; Wisniewski, Antoni; Arani, Ramin; Painter, Jeffery; Tregunno, Philip; Norén, G Niklas; Slattery, Jim

    2016-04-01

    Disproportionality analyses are used in many organisations to identify adverse drug reactions (ADRs) from spontaneous report data. Reporting patterns vary over time, with patient demographics, and between different geographical regions, and therefore subgroup analyses or adjustment by stratification may be beneficial. The objective of this study was to evaluate the performance of subgroup and stratified disproportionality analyses for a number of key covariates within spontaneous report databases of differing sizes and characteristics. Using a reference set of established ADRs, signal detection performance (sensitivity and precision) was compared for stratified, subgroup and crude (unadjusted) analyses within five spontaneous report databases (two company, one national and two international databases). Analyses were repeated for a range of covariates: age, sex, country/region of origin, calendar time period, event seriousness, vaccine/non-vaccine, reporter qualification and report source. Subgroup analyses consistently performed better than stratified analyses in all databases. Subgroup analyses also showed benefits in both sensitivity and precision over crude analyses for the larger international databases, whilst for the smaller databases a gain in precision tended to result in some loss of sensitivity. Additionally, stratified analyses did not increase sensitivity or precision beyond that associated with analytical artefacts of the analysis. The most promising subgroup covariates were age and region/country of origin, although this varied between databases. Subgroup analyses perform better than stratified analyses and should be considered over the latter in routine first-pass signal detection. Subgroup analyses are also clearly beneficial over crude analyses for larger databases, but further validation is required for smaller databases.
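    Disproportionality analysis is commonly operationalized with ratios such as the proportional reporting ratio (PRR); a minimal subgroup-aware sketch is shown below (hypothetical Python, assuming reports are dicts with "drug", "event" and a subgroup field; the study's actual statistics and thresholds are not reproduced here):

    ```python
    def prr(a: int, b: int, c: int, d: int) -> float:
        """Proportional reporting ratio from a 2x2 contingency table:
        a: target drug & target event   b: target drug & other events
        c: other drugs & target event   d: other drugs & other events
        """
        return (a / (a + b)) / (c / (c + d))

    def subgroup_prr(reports, drug, event, key):
        """Compute a PRR separately within each subgroup (e.g. age band),
        mirroring the subgroup analyses compared in the study."""
        out = {}
        for g in {r[key] for r in reports}:
            sub = [r for r in reports if r[key] == g]
            a = sum(r["drug"] == drug and r["event"] == event for r in sub)
            b = sum(r["drug"] == drug and r["event"] != event for r in sub)
            c = sum(r["drug"] != drug and r["event"] == event for r in sub)
            d = sum(r["drug"] != drug and r["event"] != event for r in sub)
            if a and c:  # PRR is undefined without reports in both arms
                out[g] = prr(a, b, c, d)
        return out
    ```

    A stratified analysis would instead combine the per-stratum tables into one adjusted estimate; the subgroup version above reports each stratum separately, which is the approach the study found to perform better.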

  18. Genebanks: a comparison of eight proposed international genetic databases.

    PubMed

    Austin, Melissa A; Harding, Sarah; McElroy, Courtney

    2003-01-01

    To identify and compare population-based genetic databases, or "genebanks", that have been proposed in eight international locations between 1998 and 2002. A genebank can be defined as a stored collection of genetic samples, in the form of blood or tissue, that can be linked with medical and genealogical or lifestyle information from a specific population, gathered using a process of generalized consent. Genebanks were identified by searching Medline and internet search engines with key words such as "genetic database" and "biobank" and by reviewing literature on previously identified databases such as the deCODE project. Genebank characteristics were collected through electronic and literature searches, augmented by correspondence with informed individuals. The proposed genebanks are located in Iceland, the United Kingdom, Estonia, Latvia, Sweden, Singapore, the Kingdom of Tonga, and Quebec, Canada. Comparisons of the genebanks were based on the following criteria: genebank location and description of purpose, role of government, commercial involvement, consent and confidentiality procedures, opposition to the genebank, and current progress. All of the groups proposing the genebanks plan to search for susceptibility genes for complex diseases while attempting to improve public health and medical care in the region and, in some cases, stimulating the local economy through expansion of the biotechnology sector. While all of the identified plans share these purposes, they differ in many aspects, including funding, subject participation, and organization. The balance of government and commercial involvement in the development of each project varies. Genetic samples and health information will be collected from participants and coded in all of the genebanks, but consent procedures range from presumed consent of the entire eligible population to recruitment of volunteers with informed consent. Issues regarding confidentiality and consent have resulted in opposition to

  19. Enabling heterogenous multi-scale database for emergency service functions through geoinformation technologies

    NASA Astrophysics Data System (ADS)

    Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in raster and vector form, linked with non-spatial information. Comprehensive data are required to handle an emergency situation through its different phases. These database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle a particular disaster situation such as flood, cyclone, forest fire, earthquake, landslide or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of suitable resolution for visualisation and analysis in disaster management. The database must therefore be interconnected and comprehensive to meet the requirements of Emergency Management. This kind of integrated, comprehensive and structured database with appropriate information is required to get the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as availability of data, sharing policies, compatible geospatial standards, and data interoperability. Therefore, to facilitate using, sharing, and integrating the spatial data, there is a need to define standards to build

  20. RaftProt: mammalian lipid raft proteome database.

    PubMed

    Shah, Anup; Chen, David; Boda, Akash R; Foster, Leonard J; Davis, Melissa J; Hill, Michelle M

    2015-01-01

    RaftProt (http://lipid-raft-database.di.uq.edu.au/) is a database of mammalian lipid raft-associated proteins as reported in high-throughput mass spectrometry studies. Lipid rafts are specialized membrane microdomains enriched in cholesterol and sphingolipids thought to act as dynamic signalling and sorting platforms. Given their fundamental roles in cellular regulation, there is a plethora of information on the size, composition and regulation of these membrane microdomains, including a large number of proteomics studies. To facilitate the mining and analysis of published lipid raft proteomics studies, we have developed a searchable database RaftProt. In addition to browsing the studies, performing basic queries by protein and gene names, searching experiments by cell, tissue and organisms; we have implemented several advanced features to facilitate data mining. To address the issue of potential bias due to biochemical preparation procedures used, we have captured the lipid raft preparation methods and implemented advanced search option for methodology and sample treatment conditions, such as cholesterol depletion. Furthermore, we have identified a list of high confidence proteins, and enabled searching only from this list of likely bona fide lipid raft proteins. Given the apparent biological importance of lipid raft and their associated proteins, this database would constitute a key resource for the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  1. Development of a standardized Intranet database of formulation records for nonsterile compounding, Part 2.

    PubMed

    Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela

    2012-01-01

    In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.

  2. THE CELL CENTERED DATABASE PROJECT: AN UPDATE ON BUILDING COMMUNITY RESOURCES FOR MANAGING AND SHARING 3D IMAGING DATA

    PubMed Central

    Martone, Maryann E.; Tran, Joshua; Wong, Willy W.; Sargis, Joy; Fong, Lisa; Larson, Stephen; Lamont, Stephan P.; Gupta, Amarnath; Ellisman, Mark H.

    2008-01-01

    Databases have become integral parts of data management, dissemination and mining in biology. At the Second Annual Conference on Electron Tomography, held in Amsterdam in 2001, we proposed that electron tomography data should be shared in a manner analogous to structural data at the protein and sequence scales. At that time, we outlined our progress in creating a database to bring together cell level imaging data across scales, The Cell Centered Database (CCDB). The CCDB was formally launched in 2002 as an on-line repository of high-resolution 3D light and electron microscopic reconstructions of cells and subcellular structures. It contains 2D, 3D and 4D structural and protein distribution information from confocal, multiphoton and electron microscopy, including correlated light and electron microscopy. Many of the data sets are derived from electron tomography of cells and tissues. In the five years since its debut, we have moved the CCDB from a prototype to a stable resource and expanded the scope of the project to include data management and knowledge engineering. Here we provide an update on the CCDB and how it is used by the scientific community. We also describe our work in developing additional knowledge tools, e.g., ontologies, for annotation and query of electron microscopic data. PMID:18054501

  3. SciELO, Scientific Electronic Library Online, a Database of Open Access Journals

    ERIC Educational Resources Information Center

    Meneghini, Rogerio

    2013-01-01

    This essay discusses SciELO, a scientific journal database operating in 14 countries. It covers over 1000 journals providing open access to full text and table sets of scientometrics data. In Brazil it is responsible for a collection of nearly 300 journals, selected along 15 years as the best Brazilian periodicals in natural and social sciences.…

  4. Electronic cigarettes and nicotine clinical pharmacology.

    PubMed

    Schroeder, Megan J; Hoffman, Allison C

    2014-05-01

    To review the available literature evaluating electronic cigarette (e-cigarette) nicotine clinical pharmacology in order to understand the potential impact of e-cigarettes on individual users, nicotine dependence and public health. Literature searches were conducted between 1 October 2012 and 30 September 2013 using key terms in five electronic databases. Studies were included in the review if they were in English and publicly available; non-clinical studies, conference abstracts and studies exclusively measuring nicotine content in e-cigarette cartridges were excluded from the review. Nicotine yields from automated smoking machines suggest that e-cigarettes deliver less nicotine per puff than traditional cigarettes, and clinical studies indicate that e-cigarettes deliver only modest nicotine concentrations to the inexperienced e-cigarette user. However, current e-cigarette smokers are able to achieve systemic nicotine and/or cotinine concentrations similar to those produced from traditional cigarettes. Therefore, user experience is critically important for nicotine exposure, and may contribute to the products' ability to support and maintain nicotine dependence. Knowledge about e-cigarette nicotine pharmacology remains limited. Because a user's e-cigarette experience may significantly impact nicotine delivery, future nicotine pharmacokinetic and pharmacodynamic studies should be conducted in experienced users to accurately assess the products' impact on public health.

  5. Electronic cigarettes and nicotine clinical pharmacology

    PubMed Central

    Schroeder, Megan J; Hoffman, Allison C

    2014-01-01

    Objective To review the available literature evaluating electronic cigarette (e-cigarette) nicotine clinical pharmacology in order to understand the potential impact of e-cigarettes on individual users, nicotine dependence and public health. Methods Literature searches were conducted between 1 October 2012 and 30 September 2013 using key terms in five electronic databases. Studies were included in the review if they were in English and publicly available; non-clinical studies, conference abstracts and studies exclusively measuring nicotine content in e-cigarette cartridges were excluded from the review. Results Nicotine yields from automated smoking machines suggest that e-cigarettes deliver less nicotine per puff than traditional cigarettes, and clinical studies indicate that e-cigarettes deliver only modest nicotine concentrations to the inexperienced e-cigarette user. However, current e-cigarette smokers are able to achieve systemic nicotine and/or cotinine concentrations similar to those produced from traditional cigarettes. Therefore, user experience is critically important for nicotine exposure, and may contribute to the products’ ability to support and maintain nicotine dependence. Conclusions Knowledge about e-cigarette nicotine pharmacology remains limited. Because a user's e-cigarette experience may significantly impact nicotine delivery, future nicotine pharmacokinetic and pharmacodynamic studies should be conducted in experienced users to accurately assess the products’ impact on public health. PMID:24732160

  6. Effects of electronic cigarette smoking on human health.

    PubMed

    Meo, S A; Al Asiri, S A

    2014-01-01

    Electronic cigarette smoking is gaining dramatic popularity and is steadily spreading among adolescents and the high-income, urban population around the world. The aim of this study is to highlight the hazards of e-cigarette smoking for human health. In this study, we identified 38 published studies through systematic database searches, including ISI Web of Science and PubMed. We searched the related literature using key words including electronic cigarette, e-cigarette, e-vapers, incidence, and hazards. Studies in which the hazards of electronic cigarette smoking were investigated were included. No limitations on publication status or study design were imposed. Finally, we included 28 publications; the remaining 10 were excluded. E-smoking can cause nausea, vomiting, headache, dizziness, choking, burn injuries, upper respiratory tract irritation, dry cough, dryness of the eyes and mucous membranes, release of cytokines and pro-inflammatory mediators, allergic airway inflammation, decreased exhaled nitric oxide (FeNO) synthesis in the lungs, changes in bronchial gene expression, and risk of lung cancer. Electronic cigarettes are swiftly promoted as an alternative to conventional cigarette smoking, although their use is highly controversial. Electronic cigarettes are not a smoking-cessation product. Non-scientific claims about e-cigarettes are creating confusion in public perception, leading people to believe that e-cigarettes are safe and less addictive, but their use is unsafe and hazardous to human health. E-cigarette smoking should be regulated in the same way as traditional cigarettes and must be prohibited to children and adolescents.

  7. ETHNOS: A versatile electronic tool for the development and curation of national genetic databases

    PubMed Central

    2010-01-01

    National and ethnic mutation databases (NEMDBs) are emerging online repositories, recording extensive information about the described genetic heterogeneity of an ethnic group or population. These resources facilitate the provision of genetic services and provide a comprehensive list of genomic variations among different populations. As such, they enhance awareness of the various genetic disorders. Here, we describe the features of the ETHNOS software, a simple but versatile tool based on a flat-file database that is specifically designed for the development and curation of NEMDBs. ETHNOS is a freely available software which runs more than half of the NEMDBs currently available. Given the emerging need for NEMDB in genetic testing services and the fact that ETHNOS is the only off-the-shelf software available for NEMDB development and curation, its adoption in subsequent NEMDB development would contribute towards data content uniformity, unlike the diverse contents and quality of the available gene (locus)-specific databases. Finally, we allude to the potential applications of NEMDBs, not only as worldwide central allele frequency repositories, but also, and most importantly, as data warehouses of individual-level genomic data, hence allowing for a comprehensive ethnicity-specific documentation of genomic variation. PMID:20650823

  8. ETHNOS : A versatile electronic tool for the development and curation of national genetic databases.

    PubMed

    van Baal, Sjozef; Zlotogora, Joël; Lagoumintzis, George; Gkantouna, Vassiliki; Tzimas, Ioannis; Poulas, Konstantinos; Tsakalidis, Athanassios; Romeo, Giovanni; Patrinos, George P

    2010-06-01

    National and ethnic mutation databases (NEMDBs) are emerging online repositories, recording extensive information about the described genetic heterogeneity of an ethnic group or population. These resources facilitate the provision of genetic services and provide a comprehensive list of genomic variations among different populations. As such, they enhance awareness of the various genetic disorders. Here, we describe the features of the ETHNOS software, a simple but versatile tool based on a flat-file database that is specifically designed for the development and curation of NEMDBs. ETHNOS is a freely available software which runs more than half of the NEMDBs currently available. Given the emerging need for NEMDB in genetic testing services and the fact that ETHNOS is the only off-the-shelf software available for NEMDB development and curation, its adoption in subsequent NEMDB development would contribute towards data content uniformity, unlike the diverse contents and quality of the available gene (locus)-specific databases. Finally, we allude to the potential applications of NEMDBs, not only as worldwide central allele frequency repositories, but also, and most importantly, as data warehouses of individual-level genomic data, hence allowing for a comprehensive ethnicity-specific documentation of genomic variation.

  9. KeyWare: an open wireless distributed computing environment

    NASA Astrophysics Data System (ADS)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist for LAN-based applications. A wireless distributed computing environment (KeyWareTM), based on intelligent agents within a multiple-client, multiple-server scheme, was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput, and transmission costs. A unified network management paradigm for both wireless and wireline nodes facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to the supported KeyWare paradigms. The open-architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems, and infrastructures while maintaining application portability.

  10. An ontology-based method for secondary use of electronic dental record data

    PubMed Central

    Schleyer, Titus KL; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P.; Liu, Kaihong; Hernandez, Pedro

    A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based, method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance. PMID:24303273

  11. An ontology-based method for secondary use of electronic dental record data.

    PubMed

    Schleyer, Titus Kl; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P; Liu, Kaihong; Hernandez, Pedro

    2013-01-01

    A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based, method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance.
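The abstract above describes extracting patient, tooth, surface, and restoration data from an ontology-backed knowledge base via SPARQL queries. As a simplified stand-in for a SPARQL engine, the sketch below implements basic graph-pattern matching over an in-memory triple store; all entity and predicate names are hypothetical, not actual OHD identifiers.

```python
# Minimal in-memory triple store illustrating the pattern-matching idea behind
# the SPARQL queries described above. Names here are invented for illustration.

TRIPLES = [
    ("patient1", "has_tooth", "tooth_3"),
    ("tooth_3", "has_surface", "occlusal"),
    ("tooth_3", "has_restoration", "amalgam_filling_1"),
    ("patient2", "has_tooth", "tooth_14"),
    ("tooth_14", "has_restoration", "composite_filling_1"),
]

def match(pattern, triples):
    """Return variable bindings for one triple pattern; variables start with '?'."""
    results = []
    for triple in triples:
        binding = {}
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t
            elif p != t:
                break
        else:
            results.append(binding)
    return results

def query(patterns, triples):
    """Join several triple patterns, analogous to a SPARQL basic graph pattern."""
    rows = [{}]
    for pattern in patterns:
        new_rows = []
        for row in rows:
            # Substitute variables already bound in this row before matching.
            bound = tuple(row.get(p, p) for p in pattern)
            for binding in match(bound, triples):
                new_rows.append({**row, **binding})
        rows = new_rows
    return rows

# "Which teeth of which patients carry a restoration?"
answers = query(
    [("?patient", "has_tooth", "?tooth"),
     ("?tooth", "has_restoration", "?restoration")],
    TRIPLES,
)
for row in answers:
    print(row["?patient"], row["?tooth"], row["?restoration"])
```

A real deployment would issue the equivalent SPARQL against an RDF store holding the OHD-annotated records; this sketch only shows why such extraction reduces to pattern joins.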

  12. Proposal of Network-Based Multilingual Space Dictionary Database System

    NASA Astrophysics Data System (ADS)

    Yoshimitsu, T.; Hashimoto, T.; Ninomiya, K.

    2002-01-01

    The International Academy of Astronautics (IAA) is now constructing a multilingual dictionary database system of space-related terms. The database consists of a lexicon and dictionaries for multiple languages. The lexicon is a table relating corresponding terminology across languages; each language has a dictionary containing terms and their definitions. The database is intended for use on the internet: terms and definitions are updated and searched via the network, and the database is maintained through international cooperation. New words arise day by day, so easy entry of new words and their definitions is required for the long-term success of the system. The main key of the database is an English term approved at meetings held once or twice with the working group members. Each language has at least one working group member who is responsible for assigning the corresponding term, and its definition, in his or her native language. Terms and definitions can be entered and updated via the internet from each member's office, which may be located in his or her native country. The system is built on a freely distributed database server program running on the Linux operating system, which will be installed at the head office of the IAA. Once installed, it will be open to all IAA members, who can search the terms via the internet. The authors are currently constructing the prototype system described in this paper.
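The lexicon-plus-dictionaries design described above can be sketched as a table keyed by the approved English term, with per-language dictionaries holding definitions. All entries below are illustrative placeholders, not content from the IAA database.

```python
# Sketch of the lexicon table: each approved English term keys a row of
# corresponding terms in other languages. Entries are invented examples.

LEXICON = {
    "satellite": {"fr": "satellite", "de": "Satellit"},
    "orbit":     {"fr": "orbite",    "de": "Umlaufbahn"},
}

# Each language additionally has a dictionary of definitions.
DEFINITIONS = {
    "en": {"orbit": "The path of a body around another under gravity."},
}

def translate(english_term, language):
    """Look up the corresponding term in the given language, or None."""
    return LEXICON.get(english_term, {}).get(language)

def definition(term, language="en"):
    """Look up a term's definition in one language's dictionary."""
    return DEFINITIONS.get(language, {}).get(term)

def add_term(english_term, translations):
    """Add or update a lexicon row; mirrors the online update workflow."""
    LEXICON.setdefault(english_term, {}).update(translations)

add_term("launch vehicle", {"fr": "lanceur"})
print(translate("orbit", "de"))
print(translate("launch vehicle", "fr"))
print(definition("orbit"))
```

Using the English term as the primary key, as the abstract proposes, keeps the per-language dictionaries independent while the lexicon row ties them together.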

  13. Privacy-enhanced electronic mail

    NASA Astrophysics Data System (ADS)

    Bishop, Matt

    1990-06-01

    The security of electronic mail sent through the Internet may be described in exactly three words: there is none. The Privacy and Security Research Group has recommended implementing mechanisms designed to provide security enhancements. The first set of mechanisms provides a protocol for privacy, integrity, and authentication of electronic mail; the second provides a certificate-based key management infrastructure for key distribution throughout the Internet, in support of the first set. These mechanisms are described, along with the reasons behind their selection and how they can be used to provide some measure of security in the exchange of electronic mail.
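The integrity-and-authentication part of such a protocol can be illustrated with a protect-then-verify sketch using a message authentication code. Note the simplification: the Privacy-Enhanced Mail proposal uses certificate-based public-key management, whereas this stdlib sketch assumes a pre-shared key purely to show the tamper-detection pattern.

```python
import hmac
import hashlib

# Simplified sketch of message integrity and authentication via a shared-key
# MAC. Real Privacy-Enhanced Mail relies on certificates and public-key
# cryptography; this only illustrates protect-then-verify.

def protect(key, body):
    """Attach a MAC so the recipient can detect tampering."""
    mac = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body, mac

def verify(key, body, mac):
    """Recompute the MAC and compare in constant time."""
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

key = b"shared-secret"                 # hypothetical pre-shared key
body, mac = protect(key, b"Meet at noon.")
print(verify(key, body, mac))          # untampered message verifies
print(verify(key, b"Meet at ONE.", mac))  # modification is detected
```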

  14. FDA toxicity databases and real-time data entry.

    PubMed

    Arvidson, Kirk B

    2008-11-15

    Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.

  15. FDA toxicity databases and real-time data entry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arvidson, Kirk B.

    Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.

  16. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions is studied for the evaluation of one such measure, the number of transaction rollbacks, in a partitioned distributed database system. Six probabilistic models are developed, with expressions for the number of rollbacks under each; essentially, the models differ in the system information available to them. The analytical results so obtained are compared to results from simulation. From this, it is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.

  17. Effects of distributed database modeling on evaluation of transaction rollbacks

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
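The simulation side of a study like the one above can be sketched with a Monte Carlo model. The conflict rule below (a transaction rolls back if it shares a data item with an earlier transaction in the batch) is an invented illustrative model, not one of the paper's six.

```python
import random

# Monte Carlo sketch of transaction rollbacks: each transaction in a batch
# touches a random set of data items and rolls back on conflict with an
# earlier transaction. Illustrative model only.

def simulate_rollbacks(n_items, n_txns, items_per_txn, trials, seed=0):
    rng = random.Random(seed)
    rollbacks = 0
    for _ in range(trials):
        held = set()                    # items touched so far in this batch
        for _ in range(n_txns):
            access = set(rng.sample(range(n_items), items_per_txn))
            if access & held:           # conflict with an earlier transaction
                rollbacks += 1
            else:
                held |= access
    return rollbacks / (trials * n_txns)

# More concurrent transactions over the same data -> more rollbacks.
low = simulate_rollbacks(n_items=1000, n_txns=5, items_per_txn=4, trials=2000)
high = simulate_rollbacks(n_items=1000, n_txns=50, items_per_txn=4, trials=2000)
print(low, high)
```

Comparing such simulated rates against a closed-form estimate is exactly the kind of check the abstract describes when it contrasts the analytical models with simulation.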

  18. New Technology, New Questions: Using an Internet Database in Chemistry.

    ERIC Educational Resources Information Center

    Hayward, Roger

    1996-01-01

    Describes chemistry software that is part of a balanced educational program. Provides several applications including graphs of various relationships among the elements. Includes a brief historical treatment of the periodic table and compares the traditional historical approach with perspectives gained by manipulating an electronic database. (DDR)

  19. MitBASE : a comprehensive and integrated mitochondrial DNA database. The present status

    PubMed Central

    Attimonelli, M.; Altamura, N.; Benne, R.; Brennicke, A.; Cooper, J. M.; D’Elia, D.; Montalvo, A. de; Pinto, B. de; De Robertis, M.; Golik, P.; Knoop, V.; Lanave, C.; Lazowska, J.; Licciulli, F.; Malladi, B. S.; Memeo, F.; Monnerot, M.; Pasimeni, R.; Pilbout, S.; Schapira, A. H. V.; Sloof, P.; Saccone, C.

    2000-01-01

    MitBASE is an integrated and comprehensive database of mitochondrial DNA data which collects, under a single interface, databases for Plant, Vertebrate, Invertebrate, Human, Protist and Fungal mtDNA and a Pilot database on nuclear genes involved in mitochondrial biogenesis in Saccharomyces cerevisiae. MitBASE reports all available information from different organisms and from intraspecies variants and mutants. Data have been drawn from the primary databases and from the literature; value-adding information has been structured, e.g., editing information on protist mtDNA genomes, pathological information for human mtDNA variants, etc. The different databases, some of which are structured using commercial packages (Microsoft Access, File Maker Pro) while others use a flat-file format, have been integrated under ORACLE. Ad hoc retrieval systems have been devised for some of the above listed databases, taking their peculiarities into account. The database is resident at the EBI and is available at the following site: http://www3.ebi.ac.uk/Research/Mitbase/mitbase.pl . The impact of this project is intended for both basic and applied research. The study of mitochondrial genetic diseases and mitochondrial DNA intraspecies diversity are key topics in several biotechnological fields. The database has been funded within the EU Biotechnology programme. PMID:10592207

  20. Correspondence: World Wide Web access to the British Universities Human Embryo Database

    PubMed Central

    AITON, JAMES F.; MCDONOUGH, ARIANA; MCLACHLAN, JOHN C.; SMART, STEVEN D.; WHITEN, SUSAN C.

    1997-01-01

    The British Universities Human Embryo Database has been created by merging information from the Walmsley Collection of Human Embryos at the School of Biological and Medical Sciences, University of St Andrews and from the Boyd Collection of Human Embryos at the Department of Anatomy, University of Cambridge. The database has been made available electronically on the Internet and World Wide Web browsers can be used to implement interactive access to the information stored in the British Universities Human Embryo Database. The database can, therefore, be accessed and searched from remote sites and specific embryos can be identified in terms of their location, age, developmental stage, plane of section, staining technique, and other parameters. It is intended to add information from other similar collections in the UK as it becomes available. PMID:9034891

  1. Legislation on violence against women: overview of key components.

    PubMed

    Ortiz-Barreda, Gaby; Vives-Cases, Carmen

    2013-01-01

    This study aimed to determine if legislation on violence against women (VAW) worldwide contains key components recommended by the Pan American Health Organization (PAHO) and the United Nations (UN) to help strengthen VAW prevention and provide better integrated victim protection, support, and care. A systematic search for VAW legislation using international legal databases and other electronic sources plus data from previous research identified 124 countries/territories with some type of VAW legislation. Full legal texts were found for legislation from 104 countries/territories. Those available in English, Portuguese, and Spanish were downloaded and compiled and the selection criteria applied (use of any of the common terms related to VAW, including intimate partner violence (IPV), and reference to at least two of six sectors (education, health, judicial system, mass media, police, and social services) with regard to VAW interventions (protection, support, and care). A final sample from 80 countries/territories was selected and analyzed for the presence of key components recommended by PAHO and the UN (reference to the term "violence against women" in the title; definitions of different types of VAW; identification of women as beneficiaries; and promotion of (reference to) the participation of multiple sectors in VAW interventions). Few countries/territories specifically identified women as the beneficiaries of their VAW legislation, including those that labeled their legislation "domestic violence" law (n = 51), of which only two explicitly mentioned women as complainants/survivors. Only 28 countries/territories defined the main forms of VAW (economic, physical, psychological, and sexual) in their VAW legislation. Most highlighted the role of the judicial system, followed by that of social services and the police. Only 28 mentioned the health sector. Despite considerable efforts worldwide to strengthen VAW legislation, most VAW laws do not incorporate the key components recommended by PAHO and the UN.

  2. Antiepileptic drug use in seven electronic health record databases in Europe: a methodologic comparison.

    PubMed

    de Groot, Mark C H; Schuerch, Markus; de Vries, Frank; Hesse, Ulrik; Oliva, Belén; Gil, Miguel; Huerta, Consuelo; Requena, Gema; de Abajo, Francisco; Afonso, Ana S; Souverein, Patrick C; Alvarez, Yolanda; Slattery, Jim; Rottenkolber, Marietta; Schmiedl, Sven; Van Dijk, Liset; Schlienger, Raymond G; Reynolds, Robert; Klungel, Olaf H

    2014-05-01

    The annual prevalence of antiepileptic drug (AED) prescribing reported in the literature differs considerably among European countries due to the use of different types of data sources, time periods, population distributions, and methodologic differences. This study aimed to measure the prevalence of AED prescribing across seven European routine health care databases in Spain, Denmark, The Netherlands, the United Kingdom, and Germany using a standardized methodology and to investigate sources of variation. Analyses of the annual prevalence of AEDs were stratified by sex, age, and AED. Overall prevalences were standardized to the European 2008 reference population. Prevalence of any AED varied from 88 per 10,000 persons (The Netherlands) to 144 per 10,000 in Spain and Denmark in 2001. In all databases, prevalence increased linearly each year since 2001, from 6% annually in Denmark to 15% in Spain. This increase could be attributed entirely to an increase in "new," recently marketed AEDs, while the prevalence of AEDs that have been available since the mid-1990s hardly changed. AED use increased with age for both female and male patients up to the ages of 80 to 89 years and tended to be somewhat higher in female than in male patients between the ages of 40 and 70. No differences between databases in the number of AEDs used simultaneously by a patient were found. We showed that during the study period of 2001-2009, AED prescribing increased in five European Union (EU) countries and that this increase was due entirely to the newer AEDs marketed since the 1990s. Using a standardized methodology, we showed consistent trends across databases and countries over time. Differences in age and sex distribution explained only part of the variation between countries. Therefore, the remaining variation in AED use must originate from other differences in national health care systems. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
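Standardizing prevalences to a reference population, as done above with the European 2008 reference population, means weighting each age-specific rate by the reference population's share of that age group (direct standardization). The figures below are hypothetical, chosen only to make the arithmetic concrete.

```python
# Direct standardization: weight age-specific prevalence by the reference
# population's age-group sizes. All numbers here are invented examples.

reference_pop = {"0-39": 500_000, "40-69": 350_000, "70+": 150_000}

# Crude age-specific prevalence per 10,000 persons in one database.
age_specific = {"0-39": 40, "40-69": 120, "70+": 260}

def standardize(rates, ref_pop):
    """Population-weighted average of age-specific rates."""
    total = sum(ref_pop.values())
    return sum(rates[g] * ref_pop[g] for g in rates) / total

std = standardize(age_specific, reference_pop)
print(std)  # standardized prevalence per 10,000
# (40*500000 + 120*350000 + 260*150000) / 1_000_000 = 101.0
```

Applying the same reference weights to every database removes differences in national age structure, which is why the abstract can attribute residual variation to the health care systems themselves.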

  3. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground-motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.

  4. Analysis of Landslide Hazard Impact Using the Landslide Database for Germany

    NASA Astrophysics Data System (ADS)

    Klose, M.; Damm, B.

    2014-12-01

    The Federal Republic of Germany has long been among the few European countries lacking a national landslide database. Systematic collection and inventorying of landslide data nevertheless has a long research history in Germany, albeit one focused on databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap at the national level. The present contribution reports on this project, which is based on a landslide database that has evolved over the last 15 years into one covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with broad potential for application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files, dating back to the 12th century. All types of landslides are covered, and the database stores not only core attributes but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database while enabling data sharing and knowledge transfer via a web GIS platform. In this contribution, the goals and research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, followed by the results of several case studies in the German Central Uplands. The case study results exemplify database application in the analysis of vulnerability to landslides, impact statistics, and hazard and cost modeling.

  5. Database resources of the National Center for Biotechnology Information.

    PubMed

    Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian

    2011-01-01

    In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Electronic PCR, OrfFinder, Splign, ProSplign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), IBIS, Biosystems, Peptidome, OMSSA, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.

  6. Database resources of the National Center for Biotechnology Information.

    PubMed

    Wheeler, David L; Barrett, Tanya; Benson, Dennis A; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Geer, Lewis Y; Kapustin, Yuri; Khovayko, Oleg; Landsman, David; Lipman, David J; Madden, Thomas L; Maglott, Donna R; Ostell, James; Miller, Vadim; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Steven T; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusov, Roman L; Tatusova, Tatiana A; Wagner, Lukas; Yaschenko, Eugene

    2007-01-01

    In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, the Entrez Programming Utilities, My NCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link(BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genome, Genome Project and related tools, the Trace and Assembly Archives, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs), Viral Genotyping Tools, Influenza Viral Resources, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART) and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. These resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.

  7. Methodological challenges when evaluating potential off-label prescribing of drugs using electronic health care databases: A case study of dabigatran etexilate in Europe.

    PubMed

    Cainzos-Achirica, Miguel; Varas-Lorenzo, Cristina; Pottegård, Anton; Asmar, Joelle; Plana, Estel; Rasmussen, Lotte; Bizouard, Geoffray; Forns, Joan; Hellfritzsch, Maja; Zint, Kristina; Perez-Gutthann, Susana; Pladevall-Vila, Manel

    2018-03-23

    To report and discuss estimated prevalence of potential off-label use and associated methodological challenges using a case study of dabigatran. Observational, cross-sectional study using 3 databases with different types of clinical information available: Cegedim Strategic Data Longitudinal Patient Database (CSD-LPD), France (cardiologist panel, n = 1706; general practitioner panel, n = 2813; primary care data); National Health Databases, Denmark (n = 28 619; hospital episodes and dispensed ambulatory medications); and Clinical Practice Research Datalink (CPRD), UK (linkable to Hospital Episode Statistics [HES], n = 2150; not linkable, n = 1285; primary care data plus hospital data for HES-linkable patients). August 2011 to August 2015. Two definitions were used to estimate potential off-label use: a broad definition of on-label prescribing using codes for disease indication (eg, atrial fibrillation [AF]), and a restrictive definition excluding patients with conditions for which dabigatran is not indicated (eg, valvular AF). Prevalence estimates under the broad definition ranged from 5.7% (CPRD-HES) to 34.0% (CSD-LPD) and, under the restrictive definition, from 17.4% (CPRD-HES) to 44.1% (CSD-LPD). For the majority of potential off-label users, no diagnosis potentially related to anticoagulant use was identified. Key methodological challenges were the limited availability of detailed clinical information, likely leading to overestimation of off-label use, and differences in the information available, which may explain the disparate prevalence estimates across data sources. Estimates of potential off-label use should be interpreted cautiously due to limitations in available information. In this context, CPRD HES-linkable estimates are likely to be the most accurate. Copyright © 2018 John Wiley & Sons, Ltd.
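The two definitions above can be made concrete by classifying patients against code lists: under the broad on-label definition, any indication code counts as on-label; under the restrictive definition, patients with conditions for which the drug is not indicated are counted as off-label even when an indication code is present. The patient records and code lists below are invented for illustration; the restrictive definition necessarily yields an off-label prevalence at least as high as the broad one, matching the ordering of the reported estimates.

```python
# Sketch of the broad vs. restrictive off-label definitions applied to
# hypothetical patient records with diagnosis codes. Code lists are invented.

ON_LABEL = {"atrial_fibrillation"}
EXCLUDED = {"valvular_AF"}   # conditions for which the drug is not indicated

patients = [
    {"id": 1, "codes": {"atrial_fibrillation"}},
    {"id": 2, "codes": {"atrial_fibrillation", "valvular_AF"}},
    {"id": 3, "codes": {"hypertension"}},
]

def off_label_broad(p):
    """Broad on-label definition: any indication code makes the use on-label."""
    return not (p["codes"] & ON_LABEL)

def off_label_restrictive(p):
    """Restrictive definition: excluded conditions are off-label regardless."""
    return off_label_broad(p) or bool(p["codes"] & EXCLUDED)

broad = sum(off_label_broad(p) for p in patients) / len(patients)
restrictive = sum(off_label_restrictive(p) for p in patients) / len(patients)
print(broad, restrictive)
```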

  8. Fingerprint-Based Structure Retrieval Using Electron Density

    PubMed Central

    Yin, Shuangye; Dokholyan, Nikolay V.

    2010-01-01

    We present a computational approach that can quickly search a large protein structural database to identify structures that fit a given electron density, such as determined by cryo-electron microscopy. We use geometric invariants (fingerprints) constructed using 3D Zernike moments to describe the electron density, and reduce the problem of fitting of the structure to the electron density to simple fingerprint comparison. Using this approach, we are able to screen the entire Protein Data Bank and identify structures that fit two experimental electron densities determined by cryo-electron microscopy. PMID:21287628
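Reducing density fitting to fingerprint comparison, as described above, turns screening into a nearest-neighbour search over fixed-length invariant vectors. The sketch below uses plain Euclidean distance between invented three-component vectors; computing actual 3D Zernike moments from an electron density is beyond this sketch.

```python
import math

# Fingerprint-based screening sketch: each database structure (and the query
# density) is reduced to an invariant vector, and fitting becomes ranking by
# vector distance. The vectors below are invented, not real Zernike moments.

database = {
    "structA": [0.90, 0.10, 0.40],
    "structB": [0.20, 0.80, 0.70],
    "structC": [0.85, 0.15, 0.35],
}

def distance(a, b):
    """Euclidean distance between two fingerprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def screen(query, db, top=2):
    """Rank all database entries by fingerprint distance to the query."""
    ranked = sorted(db, key=lambda name: distance(query, db[name]))
    return ranked[:top]

query_fingerprint = [0.88, 0.12, 0.38]   # fingerprint of the query density
print(screen(query_fingerprint, database))
```

Because the comparison is a cheap vector operation, the whole Protein Data Bank can be scanned per query, which is the speed advantage the abstract highlights.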

  9. Technical implementation of an Internet address database with online maintenance module.

    PubMed

    Mischke, K L; Bollmann, F; Ehmer, U

    2002-01-01

The article describes the technical implementation and management of the Internet address database of the Center for ZMK of the University of Münster Dental School, which is integrated into the "ZMK-Web" website. The editorially maintained system stays current chiefly through an electronically organized division of labor, supported by an online maintenance module programmed in JavaScript/PHP, and through a database-driven feedback function that offers website visitors configuration-independent direct-mail windows, likewise programmed in JavaScript/PHP.

  10. Database resources of the National Center for Biotechnology Information: 2002 update

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.

    2002-01-01

In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, Human-Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:11752242

  11. Key issues surrounding the health impacts of electronic nicotine delivery systems (ENDS) and other sources of nicotine.

    PubMed

    Drope, Jeffrey; Cahn, Zachary; Kennedy, Rosemary; Liber, Alex C; Stoklosa, Michal; Henson, Rosemarie; Douglas, Clifford E; Drope, Jacqui

    2017-11-01

Over the last decade, the use of electronic nicotine delivery systems (ENDS), including the electronic cigarette or e-cigarette, has grown rapidly. More youth now use ENDS than any tobacco product. This extensive research review shows that there are scientifically sound, sometimes competing arguments about ENDS that are not immediately and/or completely resolvable. However, the preponderance of the scientific evidence to date suggests that current-generation ENDS products are demonstrably less harmful than combustible tobacco products such as conventional cigarettes in several key ways, including by generating far lower levels of carcinogens and other toxic compounds than combustible products or those that contain tobacco. To place ENDS in context, the authors begin by reviewing the trends in use of major nicotine-containing products. Because nicotine is the common core constituent, and a highly addictive one, across all tobacco products, its toxicology is examined. With its long history as the only nicotine product widely accepted as being relatively safe, nicotine-replacement therapy (NRT) is also examined. A section is also included that examines snus, the most debated potential harm-reduction product before ENDS. Between discussions of NRT and snus, ENDS are extensively examined: what they are, knowledge about their level of "harm," their relationship to smoking cessation, the so-called gateway effect, and dual use/poly-use. CA Cancer J Clin 2017;67:449-471. © 2017 American Cancer Society.

  12. Design and engineering of photosynthetic light-harvesting and electron transfer using length, time, and energy scales.

    PubMed

    Noy, Dror; Moser, Christopher C; Dutton, P Leslie

    2006-02-01

    Decades of research on the physical processes and chemical reaction-pathways in photosynthetic enzymes have resulted in an extensive database of kinetic information. Recently, this database has been augmented by a variety of high and medium resolution crystal structures of key photosynthetic enzymes that now include the two photosystems (PSI and PSII) of oxygenic photosynthetic organisms. Here, we examine the currently available structural and functional information from an engineer's point of view with the long-term goal of reproducing the key features of natural photosystems in de novo designed and custom-built molecular solar energy conversion devices. We find that the basic physics of the transfer processes, namely, the time constraints imposed by the rates of incoming photon flux and the various decay processes allow for a large degree of tolerance in the engineering parameters. Moreover, we find that the requirements to guarantee energy and electron transfer rates that yield high efficiency in natural photosystems are largely met by control of distance between chromophores and redox cofactors. Thus, for projected de novo designed constructions, the control of spatial organization of cofactor molecules within a dense array is initially given priority. Nevertheless, constructions accommodating dense arrays of different cofactors, some well within 1 nm from each other, still presents a significant challenge for protein design.

  13. Biological agents database in the armed forces.

    PubMed

    Niemcewicz, Marcin; Kocik, Janusz; Bielecka, Anna; Wierciński, Michał

    2014-10-01

Rapid detection and identification of the biological agent during both natural and deliberate outbreaks is crucial for implementing appropriate control measures and procedures to mitigate the spread of disease. Determining a pathogen's etiology may not only support epidemiological investigation and human safety, but also enhance forensic efforts in pathogen tracing, collection of evidence and correct inference. The article presents the objectives of the Biological Agents Database, which was developed for the Ministry of National Defense of the Republic of Poland under the European Defence Agency framework. The Biological Agents Database is an electronic catalogue of genetic markers of highly dangerous pathogens and biological agents of concern as weapons of mass destruction, providing full identification of biological threats emerging in Poland and in locations where Polish troops operate. The Biological Agents Database is a supportive tool for tracing the origin of biological agents as well as for rapid identification of an agent causing disease of unknown etiology. It also supports diagnosis, analysis, response and the exchange of information between the institutions that use it. Therefore, it can be used not only for military purposes, but also in a civilian environment.

  14. TriatoKey: a web and mobile tool for biodiversity identification of Brazilian triatomine species

    PubMed Central

de Oliveira, Luciana Márcia; Nogueira de Brito, Raissa; Souza Guimarães, Paul Anderson; Mastrângelo Amaro dos Santos, Rômulo Vitor; Gonçalves Diotaiuti, Liléia; de Cássia Moreira de Souza, Rita

    2017-01-01

Triatomines are blood-sucking insects that transmit the causative agent of Chagas disease, Trypanosoma cruzi. Although recognized as a difficult task, correct taxonomic identification of triatomine species is crucial for vector control in Latin America, where the disease is endemic. In this context, we have developed a web and mobile tool, based on a PostgreSQL database, to help healthcare technicians overcome the difficulties of identifying triatomine vectors when technical expertise is missing. The web and mobile version uses pictures of real triatomine specimens and the dichotomous key method to support the identification of potential vectors that occur in Brazil. It provides an example-driven user interface with simple language. TriatoKey can also be useful for educational purposes. Database URL: http://triatokey.cpqrr.fiocruz.br PMID:28605769
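
    The dichotomous key method named above is easy to sketch in code: each couplet is a yes/no question with two branches, and answering questions walks the user to a leaf. This is a generic illustration, not TriatoKey's actual data model; the couplets and genus names are invented.

```python
# A dichotomous key as a nested structure: each internal node is a question with
# a yes-branch and a no-branch; leaves are taxon names. All content is invented.
KEY = ("Head longer than wide?",
       ("Wing membrane darkened?", "Genus A", "Genus B"),
       ("Antennae inserted close to eyes?", "Genus C", "Genus D"))

def identify(node, answers):
    """Walk the key, consuming one yes/no answer per couplet, until a leaf is reached."""
    while isinstance(node, tuple):
        question, yes_branch, no_branch = node
        node = yes_branch if answers.pop(0) else no_branch
    return node

result = identify(KEY, [True, False])  # head longer than wide, wing not darkened
```

    A real implementation would attach a photograph to each couplet and store the tree in the database, but the traversal logic stays this simple.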

  15. Thick Films: Electronic Applications. (Latest citations from the Aerospace Database)

    NASA Technical Reports Server (NTRS)

    1996-01-01

The bibliography contains citations concerning the design, development, fabrication, and evaluation of thick film electronic devices. Thick film solar cells, thick films for radiation conduction, deposition processes, and conductive inks are among the topics discussed. Applications in military and civilian avionics are examined.

  16. The landslide database for Germany: Closing the gap at national level

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2015-11-01

The Federal Republic of Germany has long been among the few European countries lacking a national landslide database. Systematic collection and inventory of landslide data has a long research history in Germany, but one focussed on the development of databases with local or regional coverage. This has changed in recent years with the launch of a database initiative aimed at closing the data gap at national level. The present paper reports on this project, which is based on a landslide database that has evolved over the last 15 years into one covering large parts of Germany. A strategy of systematic retrieval, extraction, and fusion of landslide data is at the heart of the methodology, providing the basis for a database with a broad potential of application. The database offers a data pool of more than 4,200 landslide data sets with over 13,000 single data files and dates back to the 12th century. All types of landslides are covered by the database, which stores not only core attributes, but also various complementary data, including data on landslide causes, impacts, and mitigation. The current database migration to PostgreSQL/PostGIS is focused on unlocking the full scientific potential of the database, while enabling data sharing and knowledge transfer via a web GIS platform. In this paper, the goals and the research strategy of the database project are highlighted first, with a summary of best practices in database development providing perspective. Next, the focus is on key aspects of the methodology, which is followed by the results of three case studies in the German Central Uplands. The case study results exemplify database application in the analysis of landslide frequency and causes, impact statistics, and landslide susceptibility modeling. Using the example of these case studies, strengths and weaknesses of the database are discussed in detail. The paper concludes with a summary of the database project with regard to previous

  17. Do "Digital Certificates" Hold the Key to Colleges' On-Line Activities?

    ERIC Educational Resources Information Center

    Olsen, Florence

    1999-01-01

    Examines the increasing use of "digital certificates" to validate computer user identity in various applications on college and university campuses, including letting students register for courses, monitoring access to Internet2, and monitoring access to databases and electronic journals. The methodology has been developed by the…

  18. A genotypic and phenotypic information source for marker-assisted selection of cereals: the CEREALAB database

    PubMed Central

    Milc, Justyna; Sala, Antonio; Bergamaschi, Sonia; Pecchioni, Nicola

    2011-01-01

The CEREALAB database aims to store genotypic and phenotypic data obtained by the CEREALAB project and to integrate them with already existing data sources in order to create a tool for plant breeders and geneticists. The database can help them in unravelling the genetics of economically important phenotypic traits; in identifying and choosing molecular markers associated with key traits; and in choosing the desired parentals for breeding programs. The database is divided into three sub-schemas corresponding to the species of interest: wheat, barley and rice; each sub-schema is then divided into two sub-ontologies, regarding genotypic and phenotypic data, respectively. Database URL: http://www.cerealab.unimore.it/jws/cerealab.jnlp PMID:21247929

  19. A comparative study of six European databases of medically oriented Web resources.

    PubMed

    Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes

    2005-10-01

    The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.

  20. Development of a database for the verification of trans-ionospheric remote sensing systems

    NASA Astrophysics Data System (ADS)

    Leitinger, R.

    2005-08-01

Remote sensing systems need verification by means of in-situ data or by means of model data. In the case of ionospheric occultation inversion, ionosphere tomography and other imaging methods on the basis of satellite-to-ground or satellite-to-satellite electron content, the availability of in-situ data with adequate spatial and temporal co-location is a very rare case, indeed. Therefore the method of choice for verification is to produce artificial electron content data with realistic properties, subject these data to the inversion/retrieval method, compare the results with model data and apply a suitable type of “goodness of fit” classification. Inter-comparison of inversion/retrieval methods should be done with sets of artificial electron contents in a “blind” (or even “double blind”) way. The setup of a relevant database for the COST 271 Action is described. One part of the database will be made available to everyone interested in testing of inversion/retrieval methods. The artificial electron content data are calculated by means of large-scale models that are “modulated” in a realistic way to include smaller-scale and dynamic structures, like troughs and traveling ionospheric disturbances.
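
    The "modulated model" idea can be sketched as a background electron content perturbed by a trough and a wave-like disturbance. The functional form and every parameter value below are illustrative assumptions, not the COST 271 models.

```python
import math

def synthetic_vtec(lat_deg, local_time_h, tid_amplitude=0.05, tid_wavelength_deg=5.0):
    """Background vertical electron content (arbitrary units) with a Gaussian
    high-latitude trough and a sinusoidal perturbation standing in for a
    traveling ionospheric disturbance (TID). Purely illustrative, not validated."""
    # Diurnal background: maximum near 14:00 local time.
    background = 20.0 + 15.0 * math.cos((local_time_h - 14.0) / 24.0 * 2 * math.pi)
    # Crude trough centered at 62 degrees latitude.
    trough = -8.0 * math.exp(-((lat_deg - 62.0) / 4.0) ** 2)
    # Small wave riding on the background, mimicking a TID.
    tid = tid_amplitude * background * math.sin(2 * math.pi * lat_deg / tid_wavelength_deg)
    return background + trough + tid

# A latitude profile at 14:00 local time, as one slice of an artificial data set.
profile = [synthetic_vtec(lat, 14.0) for lat in range(40, 71, 2)]
```

    Feeding such synthetic profiles through an inversion and comparing the retrieval against the known model is exactly the verification loop the abstract describes.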

  1. Using an International p53 Mutation Database as a Foundation for an Online Laboratory in an Upper Level Undergraduate Biology Class

    ERIC Educational Resources Information Center

    Melloy, Patricia G.

    2015-01-01

    A two-part laboratory exercise was developed to enhance classroom instruction on the significance of p53 mutations in cancer development. Students were asked to mine key information from an international database of p53 genetic changes related to cancer, the IARC TP53 database. Using this database, students designed several data mining activities…

  2. Database Access Systems.

    ERIC Educational Resources Information Center

    Dalrymple, Prudence W.; Roderer, Nancy K.

    1994-01-01

Highlights the changes that have occurred from 1987-93 in database access systems. Topics addressed include types of databases, including CD-ROMs; end-user interface; database selection; database access management, including library instruction and use of primary literature; economic issues; database users; the search process; and improving…

  3. Electronic Approval: Another Step toward a Paperless Office.

    ERIC Educational Resources Information Center

    Blythe, Kenneth C.; Morrison, Dennis L.

    1992-01-01

    Pennsylvania State University's award-winning electronic approval system allows administrative documents to be electronically generated, approved, and updated in the university's central database. Campus business can thus be conducted faster, less expensively, more accurately, and with greater security than with traditional paper approval…

  4. Use of large healthcare databases for rheumatology clinical research.

    PubMed

    Desai, Rishi J; Solomon, Daniel H

    2017-03-01

Large healthcare databases, which contain data collected during routinely delivered healthcare, can serve as a valuable resource for generating actionable evidence to assist medical and healthcare policy decision-making. In this review, we summarize the use of large healthcare databases in rheumatology clinical research. Large healthcare data are critical to evaluate medication safety and effectiveness in patients with rheumatologic conditions. The three major sources of large healthcare data are: (1) electronic medical records, (2) health insurance claims, and (3) patient registries. Each of these sources offers unique advantages, but also has some inherent limitations. To address some of these limitations and maximize the utility of these data sources for evidence generation, recent efforts have focused on linking different data sources. Innovations such as randomized registry trials, which aim to facilitate the design of low-cost randomized controlled trials built on the existing infrastructure provided by large healthcare databases, are likely to make clinical research more efficient in coming years. Harnessing the power of information contained in large healthcare databases, while paying close attention to their inherent limitations, is critical to generating a rigorous evidence base for medical decision-making and, ultimately, enhancing patient care.

  5. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the development of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  6. A Training Framework for the Department of Defense Public Key Infrastructure

    DTIC Science & Technology

    2001-09-01

    and the growth of electronic commerce within the Department of Defense (DoD) has led to the development and implementation of the DoD Public Key...also grown within the Department of Defense. Electronic commerce and business to business transactions have become more commonplace and have

  7. The Relationship between Searches Performed in Online Databases and the Number of Full-Text Articles Accessed: Measuring the Interaction between Database and E-Journal Collections

    ERIC Educational Resources Information Center

    Lamothe, Alain R.

    2011-01-01

    The purpose of this paper is to report the results of a quantitative analysis exploring the interaction and relationship between the online database and electronic journal collections at the J. N. Desmarais Library of Laurentian University. A very strong relationship exists between the number of searches and the size of the online database…

  8. Fingerprint-based structure retrieval using electron density.

    PubMed

    Yin, Shuangye; Dokholyan, Nikolay V

    2011-03-01

We present a computational approach that can quickly search a large protein structural database to identify structures that fit a given electron density, such as one determined by cryo-electron microscopy. We use geometric invariants (fingerprints) constructed from 3D Zernike moments to describe the electron density, reducing the problem of fitting a structure to the electron density to a simple fingerprint comparison. Using this approach, we are able to screen the entire Protein Data Bank and identify structures that fit two experimental electron densities determined by cryo-electron microscopy. Copyright © 2010 Wiley-Liss, Inc.

  9. Pathogen Research Databases

    Science.gov Websites

The Hepatitis C Virus (HCV) database project is funded by the Division of Microbiology and Infectious Diseases of the National Institute of Allergy and Infectious Diseases (NIAID). The HCV database project started as a spin-off from the HIV database project. There are two databases for HCV, a sequence database

  10. Impact of metal ions in porphyrin-based applied materials for visible-light photocatalysis: key information from ultrafast electronic spectroscopy.

    PubMed

    Kar, Prasenjit; Sardar, Samim; Alarousu, Erkki; Sun, Jingya; Seddigi, Zaki S; Ahmed, Saleh A; Danish, Ekram Y; Mohammed, Omar F; Pal, Samir Kumar

    2014-08-11

    Protoporphyrin IX-zinc oxide (PP-ZnO) nanohybrids have been synthesized for applications in photocatalytic devices. High-resolution transmission electron microscopy (HRTEM), X-ray diffraction (XRD), and steady-state infrared, absorption, and emission spectroscopies have been used to analyze the structural details and optical properties of these nanohybrids. Time-resolved fluorescence and transient absorption techniques have been applied to study the ultrafast dynamic events that are key to photocatalytic activities. The photocatalytic efficiency under visible-light irradiation in the presence of naturally abundant iron(III) and copper(II) ions has been found to be significantly retarded in the former case, but enhanced in the latter case. More importantly, femtosecond (fs) transient absorption data have clearly demonstrated that the residence of photoexcited electrons from the sensitizer PP in the centrally located iron moiety hinders ground-state bleach recovery of the sensitizer, affecting the overall photocatalytic rate of the nanohybrid. The presence of copper(II) ions, on the other hand, offers additional stability against photobleaching and eventually enhances the efficiency of photocatalysis. In addition, we have also explored the role of UV light in the efficiency of photocatalysis and have rationalized our observations from femtosecond- to picosecond-resolved studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Multiple elastic scattering of electrons in condensed matter

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2017-01-01

Since the 1940s, much attention has been devoted to the problem of accurate theoretical description of electron transport in condensed matter. The information needed to describe different aspects of electron transport is the angular distribution of electron directions after multiple elastic collisions. This distribution can be expanded into a series of Legendre polynomials with coefficients A_l. In the present work, a database of these coefficients for all elements up to uranium (Z=92) and a dense grid of electron energies varying from 50 to 5000 eV has been created. The database makes possible the following applications: (i) accurate interpolation of coefficients A_l for any element and any energy in the above range, (ii) fast calculations of the differential and total elastic-scattering cross sections, (iii) determination of the angular distribution of directions after multiple collisions, (iv) calculations of the probability of elastic backscattering from solids, and (v) calculations of the calibration curves for determination of the inelastic mean free paths of electrons. The last two applications provide data with accuracy comparable to Monte Carlo simulations, yet the running time is decreased by several orders of magnitude. All of the above applications are implemented in the Fortran program MULTI_SCATT. Numerous illustrative runs of this program are described. Despite the relatively large volume of the database of coefficients A_l, the program MULTI_SCATT can be readily run on personal computers.
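
    Once the coefficients A_l are tabulated, evaluating the angular distribution is just summing a Legendre series. The sketch below uses the standard Bonnet recurrence; the coefficient values are made up for illustration and are not from the database described above.

```python
import math

def legendre_series(x, coeffs):
    """Evaluate sum_l A_l * P_l(x) using the Bonnet recurrence
    (l+1) P_{l+1}(x) = (2l+1) x P_l(x) - l P_{l-1}(x)."""
    p_prev, p_curr = 1.0, x              # P_0(x), P_1(x)
    total = coeffs[0] * p_prev
    if len(coeffs) > 1:
        total += coeffs[1] * p_curr
    for l in range(1, len(coeffs) - 1):
        p_next = ((2 * l + 1) * x * p_curr - l * p_prev) / (l + 1)
        total += coeffs[l + 1] * p_next
        p_prev, p_curr = p_next, p_next if False else p_next  # advance below
        p_prev, p_curr = p_curr, p_next
    return total

# Angular distribution as a function of cos(theta); illustrative coefficients only.
A = [1.0, 0.5, 0.2]
value = legendre_series(math.cos(math.radians(60.0)), A)
```

    In practice one would interpolate A_l from the tabulated grid for the element and energy of interest before summing the series.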

  12. Aptamer Database

    PubMed Central

    Lee, Jennifer F.; Hesselberth, Jay R.; Meyers, Lauren Ancel; Ellington, Andrew D.

    2004-01-01

    The aptamer database is designed to contain comprehensive sequence information on aptamers and unnatural ribozymes that have been generated by in vitro selection methods. Such data are not normally collected in ‘natural’ sequence databases, such as GenBank. Besides serving as a storehouse of sequences that may have diagnostic or therapeutic utility, the database serves as a valuable resource for theoretical biologists who describe and explore fitness landscapes. The database is updated monthly and is publicly available at http://aptamer.icmb.utexas.edu/. PMID:14681367

  13. Database integration in a multimedia-modeling environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorow, Kevin E.

    2002-09-02

Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that has to be transferred is kept to a minimum (only the data that fulfill a specific request are provided, as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.
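
    The metadata-to-extraction-plan step can be sketched as a mapping from model variables to remote columns. All source, table, column, and variable names below are hypothetical, and this is a schematic of the idea, not the framework's actual implementation.

```python
# Metadata maps each model variable to a (source, table, column) triple, so a
# request pulls only the columns a model actually needs. All names are invented.
METADATA = {
    "soil_ph":      ("site_db", "soil_samples", "ph"),
    "well_depth_m": ("usgs_mirror", "wells", "depth_m"),
    "contaminant":  ("site_db", "soil_samples", "analyte"),
}

def build_extraction_plan(requested_vars):
    """Group requested model variables by (source, table), mimicking the
    'extraction plan' step so each remote source/table is queried once."""
    plan = {}
    for var in requested_vars:
        source, table, column = METADATA[var]
        plan.setdefault((source, table), []).append(column)
    return plan

plan = build_extraction_plan(["soil_ph", "contaminant", "well_depth_m"])
```

    Grouping by source and table is what keeps transfers minimal: each remote database sees one query for exactly the columns the model requested, rather than a bulk dump.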

  14. Electronic health records and cardiac implantable electronic devices: new paradigms and efficiencies.

    PubMed

    Slotwiner, David J

    2016-10-01

The anticipated advantages of electronic health records (EHRs), improved efficiency and the ability to share information across the healthcare enterprise, have so far failed to materialize. There is growing recognition that interoperability holds the key to unlocking the greatest value of EHRs. Health information technology (HIT) systems, including EHRs, must be able to share data and to interpret the shared data. This requires a controlled vocabulary with explicit definitions (data elements) as well as protocols to communicate the context in which each data element is being used (syntactic structure). Cardiac implantable electronic devices (CIEDs) provide a clear example of the challenges faced by clinicians when data are not interoperable. The proprietary data formats created by each CIED manufacturer, as well as the multiple sources of data generated by CIEDs (hospital, office, remote monitoring, acute care setting), make it challenging to aggregate even a single patient's data into an EHR. The Heart Rhythm Society and CIED manufacturers have collaborated to develop and implement international standards-based specifications for interoperability that provide an end-to-end solution, enabling structured data to be communicated from a CIED to a report generation system, EHR, research database, referring physician, registry, patient portal, and beyond. EHR and other health information technology vendors have been slow to implement these tools, in large part because there have been no financial incentives for them to do so. It is incumbent upon us, as clinicians, to insist that the tools of interoperability be a prerequisite for the purchase of any and all health information technology systems.

  15. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    ERIC Educational Resources Information Center

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  16. The Database Query Support Processor (QSP)

    NASA Technical Reports Server (NTRS)

    1993-01-01

The number and diversity of databases available to users continues to increase dramatically. Currently, the trend is towards decentralized, client-server architectures that (on the surface) are less expensive to acquire, operate, and maintain than information architectures based on centralized, monolithic mainframes. The database query support processor (QSP) effort evaluates the performance of a network-level, heterogeneous database access capability. Air Force Materiel Command's Rome Laboratory has developed an approach, based on ANSI standard X3.138-1988, 'The Information Resource Dictionary System (IRDS)', to seamless access to heterogeneous databases based on extensions to data dictionary technology. To successfully query a decentralized information system, users must know what data are available from which source, or have the knowledge and system privileges necessary to find out this information. Privacy and security considerations prohibit free and open access to every information system in every network. Even in completely open systems, the time required to locate relevant data (in systems of any appreciable size) would be better spent analyzing the data, assuming the original question was not forgotten. Extensions to data dictionary technology have the potential to more fully automate the search and retrieval of relevant data in a decentralized environment. Substantial amounts of time and money could be saved by not having to teach users what data reside in which systems and how to access each of those systems. Information describing data and how to get it could be removed from the application and placed in a dedicated repository where it belongs. The result is simplified applications that are less brittle and less expensive to build and maintain. Software technology providing the required functionality is off the shelf. The key difficulty is in defining the metadata required to support the process. The database query support processor effort will provide

  17. Generalized Database Management System Support for Numeric Database Environments.

    ERIC Educational Resources Information Center

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  18. EDDIX--a database of ionisation double differential cross sections.

    PubMed

    MacGibbon, J H; Emerson, S; Liamsuwan, T; Nikjoo, H

    2011-02-01

    Monte Carlo track structure simulation is a method of choice in biophysical modelling and calculations. To precisely model 3D and 4D tracks, the cross section for ionisation by an incoming ion, double differential in the outgoing electron energy and angle, is required. However, the double differential cross section cannot be theoretically modelled over the full range of parameters. To address this issue, a database of all available experimental data has been constructed. Currently, the database of Experimental Double Differential Ionisation Cross sections (EDDIX) contains over 1200 digitised experimentally measured datasets from the 1960s to the present, covering all available ion species (hydrogen to uranium) and all available target species. Double differential cross sections are also presented with the aid of eight-parameter functions fitted to the cross sections. The parameters include projectile species and charge, target nuclear charge and atomic mass, projectile atomic mass and energy, and electron energy and deflection angle. It is planned to distribute EDDIX freely and make it available to the radiation research community for use in the analytical and numerical modelling of track structure.

  19. TRENDS: A flight test relational database user's guide and reference manual

    NASA Technical Reports Server (NTRS)

    Bondi, M. J.; Bjorkman, W. S.; Cross, J. L.

    1994-01-01

    This report is designed to be a user's guide and reference manual for users intending to access rotorcraft test data via TRENDS, the relational database system which was developed as a tool for the aeronautical engineer with no programming background. This report has been written to assist novice and experienced TRENDS users. TRENDS is a complete system for retrieving, searching, and analyzing both numerical and narrative data, and for displaying time history and statistical data in graphical and numerical formats. This manual provides a 'guided tour' and a 'user's guide' for the new and intermediate-skilled users. Examples for the use of each menu item within TRENDS are provided in the Menu Reference section of the manual, including full coverage of TIMEHIST, one of the key tools. This manual is written around the XV-15 Tilt Rotor database, but does include an appendix on the UH-60 Blackhawk database. This user's guide and reference manual establishes a referable source for the research community and augments NASA TM-101025, TRENDS: The Aeronautical Post-Test Database Management System, Jan. 1990, written by the same authors.

  20. A comparative cellular and molecular biology of longevity database.

    PubMed

    Stuart, Jeffrey A; Liang, Ping; Luo, Xuemei; Page, Melissa M; Gallagher, Emily J; Christoff, Casey A; Robb, Ellen L

    2013-10-01

    Discovering key cellular and molecular traits that promote longevity is a major goal of aging and longevity research. One experimental strategy is to determine which traits have been selected during the evolution of longevity in naturally long-lived animal species. This comparative approach has been applied to lifespan research for nearly four decades, yielding hundreds of datasets describing aspects of cell and molecular biology hypothesized to relate to animal longevity. Here, we introduce a Comparative Cellular and Molecular Biology of Longevity Database, available at http://genomics.brocku.ca/ccmbl/, as a compendium of comparative cell and molecular data presented in the context of longevity. This open access database will facilitate the meta-analysis of amalgamated datasets using standardized maximum lifespan (MLSP) data (from AnAge). The first edition contains over 800 data records describing experimental measurements of cellular stress resistance, reactive oxygen species metabolism, membrane composition, protein homeostasis, and genome homeostasis as they relate to vertebrate species MLSP. The purpose of this review is to introduce the database and briefly demonstrate its use in the meta-analysis of combined datasets.
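
    The kind of meta-analysis the database is built for, correlating a cellular trait against MLSP across species, can be sketched in a few lines. The species values below are invented placeholders, not data from the database, and MLSP is log-transformed as is conventional in this literature.

    ```python
    # Sketch: correlating a hypothetical cellular trait with maximum lifespan
    # (MLSP) across species. All numeric values are invented placeholders.
    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # (MLSP in years, hypothetical stress-resistance score) per species
    records = [(3.8, 1.2), (12.0, 2.9), (30.0, 4.1), (122.5, 6.8)]
    mlsp = [math.log10(m) for m, _ in records]  # MLSP is usually log-transformed
    trait = [t for _, t in records]
    print(round(pearson_r(mlsp, trait), 3))
    ```

    Real comparative analyses would also correct for body mass and phylogeny, which a flat correlation like this ignores.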

  1. Concepts and data model for a co-operative neurovascular database.

    PubMed

    Mansmann, U; Taylor, W; Porter, P; Bernarding, J; Jäger, H R; Lasjaunias, P; Terbrugge, K; Meisel, J

    2001-08-01

    Problems of clinical management of neurovascular diseases are very complex, owing to the chronic character of the diseases, a long history of symptoms and diverse treatments. If patients are to benefit from treatment, then treatment decisions have to rely on reliable and accurate knowledge of the natural history of the disease and the various treatments. Recent developments in statistical methodology and experience from electronic patient records are used to establish an information infrastructure based on a centralized register. A protocol for collecting data on neurovascular diseases is described, along with technical and logistical aspects of implementing the database. The database is designed as a co-operative tool for audit and research available to co-operating centres. When a database is linked to systematic patient follow-up, it can be used to study prognosis. Careful analysis of patient outcome is valuable for decision-making.

  2. Development of Electronic Resources across Networks in Thailand.

    ERIC Educational Resources Information Center

    Ratchatavorn, Phandao

    2002-01-01

    Discusses the development of electronic resources across library networks in Thailand to meet user needs, particularly electronic journals. Topics include concerns about journal access; limited budgets for library acquisitions of journals; and sharing resources through a centralized database system that allows Web access to journals via Internet…

  3. Quasars Probing Quasars. X. The Quasar Pair Spectral Database

    NASA Astrophysics Data System (ADS)

    Findlay, Joseph R.; Prochaska, J. Xavier; Hennawi, Joseph F.; Fumagalli, Michele; Myers, Adam D.; Bartle, Stephanie; Chehade, Ben; DiPompeo, Michael A.; Shanks, Tom; Lau, Marie Wingyee; Rubin, Kate H. R.

    2018-06-01

    The rare close projection of two quasars on the sky provides the opportunity to study the host galaxy environment of a foreground quasar in absorption against the continuum emission of a background quasar. For over a decade the “Quasars probing quasars” series has utilized this technique to further the understanding of galaxy formation and evolution in the presence of a quasar at z > 2, resolving scales as small as a galactic disk and from bound gas in the circumgalactic medium to the diffuse environs of intergalactic space. Presented here is the public release of the quasar pair spectral database utilized in these studies. In addition to projected pairs at z > 2, the database also includes quasar pair members at z < 2, gravitational lens candidates, and quasars closely separated in redshift that are useful for small-scale clustering studies. In total, the database catalogs 5627 distinct objects, with 4083 lying within 5‧ of at least one other source. A spectral library contains 3582 optical and near-infrared spectra for 3028 of the cataloged sources. As well as reporting on 54 newly discovered quasar pairs, we outline the key contributions made by this series over the last 10 years, summarize the imaging and spectroscopic data used for target selection, discuss the target selection methodologies, describe the database content, and explore some avenues for future work. Full documentation for the spectral database, including download instructions, is supplied at http://specdb.readthedocs.io/en/latest/.

  4. Organizing, exploring, and analyzing antibody sequence data: the case for relational-database managers.

    PubMed

    Owens, John

    2009-01-01

    Technological advances in the acquisition of DNA and protein sequence information, and the resulting onrush of data, can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational-database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is equal to that required by electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational-database manager into larger database families that contribute to local analyses, reports, and interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
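
    The core of the approach, one table per kind of record, with queries replacing manual collation, can be sketched with Python's built-in sqlite3 standing in for a desktop database manager. The table and column names, clone identifiers, and sequences are hypothetical.

    ```python
    # Sketch of a relational store for antibody sequences, using sqlite3 in
    # place of a desktop database manager. All names and data are invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE antibody_sequences (
            clone_id   TEXT PRIMARY KEY,
            chain      TEXT,            -- 'heavy' or 'light'
            sequence   TEXT,
            annotation TEXT
        )
    """)
    conn.executemany(
        "INSERT INTO antibody_sequences VALUES (?, ?, ?, ?)",
        [("mAb-01-H", "heavy", "EVQLVESGGG", "anti-X candidate"),
         ("mAb-01-L", "light", "DIQMTQSPSS", "anti-X candidate")],
    )
    # A query unifies and filters records that would otherwise be
    # collated by hand across independent project files.
    rows = conn.execute(
        "SELECT clone_id, sequence FROM antibody_sequences WHERE chain = 'heavy'"
    ).fetchall()
    print(rows)  # -> [('mAb-01-H', 'EVQLVESGGG')]
    ```

    Independent project databases map naturally to separate files or tables, which the manager can join into the "larger database families" the abstract describes.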

  5. A Web-Based Tool to Support Data-Based Early Intervention Decision Making

    ERIC Educational Resources Information Center

    Buzhardt, Jay; Greenwood, Charles; Walker, Dale; Carta, Judith; Terry, Barbara; Garrett, Matthew

    2010-01-01

    Progress monitoring and data-based intervention decision making have become key components of providing evidence-based early childhood special education services. Unfortunately, there is a lack of tools to support early childhood service providers' decision-making efforts. The authors describe a Web-based system that guides service providers…

  6. Contamination of sequence databases with adaptor sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshikawa, Takeo; Sanders, A.R.; Detera-Wadleigh, S.D.

    Because of the exponential increase in the amount of DNA sequences being added to the public databases on a daily basis, it has become imperative to identify sources of contamination rapidly. Previously, contaminations of sequence databases have been reported to alert the scientific community to the problem. These contaminations can be divided into two categories. The first category comprises host sequences that have been difficult for submitters to manage or control. Examples include anomalous sequences derived from Escherichia coli, which are inserted into the chromosomes (and plasmids) of the bacterial hosts. Insertion sequences are highly mobile and are capable of transposing themselves into plasmids during cloning manipulation. Another example of the first category is the contamination of some commercially available cDNA libraries from Clontech with yeast genomic DNA or with bacterial DNA. The second category of database contamination is due to the inadvertent inclusion of nonhost sequences. This category includes incorporation of cloning-vector sequences and multicloning sites in the database submission. M13-derived artifacts have been common, since M13-based vectors have been widely used for subcloning DNA fragments. Recognizing this problem, the National Center for Biotechnology Information (NCBI) started, in April 1994, to screen all sequences directly submitted to GenBank against a set of vector data retrieved from GenBank by use of keyword searches, such as "vector." In this report, we present evidence for another sequence artifact that is widespread but that, to our knowledge, has not yet been reported. 11 refs., 1 tab.
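
    The screening step the abstract describes amounts to checking each submission against a library of known vector and adaptor fragments. A minimal sketch follows; the adaptor sequence and the sample submissions are invented (the M13 fragment is the widely used universal forward primer), and a real screen such as NCBI's would use alignment rather than exact substring matching.

    ```python
    # Sketch of a vector/adaptor contamination screen: flag submissions that
    # contain a known contaminant subsequence. Sample sequences are invented.
    VECTOR_FRAGMENTS = {
        "M13_forward_primer": "GTAAAACGACGGCCAGT",
        "adaptor_A": "GGCACGAG",   # hypothetical linker/adaptor
    }

    def screen(submission):
        """Return the names of contaminant fragments found in a sequence."""
        return [name for name, frag in VECTOR_FRAGMENTS.items()
                if frag in submission.upper()]

    clean = "ATGGCGTACCTGAAAGCT"
    dirty = "ggcacgagATGGCGTACCTGAAAGCT"   # adaptor left on the 5' end
    print(screen(clean))  # -> []
    print(screen(dirty))  # -> ['adaptor_A']
    ```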

  7. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis.

    PubMed

    Bonfill, Xavier; Osorio, Dimelza; Solà, Ivan; Pijoan, Jose Ignacio; Balasso, Valentina; Quintana, Maria Jesús; Puig, Teresa; Bolibar, Ignasi; Urrútia, Gerard; Zamora, Javier; Emparanza, José Ignacio; Gómez de la Cámara, Agustín; Ferreira-González, Ignacio

    2016-01-01

    To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability. Future efforts should be focused on

  8. DianaHealth.com, an On-Line Database Containing Appraisals of the Clinical Value and Appropriateness of Healthcare Interventions: Database Development and Retrospective Analysis

    PubMed Central

    Bonfill, Xavier; Osorio, Dimelza; Solà, Ivan; Pijoan, Jose Ignacio; Balasso, Valentina; Quintana, Maria Jesús; Puig, Teresa; Bolibar, Ignasi; Urrútia, Gerard; Zamora, Javier; Emparanza, José Ignacio; Gómez de la Cámara, Agustín; Ferreira-González, Ignacio

    2016-01-01

    Objective To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Methods and Findings Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. Conclusions We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability

  9. IRBAS: An online database to collate, analyze, and synthesize ...

    EPA Pesticide Factsheets

    Key questions dominating contemporary ecological research and management concern interactions between biodiversity, ecosystem processes, and ecosystem services provision in the face of global change. This is particularly salient for freshwater biodiversity and in the context of river drying and flow‐regime change. Rivers that stop flowing and dry, herein intermittent rivers, are globally prevalent and dynamic ecosystems on which the body of research is expanding rapidly, consistent with the era of big data. However, the data encapsulated by this work remain largely fragmented, limiting our ability to answer the key questions beyond a case‐by‐case basis. To this end, the Intermittent River Biodiversity Analysis and Synthesis (IRBAS; http://irbas.cesab.org) project has collated, analyzed, and synthesized data from across the world on the biodiversity and environmental characteristics of intermittent rivers. The IRBAS database integrates and provides free access to these data, contributing to the growing, and global, knowledge base on these ubiquitous and important river systems, for both theoretical and applied advancement. The IRBAS database currently houses over 2000 data samples collected from six countries across three continents, primarily describing aquatic invertebrate taxa inhabiting intermittent rivers during flowing hydrological phases. As such, there is room to expand the biogeographic and taxonomic coverage, for example, through addition of data

  10. FIREMON Database

    Treesearch

    John F. Caratti

    2006-01-01

    The FIREMON database software allows users to enter data, store, analyze, and summarize plot data, photos, and related documents. The FIREMON database software consists of a Java application and a Microsoft® Access database. The Java application provides the user interface with FIREMON data through data entry forms, data summary reports, and other data management tools...

  11. Off-line compatible electronic cash method and system

    DOEpatents

    Kravitz, D.W.; Gemmell, P.S.; Brickell, E.F.

    1998-11-03

    An off-line electronic cash system having an electronic coin, a bank B, a payee S, and a user U with an account at the bank B as well as a user password z_u,i, has a method for performing an electronic cash transfer. An electronic coin is withdrawn from the bank B by the user U and an electronic record of the electronic coin is stored by the bank B. The coin is paid to the payee S by the user U. The payee S deposits the coin with the bank B. A determination is made that the coin is spent and the record of the coin is deleted by the bank B. A further deposit of the same coin after the record is deleted is determined. Additionally, a determination is made which user U originally withdrew the coin after deleting the record. To perform these operations a key pair is generated by the user, including public and secret signature keys. The public signature key along with a user password z_u,i and a withdrawal amount are sent to the bank B by the user U. In response, the bank B sends a coin to the user U signed by the secret key of the bank indicating the value of the coin and the public key of the user U. The payee S transmits a challenge counter to the user U prior to receiving the coin. 16 figs.
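
    The bank-side bookkeeping in this flow, keep a record while a coin is outstanding, delete it on deposit, and treat a second deposit as a double-spend, can be sketched as follows. This is only the record-keeping skeleton: the blind signatures, challenge counter, and identification of the double-spending user are omitted, and all identifiers are hypothetical.

    ```python
    # Sketch of the bank's coin bookkeeping: record stored at withdrawal,
    # deleted at deposit, so a repeat deposit is detected as a double-spend.
    # Signatures and user identification are omitted; names are hypothetical.

    class Bank:
        def __init__(self):
            self.outstanding = {}  # coin id -> withdrawing user

        def withdraw(self, coin_id, user):
            self.outstanding[coin_id] = user   # electronic record stored

        def deposit(self, coin_id):
            if coin_id in self.outstanding:
                del self.outstanding[coin_id]  # coin spent, record deleted
                return "accepted"
            return "double-spend detected"     # record already deleted

    bank = Bank()
    bank.withdraw("coin-42", user="U")
    print(bank.deposit("coin-42"))  # -> accepted
    print(bank.deposit("coin-42"))  # -> double-spend detected
    ```

    In the patented scheme the second deposit does more than raise a flag: the cryptographic structure of the coin lets the bank determine which user originally withdrew it.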

  12. Off-line compatible electronic cash method and system

    DOEpatents

    Kravitz, David W.; Gemmell, Peter S.; Brickell, Ernest F.

    1998-01-01

    An off-line electronic cash system having an electronic coin, a bank B, a payee S, and a user U with an account at the bank B as well as a user password z_u,i, has a method for performing an electronic cash transfer. An electronic coin is withdrawn from the bank B by the user U and an electronic record of the electronic coin is stored by the bank B. The coin is paid to the payee S by the user U. The payee S deposits the coin with the bank B. A determination is made that the coin is spent and the record of the coin is deleted by the bank B. A further deposit of the same coin after the record is deleted is determined. Additionally, a determination is made which user U originally withdrew the coin after deleting the record. To perform these operations a key pair is generated by the user, including public and secret signature keys. The public signature key along with a user password z_u,i and a withdrawal amount are sent to the bank B by the user U. In response, the bank B sends a coin to the user U signed by the secret key of the bank indicating the value of the coin and the public key of the user U. The payee S transmits a challenge counter to the user U prior to receiving the coin.

  13. Incomplete evidence: the inadequacy of databases in tracing published adverse drug reactions in clinical trials

    PubMed Central

    Derry, Sheena; Kong Loke, Yoon; Aronson, Jeffrey K

    2001-01-01

    Background We would expect information on adverse drug reactions in randomised clinical trials to be easily retrievable from specific searches of electronic databases. However, complete retrieval of such information may not be straightforward, for two reasons. First, not all clinical drug trials provide data on the frequency of adverse effects. Secondly, not all electronic records of trials include terms in the abstract or indexing fields that enable us to select those with adverse effects data. We have determined how often automated search methods, using indexing terms and/or textwords in the title or abstract, would fail to retrieve trials with adverse effects data. Methods We used a sample set of 107 trials known to report frequencies of adverse drug effects, and measured the proportion that (i) were not assigned the appropriate adverse effects indexing terms in the electronic databases, and (ii) did not contain identifiable adverse effects textwords in the title or abstract. Results Of the 81 trials with records on both MEDLINE and EMBASE, 25 were not indexed for adverse effects in either database. Twenty-six trials were indexed in one database but not the other. Only 66 of the 107 trials reporting adverse effects data mentioned this in the abstract or title of the paper. Simultaneous use of textword and indexing terms retrieved only 82/107 (77%) papers. Conclusions Specific search strategies based on adverse effects textwords and indexing terms will fail to identify nearly a quarter of trials that report on the rate of drug adverse effects. PMID:11591220
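
    The retrieval figures reported in this abstract can be restated as search sensitivities. The numbers below are taken directly from the abstract (107 trials known to report adverse effects data); the formatting code is only an illustration of the arithmetic.

    ```python
    # Sensitivities of the two search approaches, using the abstract's figures.
    trials = 107
    found_by_abstract_or_title = 66   # adverse effects mentioned in title/abstract
    found_by_combined_strategy = 82   # textwords + indexing terms together

    for label, found in [("abstract/title textwords", found_by_abstract_or_title),
                         ("textwords + indexing", found_by_combined_strategy)]:
        missed = trials - found
        print(f"{label}: {found / trials:.0%} retrieved, {missed} trials missed")
    ```

    The combined strategy reaches 77% (82/107), which is the "nearly a quarter missed" stated in the conclusions.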

  14. The SEDIBUD (Sediment Budgets in Cold Environments) Programme: Current activities and future key tasks

    NASA Astrophysics Data System (ADS)

    Beylich, A. A.; Lamoureux, S. F.; Decaulne, A.

    2012-04-01

    geomorphologists, hydrologists, ecologists, permafrost scientists and glaciologists. SEDIBUD has developed manuals and protocols (SEDIFLUX Manual, available online, see below) with a key set of primary surface process monitoring and research data requirements to incorporate results from these diverse projects and allow coordinated quantitative analysis across the programme. Defined SEDIBUD key test sites provide data on annual climate conditions, total discharge and particulate and dissolved fluxes, as well as information on other relevant surface processes. A number of selected key test sites are providing high-resolution data on climate conditions, runoff and sedimentary fluxes, which in addition to the annual data contribute to the SEDIBUD metadata database that is currently being developed. Comparable datasets from different SEDIBUD key test sites are integrated and analysed to address key research questions as defined in the SEDIBUD Objective (available online, see below). Defined SEDIBUD key tasks for the coming years include (i) the continued generation and compilation of comparable longer-term datasets on contemporary sedimentary fluxes and sediment yields from SEDIBUD key test sites worldwide, (ii) the continued extension of the SEDIBUD metadata database with these datasets, and (iii) the testing of defined SEDIBUD hypotheses (available online, see below) using the datasets continuously compiled in the SEDIBUD metadata database. Detailed information on the I.A.G./A.I.G. SEDIBUD Programme, SEDIBUD meetings, SEDIBUD publications and SEDIBUD online documents and databases is available at the SEDIBUD website: http://www.geomorph.org/wg/wgsb.html.

  15. A Quantum Private Query Protocol for Enhancing both User and Database Privacy

    NASA Astrophysics Data System (ADS)

    Zhou, Yi-Hua; Bai, Xue-Wei; Li, Lei-Lei; Shi, Wei-Min; Yang, Yu-Guang

    2018-01-01

    In order to protect the privacy of both the query user and the database, some QKD-based quantum private query (QPQ) protocols have been proposed. Unfortunately, some of them cannot perfectly resist internal attacks from the database, while others ensure better user privacy only at the cost of database privacy. In this paper, a novel two-way QPQ protocol is proposed to ensure the privacy of both sides of the communication. In our protocol, the user prepares the initial quantum states and derives the key bit by comparing the initial quantum state with the outcome state returned from the database under the ctrl or shift mode, instead of announcing two non-orthogonal qubits as other protocols do, which may leak part of the secret information. In this way, not only is the privacy of the database ensured but user privacy is also strengthened. Furthermore, our protocol also achieves loss tolerance, cheat sensitivity, and resistance to the JM attack. Supported by National Natural Science Foundation of China under Grant Nos. U1636106, 61572053, 61472048, 61602019, 61502016; Beijing Natural Science Foundation under Grant Nos. 4152038, 4162005; Basic Research Fund of Beijing University of Technology (No. X4007999201501); The Scientific Research Common Program of Beijing Municipal Commission of Education under Grant No. KM201510005016

  16. EuCliD (European Clinical Database): a database comparing different realities.

    PubMed

    Marcelli, D; Kirchgessner, J; Amato, C; Steil, H; Mitteregger, A; Moscardò, V; Carioni, C; Orlandini, G; Gatti, E

    2001-01-01

    Quality and variability of dialysis practice are gaining more and more importance. Fresenius Medical Care (FMC), as a provider of dialysis, has the duty to continuously monitor and guarantee the quality of care delivered to patients treated in its European dialysis units. Accordingly, a new clinical database called EuCliD has been developed. It is a multilingual and fully codified database, using international standard coding tables as far as possible. EuCliD collects and handles sensitive medical patient data, fully assuring confidentiality. The infrastructure: a Domino server is installed in each country connected to EuCliD. All the centres belonging to a country are connected via modem to the country server. All the Domino servers are connected via a wide area network to the headquarters server in Bad Homburg (Germany). Inside each country server, only anonymous data related to that particular country are available. The only place where all the anonymous data are available is the headquarters server. The data collection is strongly supported in each country by "key persons" with solid relationships with their respective national dialysis units. The quality of the data in EuCliD is ensured at different levels. At the end of January 2001, more than 11,000 patients treated in 135 centres located in 7 countries were already included in the system. FMC has put patient care at the centre of its activities for many years and is now able to provide transparency to the community (authorities, nephrologists, patients...), thus demonstrating the quality of the service.

  17. Development of technical skills in Electrical Power Engineering students: A case study of Power Electronics as a Key Course

    NASA Astrophysics Data System (ADS)

    Hussain, I. S.; Azlee Hamid, Fazrena

    2017-08-01

    Technical skills are one of the attributes an engineering student must attain by the time of graduation, as recommended by the Engineering Accreditation Council (EAC). This paper describes the development of technical skills, Programme Outcome (PO) number 5, in students taking the Bachelor of Electrical Power Engineering (BEPE) programme in Universiti Tenaga Nasional (UNITEN). Seven courses are identified to address technical skills development. The course outcomes (CO) of these courses are designed to instill the relevant technical skills through suitable laboratory activities. Formative and summative assessments are carried out to gauge students' acquisition of the skills. Finally, to measure the attainment of the technical skills, the key-course concept is used. The concept has been implemented since 2013, focusing on improvement of the programme instead of the cohort. From the PO attainment analysis method, three different levels of PO attainment can be calculated: from the programme level, down to the course and student levels. In this paper, the attainment of the courses mapped to PO5 is measured. It is shown that the Power Electronics course, which is the key course for PO5, has a strong attainment of above 90%. PO5 is also achieved in the other six courses. In conclusion, by embracing outcome-based education (OBE), the BEPE programme has a sound method to develop technical psychomotor skills in its students.
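
    Course-level PO attainment of the kind reported here is typically computed as the percentage of students in the course who meet an outcome threshold. The sketch below illustrates that arithmetic only; the threshold and all scores are invented, not taken from the UNITEN study.

    ```python
    # Sketch of course-level PO attainment: percentage of students at or
    # above a threshold. Threshold and scores are invented placeholders.
    def po_attainment(scores, threshold=50.0):
        """Percentage of students scoring at or above the threshold."""
        attained = sum(1 for s in scores if s >= threshold)
        return 100.0 * attained / len(scores)

    power_electronics_scores = [88, 74, 91, 67, 52, 95, 81, 78, 49, 85]
    print(po_attainment(power_electronics_scores))  # -> 90.0
    ```

    Averaging such course-level figures over the courses mapped to a PO gives the programme-level attainment, the top of the three levels the paper mentions.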

  18. Comparison of the NCI open database with seven large chemical structural databases.

    PubMed

    Voigt, J H; Bienfait, B; Wang, S; Nicklaus, M C

    2001-01-01

    Eight large chemical databases have been analyzed and compared to each other. Central to this comparison is the open National Cancer Institute (NCI) database, consisting of approximately 250 000 structures. The other databases analyzed are the Available Chemicals Directory ("ACD," from MDL, release 1.99, 3D-version); the ChemACX ("ACX," from CamSoft, Version 4.5); the Maybridge Catalog and the Asinex database (both as distributed by CamSoft as part of ChemInfo 4.5); the Sigma-Aldrich Catalog (CD-ROM, 1999 Version); the World Drug Index ("WDI," Derwent, version 1999.03); and the organic part of the Cambridge Crystallographic Database ("CSD," from Cambridge Crystallographic Data Center, 1999 Version 5.18). The database properties analyzed are internal duplication rates; compounds unique to each database; cumulative occurrence of compounds in an increasing number of databases; overlap of identical compounds between two databases; similarity overlap; diversity; and others. The crystallographic database CSD and the WDI show somewhat less overlap with the other databases than those with each other. In particular the collections of commercial compounds and compilations of vendor catalogs have a substantial degree of overlap among each other. Still, no database is completely a subset of any other, and each appears to have its own niche and thus "raison d'être". The NCI database has by far the highest number of compounds that are unique to it. Approximately 200 000 of the NCI structures were not found in any of the other analyzed databases.
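
    With each database reduced to a set of canonical structure identifiers, the overlap and uniqueness measures used in this comparison become plain set operations. The sketch below uses toy single-letter identifiers; a real analysis would canonicalize structures first (e.g. to InChIKeys or canonical SMILES), which the paper's methodology handles separately.

    ```python
    # Sketch of the pairwise-overlap analysis over sets of canonical
    # structure identifiers. The identifiers and contents are invented.
    db = {
        "NCI": {"a", "b", "c", "d", "e"},
        "ACD": {"c", "d", "f"},
        "WDI": {"d", "g"},
    }

    # Compounds unique to each database (in no other analyzed database)
    for name, ids in db.items():
        others = set().union(*(v for k, v in db.items() if k != name))
        print(name, "unique:", sorted(ids - others))

    # Pairwise overlap of identical compounds
    print("NCI & ACD:", sorted(db["NCI"] & db["ACD"]))
    ```

    The "similarity overlap" the paper also reports needs fingerprint comparisons rather than exact identity, so it cannot be reduced to set intersection alone.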

  19. Secure communications using nonlinear silicon photonic keys.

    PubMed

    Grubel, Brian C; Bosworth, Bryan T; Kossey, Michael R; Cooper, A Brinton; Foster, Mark A; Foster, Amy C

    2018-02-19

    We present a secure communication system constructed using pairs of nonlinear photonic physical unclonable functions (PUFs) that harness physical chaos in integrated silicon micro-cavities. Compared to a large, electronically stored one-time pad, our method provisions large amounts of information within the intrinsically complex nanostructure of the micro-cavities. By probing a micro-cavity with a rapid sequence of spectrally-encoded ultrafast optical pulses and measuring the lightwave responses, we experimentally demonstrate the ability to extract 2.4 Gb of key material from a single micro-cavity device. Subsequently, in a secure communication experiment with pairs of devices, we achieve bit error rates below 10^-5 at code rates of up to 0.1. The PUFs' responses are never transmitted over the channel or stored in digital memory, thus enhancing the security of the system. Additionally, the micro-cavity PUFs are extremely small, inexpensive, robust, and fully compatible with telecommunications infrastructure, components, and electronic fabrication. This approach can serve one-time pad or public-key exchange applications where high security is required.
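    The one-time-pad usage described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the function names are assumptions, and `secrets.token_bytes` stands in for key material that, in the real system, would be extracted from the measured lightwave responses of the micro-cavity PUF.

    ```python
    import secrets

    def xor_otp(data: bytes, key_stream: bytes) -> bytes:
        """XOR a message with an equal-length segment of key material.
        Applying the same operation twice recovers the plaintext."""
        if len(key_stream) < len(data):
            raise ValueError("one-time pad requires key >= message length")
        return bytes(b ^ k for b, k in zip(data, key_stream))

    # Stand-in for a segment of the 2.4 Gb of key material extracted
    # from a micro-cavity device (hypothetical; random bytes here).
    key_material = secrets.token_bytes(64)

    message = b"secure channel test"
    ciphertext = xor_otp(message, key_material)
    recovered = xor_otp(ciphertext, key_material)
    assert recovered == message
    ```

    Because both parties hold physically matched PUFs, each can regenerate the same key stream on demand, so the pad itself is never transmitted or stored, which is the security property the abstract emphasizes.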

  20. PrimateLit Database

    Science.gov Websites

    Primate Info Net Related Databases: PrimateLit, a bibliographic database for primatology. The PrimateLit database is no longer being updated. The database was a collaborative project of the Wisconsin Primate Research Center, supported by the National Center for Research Resources (NCRR), National Institutes of Health.

  1. A New NIST Database for the Simulation of Electron Spectra for Surface Analysis (SESSA): Application to Angle-Resolved X-ray Photoelectron Spectroscopy of HfO2, ZrO2, HfSiO4, and ZrSiO4 Films on Silicon

    NASA Astrophysics Data System (ADS)

    Powell, C. J.; Smekal, W.; Werner, W. S. M.

    2005-09-01

    We describe a new NIST database for the Simulation of Electron Spectra for Surface Analysis (SESSA). This database provides data for the many parameters needed in quantitative Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS). In addition, AES and XPS spectra can be simulated for layered samples. The simulated spectra, for layer compositions and thicknesses specified by the user, can be compared with measured spectra. The layer compositions and thicknesses can then be adjusted to find maximum consistency between simulated and measured spectra. In this way, AES and XPS can provide more detailed characterization of multilayer thin-film materials. We report on the use of SESSA for determining the thicknesses of HfO2, ZrO2, HfSiO4, and ZrSiO4 films on Si by angle-resolved XPS. Practical effective attenuation lengths (EALs) have been computed from SESSA as a function of film thickness and photoelectron emission angle (i.e., to simulate the effects of tilting the sample). These EALs have been compared with similar values obtained from the NIST Electron Effective-Attenuation-Length Database (SRD 82). Generally good agreement was found between corresponding EAL values, but there were differences for film thicknesses less than the inelastic mean free path of the photoelectrons in the overlayer film. These differences are due to a simplifying approximation in the algorithm used to compute EALs in SRD 82. SESSA, with realistic cross sections for elastic and inelastic scattering in the film and substrate materials, is believed to provide more accurate EALs than SRD 82 for thin-film thickness measurements, particularly in applications where the film and substrate have different electron-scattering properties.
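    The thickness determination described above rests on the standard overlayer attenuation model used with EALs: the substrate signal is attenuated as I/I0 = exp(-t / (L cos θ)). A minimal sketch (the numbers are illustrative assumptions, not values from SRD 82 or SESSA):

    ```python
    import math

    def overlayer_thickness(i_ratio: float, eal_nm: float,
                            emission_angle_deg: float) -> float:
        """Film thickness t from substrate-signal attenuation:
            I/I0 = exp(-t / (L * cos(theta)))
        where L is the effective attenuation length (EAL) and theta is the
        photoelectron emission angle measured from the surface normal."""
        theta = math.radians(emission_angle_deg)
        return eal_nm * math.cos(theta) * math.log(1.0 / i_ratio)

    # Example: substrate signal attenuated to 40% at normal emission,
    # with an assumed EAL of 2.0 nm (illustrative numbers only).
    t = overlayer_thickness(0.40, 2.0, 0.0)
    ```

    Tilting the sample (increasing the emission angle) shortens the effective escape path projection, which is why angle-resolved XPS measurements at several angles constrain the thickness more tightly than a single measurement.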

  2. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Management of electronic information. 2.1011 Section 2... High-Level Radioactive Waste at a Geologic Repository § 2.1011 Management of electronic information. (a... Language)-compliant (ANSI X3.135-1992/ISO 9075-1992) database management system (DBMS). Alternatively, the...

  3. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Electronically Filed Tariffs § 221.500... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  4. The Russian effort in establishing large atomic and molecular databases

    NASA Astrophysics Data System (ADS)

    Presnyakov, Leonid P.

    1998-07-01

    The database activities in Russia have been developed in connection with UV and soft X-ray spectroscopic studies of extraterrestrial and laboratory (magnetically confined and laser-produced) plasmas. Two forms of database production are used: i) a set of computer programs to calculate radiative and collisional data for the general atom or ion, and ii) development of numeric database systems with the data stored in the computer. The first form is preferable for collisional data. At the Lebedev Physical Institute, an appropriate set of codes has been developed. It includes all electronic processes at collision energies from the threshold up to the relativistic limit. The ion-atom (and ion-ion) collisional data are calculated with recently developed methods. The program for the calculation of level populations and line intensities is used for spectral diagnostics of transparent plasmas. The second form of database production is widely used at the Institute of Physico-Technical Measurements (VNIIFTRI) and the Troitsk Center: the Institute of Spectroscopy and TRINITI. The main results obtained at the centers above are reviewed. Plans for future developments jointly with international collaborations are discussed.

  5. Analysing and Rationalising Molecular and Materials Databases Using Machine-Learning

    NASA Astrophysics Data System (ADS)

    de, Sandip; Ceriotti, Michele

    Computational materials design promises to greatly accelerate the process of discovering new or more performant materials. Several collaborative efforts are contributing to this goal by building databases of structures, containing between thousands and millions of distinct hypothetical compounds, whose properties are computed by high-throughput electronic-structure calculations. The complexity and sheer amount of information has made manual exploration, interpretation and maintenance of these databases a formidable challenge, making it necessary to resort to automatic analysis tools. Here we will demonstrate how, starting from a measure of (dis)similarity between database items built from a combination of local environment descriptors, it is possible to apply hierarchical clustering algorithms, as well as dimensionality reduction methods such as sketchmap, to analyse, classify and interpret trends in molecular and materials databases, as well as to detect inconsistencies and errors. Thanks to the agnostic and flexible nature of the underlying metric, we will show how our framework can be applied transparently to different kinds of systems ranging from organic molecules and oligopeptides to inorganic crystal structures as well as molecular crystals. Funded by National Center for Computational Design and Discovery of Novel Materials (MARVEL) and Swiss National Science Foundation.
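    As a minimal illustration of the hierarchical-clustering step, here is a toy single-linkage agglomerative clustering over a precomputed dissimilarity matrix. This is a generic sketch in pure Python: real applications of the kind described use descriptor-based metrics (e.g. local environment descriptors) and dimensionality-reduction tools such as sketch-map, none of which this reproduces.

    ```python
    def single_linkage(dist, n_clusters):
        """Merge the closest pair of clusters (single linkage) until
        n_clusters remain. dist is a symmetric matrix of dissimilarities."""
        clusters = [[i] for i in range(len(dist))]
        while len(clusters) > n_clusters:
            best = None
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    # single linkage: distance between clusters is the
                    # minimum pairwise distance across them
                    d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                    if best is None or d < best[0]:
                        best = (d, a, b)
            _, a, b = best
            clusters[a] = clusters[a] + clusters[b]
            del clusters[b]
        return clusters

    # Toy 4-item dissimilarity matrix: items 0,1 are similar, as are 2,3.
    D = [[0.0, 0.1, 0.9, 0.8],
         [0.1, 0.0, 0.7, 0.9],
         [0.9, 0.7, 0.0, 0.2],
         [0.8, 0.9, 0.2, 0.0]]

    groups = [sorted(c) for c in single_linkage(D, 2)]
    ```

    The key design point the abstract highlights carries over: the algorithm only consumes a (dis)similarity measure, so the same machinery applies to molecules, oligopeptides, or crystal structures once a metric is defined.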

  6. Electronic medical record integration with a database for adult congenital heart disease: Early experience and progress in automating multicenter data collection.

    PubMed

    Broberg, Craig S; Mitchell, Julie; Rehel, Silven; Grant, Andrew; Gianola, Ann; Beninato, Peter; Winter, Christiane; Verstappen, Amy; Valente, Anne Marie; Weiss, Joseph; Zaidi, Ali; Earing, Michael G; Cook, Stephen; Daniels, Curt; Webb, Gary; Khairy, Paul; Marelli, Ariane; Gurvitz, Michelle Z; Sahn, David J

    2015-10-01

    The adoption of electronic health records (EHR) has created an opportunity for multicenter data collection, yet the feasibility and reliability of this methodology is unknown. The aim of this study was to integrate EHR data into a homogeneous central repository specifically addressing the field of adult congenital heart disease (ACHD). Target data variables were proposed and prioritized by consensus of investigators at five target ACHD programs. Database analysts determined which variables were available within their institutions' EHR and stratified their accessibility, and results were compared between centers. Data for patients seen in a single calendar year were extracted to a uniform database and subsequently consolidated. From 415 proposed target variables, only 28 were available in discrete formats at all centers. For variables of highest priority, 16/28 (57%) were available at all four sites, but only 11% for those of high priority. Integration was neither simple nor straightforward. Coding schemes in use for congenital heart diagnoses varied and would require additional user input for accurate mapping. There was considerable variability in procedure reporting formats and medication schemes, often with center-specific modifications. Despite the challenges, the final acquisition included limited data on 2161 patients, and allowed for population analysis of race/ethnicity, defect complexity, and body morphometrics. Large-scale multicenter automated data acquisition from EHRs is feasible yet challenging. Obstacles stem from variability in data formats, coding schemes, and adoption of non-standard lists within each EHR. The success of large-scale multicenter ACHD research will require institution-specific data integration efforts. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives

    NASA Astrophysics Data System (ADS)

    Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn

    2016-04-01

    Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, which is a working group inside the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention and subsequent funding when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurement should include position, age, description of geological features, and quantification of uncertainties. All must be described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and all the possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. It is also essential to consider the community structure that creates and benefits from a database. We conclude that funding sources should not only address the creation of original data in specific research-question-oriented projects, but also allow part of the funding to be used for IT-related and database-creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.

  8. Knowledge Discovery in Biological Databases for Revealing Candidate Genes Linked to Complex Phenotypes.

    PubMed

    Hassani-Pak, Keywan; Rawlings, Christopher

    2017-06-13

    Genetics and "omics" studies designed to uncover genotype to phenotype relationships often identify large numbers of potential candidate genes, among which the causal genes are hidden. Scientists generally lack the time and technical expertise to review all relevant information available from the literature, from key model species and from a potentially wide range of related biological databases in a variety of data formats with variable quality and coverage. Computational tools are needed for the integration and evaluation of heterogeneous information in order to prioritise candidate genes and components of interaction networks that, if perturbed through potential interventions, have a positive impact on the biological outcome in the whole organism without producing negative side effects. Here we review several bioinformatics tools and databases that play an important role in biological knowledge discovery and candidate gene prioritization. We conclude with several key challenges that need to be addressed in order to facilitate biological knowledge discovery in the future.

  9. Mobile agent application and integration in electronic anamnesis system.

    PubMed

    Liu, Chia-Hui; Chung, Yu-Fang; Chen, Tzer-Shyong; Wang, Sheng-De

    2012-06-01

    Electronic anamnesis is the transformation of ordinary paper trails into digitally formatted health records, which include the patient's general information, health status, and follow-ups on chronic diseases. Its main purpose is to let the records be stored for a longer period of time and shared easily across departments and hospitals. This means hospital management could spend fewer resources on maintaining an ever-growing database and reduce redundancy, so less money would be spent on managing the health records. In the foreseeable future, building a comprehensive and integrated medical information system is a must, because it is critical to hospital resource integration and quality improvement. If mobile agent technology is adopted in an electronic anamnesis system, it would help hospitals carry out medical practices more efficiently and conveniently. Nonetheless, most hospitals today still use paper-based health records to manage medical information. The reason institutions continue using traditional practices to manage the records is that no well-trusted and reliable electronic anamnesis system exists that is accepted by both institutions and patients. The threat of privacy invasion is one of the biggest concerns whenever electronic anamnesis is discussed, because such security threats hold us back from using these systems, and so the quality of medical service is difficult to improve substantially. To address this, we propose a scheme to remove such security threats and make electronic anamnesis more appealing for use. Our approach integrates mobile agent technology with the backbone of electronic anamnesis to construct a hierarchical access control system that retrieves the corresponding information based upon permission classes. The system creates a classification of permissions among the users inside the medical institution. 
Under this framework, permission control center would distribute an
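    A hierarchical access-control scheme of the kind described can be sketched with one-way key derivation: each permission class derives the keys of the classes below it with a hash, so higher classes can read lower-class records but not vice versa. This is a generic illustration under assumed class names and a hypothetical master secret, not the paper's actual mobile-agent protocol.

    ```python
    import hashlib

    def derive_key(parent_key: bytes, child_class: str) -> bytes:
        """One-way derivation of a child class key from its parent's key."""
        return hashlib.sha256(parent_key + child_class.encode()).digest()

    # Hypothetical master secret held by the permission control center.
    root = hashlib.sha256(b"hospital-master-secret").digest()
    physician_key = derive_key(root, "physician")
    nurse_key = derive_key(physician_key, "nurse")

    # A physician can reconstruct the nurse-level key on demand...
    assert derive_key(physician_key, "nurse") == nurse_key
    # ...but because SHA-256 is one-way, holding nurse_key does not
    # reveal physician_key, enforcing the hierarchy.
    ```

    In a deployment of this kind, records encrypted under a class key are readable by that class and every class above it, which matches the "retrieve corresponding information based upon permission classes" behaviour described in the abstract.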

  10. Combining electronic structure and many-body theory with large databases: A method for predicting the nature of 4 f states in Ce compounds

    NASA Astrophysics Data System (ADS)

    Herper, H. C.; Ahmed, T.; Wills, J. M.; Di Marco, I.; Björkman, T.; Iuşan, D.; Balatsky, A. V.; Eriksson, O.

    2017-08-01

    Recent progress in materials informatics has opened up the possibility of a new approach to accessing properties of materials, in which one assays the aggregate properties of a large set of materials within the same class in addition to a detailed investigation of each compound in that class. Here we present a large-scale investigation of electronic properties and correlated magnetism in Ce-based compounds, accompanied by a systematic study of the electronic structure and 4f-hybridization function of a large body of Ce compounds, with the goal of elucidating the nature of the 4f states and their interrelation with the measured Kondo energy in these compounds. The hybridization function has been analyzed for more than 350 data sets (part of the IMS database) of cubic Ce compounds using electronic structure theory that relies on a full-potential approach. We demonstrate that the strength of the hybridization function, evaluated in this way, allows us to draw precise conclusions about the degree of localization of the 4f states in these compounds. The theoretical results are entirely consistent with all experimental information relevant to the degree of 4f localization for all investigated materials. Furthermore, a more detailed analysis of the electronic structure and the hybridization function allows us to make precise statements about Kondo correlations in these systems. The calculated hybridization functions, together with the corresponding densities of states, reproduce the expected exponential behavior of the observed Kondo temperatures and reveal a consistent trend in real materials. This trend allows us to predict which systems may be correctly identified as Kondo systems. A strong anticorrelation between the size of the hybridization function and the volume of the systems has been observed. 
The information entropy for this set of systems is
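    The exponential behavior of the Kondo temperature mentioned above is the standard textbook scaling (shown here schematically; this form is not quoted from the paper):

    ```latex
    % Standard Kondo-temperature scaling: D is a bandwidth-scale cutoff,
    % \rho(E_F) the density of states at the Fermi level, and J the
    % effective exchange coupling.
    T_K \sim D \exp\!\left(-\frac{1}{2\,\rho(E_F)\,J}\right)
    ```

    Since J grows with the hybridization strength, a larger hybridization function implies an exponentially larger T_K, which is consistent with the trend the authors report.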

  11. Biomedical databases: protecting privacy and promoting research.

    PubMed

    Wylie, Jean E; Mineau, Geraldine P

    2003-03-01

    When combined with medical information, large electronic databases of information that identify individuals provide superlative resources for genetic, epidemiology and other biomedical research. Such research resources increasingly need to balance the protection of privacy and confidentiality with the promotion of research. Models that do not allow the use of such individual-identifying information constrain research; models that involve commercial interests raise concerns about what type of access is acceptable. Researchers, individuals representing the public interest and those developing regulatory guidelines must be involved in an ongoing dialogue to identify practical models.

  12. International contributions to IAEA-NEA heat transfer databases for supercritical fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, L. K. H.; Yamada, K.

    2012-07-01

    An IAEA Coordinated Research Project on 'Heat Transfer Behaviour and Thermohydraulics Code Testing for SCWRs' is being conducted to facilitate collaboration and interaction among participants from 15 organizations. While the project covers several key technology areas relevant to the development of SCWR concepts, it focuses mainly on the heat transfer aspect, which has been identified as the most challenging. Through this collaborative effort, large heat-transfer databases have been compiled for supercritical water and surrogate fluids in tubes, annuli, and bundle subassemblies of various orientations over a wide range of flow conditions. Assessments of several supercritical heat-transfer correlations were performed using the compiled databases. The assessment results are presented. (authors)

  13. Using a Semi-Realistic Database to Support a Database Course

    ERIC Educational Resources Information Center

    Yue, Kwok-Bun

    2013-01-01

    A common problem for university relational database courses is to construct effective databases for instructions and assignments. Highly simplified "toy" databases are easily available for teaching, learning, and practicing. However, they do not reflect the complexity and practical considerations that students encounter in real-world…

  14. An optimal user-interface for EPIMS database conversions and SSQ 25002 EEE parts screening

    NASA Technical Reports Server (NTRS)

    Watson, John C.

    1996-01-01

    The Electrical, Electronic, and Electromechanical (EEE) Parts Information Management System (EPIMS) database was selected by the International Space Station Parts Control Board for providing parts information to NASA managers and contractors. Parts data is transferred to the EPIMS database by converting parts list data to the EPIMS Data Exchange File Format. In general, parts list information received from contractors and suppliers does not convert directly into the EPIMS Data Exchange File Format. Often parts lists use different variable and record field assignments, and many of the EPIMS variables are not defined in the parts lists received. The objective of this work was to develop an automated system for translating parts lists into the EPIMS Data Exchange File Format for upload into the EPIMS database. Once EEE parts information has been transferred to the EPIMS database, it is necessary to screen parts data in accordance with the provisions of the SSQ 25002 Supplemental List of Qualified Electrical, Electronic, and Electromechanical Parts, Manufacturers, and Laboratories (QEPM&L). The SSQ 25002 standards are used to identify parts which satisfy the requirements for spacecraft applications. An additional objective of this work was to develop an automated system which would screen EEE parts information against the SSQ 25002 to inform managers of the qualification status of parts used in spacecraft applications. The EPIMS Database Conversion and SSQ 25002 User Interfaces are designed to interface through the World-Wide-Web (WWW)/Internet to provide accessibility to NASA managers and contractors.
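    The translation step described above amounts to mapping each vendor's field names and conventions onto a common exchange-record layout. The sketch below illustrates that idea with hypothetical field names; it does not reflect the actual EPIMS Data Exchange File Format specification.

    ```python
    # Hypothetical mapping from one vendor's parts-list columns to
    # exchange-record fields (illustrative names, not the EPIMS format).
    VENDOR_TO_EXCHANGE = {
        "PartNo": "part_number",
        "Mfr": "manufacturer",
        "Desc": "description",
    }

    def to_exchange_record(vendor_row: dict) -> dict:
        """Map one vendor row onto the exchange layout, normalizing
        whitespace and case; fields absent from the row stay empty."""
        record = {field: "" for field in VENDOR_TO_EXCHANGE.values()}
        for src, dst in VENDOR_TO_EXCHANGE.items():
            if src in vendor_row:
                record[dst] = vendor_row[src].strip().upper()
        return record

    row = {"PartNo": " m38510/30001bcb ", "Mfr": "acme", "Desc": "quad nand"}
    rec = to_exchange_record(row)
    ```

    In practice, each supplier would need its own mapping table, and any exchange fields left empty after mapping would be flagged for manual completion before upload, which mirrors the problem the abstract identifies.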

  15. Application of materials database (MAT.DB.) to materials education

    NASA Technical Reports Server (NTRS)

    Liu, Ping; Waskom, Tommy L.

    1994-01-01

    Finding the right material for the job is an important aspect of engineering. Sometimes the choice is as fundamental as selecting between steel and aluminum. Other times, the choice may be between different compositions in an alloy. Discovering and compiling materials data is a demanding task, but it leads to accurate models for analysis and successful materials application. MAT.DB. is a database management system designed for maintaining information on the properties and processing of engineered materials, including metals, plastics, composites, and ceramics. It was developed by the Center for Materials Data of ASM (American Society for Metals) International. The ASM Center for Materials Data collects and reviews material property data for publication in books, reports, and electronic databases. MAT.DB. was developed to aid data management and materials applications.

  16. Biocuration at the Saccharomyces Genome Database

    PubMed Central

    Skrzypek, Marek S.; Nash, Robert S.

    2015-01-01

    Saccharomyces Genome Database is an online resource dedicated to managing information about the biology and genetics of the model organism, yeast (Saccharomyces cerevisiae). This information is derived primarily from scientific publications through a process of human curation that involves manual extraction of data and their organization into a comprehensive system of knowledge. This system provides a foundation for further analysis of experimental data coming from research on yeast as well as other organisms. In this review we will demonstrate how biocuration and biocurators add a key component, the biological context, to our understanding of how genes, proteins, genomes and cells function and interact. We will explain the role biocurators play in sifting through the wealth of biological data to incorporate and connect key information. We will also discuss the many ways we assist researchers with their various research needs. We hope to convince the reader that manual curation is vital in converting the flood of data into organized and interconnected knowledge, and that biocurators play an essential role in the integration of scientific information into a coherent model of the cell. PMID:25997651

  17. DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh

    2014-10-01

    The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for the automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different nontribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. This database also represents some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful for biometric recognition research. It also provides a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose performance scores may serve other researchers as a control.

  18. ALARA database value in future outage work planning and dose management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.W.; Green, W.H.

    1995-03-01

    An ALARA database encompassing job-specific duration and man-rem plant-specific information over three refueling outages represents an invaluable tool for the outage work planner and ALARA engineer. This paper describes dose-management trends emerging from analysis of three refueling outages at Clinton Power Station. The analysis shows that hard data from a relational database dose-tracking system is a valuable tool for planning future outage work. The system's ability to identify key problem areas during a refueling outage is improving as more comparative outage data becomes available. Trends over the three-outage period are identified in this paper in the categories of number and type of radiation work permits implemented, duration of jobs, projected vs. actual dose rates in work areas, and accuracy of outage person-rem projections. The value of the database in projecting 1- and 5-year station person-rem estimates is discussed.

  19. NATIVE HEALTH DATABASES: NATIVE HEALTH RESEARCH DATABASE (NHRD)

    EPA Science Inventory

    The Native Health Databases contain bibliographic information and abstracts of health-related articles, reports, surveys, and other resource documents pertaining to the health and health care of American Indians, Alaska Natives, and Canadian First Nations. The databases provide i...

  20. NOAA Propagation Database Value in Tsunami Forecast Guidance

    NASA Astrophysics Data System (ADS)

    Eble, M. C.; Wright, L. M.

    2016-02-01

    The National Oceanic and Atmospheric Administration (NOAA) Center for Tsunami Research (NCTR) has developed a tsunami forecasting capability that combines a graphical user interface with data ingestion and numerical models to produce estimates of tsunami wave arrival times, amplitudes, current or water flow rates, and flooding at specific coastal communities. The capability integrates several key components: deep-ocean observations of tsunamis in real-time, a basin-wide pre-computed propagation database of water level and flow velocities based on potential pre-defined seismic unit sources, an inversion or fitting algorithm to refine the tsunami source based on the observations during an event, and tsunami forecast models. As tsunami waves propagate across the ocean, observations from the deep ocean are automatically ingested into the application in real-time to better define the source of the tsunami itself. Since passage of tsunami waves over a deep-ocean reporting site is not immediate, we explore the value of the NOAA propagation database in providing placeholder forecasts in advance of deep-ocean observations. The propagation database consists of water elevations and flow velocities pre-computed for 50 x 100 km unit sources in a continuous series along all known ocean subduction zones. The 2011 Japan Tohoku tsunami is presented as the case study.
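    The inversion step described above can be illustrated as a least-squares fit: find weights on pre-computed unit-source responses so that their linear combination best matches the deep-ocean observations. The sketch below solves a tiny two-source problem in pure Python; the operational algorithm is more sophisticated, and the data here are made up.

    ```python
    def lstsq_2col(A, y):
        """Solve min ||A w - y|| for a 2-column matrix A (list of row
        tuples) via the normal equations A^T A w = A^T y."""
        a11 = sum(r[0] * r[0] for r in A)
        a12 = sum(r[0] * r[1] for r in A)
        a22 = sum(r[1] * r[1] for r in A)
        b1 = sum(r[0] * v for r, v in zip(A, y))
        b2 = sum(r[1] * v for r, v in zip(A, y))
        det = a11 * a22 - a12 * a12
        return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

    # Two unit-source responses sampled at three times at one deep-ocean
    # site, and an "observed" signal built from weights 2.0 and 0.5
    # (all numbers illustrative).
    unit1 = [0.10, 0.30, 0.20]
    unit2 = [0.00, 0.10, 0.40]
    obs = [2.0 * u1 + 0.5 * u2 for u1, u2 in zip(unit1, unit2)]

    A = list(zip(unit1, unit2))
    w1, w2 = lstsq_2col(A, obs)
    ```

    Because the propagation database is pre-computed, only this small fitting problem has to be solved in real time as observations arrive, which is what makes the placeholder-forecast idea tractable during an event.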

  1. Phylogenomics databases for facilitating functional genomics in rice.

    PubMed

    Jung, Ki-Hong; Cao, Peijian; Sharma, Rita; Jain, Rashmi; Ronald, Pamela C

    2015-12-01

    The completion of the whole genome sequence of rice (Oryza sativa) has significantly accelerated functional genomics studies. Prior to the release of the sequence, only a few genes were assigned a function each year. Since sequencing was completed in 2005, the rate has increased exponentially. As of 2014, 1,021 genes have been described and added to the collection at the Overview of functionally characterized Genes in Rice online database (OGRO). Despite this progress, that number is still very low compared with the total number of genes estimated in the rice genome. One limitation to progress is the presence of functional redundancy among members of the same rice gene family; gene families cover 51.6% of all non-transposable-element-encoding genes. There remains a significant portion of rice genes that are not functionally redundant, as reflected in the recovery of loss-of-function mutants. To more accurately analyze functional redundancy in the rice genome, we have developed phylogenomics databases for six large gene families in rice: glycosyltransferases, glycoside hydrolases, kinases, transcription factors, transporters, and cytochrome P450 monooxygenases. In this review, we introduce key features and applications of these databases. We expect that they will serve as a very useful guide in the post-genomics era of research.

  2. Integrated Database And Knowledge Base For Genomic Prospective Cohort Study In Tohoku Medical Megabank Toward Personalized Prevention And Medicine.

    PubMed

    Ogishima, Soichi; Takai, Takako; Shimokawa, Kazuro; Nagaie, Satoshi; Tanaka, Hiroshi; Nakaya, Jun

    2015-01-01

    The Tohoku Medical Megabank project is a national project for the revitalization of the area of the Tohoku region devastated by the Great East Japan Earthquake, and it conducts a large-scale prospective genome-cohort study. Alongside this study, we have developed an integrated database and knowledge base, which will be a key database for realizing personalized prevention and medicine.

  3. A theoretical-electron-density databank using a model of real and virtual spherical atoms.

    PubMed

    Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian

    2017-08-01

    A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.

  4. Electronic Voting Protocol Using Identity-Based Cryptography.

    PubMed

    Gallegos-Garcia, Gina; Tapia-Recillas, Horacio

    2015-01-01

    Electronic voting protocols proposed to date meet their properties based on Public Key Cryptography (PKC), which offers high flexibility through key agreement protocols and authentication mechanisms. However, when PKC is used, it is necessary to implement a Certification Authority (CA) to provide certificates which bind public keys to entities and enable verification of such public key bindings. Consequently, the components of the protocol increase notably. An alternative is to use Identity-Based Encryption (IBE). With this kind of cryptography, it is possible to have all the benefits offered by PKC without the need for certificates or the core components of a Public Key Infrastructure (PKI). Considering the aforementioned, in this paper we propose an electronic voting protocol, which meets the privacy and robustness properties by using bilinear maps.

  5. Electronic Voting Protocol Using Identity-Based Cryptography

    PubMed Central

    Gallegos-Garcia, Gina; Tapia-Recillas, Horacio

    2015-01-01

    Electronic voting protocols proposed to date meet their properties based on Public Key Cryptography (PKC), which offers high flexibility through key agreement protocols and authentication mechanisms. However, when PKC is used, it is necessary to implement a Certification Authority (CA) to provide certificates which bind public keys to entities and enable verification of such public key bindings. Consequently, the components of the protocol increase notably. An alternative is to use Identity-Based Encryption (IBE). With this kind of cryptography, it is possible to have all the benefits offered by PKC without the need for certificates or the core components of a Public Key Infrastructure (PKI). Considering the aforementioned, in this paper we propose an electronic voting protocol, which meets the privacy and robustness properties by using bilinear maps. PMID:26090515

  6. Evaluation and validity of a LORETA normative EEG database.

    PubMed

    Thatcher, R W; North, D; Biver, C

    2005-04-01

    To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2 second intervals sampled at 128 Hz, 1 to 2 minutes eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated Gaussian distribution in the range of 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) Adequate approximation to a Gaussian distribution can be achieved using LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-Score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
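    The Z-score construction described above (log10 transform, normative mean and SD, then tail counts at 2 and 3 standard deviations) can be sketched as follows. The function names are illustrative and the flat lists stand in for the per-pixel, per-frequency arrays of the actual database.

```python
import math

def log10_transform(values):
    """Log10-transform spectral power values (all values must be > 0)."""
    return [math.log10(v) for v in values]

def zscores(values, norm_mean, norm_sd):
    """Z-scores of a subject's transformed values against normative statistics."""
    return [(v - norm_mean) / norm_sd for v in values]

def tail_fraction(zs, threshold):
    """Fraction of |Z| values exceeding a deviation threshold (e.g. 2 or 3 SD)."""
    return sum(1 for z in zs if abs(z) > threshold) / len(zs)
```

    In the study, the normative mean and SD would be computed per gray matter pixel and frequency from the 106 subjects, and the tail fractions compared against the Gaussian expectation to assess sensitivity.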

  7. Realization of Real-Time Clinical Data Integration Using Advanced Database Technology

    PubMed Central

    Yoo, Sooyoung; Kim, Boyoung; Park, Heekyong; Choi, Jinwook; Chun, Jonghoon

    2003-01-01

    As information & communication technologies have advanced, interest in mobile health care systems has grown. In order to obtain information seamlessly from distributed and fragmented clinical data from heterogeneous institutions, we need solutions that integrate data. In this article, we introduce a method for information integration based on real-time message communication using trigger and advanced database technologies. Messages were devised to conform to HL7, a standard for electronic data exchange in healthcare environments. The HL7 based system provides us with an integrated environment in which we are able to manage the complexities of medical data. We developed this message communication interface to generate and parse HL7 messages automatically from the database point of view. We discuss how easily real time data exchange is performed in the clinical information system, given the requirement for minimum loading of the database system. PMID:14728271
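    At its core, HL7 v2 message exchange of the kind described above is pipe- and carriage-return-delimited text. A minimal hand-rolled sketch follows; the segment contents are hypothetical examples, not the paper's actual interface.

```python
def build_hl7_message(segments):
    """Join HL7 v2 segments (lists of fields) into one message.

    Fields are '|'-delimited; segments are separated by carriage returns.
    """
    return "\r".join("|".join(fields) for fields in segments)

def parse_hl7_message(message):
    """Split a pipe-delimited HL7 v2 message back into segments of fields."""
    return [seg.split("|") for seg in message.split("\r") if seg]

# Hypothetical ADT (admit/discharge/transfer) trigger message.
msg = build_hl7_message([
    ["MSH", "^~\\&", "LAB", "HOSP_A", "EMR", "HOSP_B",
     "202301011200", "", "ADT^A01"],
    ["PID", "1", "", "12345", "", "DOE^JOHN"],
])
```

    A production interface would also handle component (`^`) and repetition (`~`) separators and escape sequences, which this sketch omits.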

  8. Investigation of Nematode Diversity using Scanning Electron Microscopy and Fluorescent Microscopy

    NASA Astrophysics Data System (ADS)

    Seacor, Taylor; Howell, Carina

    2013-03-01

    Nematode worms account for the vast majority of the animals in the biosphere. They are colossally important to global public health as parasites, and to agriculture both as pests and as beneficial inhabitants of healthy soil. Amphid neurons are the anterior chemosensory neurons in nematodes, mediating critical behaviors including chemotaxis and mating. We are examining the cellular morphology and external anatomy of amphid neurons, using fluorescence microscopy and scanning electron microscopy, respectively, of a wide range of soil nematodes isolated in the wild. We use both classical systematics (e.g. diagnostic keys) and molecular markers (e.g. ribosomal RNA) to classify these wild isolates. Our ultimate aim is to build a detailed anatomical database in order to dissect genetic pathways of neuronal development and function across phylogeny and ecology. Research supported by NSF grants 092304, 0806660, 1058829 and Lock Haven University FPDC grants

  9. Characterization of thin films on the nanometer scale by Auger electron spectroscopy and X-ray photoelectron spectroscopy

    NASA Astrophysics Data System (ADS)

    Powell, C. J.; Jablonski, A.; Werner, W. S. M.; Smekal, W.

    2005-01-01

    We describe two NIST databases that can be used to characterize thin films from Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS) measurements. First, the NIST Electron Effective-Attenuation-Length Database provides values of effective attenuation lengths (EALs) for user-specified materials and measurement conditions. The EALs differ from the corresponding inelastic mean free paths on account of elastic scattering of the signal electrons. The database supplies "practical" EALs that can be used to determine overlayer-film thicknesses. Practical EALs are plotted as a function of film thickness, and an average value is shown for a user-selected thickness. The average practical EAL can be utilized as the "lambda parameter" to obtain film thicknesses from simple equations in which the effects of elastic scattering are neglected. A single average practical EAL can generally be employed for a useful range of film thicknesses and for electron emission angles of up to about 60°. For larger emission angles, the practical EAL should be found for the particular conditions. Second, we describe a new NIST database for the Simulation of Electron Spectra for Surface Analysis (SESSA) to be released in 2004. This database provides data for many parameters needed in quantitative AES and XPS (e.g., excitation cross-sections, electron-scattering cross-sections, lineshapes, fluorescence yields, and backscattering factors). Relevant data for a user-specified experiment are automatically retrieved by a small expert system. In addition, Auger electron and photoelectron spectra can be simulated for layered samples. The simulated spectra, for layer compositions and thicknesses specified by the user, can be compared with measured spectra. The layer compositions and thicknesses can then be adjusted to find maximum consistency between simulated and measured spectra, and thus, provide more detailed characterizations of multilayer thin-film materials. SESSA can also
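    The "lambda parameter" usage described above can be illustrated with the standard overlayer attenuation relation I_s/I_s0 = exp(-t / (L cos θ)), solved for the film thickness t. This is a sketch: the symbols and the function name are ours, not the database's, and real analyses must choose an appropriate average practical EAL for the material and geometry.

```python
import math

def overlayer_thickness(intensity_ratio, eal, emission_angle_deg):
    """Overlayer thickness from substrate-signal attenuation.

    intensity_ratio:    I_s / I_s0, attenuated over unattenuated substrate signal
    eal:                average practical EAL "lambda parameter" (e.g. in nm)
    emission_angle_deg: electron emission angle from the surface normal

    Inverts I_s/I_s0 = exp(-t / (L cos theta))  =>  t = -L cos(theta) ln(ratio)
    """
    theta = math.radians(emission_angle_deg)
    return -eal * math.cos(theta) * math.log(intensity_ratio)
```

    As the abstract notes, a single average practical EAL is typically adequate only for emission angles up to about 60°; beyond that the EAL should be evaluated for the specific conditions.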

  10. Pattern database applications from design to manufacturing

    NASA Astrophysics Data System (ADS)

    Zhuang, Linda; Zhu, Annie; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh

    2017-03-01

    Pattern-based approaches are becoming more common and popular as the industry moves to advanced technology nodes. At the beginning of a new technology node, a library of process weak-point patterns for physical and electrical verification starts to build up and is used to prevent known hotspots from re-occurring on new designs. The pattern set is then expanded to create test keys for process development, in order to verify manufacturing capability and pre-check new tape-out designs for any potential yield detractors. As the database grows, the adoption of pattern-based approaches has expanded from design flows to technology development and on to mass-production purposes. This paper will present the complete downstream working flows of a design pattern database (PDB). This pattern-based data analysis flow covers different applications across different functional teams, from generating enhancement kits to improving design manufacturability, populating new testing design data based on previous learning, generating analysis data to improve mass-production efficiency, and manufacturing equipment in-line control to check machine status consistency across different fab sites.

  11. The 2015 Nucleic Acids Research Database Issue and molecular biology database collection.

    PubMed

    Galperin, Michael Y; Rigden, Daniel J; Fernández-Suárez, Xosé M

    2015-01-01

    The 2015 Nucleic Acids Research Database Issue contains 172 papers that include descriptions of 56 new molecular biology databases and updates on 115 databases whose descriptions have previously been published in NAR or other journals. Following the classification introduced last year to simplify navigation of the entire issue, these articles are divided into eight subject categories. This year's highlights include RNAcentral, an international community portal to various databases on noncoding RNA; ValidatorDB, a validation database for protein structures and their ligands; SASBDB, a primary repository for small-angle scattering data of various macromolecular complexes; MoonProt, a database of 'moonlighting' proteins; and two new databases of protein-protein and other macromolecular complexes, ComPPI and the Complex Portal. This issue also includes an unusually high number of cancer-related databases and other databases dedicated to the genomic basis of disease and potential drugs and drug targets. The size of the NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/a/, remained approximately the same, following the addition of 74 new resources and removal of 77 obsolete web sites. The entire Database Issue is freely available online on the Nucleic Acids Research web site (http://nar.oxfordjournals.org/). Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  12. Advancements in web-database applications for rabies surveillance.

    PubMed

    Rees, Erin E; Gendron, Bruno; Lelièvre, Frédérick; Coté, Nathalie; Bélanger, Denise

    2011-08-02

    Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB to rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from raccoon rabies.
Furthermore, health agencies have real
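    The daily multi-agency integration described above can be sketched as an upsert into a shared table keyed by agency and specimen. The schema and field names here are hypothetical illustrations, not RageDB's actual design.

```python
import sqlite3

# Illustrative shared store: each agency submits specimen records daily,
# and an upsert keyed on (agency, specimen_id) merges them into one table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE specimens (
    agency      TEXT,
    specimen_id TEXT,
    species     TEXT,
    result      TEXT,
    PRIMARY KEY (agency, specimen_id))""")

def integrate_daily(records):
    """Merge one day's agency records; newer diagnostic results overwrite."""
    conn.executemany(
        """INSERT INTO specimens VALUES (?, ?, ?, ?)
           ON CONFLICT(agency, specimen_id) DO UPDATE SET
             species = excluded.species,
             result  = excluded.result""",
        records)
    conn.commit()
```

    For example, a "pending" laboratory result submitted one day is replaced by the confirmed result when the same specimen appears in the next day's feed.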

  13. Advancements in web-database applications for rabies surveillance

    PubMed Central

    2011-01-01

    Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB to rabies surveillance databases include 1) automatic integration of multi-agency data and diagnostic results on a daily basis; 2) a web-based data editing interface that enables authorized users to add, edit and extract data; and 3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from raccoon rabies

  14. Energy Consumption Database

    Science.gov Websites

    The California Energy Commission has created this on-line database for informal reporting. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX) format.

  15. Ophthalmology and vision science research: part 5: surfing or sieving--using literature databases wisely.

    PubMed

    Sherwin, Trevor; Gilhotra, Amardeep K

    2006-02-01

    Literature databases are an ever-expanding resource available to the field of medical sciences. Understanding how to use such databases efficiently is critical for those involved in research. However, for the uninitiated, getting started is a major hurdle to overcome and for the occasional user, the finer points of database searching remain an unacquired skill. In the fifth and final article in this series aimed at those embarking on ophthalmology and vision science research, we look at how the beginning researcher can start to use literature databases and, by using a stepwise approach, how they can optimize their use. This instructional paper gives a hypothetical example of a researcher writing a review article and how he or she acquires the necessary scientific literature for the article. A prototype search of the Medline database is used to illustrate how even a novice might swiftly acquire the skills required for a medium-level search. It provides examples and key tips that can increase the proficiency of the occasional user. Pitfalls of database searching are discussed, as are the limitations of which the user should be aware.

  16. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool to aid not only in studying the mitochondrion but also the associated diseases.

  17. Data for free--can an electronic medical record provide outcome data for incontinence/prolapse repair procedures?

    PubMed

    Steidl, Matthew; Zimmern, Philippe

    2013-01-01

    We determined whether a custom computer program can improve the extraction and accuracy of key outcome measures from progress notes in an electronic medical record compared to a traditional data recording system for incontinence and prolapse repair procedures. Following institutional review board approval, progress notes were exported from the Epic electronic medical record system for outcome measure extraction by a custom computer program. The extracted data (D1) were compared against a manually maintained outcome measures database (D2). This work took place in 2 phases. During the first phase, volatile data such as questionnaires and standardized physical examination findings using the POP-Q (pelvic organ prolapse quantification) system were extracted from existing progress notes. The second phase used a progress note template incorporating key outcome measures to evaluate improvement in data accuracy and extraction rates. Phase 1 compared 6,625 individual outcome measures from 316 patients in D2 to 3,534 outcome measures extracted from progress notes in D1, resulting in an extraction rate of 53.3%. A subset of 3,763 outcome measures from D1 was created by excluding data that did not exist in the extraction, yielding an accuracy rate of 93.9%. With the use of the template in phase 2, the extraction rate improved to 91.9% (273 of 297) and the accuracy rate improved to 100% (273 of 273). In the field of incontinence and prolapse, the disciplined use of an electronic medical record template containing a preestablished set of key outcome measures can provide the ideal interface between required documentation and clinical research. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
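    The extraction and accuracy rates quoted above can be reproduced arithmetically; the helper below simply recomputes the percentages reported in the abstract (the phase 1 accuracy figure is consistent with 3,534 of the 3,763-measure subset matching).

```python
def rate(numerator, denominator):
    """Percentage rounded to one decimal place, as reported in the abstract."""
    return round(100.0 * numerator / denominator, 1)

# Phase 1: free-text progress notes
extraction_phase1 = rate(3534, 6625)   # measures extracted vs. total in D2
accuracy_phase1 = rate(3534, 3763)     # consistent with the reported 93.9%

# Phase 2: templated progress notes
extraction_phase2 = rate(273, 297)
accuracy_phase2 = rate(273, 273)
```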

  18. The opportunities and obstacles in developing a vascular birthmark database for clinical and research use.

    PubMed

    Sharma, Vishal K; Fraulin, Frankie Og; Harrop, A Robertson; McPhalen, Donald F

    2011-01-01

    Databases are useful tools in clinical settings. The authors review the benefits and challenges associated with the development and implementation of an efficient electronic database for the multidisciplinary Vascular Birthmark Clinic at the Alberta Children's Hospital, Calgary, Alberta. The content and structure of the database were designed using the technical expertise of a data analyst from the Calgary Health Region. Relevant clinical and demographic data fields were included with the goal of documenting ongoing care of individual patients, and facilitating future epidemiological studies of this patient population. After completion of this database, 10 challenges encountered during development were retrospectively identified. Practical solutions for these challenges are presented. The challenges identified during the database development process included: identification of relevant data fields; balancing simplicity and user-friendliness with complexity and comprehensive data storage; database expertise versus clinical expertise; software platform selection; linkage of data from the previous spreadsheet to a new data management system; ethics approval for the development of the database and its utilization for research studies; ensuring privacy and limited access to the database; integration of digital photographs into the database; adoption of the database by support staff in the clinic; and maintaining up-to-date entries in the database. There are several challenges involved in the development of a useful and efficient clinical database. Awareness of these potential obstacles, in advance, may simplify the development of clinical databases by others in various surgical settings.

  19. A Support Database System for Integrated System Health Management (ISHM)

    NASA Technical Reports Server (NTRS)

    Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John

    2007-01-01

    The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between

  20. 3D visualization of molecular structures in the MOGADOC database

    NASA Astrophysics Data System (ADS)

    Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen

    2010-08-01

    The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool to retrieve information about compounds which have been studied in the gas-phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (special editor and Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.
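    The internal-to-Cartesian conversion mentioned above is commonly done with the NeRF-style construction: each new atom is placed from a bond length, a bond angle, and a dihedral angle relative to three previously placed atoms. A minimal sketch follows (an assumed generic algorithm, not the MOGADOC editor's actual code):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def unit(v):
    n = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0]/n, v[1]/n, v[2]/n)

def place_atom(a, b, c, bond, angle_deg, dihedral_deg):
    """Cartesian position of a new atom given three reference atoms a, b, c,
    a bond length to c, the bond angle b-c-new, and the dihedral a-b-c-new."""
    theta = math.radians(angle_deg)
    phi = math.radians(dihedral_deg)
    # displacement in the local frame defined by a, b, c
    d = (-bond * math.cos(theta),
         bond * math.sin(theta) * math.cos(phi),
         bond * math.sin(theta) * math.sin(phi))
    bc = unit(sub(c, b))                 # frame axis along b -> c
    n = unit(cross(sub(b, a), bc))       # normal to the a-b-c plane
    m = cross(n, bc)                     # completes the orthonormal frame
    return (c[0] + d[0]*bc[0] + d[1]*m[0] + d[2]*n[0],
            c[1] + d[0]*bc[1] + d[1]*m[1] + d[2]*n[1],
            c[2] + d[0]*bc[2] + d[1]*m[2] + d[2]*n[2])
```

    Applying this atom by atom converts a molecule specified in internal coordinates (a Z-matrix) into the Cartesian coordinates needed for 3D display.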

  1. Requirements, Verification, and Compliance (RVC) Database Tool

    NASA Technical Reports Server (NTRS)

    Rainwater, Neil E., II; McDuffee, Patrick B.; Thomas, L. Dale

    2001-01-01

    This paper describes the development, design, and implementation of the Requirements, Verification, and Compliance (RVC) database used on the International Space Welding Experiment (ISWE) project managed at Marshall Space Flight Center. The RVC is a systems engineer's tool for automating and managing the following information: requirements; requirements traceability; verification requirements; verification planning; verification success criteria; and compliance status. This information normally contained within documents (e.g. specifications, plans) is contained in an electronic database that allows the project team members to access, query, and status the requirements, verification, and compliance information from their individual desktop computers. Using commercial-off-the-shelf (COTS) database software that contains networking capabilities, the RVC was developed not only with cost savings in mind but primarily for the purpose of providing a more efficient and effective automated method of maintaining and distributing the systems engineering information. In addition, the RVC approach provides the systems engineer the capability to develop and tailor various reports containing the requirements, verification, and compliance information that meets the needs of the project team members. The automated approach of the RVC for capturing and distributing the information improves the productivity of the systems engineer by allowing that person to concentrate more on the job of developing good requirements and verification programs and not on the effort of being a "document developer".

  2. A New NIST Database for the Simulation of Electron Spectra for Surface Analysis (SESSA): Application to Angle-Resolved X-ray Photoelectron Spectroscopy of HfO2, ZrO2, HfSiO4, and ZrSiO4 Films on Silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powell, C.J.; Smekal, W.; Werner, W.S.M.

    2005-09-09

    We describe a new NIST database for the Simulation of Electron Spectra for Surface Analysis (SESSA). This database provides data for the many parameters needed in quantitative Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS). In addition, AES and XPS spectra can be simulated for layered samples. The simulated spectra, for layer compositions and thicknesses specified by the user, can be compared with measured spectra. The layer compositions and thicknesses can then be adjusted to find maximum consistency between simulated and measured spectra. In this way, AES and XPS can provide more detailed characterization of multilayer thin-film materials. We report on the use of SESSA for determining the thicknesses of HfO2, ZrO2, HfSiO4, and ZrSiO4 films on Si by angle-resolved XPS. Practical effective attenuation lengths (EALs) have been computed from SESSA as a function of film thickness and photoelectron emission angle (i.e., to simulate the effects of tilting the sample). These EALs have been compared with similar values obtained from the NIST Electron Effective-Attenuation-Length Database (SRD 82). Generally good agreement was found between corresponding EAL values, but there were differences for film thicknesses less than the inelastic mean free path of the photoelectrons in the overlayer film. These differences are due to a simplifying approximation in the algorithm used to compute EALs in SRD 82. SESSA, with realistic cross sections for elastic and inelastic scattering in the film and substrate materials, is believed to provide more accurate EALs than SRD 82 for thin-film thickness measurements, particularly in applications where the film and substrate have different electron-scattering properties.

  3. A systematic review of administrative and clinical databases of infants admitted to neonatal units.

    PubMed

    Statnikov, Yevgeniy; Ibrahim, Buthaina; Modi, Neena

    2017-05-01

    High quality information, increasingly captured in clinical databases, is a useful resource for evaluating and improving newborn care. We conducted a systematic review to identify neonatal databases and define their characteristics. We followed a preregistered protocol using MeSH terms to search MEDLINE, EMBASE, CINAHL, Web of Science and OVID Maternity and Infant Care Databases for articles identifying patient-level databases covering more than one neonatal unit. Full-text articles were reviewed and information extracted on geographical coverage, criteria for inclusion, data source, and maternal and infant characteristics. We identified 82 databases from 2037 publications. Of the country-specific databases, 39 were regional and 39 national. Sixty databases restricted entries to neonatal unit admissions by birth characteristic or insurance cover; 22 had no restrictions. Data were captured specifically for 53 databases; 21 drew on administrative sources and 8 on clinical sources. Two clinical databases hold the largest range of data on patient characteristics, the USA's Pediatrix BabySteps Clinical Data Warehouse and the UK's National Neonatal Research Database. A number of neonatal databases exist that have the potential to contribute to evaluating neonatal care. The majority are created by entering data specifically for the database, duplicating information likely already captured in other administrative and clinical patient records. This repetitive data entry represents an unnecessary burden in an environment where electronic patient records are increasingly used. Standardisation of data items is necessary to facilitate linkage within and between countries. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  4. 49 CFR 239.303 - Electronic recordkeeping.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Electronic recordkeeping. 239.303 Section 239.303 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... limits and controls accessibility to such information retained in its database system and identifies...

  5. South American foF2 database using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gularte, Erika; Bilitza, Dieter; Carpintero, Daniel; Jaen, Juliana

    2016-07-01

    We present the first step towards a new database of the ionospheric parameter foF2 for the South American region. The foF2 parameter, the maximum of the ionospheric electron density profile and its main sculptor, is of great interest not only in atmospheric studies but also in the realm of radio propagation. Because of its importance, its large variability and the difficulty of modelling it in time and space, it has been the subject of intense study for decades. The current databases used by the IRI (International Reference Ionosphere) model, based on Fourier expansions, were built in the 1960s from the ionosondes available at that time; they are therefore still short of South American data. The main goal of this work is to upgrade the database by incorporating the data now compiled by the RAPEAS (Red Argentina para el Estudio de la Atmósfera Superior, Argentine Network for the Study of the Upper Atmosphere) network. We also developed an algorithm to study foF2 variability based on the modern technique of genetic algorithms, which has been applied successfully in other disciplines. One of the main advantages of this technique is its ability to work with many variables and with unfavourable samples. The results are compared with the IRI databases, and improvements to the latter are suggested. Finally, it is important to note that the new database is designed so that newly available data can be easily incorporated.

  6. Following an electron bunch for free electron laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2012-01-01

    A video artist's ultra-slow-motion impression of an APEX-style electron gun firing a continuous train of electron bunches into a superconducting linear accelerator (in reality this would happen a million times a second). As they approach the speed of light the bunches contract, maintaining beam quality. After acceleration, the electron bunches are diverted into one or more undulators, the key components of free electron lasers. Oscillating back and forth in the changing magnetic field, they create beams of structured x-ray pulses. Before entering the experimental areas the electron bunches are diverted to a beam dump. (Animation created by Illumina Visual, http://www.illuminavisual.com/, for Lawrence Berkeley National Laboratory. Music for this excerpt, "Feeling Dark (Behind The Mask)", is by 7OOP3D http://ccmixter.org/files/7OOP3D/29126 and is licensed under a Creative Commons license: http://creativecommons.org/licenses/by-nc/3.0/)

  7. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  8. De-identifying an EHR database - anonymity, correctness and readability of the medical record.

    PubMed

    Pantazos, Kostas; Lauesen, Soren; Lippert, Soren

    2011-01-01

    Electronic health records (EHR) contain a large amount of structured data and free text. Exploring and sharing clinical data can improve healthcare and facilitate the development of medical software. However, revealing confidential information is against ethical principles and laws. We de-identified a Danish EHR database with 437,164 patients. The goal was to generate a version with real medical records, but related to artificial persons. We developed a de-identification algorithm that uses lists of named entities, simple language analysis, and special rules. Our algorithm consists of 3 steps: collect lists of identifiers from the database and external resources, define a replacement for each identifier, and replace identifiers in structured data and free text. Some patient records could not be safely de-identified, so the de-identified database has 323,122 patient records with an acceptable degree of anonymity, readability and correctness (F-measure of 95%). The algorithm has to be adjusted for each culture, language and database.
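    The three-step scheme described above (collect identifier lists, fix a replacement per identifier, substitute in structured data and free text) can be sketched in Python. The names and replacement labels below are invented stand-ins, not the paper's actual Danish entity lists or replacement rules.

```python
import re

# Steps 1 and 2 (toy data): identifiers collected from the database and
# external resources, each mapped to a fixed artificial replacement.
identifiers = {"Kostas": "Person-17", "Copenhagen": "City-03"}

def deidentify(text: str, mapping: dict) -> str:
    """Step 3: replace every known identifier with its artificial stand-in."""
    # Longest names first, so a long name is not half-replaced via a shorter one.
    for name in sorted(mapping, key=len, reverse=True):
        text = re.sub(rf"\b{re.escape(name)}\b", mapping[name], text)
    return text

print(deidentify("Kostas was admitted in Copenhagen.", identifiers))
# -> Person-17 was admitted in City-03.
```

    Using a fixed mapping (rather than a fresh random value per occurrence) is what keeps the de-identified record internally consistent and readable, one of the paper's stated goals.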

  9. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    ERIC Educational Resources Information Center

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  10. Landslide databases for applied landslide impact research: the example of the landslide database for the Federal Republic of Germany

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2014-05-01

    This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database and outlines its major data sources and its strategy of information retrieval. Furthermore, the contribution exemplifies the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Thanks to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon also a focused web crawler, this enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This makes it possible to gather detailed landslide information from, amongst others, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system.

  11. MAGIC database and interfaces: an integrated package for gene discovery and expression.

    PubMed

    Cordonnier-Pratt, Marie-Michèle; Liang, Chun; Wang, Haiming; Kolychev, Dmitri S; Sun, Feng; Freeman, Robert; Sullivan, Robert; Pratt, Lee H

    2004-01-01

    The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.

  12. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.

  13. Electronic locking system

    NASA Astrophysics Data System (ADS)

    Nieuwkoop, E.

    An electronic locking system was developed to remove the disadvantages of conventional mechanical door locks. The electrolock has to replace existing locks; therefore, Surface Mount Technology and Application Specific Integrated Circuit techniques were applied to overcome the space limitations. The key consists of a metal rod with a grip, equipped with a contactless chip. When the key is inserted in the lock, a magnetic field is generated in the cylinder which induces a voltage in the chip, so no battery is required. The chip then inductively emits a code which is unique to each key. The electrolock was successfully tested.

  14. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they can hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); the contributor that sent the result. Each contributor has their own profile, which allows users to estimate the reliability of the data. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted into a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognised on entering. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
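    The upload format above can be consumed with a few lines of Python. The semicolon delimiter and the sample row below are assumptions for illustration; the abstract only specifies the field order.

```python
import csv
import io

# One assumed upload row in the field order given in the abstract:
# station; latitude; longitude; station type; parameter type; value; date
SAMPLE = "Lake Ladoga;60.123456;031.234567;lake;pH;7.4;2011-08-15\n"

FIELDS = ["station", "latitude", "longitude", "station_type",
          "parameter", "value", "date"]

# Parse each record into a dict keyed by field name.
rows = [dict(zip(FIELDS, rec))
        for rec in csv.reader(io.StringIO(SAMPLE), delimiter=";")]

print(rows[0]["parameter"], rows[0]["value"])  # -> pH 7.4
```

    A dict per record makes the later filtering the abstract describes (by date, object type, parameter type, contributor) a one-line comprehension.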

  15. Supporting ontology-based keyword search over medical databases.

    PubMed

    Kementsietsidis, Anastasios; Lim, Lipyeow; Wang, Min

    2008-11-06

    The proliferation of medical terms poses a number of challenges in the sharing of medical information among different stakeholders. Ontologies are commonly used to establish relationships between different terms, yet their role in querying has not been investigated in detail. In this paper, we study the problem of supporting ontology-based keyword search queries on a database of electronic medical records. We present several approaches to support this type of query, study the advantages and limitations of each approach, and summarize the lessons learned as best practices.
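    One simple way to realize ontology-based keyword search is to expand the query term with its ontology descendants before matching records. The toy ontology and records below are invented for illustration and are not the paper's actual approaches or medical ontology.

```python
# Toy ontology: a term mapped to its narrower (descendant) terms.
ontology = {"diabetes": ["type 1 diabetes", "type 2 diabetes"]}

records = ["patient with type 2 diabetes", "fracture of the wrist"]

def expand(term: str) -> set:
    """Expand a query term with its ontology descendants (if any)."""
    return {term, *ontology.get(term, [])}

def search(term: str) -> list:
    """Return records matching the term or any of its descendants."""
    terms = expand(term)
    return [r for r in records if any(t in r for t in terms)]

print(search("diabetes"))  # -> ['patient with type 2 diabetes']
```

    Note that plain keyword matching would miss the first record entirely, which is the gap ontology expansion closes.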

  16. The MAR databases: development and implementation of databases specific for marine metagenomics

    PubMed Central

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen

    2018-01-01

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, representing a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. PMID:29106641

  17. Asbestos Exposure Assessment Database

    NASA Technical Reports Server (NTRS)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials, as well as to those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data include Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and the names of the industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data have been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  18. 21 CFR 1311.30 - Requirements for storing and using a private key for digitally signing orders.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 9 2014-04-01 2014-04-01 false Requirements for storing and using a private key... Certificates for Electronic Orders § 1311.30 Requirements for storing and using a private key for digitally... private key. (b) The certificate holder must provide FIPS-approved secure storage for the private key, as...

  19. 21 CFR 1311.30 - Requirements for storing and using a private key for digitally signing orders.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 9 2012-04-01 2012-04-01 false Requirements for storing and using a private key... Certificates for Electronic Orders § 1311.30 Requirements for storing and using a private key for digitally... private key. (b) The certificate holder must provide FIPS-approved secure storage for the private key, as...

  20. 21 CFR 1311.30 - Requirements for storing and using a private key for digitally signing orders.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 9 2011-04-01 2011-04-01 false Requirements for storing and using a private key... Certificates for Electronic Orders § 1311.30 Requirements for storing and using a private key for digitally... private key. (b) The certificate holder must provide FIPS-approved secure storage for the private key, as...

  1. 21 CFR 1311.30 - Requirements for storing and using a private key for digitally signing orders.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 9 2013-04-01 2013-04-01 false Requirements for storing and using a private key... Certificates for Electronic Orders § 1311.30 Requirements for storing and using a private key for digitally... private key. (b) The certificate holder must provide FIPS-approved secure storage for the private key, as...

  2. Materials properties numerical database system established and operational at CINDAS/Purdue University

    NASA Technical Reports Server (NTRS)

    Ho, C. Y.; Li, H. H.

    1989-01-01

    A computerized comprehensive numerical database system on the mechanical, thermophysical, electronic, electrical, magnetic, optical, and other properties of various types of technologically important materials, such as metals, alloys, composites, dielectrics, polymers, and ceramics, has been established and is operational at the Center for Information and Numerical Data Analysis and Synthesis (CINDAS) of Purdue University. This is an on-line, interactive, menu-driven, user-friendly database system. Users can easily search, retrieve, and manipulate the data from the database system without learning a special query language, special commands, or standardized names of materials, properties, variables, etc. It enables both the direct mode of search/retrieval of data for specified materials, properties, independent variables, etc., and the inverted mode of search/retrieval of candidate materials that meet a set of specified requirements (computer-aided materials selection). It also enables tabular and graphical displays and on-line data manipulations, such as unit conversion, variable transformation, and statistical analysis, of the retrieved data. The development, content, and accessibility of the database system are presented and discussed.

  3. Electronic tools to support medication reconciliation: a systematic review.

    PubMed

    Marien, Sophie; Krug, Bruno; Spinewine, Anne

    2017-01-01

    Medication reconciliation (MedRec) is essential for reducing patient harm caused by medication discrepancies across care transitions. Electronic support has been described as a promising approach to moving MedRec forward. We systematically reviewed the evidence about electronic tools that support MedRec, by (a) identifying tools; (b) summarizing their characteristics with regard to context, tool, implementation, and evaluation; and (c) summarizing key messages for successful development and implementation. We searched PubMed, the Cumulative Index to Nursing and Allied Health Literature, Embase, PsycINFO, and the Cochrane Library, and identified additional reports from reference lists, reviews, and patent databases. Reports were included if the electronic tool supported medication history taking and the identification and resolution of medication discrepancies. Two researchers independently selected studies, evaluated the quality of reporting, and extracted data. Eighteen reports relating to 11 tools were included. There were eight quality improvement projects, five observational effectiveness studies, three randomized controlled trials (RCTs) or RCT protocols (ie, descriptions of RCTs in progress), and two patents. All tools were developed in academic environments in North America. Most used electronic data from multiple sources and partially implemented functionalities considered to be important. Relevant information on functionalities and implementation features was frequently missing. Evaluations mainly focused on usability, adherence, and user satisfaction. One RCT evaluated the effect on potential adverse drug events. Successful implementation of electronic tools to support MedRec requires favorable context, properly designed tools, and attention to implementation features. Future research is needed to evaluate the effect of these tools on the quality and safety of healthcare. © The Author 2016. Published by Oxford University Press on behalf of the American

  4. 49 CFR 239.303 - Electronic recordkeeping.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Electronic recordkeeping. 239.303 Section 239.303 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... accessibility to such information retained in its database system and identifies those individuals who have such...

  5. 49 CFR 239.303 - Electronic recordkeeping.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Electronic recordkeeping. 239.303 Section 239.303 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... accessibility to such information retained in its database system and identifies those individuals who have such...

  6. 49 CFR 239.303 - Electronic recordkeeping.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Electronic recordkeeping. 239.303 Section 239.303 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... accessibility to such information retained in its database system and identifies those individuals who have such...

  7. 49 CFR 239.303 - Electronic recordkeeping.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Electronic recordkeeping. 239.303 Section 239.303 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... accessibility to such information retained in its database system and identifies those individuals who have such...

  8. Analysis and comparison of NoSQL databases with an introduction to consistent references in big data storage systems

    NASA Astrophysics Data System (ADS)

    Dziedzic, Adam; Mulawka, Jan

    2014-11-01

    NoSQL is a new approach to data storage and manipulation. The aim of this paper is to gain more insight into NoSQL databases, as we are still in the early stages of understanding when and how to use them appropriately. In this submission, descriptions of selected NoSQL databases are presented. Each of the databases is analysed with primary focus on its data model, data access, architecture and practical usage in real applications. Furthermore, the NoSQL databases are compared with respect to data references: relational databases offer foreign keys, whereas NoSQL databases provide only limited references. An intermediate model between graph theory and relational algebra which can address the problem should be created. Finally, a proposal for a new approach to the problem of inconsistent references in Big Data storage systems is introduced.
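    Because most NoSQL stores do not enforce foreign keys, reference consistency of the kind discussed above is typically checked at the application level. A minimal sketch, with plain dicts and lists standing in for document collections:

```python
# Toy "collections": a keyed user store and orders that reference users.
users = {"u1": {"name": "Ada"}}
orders = [{"id": "o1", "user_id": "u1"},
          {"id": "o2", "user_id": "u9"}]  # u9 does not exist

def dangling_refs(orders, users):
    """Return ids of orders whose user_id references no existing user,
    i.e. the inconsistency a relational foreign key would have prevented."""
    return [o["id"] for o in orders if o["user_id"] not in users]

print(dangling_refs(orders, users))  # -> ['o2']
```

    In a relational database the insert of `o2` would simply be rejected; in a document store such checks (or periodic scans like this one) are the application's responsibility.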

  9. Using a Data Quality Framework to Clean Data Extracted from the Electronic Health Record: A Case Study

    PubMed Central

    Dziadkowiec, Oliwier; Callahan, Tiffany; Ozkaynak, Mustafa; Reeder, Blaine; Welton, John

    2016-01-01

    Objectives: We examine the following: (1) the appropriateness of using a data quality (DQ) framework developed for relational databases as a data-cleaning tool for a data set extracted from two EPIC databases, and (2) the differences in statistical parameter estimates between a data set cleaned with the DQ framework and a data set not cleaned with it. Background: The use of data contained within electronic health records (EHRs) has the potential to open doors for a new wave of innovative research. Without adequate preparation of such large data sets for analysis, the results might be erroneous, which might affect clinical decision-making or the results of Comparative Effectiveness Research studies. Methods: Two emergency department (ED) data sets extracted from EPIC databases (adult ED and children's ED) were used as examples for examining the five concepts of DQ based on a DQ assessment framework designed for EHR databases. The first data set contained 70,061 visits; the second contained 2,815,550 visits. SPSS Syntax examples, as well as step-by-step instructions on how to apply the five key DQ concepts to these EHR database extracts, are provided. Conclusions: SPSS Syntax to address each of the DQ concepts proposed by Kahn et al. (2012) was developed. The data set cleaned using Kahn's framework yielded more accurate results than the data set cleaned without this framework. Future plans involve creating functions in the R language for cleaning data extracted from the EHR, as well as an R package that combines DQ checks with missing-data analysis functions. PMID:27429992
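    The paper's checks are written in SPSS Syntax; as a loose analogue, two typical DQ concepts, completeness (no missing key fields) and plausibility (values inside a clinically sensible range), can be sketched in Python. The field names and ranges below are illustrative only, not taken from the EPIC extracts.

```python
# Toy ED visit records; rows 2 and 3 each violate one DQ concept.
visits = [
    {"id": 1, "age": 34, "temp_c": 38.2},
    {"id": 2, "age": None, "temp_c": 36.9},  # fails completeness (missing age)
    {"id": 3, "age": 51, "temp_c": 58.0},    # fails plausibility (temp out of range)
]

def clean(rows):
    """Keep only rows passing both the completeness and plausibility checks."""
    ok = []
    for r in rows:
        if r["age"] is None:                 # completeness: key field present
            continue
        if not 30.0 <= r["temp_c"] <= 45.0:  # plausibility: assumed sensible range
            continue
        ok.append(r)
    return ok

print([r["id"] for r in clean(visits)])  # -> [1]
```

    Running such checks before analysis is exactly the preparation step the abstract argues prevents erroneous downstream estimates.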

  10. 77 FR 31237 - Electronic Filing in the Copyright Office of Notices of Intention To Obtain a Section 115...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-25

    ... multiple nondramatic musical works may be submitted electronically as XML files. Electronically submitted Notices will be maintained in a database that can be searched using any of the included fields of... the Licensing Division for a search of the database during the interim period. As such, the Office...

  11. An ignition key for atomic-scale engines

    NASA Astrophysics Data System (ADS)

    Dundas, Daniel; Cunningham, Brian; Buchanan, Claire; Terasawa, Asako; Paxton, Anthony T.; Todorov, Tchavdar N.

    2012-10-01

    A current-carrying resonant nanoscale device, simulated by non-adiabatic molecular dynamics, exhibits sharp activation of non-conservative current-induced forces with bias. The result, above the critical bias, is generalized rotational atomic motion with a large gain in kinetic energy. The activation exploits sharp features in the electronic structure, and constitutes, in effect, an ignition key for atomic-scale motors. A controlling factor for the effect is the non-equilibrium dynamical response matrix for small-amplitude atomic motion under current. This matrix can be found from the steady-state electronic structure by a simpler static calculation, providing a way to detect the likely appearance, or otherwise, of non-conservative dynamics, in advance of real-time modelling.

  12. Security Management of Electronic Data Interchange

    DTIC Science & Technology

    1993-06-01

    6. Signatures by Tamper-Resistant Electronic Seal; 7. Resolution of Disputes...(Trademark by RSA). Secure communication is not possible without a previous relationship between parties. Electronic mail may be sealed in a...public key certification. [Ref. 32] 6. Signatures by Tamper-Resistant Electronic Seal: there is a separation between encryption and decryption in a public

  13. Hardening Logic Encryption against Key Extraction Attacks with Circuit Camouflage

    DTIC Science & Technology

    2017-03-01

    Keywords: camouflage; obfuscation; SAT; key extraction; reverse engineering; security; trusted electronics. Introduction: Integrated Circuit (IC) designs are..."Encryption Algorithms", Hardware Oriented Security and Trust, 2015. 3. Rajendran J., Pino, Y., Sinanoglu, O., Karri, R., "Security Analysis of Logic

  14. The MAR databases: development and implementation of databases specific for marine metagenomics.

    PubMed

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database of completely sequenced marine prokaryotic genomes, representing a marine prokaryote reference genome database, MarDB includes all incompletely sequenced prokaryotic genomes regardless of level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields, including attributes for sampling, sequencing, assembly and annotation in addition to organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets visitors browse, filter and search the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Central Appalachian basin natural gas database: distribution, composition, and origin of natural gases

    USGS Publications Warehouse

    Román Colón, Yomayra A.; Ruppert, Leslie F.

    2015-01-01

    The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions, drawn from published and unpublished sources, for 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification numbers, selected geologic reservoir properties, and the composition of natural gases (methane, ethane, propane, iso-butane [i-butane], normal butane [n-butane], iso-pentane [i-pentane], normal pentane [n-pentane], cyclohexane, and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources, augmented with public location information, and contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key to all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.

  16. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the FileMaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved in a format that is readable on a personal computer as well, and the data can be exported to more powerful relational databases. This paper describes the contents, capabilities, and use of the LDEF databases and explains how to obtain copies for your own research.

  17. Databases for Microbiologists

    DOE PAGES

    Zhulin, Igor B.

    2015-05-26

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  18. Databases for Microbiologists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhulin, Igor B.

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  19. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  20. Non-thermal recombination - a neglected source of flare hard X-rays and fast electron diagnostics (Corrigendum)

    NASA Astrophysics Data System (ADS)

    Brown, J. C.; Mallik, P. C. V.; Badnell, N. R.

    2010-06-01

    Brown and Mallik (BM) recently claimed that non-thermal recombination (NTR) can be a dominant source of flare hard X-rays (HXRs) from hot coronal and chromospheric sources. However, major discrepancies between the thermal continua predicted by BM and those given by the Chianti database, as well as by RHESSI flare data, led us to discover substantial errors in the heuristic expression used by BM to extend the Kramers expressions beyond the hydrogenic case. Here we present the corrected expressions and show the key modified results. We conclude that, in most cases, BM overestimated the NTR emission by a factor of 1-8, but it is typically still large enough (as much as 20-30% of the total emission) to be very important for electron spectral inference and for the detection of electron spectral features such as low-energy cut-offs, since the recombination spectra contain sharp edges. For extreme temperature regimes, and/or if the Fe abundance were as high as some claimed values, NTR could even be the dominant source of flare HXRs, reducing electron number and energy budget problems such as those in the extreme coronal HXR source cases reported by, e.g., Krucker et al.

  1. Chemical thermodynamic data. 1. The concept of links to the chemical elements and the historical development of key thermodynamic data

    NASA Astrophysics Data System (ADS)

    Wolery, Thomas J.; Jové Colón, Carlos F.

    2017-09-01

    Chemical thermodynamic data remain a keystone for geochemical modeling and reactive transport simulation as applied to an increasing number of applications in the earth sciences, as well as applications in other areas including metallurgy, material science, and industrial process design. The last century has seen the development of a large body of thermodynamic data and a number of major compilations. The past several decades have seen the development of thermodynamic databases in digital form designed to support computer calculations. However, problems with thermodynamic data appear to be persistent. One problem pertains to the use of inconsistent primary key reference data. Such data pertain to elemental reference forms and key, stoichiometrically simple chemical species including metal oxides, CO2, water, and aqueous species such as Na+ and Cl-. A consistent set of primary key data (standard Gibbs energies, standard enthalpies, and standard entropies for key chemical species) for 298.15 K and 1 bar pressure is essential. Thermochemical convention is to define the standard Gibbs energy and the standard enthalpy of an individual chemical species in terms of formation from reference forms of the constituent chemical elements. We propose a formal concept of "links" to the elemental reference forms. This concept involves a documented understanding of all reactions and calculations leading to values for a formation property (standard Gibbs energy or enthalpy). A valid link consists of two parts: (a) the path of reactions and corrections and (b) the associated data, which are key data. Such a link differs from a bare "key" or "reference" datum in that it requires additional information. Some or all of its associated data may also be key data. In evaluating a reported thermodynamic datum, one should identify the links to the chemical elements, a process which can be time-consuming and which may lead to a dead end (an incomplete link). 
The use of two or more inconsistent
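    As an illustrative worked example of the "link" concept described in this abstract (hedged: the metal M, its oxide, and the reaction steps are generic placeholders, not data from the paper), a formation property can be traced to elemental reference forms through a documented reaction path:

```latex
% Illustrative only: a two-step "link" to the elements, via Hess's law.
% Reaction (1) links MO(cr) directly to the reference forms M(cr) and O2(g):
%   (1)  M(cr) + 1/2 O2(g) -> MO(cr),                DrH1
% Reaction (2) extends the link to MCl2(aq):
%   (2)  MO(cr) + 2 HCl(aq) -> MCl2(aq) + H2O(l),    DrH2
\begin{align*}
\Delta_f H^\circ(\mathrm{MO,cr}) &= \Delta_r H^\circ_1,\\
\Delta_f H^\circ(\mathrm{MCl_2,aq}) &= \Delta_r H^\circ_2
  + \Delta_f H^\circ(\mathrm{MO,cr})
  + 2\,\Delta_f H^\circ(\mathrm{HCl,aq})
  - \Delta_f H^\circ(\mathrm{H_2O,l}).
\end{align*}
```

    The link for MCl2(aq) is complete only if every datum in the chain, including the key data for HCl(aq) and H2O(l), is itself documented, illustrating the abstract's point that some or all of a link's associated data may also be key data.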

  2. Online Databases in Physics.

    ERIC Educational Resources Information Center

    Sievert, MaryEllen C.; Verbeck, Alison F.

    1984-01-01

    This overview of 47 online sources for physics information available in the United States--including sub-field databases, transdisciplinary databases, and multidisciplinary databases--notes content, print source, language, time coverage, and databank. Two discipline-specific databases (SPIN and PHYSICS BRIEFS) are also discussed. (EJS)

  3. Development of a personalized training system using the Lung Image Database Consortium and Image Database Resource Initiative Database.

    PubMed

    Lin, Hongli; Wang, Weisheng; Luo, Jiawei; Yang, Xuedong

    2014-12-01

    The aim of this study was to develop a personalized training system using the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) database. Collecting, annotating, and marking a large number of appropriate computed tomography (CT) scans, and providing the capability to dynamically select suitable training cases based on the performance levels of trainees and the characteristics of cases, are critical for developing an efficient training system. A novel approach is proposed to develop a personalized radiology training system for the interpretation of lung nodules in CT scans using the LIDC/IDRI database. It provides a Content-Boosted Collaborative Filtering (CBCF) algorithm for predicting the difficulty level of each case for each trainee, so that suitable cases can be selected to meet individual needs, and a diagnostic simulation tool that enables trainees to analyze and diagnose lung nodules with the help of an image processing tool and a nodule retrieval tool. Preliminary evaluation of the system shows that developing a personalized training system for the interpretation of lung nodules is needed and useful for enhancing the professional skills of trainees. The approach of developing personalized training systems using the LIDC/IDRI database is a feasible solution to the challenges of constructing specific training programs in terms of cost and training efficiency. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
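    The content-boosted collaborative filtering idea named in this abstract can be sketched in miniature. This is not the authors' implementation; the case features, trainee ratings, and similarity choices below are invented. The content-based step fills the unobserved entries of the trainee-case difficulty matrix from case features, and a user-based collaborative step then predicts the target entry from the densified matrix:

```python
import math

# Hedged sketch of Content-Boosted Collaborative Filtering (CBCF):
# 1) fill missing trainee-case difficulty ratings with a content-based
#    estimate from case feature vectors;
# 2) predict the target rating via user-based CF on the densified matrix.
# All data below are invented for illustration.

cases = {  # case id -> feature vector (e.g. nodule size, margin score)
    "c1": [0.9, 0.1], "c2": [0.8, 0.2], "c3": [0.1, 0.9],
}
ratings = {  # trainee -> {case: observed difficulty in [0, 1]}
    "t1": {"c1": 0.9, "c2": 0.8},
    "t2": {"c1": 0.85, "c3": 0.2},
    "t3": {"c2": 0.75, "c3": 0.3},
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def content_estimate(trainee, case):
    """Feature-similarity-weighted average of the trainee's own ratings."""
    num = den = 0.0
    for rated, r in ratings[trainee].items():
        sim = cosine(cases[rated], cases[case])
        num += sim * r
        den += sim
    return num / den if den else 0.5

def dense_row(trainee):
    """The trainee's row with missing entries filled content-based."""
    return {c: ratings[trainee].get(c, content_estimate(trainee, c))
            for c in cases}

def predict(trainee, case):
    """User-based CF over the content-densified matrix."""
    keys = sorted(cases)
    target = dense_row(trainee)
    num = den = 0.0
    for other in ratings:
        if other == trainee:
            continue
        row = dense_row(other)
        sim = cosine([target[k] for k in keys], [row[k] for k in keys])
        num += sim * row[case]
        den += abs(sim)
    return num / den if den else target[case]

print(predict("t1", "c3"))  # low predicted difficulty: peers rated c3 easy
```

    The "boost" is that the collaborative step never sees holes in the matrix, which is exactly what makes sparse trainee histories usable for case selection.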

  4. Ferritin light-chain subunits: key elements for the electron transfer across the protein cage.

    PubMed

    Carmona, Unai; Li, Le; Zhang, Lianbing; Knez, Mato

    2014-12-18

    The first specific functionality of the light-chain (L-chain) subunit of the universal iron storage protein ferritin was identified. The electrons released during iron oxidation were transported across the ferritin cage specifically through the L-chains, and the inverted electron transport through the L-chains also accelerated the demineralization of ferritin.

  5. Mycobacteriophage genome database.

    PubMed

    Joseph, Jerrine; Rajendran, Vasanthi; Hassan, Sameer; Kumar, Vanaja

    2011-01-01

    Mycobacteriophage genome database (MGDB) is an exclusive repository of the 64 completely sequenced mycobacteriophages with annotated information. It is a comprehensive compilation of the various gene parameters captured from several databases pooled together to empower mycobacteriophage researchers. The MGDB (Version No. 1.0) comprises 6086 genes from 64 mycobacteriophages classified into 72 families based on the ACLAME database. Manual curation was aided by information available from public databases, which was enriched further by analysis. Its web interface allows browsing as well as querying the classification. The main objective is to collect and organize the complexity inherent to mycobacteriophage protein classification in a rational way. The other objective is to browse the existing and new genomes and describe their functional annotation. The database is available for free at http://mpgdb.ibioinformatics.org/mpgdb.php.

  6. Quantum electron tunneling in respiratory complex I.

    PubMed

    Hayashi, Tomoyuki; Stuchebrukhov, Alexei A

    2011-05-12

    We have simulated the atomistic details of the electronic wiring of all Fe/S clusters in complex I, a key enzyme in the respiratory electron transport chain. The tunneling current theory of many-electron systems is applied to the broken-symmetry (BS) states of the protein at the ZINDO level. While the one-electron tunneling approximation is found to hold for electron tunneling between the antiferromagnetic binuclear and tetranuclear Fe/S clusters, without major orbital or spin rearrangement of the core electrons, induced polarization of the core electrons contributes significantly, decreasing the electron transfer rates to 19-56%. The calculated tunneling energy is about 3 eV higher than the Fermi level, within the band gap of the protein, which supports the conclusion that the mechanism of electron transfer is quantum mechanical tunneling, as in the rest of the electron transport chain. The resulting electron tunneling pathways consist of up to three key contributing protein residues between neighboring Fe/S clusters. A signature of the wave properties of electrons is observed as distinct quantum interference when multiple tunneling pathways exist. In N6a-N6b, the electron tunnels along different pathways depending on the BS states involved, suggesting possible fluctuations of the tunneling pathways driven by the local protein environment. The calculated distance dependence of the electron transfer rates, with internal water molecules included, is in good agreement with a reported phenomenological relation.
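    The "reported phenomenological relation" for the distance dependence of protein electron transfer is commonly the Moser-Dutton empirical ruler; whether that is the exact relation used in this study is an assumption on my part. A sketch of that empirical formula, with generic published parameters rather than values from this paper:

```python
# Hedged sketch: the Moser-Dutton empirical "ruler" for electron tunneling
# in proteins. Assumed form (generic literature parameters, not this study's):
#   log10(k_et) = 15 - 0.6*R - 3.1*(dG + lam)**2 / lam
# R: edge-to-edge donor-acceptor distance in angstroms,
# dG: driving force (eV), lam: reorganization energy (eV), k_et in 1/s.

def log10_ket(R, dG=-0.1, lam=0.7):
    """Approximate log10 of the electron transfer rate (1/s)."""
    return 15.0 - 0.6 * R - 3.1 * (dG + lam) ** 2 / lam

# Rates fall roughly a factor of 10 for every ~1.7 A of extra distance.
for R in (7.0, 10.0, 14.0):
    print(f"R = {R:4.1f} A  ->  k_et ~ 10^{log10_ket(R):.1f} 1/s")
```

    The exponential distance decay in this relation is what makes the pathway lengths between neighboring Fe/S clusters, and any bridging water molecules, so decisive for the computed rates.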

  7. Maize databases

    USDA-ARS?s Scientific Manuscript database

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  8. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  9. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  10. Instruments of scientific visual representation in atomic databases

    NASA Astrophysics Data System (ADS)

    Kazakov, V. V.; Kazakov, V. G.; Meshkov, O. I.

    2017-10-01

    Graphic tools for spectral data representation provided by the operating information systems on atomic spectroscopy (ASD NIST, VAMDC, SPECTR-W3, and Electronic Structure of Atoms) in support of scientific research and human-resource development are presented. Tools for the visual representation of scientific data, such as spectrogram and Grotrian diagram plotting, are considered. The possibility of comparative analysis between experimentally obtained spectra and the reference spectra of atomic systems generated from a resource's database is described. Techniques for accessing the mentioned graphic tools are presented.

  11. World Key Information Service System Designed For EPCOT Center

    NASA Astrophysics Data System (ADS)

    Kelsey, J. A.

    1984-03-01

    An advanced electronic information retrieval system, designed by Bell Laboratories and Western Electric and utilizing the latest Information Age technologies and a fiber-optic transmission system, is featured at the Walt Disney World Resort's newest theme park, the Experimental Prototype Community of Tomorrow (EPCOT Center). The project is an interactive audio, video, and text information system deployed at key locations within the park. The touch-sensitive terminals, utilizing the ARIEL (Automatic Retrieval of Information Electronically) System, are interconnected by a lightwave transmission system designed and manufactured by Western Electric.

  12. Creating Your Own Database.

    ERIC Educational Resources Information Center

    Blair, John C., Jr.

    1982-01-01

    Outlines the important factors to be considered in selecting a database management system for use with a microcomputer and presents a series of guidelines for developing a database. General procedures, report generation, data manipulation, information storage, word processing, data entry, database indexes, and relational databases are among the…

  13. Biocuration at the Saccharomyces genome database.

    PubMed

    Skrzypek, Marek S; Nash, Robert S

    2015-08-01

    Saccharomyces Genome Database is an online resource dedicated to managing information about the biology and genetics of the model organism, yeast (Saccharomyces cerevisiae). This information is derived primarily from scientific publications through a process of human curation that involves manual extraction of data and their organization into a comprehensive system of knowledge. This system provides a foundation for further analysis of experimental data coming from research on yeast as well as other organisms. In this review we will demonstrate how biocuration and biocurators add a key component, the biological context, to our understanding of how genes, proteins, genomes and cells function and interact. We will explain the role biocurators play in sifting through the wealth of biological data to incorporate and connect key information. We will also discuss the many ways we assist researchers with their various research needs. We hope to convince the reader that manual curation is vital in converting the flood of data into organized and interconnected knowledge, and that biocurators play an essential role in the integration of scientific information into a coherent model of the cell. © 2015 Wiley Periodicals, Inc.

  14. A database for reproducible manipulation research: CapriDB - Capture, Print, Innovate.

    PubMed

    Pokorny, Florian T; Bekiroglu, Yasemin; Pauwels, Karl; Butepage, Judith; Scherer, Clara; Kragic, Danica

    2017-04-01

    We present a novel approach and database which combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state-of-the-art object tracking algorithm. Unlike recent efforts towards the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups, and it aims to enable anyone with a camera to scan, print, and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D-printed replicas provide close approximations of the originals. A key motivation for utilizing 3D-printed objects is the ability to precisely control and vary object properties such as size, material properties, and mass distribution in the 3D printing process to obtain reproducible conditions for robotic manipulation research. We present CapriDB, an extensible database resulting from this approach, initially containing 40 textured and 3D-printable mesh models together with tracking features to facilitate the adoption of the proposed approach.

  15. A global multiproxy database for temperature reconstructions of the Common Era.

    PubMed

    2017-07-11

    Reproducible climate reconstructions of the Common Era (1 CE to present) are key to placing industrial-era warming into the context of natural climatic variability. Here we present a community-sourced database of temperature-sensitive proxy records from the PAGES2k initiative. The database gathers 692 records from 648 locations, including all continental regions and major ocean basins. The records are from trees, ice, sediment, corals, speleothems, documentary evidence, and other archives. They range in length from 50 to 2000 years, with a median of 547 years, while temporal resolution ranges from biweekly to centennial. Nearly half of the proxy time series are significantly correlated with HadCRUT4.2 surface temperature over the period 1850-2014. Global temperature composites show a remarkable degree of coherence between high- and low-resolution archives, with broadly similar patterns across archive types, terrestrial versus marine locations, and screening criteria. The database is suited to investigations of global and regional temperature variability over the Common Era, and is shared in the Linked Paleo Data (LiPD) format, including serializations in Matlab, R and Python.
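    The Linked Paleo Data (LiPD) container mentioned above is, to my understanding, a zip archive bundling JSON-LD metadata with CSV data tables; the exact internal layout assumed below (a single metadata.jsonld plus one CSV, with invented metadata keys) is a simplification, not the official specification. A minimal sketch that writes and reads such a toy archive with the standard library:

```python
import csv, io, json, zipfile

# Hedged sketch: a LiPD-style container treated as a zip archive holding
# JSON-LD metadata plus CSV data tables. File names and metadata keys are
# simplified placeholders, not the official LiPD layout.

def write_toy_lipd(path):
    meta = {"dataSetName": "ToyProxy001", "archiveType": "tree",
            "paleoData": [{"filename": "paleo0.csv",
                           "columns": ["year_CE", "temperature_anom_degC"]}]}
    table = [[1850, -0.2], [1900, -0.1], [1950, 0.0], [2000, 0.6]]
    with zipfile.ZipFile(path, "w") as z:
        z.writestr("metadata.jsonld", json.dumps(meta))
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(meta["paleoData"][0]["columns"])
        writer.writerows(table)
        z.writestr("paleo0.csv", buf.getvalue())

def read_toy_lipd(path):
    with zipfile.ZipFile(path) as z:
        meta = json.loads(z.read("metadata.jsonld"))
        rows = list(csv.reader(io.TextIOWrapper(z.open("paleo0.csv"))))
    return meta, rows

write_toy_lipd("toy.lpd")
meta, rows = read_toy_lipd("toy.lpd")
print(meta["dataSetName"], rows[0])  # header row mirrors the metadata's column list
```

    Keeping machine-readable metadata physically bundled with the measurement tables is what lets one container serialize cleanly into Matlab, R, and Python environments alike.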

  16. A global multiproxy database for temperature reconstructions of the Common Era

    USGS Publications Warehouse

    Emile-Geay, Julian; McKay, Nicholas P.; Kaufman, Darrell S.; von Gunten, Lucien; Wang, Jianghao; Anchukaitis, Kevin J.; Abram, Nerilie J.; Addison, Jason A.; Curran, Mark A.J.; Evans, Michael N.; Henley, Benjamin J.; Hao, Zhixin; Martrat, Belen; McGregor, Helen V.; Neukom, Raphael; Pederson, Gregory T.; Stenni, Barbara; Thirumalai, Kaustubh; Werner, Johannes P.; Xu, Chenxi; Divine, Dmitry V.; Dixon, Bronwyn C.; Gergis, Joelle; Mundo, Ignacio A.; Nakatsuka, T.; Phipps, Steven J.; Routson, Cody C.; Steig, Eric J.; Tierney, Jessica E.; Tyler, Jonathan J.; Allen, Kathryn J.; Bertler, Nancy A. N.; Bjorklund, Jesper; Chase, Brian M.; Chen, Min-Te; Cook, Ed; de Jong, Rixt; DeLong, Kristine L.; Dixon, Daniel A.; Ekaykin, Alexey A.; Ersek, Vasile; Filipsson, Helena L.; Francus, Pierre; Freund, Mandy B.; Frezzotti, M.; Gaire, Narayan P.; Gajewski, Konrad; Ge, Quansheng; Goosse, Hugues; Gornostaeva, Anastasia; Grosjean, Martin; Horiuchi, Kazuho; Hormes, Anne; Husum, Katrine; Isaksson, Elisabeth; Kandasamy, Selvaraj; Kawamura, Kenji; Koc, Nalan; Leduc, Guillaume; Linderholm, Hans W.; Lorrey, Andrew M.; Mikhalenko, Vladimir; Mortyn, P. Graham; Motoyama, Hideaki; Moy, Andrew D.; Mulvaney, Robert; Munz, Philipp M.; Nash, David J.; Oerter, Hans; Opel, Thomas; Orsi, Anais J.; Ovchinnikov, Dmitriy V.; Porter, Trevor J.; Roop, Heidi; Saenger, Casey; Sano, Masaki; Sauchyn, David; Saunders, K.M.; Seidenkrantz, Marit-Solveig; Severi, Mirko; Shao, X.; Sicre, Marie-Alexandrine; Sigl, Michael; Sinclair, Kate; St. George, Scott; St. Jacques, Jeannine-Marie; Thamban, Meloth; Thapa, Udya Kuwar; Thomas, E.; Turney, Chris; Uemura, Ryu; Viau, A.E.; Vladimirova, Diana O.; Wahl, Eugene; White, James W. C.; Yu, Z.; Zinke, Jens

    2017-01-01

    Reproducible climate reconstructions of the Common Era (1 CE to present) are key to placing industrial-era warming into the context of natural climatic variability. Here we present a community-sourced database of temperature-sensitive proxy records from the PAGES2k initiative. The database gathers 692 records from 648 locations, including all continental regions and major ocean basins. The records are from trees, ice, sediment, corals, speleothems, documentary evidence, and other archives. They range in length from 50 to 2000 years, with a median of 547 years, while temporal resolution ranges from biweekly to centennial. Nearly half of the proxy time series are significantly correlated with HadCRUT4.2 surface temperature over the period 1850–2014. Global temperature composites show a remarkable degree of coherence between high- and low-resolution archives, with broadly similar patterns across archive types, terrestrial versus marine locations, and screening criteria. The database is suited to investigations of global and regional temperature variability over the Common Era, and is shared in the Linked Paleo Data (LiPD) format, including serializations in Matlab, R and Python.
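    The screening step noted above (testing whether each proxy series is significantly correlated with instrumental temperature over 1850-2014) can be sketched generically. The series below are synthetic, and this is not the PAGES2k screening code; it only illustrates the statistical test involved:

```python
import math, random

# Hedged sketch of proxy screening: Pearson correlation of an annual proxy
# series with an instrumental target over the overlap period, with a simple
# t-test for significance. Synthetic data; not the PAGES2k procedure.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t for H0: no correlation, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1.0 - r * r))

random.seed(0)
years = list(range(1850, 2015))
target = [0.005 * (y - 1850) + random.gauss(0, 0.1) for y in years]  # trend + noise
proxy = [t + random.gauss(0, 0.15) for t in target]                  # temperature-sensitive

r = pearson_r(proxy, target)
t = t_statistic(r, len(years))
print(f"r = {r:.2f}, t = {t:.1f}  (|t| > ~1.97 is significant at p < 0.05, n = {len(years)})")
```

    In practice such screening must also account for autocorrelation and proxy resolution, which is why the database reports results across several screening criteria rather than a single cut.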

  17. NASA STI Database, Aerospace Database and ARIN coverage of 'space law'

    NASA Technical Reports Server (NTRS)

    Buchan, Ronald L.

    1992-01-01

    The space-law coverage provided by the NASA STI Database, the Aerospace Database, and ARIN is briefly described. Particular attention is given to the space law content of the two databases and of ARIN, the NASA Thesaurus space law terminology, space law publication forms, and the availability of the space law literature.

  18. Systematic review of mobile health behavioural interventions to improve uptake of HIV testing for vulnerable and key populations.

    PubMed

    Conserve, Donaldson F; Jennings, Larissa; Aguiar, Carolina; Shin, Grace; Handler, Lara; Maman, Suzanne

    2017-02-01

    Introduction: This systematic narrative review examined the empirical evidence on the effectiveness of mobile health (mHealth) behavioural interventions designed to increase the uptake of HIV testing among vulnerable and key populations. Methods: MEDLINE/PubMed, Embase, Web of Science, and Global Health electronic databases were searched. Studies were eligible for inclusion if they were published between 2005 and 2015, evaluated an mHealth intervention, and reported an outcome relating to HIV testing. We also reviewed the bibliographies of retrieved studies for other relevant citations. The methodological rigor of selected articles was assessed, and narrative analyses were used to synthesize findings from mixed methodologies. Results: A total of seven articles met the inclusion criteria. Most mHealth interventions employed a text-messaging feature and were conducted in middle- and high-income countries. The methodological rigor was moderate among studies. The current literature suggests that mHealth interventions can have significant positive effects on HIV testing initiation among vulnerable and key populations, as well as the general public. In some cases, null results were observed. Qualitative themes relating to the use of mobile technologies to increase HIV testing included the benefits of having low-cost, confidential, and motivational communication. Reported barriers included cellular network restrictions, poor linkages with physical testing services, and limited knowledge of appropriate text-messaging dose. Discussion: mHealth interventions may prove beneficial in reducing the proportion of undiagnosed persons living with HIV, particularly among vulnerable and key populations. However, more rigorous and tailored interventions are needed to assess the effectiveness of widespread use.

  19. Systematic review of mobile-health behavioral interventions to improve uptake of HIV testing for vulnerable and key populations

    PubMed Central

    Conserve, Donaldson F.; Jennings, Larissa; Aguiar, Carolina; Shin, Grace; Handler, Lara; Maman, Suzanne

    2016-01-01

    Objective: This systematic narrative review examined the empirical evidence on the effectiveness of mobile health (mHealth) behavioral interventions designed to increase uptake of HIV testing among vulnerable and key populations. Methods: MEDLINE/PubMed, Embase, Web of Science, and Global Health electronic databases were searched. Studies were eligible for inclusion if they were published between 2005 and 2015, evaluated an mHealth intervention, and reported an outcome relating to HIV testing. We also reviewed the bibliographies of retrieved studies for other relevant citations. The methodological rigor of selected articles was assessed, and narrative analyses were used to synthesize findings from mixed methodologies. Results: A total of seven articles met the inclusion criteria. Most mHealth interventions employed a text-messaging feature and were conducted in middle- and high-income countries. The methodological rigor was moderate among studies. The current literature suggests that mHealth interventions can have significant positive effects on HIV testing initiation among vulnerable and key populations, as well as the general public. In some cases, null results were observed. Qualitative themes relating to use of mobile technologies to increase HIV testing included the benefits of having low-cost, confidential, and motivational communication. Reported barriers included cellular network restrictions, poor linkages with physical testing services, and limited knowledge of appropriate text-messaging dose. Conclusions: mHealth interventions may prove beneficial in reducing the proportion of undiagnosed persons living with HIV, particularly among vulnerable and key populations. However, more rigorous and tailored intervention trials are needed to assess the effectiveness of widespread use. PMID:27056905

  20. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This paper offers an elementary description of database characteristics and then provides a survey of databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  1. The Fossil Calibration Database-A New Resource for Divergence Dating.

    PubMed

    Ksepka, Daniel T; Parham, James F; Allman, James F; Benton, Michael J; Carrano, Matthew T; Cranston, Karen A; Donoghue, Philip C J; Head, Jason J; Hermsen, Elizabeth J; Irmis, Randall B; Joyce, Walter G; Kohli, Manpreet; Lamm, Kristin D; Leehr, Dan; Patané, José S L; Polly, P David; Phillips, Matthew J; Smith, N Adam; Smith, Nathan D; Van Tuinen, Marcel; Ware, Jessica L; Warnock, Rachel C M

    2015-09-01

    Fossils provide the principal basis for temporal calibrations, which are critical to the accuracy of divergence dating analyses. Translating fossil data into minimum and maximum bounds for calibrations is the most important, and often least appreciated, step of divergence dating. Properly justified calibrations require the synthesis of phylogenetic, paleontological, and geological evidence and can be difficult for nonspecialists to formulate. The dynamic nature of the fossil record (e.g., new discoveries, taxonomic revisions, updates of global or local stratigraphy) requires that calibration data be updated continually lest they become obsolete. Here, we announce the Fossil Calibration Database (http://fossilcalibrations.org), a new open-access resource providing vetted fossil calibrations to the scientific community. Calibrations accessioned into this database are based on individual fossil specimens and follow best practices for phylogenetic justification and geochronological constraint. The associated Fossil Calibration Series, a calibration-themed publication series at Palaeontologia Electronica, will serve as a key pipeline for peer-reviewed calibrations to enter the database. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Building a highly available and intrusion tolerant Database Security and Protection System (DSPS).

    PubMed

    Cai, Liang; Yang, Xiao-Hu; Dong, Jin-Xiang

    2003-01-01

    Database Security and Protection System (DSPS) is a security platform for defending database management systems against malicious attack. Security and performance are both critical to DSPS. The authors propose a key management scheme that combines a server-group structure, to improve availability, with the key distribution structure required by proactive security. This paper details the implementation of proactive security in DSPS. After a thorough performance analysis, the authors conclude that the performance gap between the replicated mechanism and the proactive mechanism narrows as the number of concurrent connections grows, and that proactive security is useful and practical for large, critical applications.
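The proactive, threshold-style key management that such schemes build on can be sketched with textbook Shamir secret sharing. This is a generic illustration under assumed parameters, not DSPS code: a key is split so that any t of n servers can reconstruct it, and a proactive scheme would additionally refresh the shares periodically (e.g., by distributing shares of zero) so that old, possibly compromised shares become useless.

```python
# Generic sketch of (t, n) threshold secret sharing over a prime field.
# NOT the DSPS implementation; parameters and structure are illustrative.
import random

P = 2**127 - 1  # prime modulus for the finite field

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
shares = split(key, n=5, t=3)
print(reconstruct(shares[:3]) == key)  # any 3 of the 5 shares suffice -> True
```

The availability benefit described in the abstract comes from exactly this property: the key survives the loss of up to n − t servers, while fewer than t compromised servers learn nothing about it.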

  3. Database Dictionary for Ethiopian National Ground-Water Database (ENGDA) Data Fields

    USGS Publications Warehouse

    Kuniansky, Eve L.; Litke, David W.; Tucci, Patrick

    2007-01-01

    Introduction This document describes the data fields that are used for both field forms and the Ethiopian National Ground-water Database (ENGDA) tables associated with information stored about production wells, springs, test holes, test wells, and water level or water-quality observation wells. Several different words are used in this database dictionary and in the ENGDA database to describe a narrow shaft constructed in the ground. The most general term is borehole, which is applicable to any type of hole. A well is a borehole specifically constructed to extract water from the ground; however, for this data dictionary and for the ENGDA database, the words well and borehole are used interchangeably. A production well is defined as any well used for water supply and includes hand-dug wells, small-diameter bored wells equipped with hand pumps, or large-diameter bored wells equipped with large-capacity motorized pumps. Test holes are borings made to collect information about the subsurface with continuous core or non-continuous core and/or where geophysical logs are collected. Test holes are not converted into wells. A test well is a well constructed for hydraulic testing of an aquifer in order to plan a larger ground-water production system. A water-level or water-quality observation well is a well that is used to collect information about an aquifer and not used for water supply. A spring is any naturally flowing, local, ground-water discharge site. The database dictionary is designed to help define all fields on both field data collection forms (provided in attachment 2 of this report) and for the ENGDA software screen entry forms (described in Litke, 2007). The data entered into each screen entry field are stored in relational database tables within the computer database. The organization of the database dictionary is designed based on field data collection and the field forms, because this is what the majority of people will use. After each field, however, the

  4. Database on unstable rock slopes in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Nordahl, Bo; Bunkholt, Halvor; Nicolaisen, Magnus; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

    …the database will be shown in the online map service (e.g. processed results of displacement measurements), while more detailed data will not (e.g. raw data of displacement measurements). Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site, a municipality, a county or the entire country. Selected data will also be downloadable free of charge. The present database on unstable rock slopes in Norway will further evolve in the coming years as the systematic mapping conducted by the Geological Survey of Norway progresses and as available techniques and tools evolve.

  5. TogoTable: cross-database annotation system using the Resource Description Framework (RDF) data model.

    PubMed

    Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko

    2014-07-01

    TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
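The cross-database lookup the abstract describes can be illustrated with the kind of SPARQL query such a tool could send to an RDF endpoint, using a database ID from the user's table as the query key. This is a sketch, not TogoTable's actual code, and the UniProt-style predicate and identifier IRIs are assumptions for illustration:

```python
# Illustrative sketch: building a SPARQL query that follows Linked Open
# Data links from a GeneID to annotations in an externally linked database.
# The predicate and IRI patterns are assumed, UniProt-style examples.
def build_annotation_query(gene_id):
    """Build a SPARQL query keyed on a GeneID from the uploaded table."""
    return (
        "PREFIX up: <http://purl.uniprot.org/core/>\n"
        "SELECT ?protein ?annotation WHERE {\n"
        f"  ?protein up:encodedBy <http://purl.uniprot.org/geneid/{gene_id}> .\n"
        "  ?protein up:annotation ?annotation .\n"
        "}\n"
    )

query = build_annotation_query("7157")  # an NCBI GeneID used as the query key
print(query)
```

In practice such a query would be sent to a SPARQL endpoint over HTTP; the point of the sketch is only that identifiers in the user's table become query keys into the RDF graph, so annotations from externally linked databases can be retrieved through the LOD network.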

  6. Disaster Debris Recovery Database - Recovery

    EPA Pesticide Factsheets

    The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 6,000 composting facilities, demolition contractors, transfer stations, landfills and recycling facilities for construction and demolition materials, electronics, household hazardous waste, metals, tires, and vehicles in the states of Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, North Dakota, Ohio, Pennsylvania, South Dakota, West Virginia and Wisconsin. In this update, facilities in the 7 states that border the EPA Region 5 states were added to assist interstate disaster debris management. Also, the datasets for composters, construction and demolition recyclers, demolition contractors, and metals recyclers were verified and source information added for each record using these sources: AGC, Biocycle, BMRA, CDRA, ISRI, NDA, USCC, FEMA Debris Removal Contractor Registry, EPA Facility Registry System, and State and local listings.

  7. Disaster Debris Recovery Database - Landfills

    EPA Pesticide Factsheets

    The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 6,000 composting facilities, demolition contractors, transfer stations, landfills and recycling facilities for construction and demolition materials, electronics, household hazardous waste, metals, tires, and vehicles in the states of Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, North Dakota, Ohio, Pennsylvania, South Dakota, West Virginia and Wisconsin. In this update, facilities in the 7 states that border the EPA Region 5 states were added to assist interstate disaster debris management. Also, the datasets for composters, construction and demolition recyclers, demolition contractors, and metals recyclers were verified and source information added for each record using these sources: AGC, Biocycle, BMRA, CDRA, ISRI, NDA, USCC, FEMA Debris Removal Contractor Registry, EPA Facility Registry System, and State and local listings.

  8. MitoBreak: the mitochondrial DNA breakpoints database.

    PubMed

    Damas, Joana; Carneiro, João; Amorim, António; Pereira, Filipe

    2014-01-01

    Mitochondrial DNA (mtDNA) rearrangements are key events in the development of many diseases. Investigations of mtDNA regions affected by rearrangements (i.e. breakpoints) can lead to important discoveries about rearrangement mechanisms and can offer important clues about the causes of mitochondrial diseases. Here, we present the mitochondrial DNA breakpoints database (MitoBreak; http://mitobreak.portugene.com), a free, web-accessible comprehensive list of breakpoints from three classes of somatic mtDNA rearrangements: circular deleted (deletions), circular partially duplicated (duplications) and linear mtDNAs. Currently, MitoBreak contains >1400 mtDNA rearrangements from seven species (Homo sapiens, Mus musculus, Rattus norvegicus, Macaca mulatta, Drosophila melanogaster, Caenorhabditis elegans and Podospora anserina) and their associated phenotypic information collected from nearly 400 publications. The database allows researchers to perform multiple types of data analyses through user-friendly interfaces with full or partial datasets. It also permits the download of curated data and the submission of new mtDNA rearrangements. For each reported case, MitoBreak also documents the precise breakpoint positions, junction sequences, disease or associated symptoms and links to the related publications, providing a useful resource to study the causes and consequences of mtDNA structural alterations.

  9. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…
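The core idea, a privilege expressed as a query over database state rather than a static ACL entry, can be sketched in a few lines. The schema and policy below are hypothetical examples for illustration, not taken from the paper:

```python
# Minimal sketch of Reflective Database Access Control: the privilege is
# itself a query over the database, not an entry in a static ACL.
# Schema and policy are hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (name TEXT, manager TEXT, salary INTEGER);
INSERT INTO employees VALUES ('alice', 'carol', 120000),
                             ('bob',   'carol', 95000),
                             ('carol', 'dave',  150000);
""")

# The policy is a query: user U may read employee E's salary iff U is
# E's manager -- a fact stored in the employees table itself.
PRIVILEGE_QUERY = "SELECT salary FROM employees WHERE name = ? AND manager = ?"

def read_salary(requesting_user, employee):
    row = conn.execute(PRIVILEGE_QUERY, (employee, requesting_user)).fetchone()
    return row[0] if row else None  # None = access denied (or no such row)

print(read_salary("carol", "alice"))  # carol manages alice -> 120000
print(read_salary("bob", "alice"))    # bob does not -> None
```

Because the privilege references live table contents, reassigning alice's manager immediately changes who may read her salary, which is the expressiveness gain the abstract attributes to RDBAC.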

  10. Inferring pregnancy episodes and outcomes within a network of observational databases

    PubMed Central

    Ryan, Patrick; Fife, Daniel; Gifkins, Dina; Knoll, Chris; Friedman, Andrew

    2018-01-01

    Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases: Truven MarketScan® Commercial Claims and Encounters (CCAE), Truven MarketScan® Multi-state Medicaid (MDCD), and the Optum ClinFormatics® (Optum) database and one non-US database, the United Kingdom (UK) based Clinical Practice Research Datalink (CPRD). Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm’s Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources and pregnancy episode lengths found were as expected for all outcomes, excepting term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results found highest agreement between reviewer chosen and algorithm operating characteristics for questions assessing pregnancy status and accuracy of outcome category with 99–100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95–100%, while start date agreement within seven days in either direction ranged from 90–97%. In Optum validation sensitivity analysis, a total of 73% of
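The "derived hierarchy of available pregnancy markers" used for start-date estimation can be sketched as follows. The marker names, priority order, and gestational offsets here are assumptions for illustration, not the published algorithm:

```python
# Illustrative sketch: estimate a pregnancy start date from the most
# reliable marker available in the record. Marker names, priorities, and
# offsets are assumed values, not those of the published algorithm.
from datetime import date, timedelta

# Highest-priority marker first: (marker name, gestational days to subtract)
MARKER_HIERARCHY = [
    ("last_menstrual_period", 0),     # LMP date is the start estimate itself
    ("nuchal_ultrasound", 12 * 7),    # scan typically at ~12 weeks
    ("urine_pregnancy_test", 5 * 7),  # assumed ~5 weeks at first positive test
]

def estimate_start(markers):
    """Estimate the pregnancy start date from the best available marker."""
    for name, offset_days in MARKER_HIERARCHY:
        if name in markers:
            return markers[name] - timedelta(days=offset_days)
    return None  # no usable marker in this record

episode = {"nuchal_ultrasound": date(2017, 6, 1)}
print(estimate_start(episode))  # start inferred ~12 weeks before the scan
```

The validation finding quoted above, weaker agreement for episodes anchored on amenorrhea or urine tests, is consistent with this structure: the lower a marker sits in the hierarchy, the cruder its assumed gestational offset.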

  11. Stopping-Power and Range Tables for Electrons, Protons, and Helium Ions

    National Institute of Standards and Technology Data Gateway

    SRD 124 NIST Stopping-Power and Range Tables for Electrons, Protons, and Helium Ions (Web, free access)   The databases ESTAR, PSTAR, and ASTAR calculate stopping-power and range tables for electrons, protons, or helium ions. Stopping-power and range tables can be calculated for electrons in any user-specified material and for protons and helium ions in 74 materials.

  12. Image Databases.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…

  13. Software electron counting for low-dose scanning transmission electron microscopy.

    PubMed

    Mittelberger, Andreas; Kramberger, Christian; Meyer, Jannik C

    2018-05-01

    The performance of the detector is of key importance for low-dose imaging in transmission electron microscopy, and counting every single electron can be considered as the ultimate goal. In scanning transmission electron microscopy, low-dose imaging can be realized by very fast scanning, however, this also introduces artifacts and a loss of resolution in the scan direction. We have developed a software approach to correct for artifacts introduced by fast scans, making use of a scintillator and photomultiplier response that extends over several pixels. The parameters for this correction can be directly extracted from the raw image. Finally, the images can be converted into electron counts. This approach enables low-dose imaging in the scanning transmission electron microscope via high scan speeds while retaining the image quality of artifact-free slower scans. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
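The counting step can be illustrated with a toy 1D example: because the scintillator and photomultiplier response spreads a single electron over several pixels, a run of consecutive above-threshold pixels is counted as one electron event rather than several. The threshold and signal values below are made up for the sketch:

```python
# Toy illustration of software electron counting in a fast STEM scan:
# each contiguous above-threshold run of pixels is one electron event,
# since the detector response extends over several pixels.
def count_electrons(trace, threshold):
    counts, in_event = 0, False
    for value in trace:
        if value > threshold and not in_event:
            counts += 1       # rising edge: a new electron event begins
            in_event = True
        elif value <= threshold:
            in_event = False  # event ended; ready for the next one
    return counts

# Two electron events, each smeared over 2-3 pixels by the detector response:
signal = [0.1, 0.2, 3.0, 4.1, 2.8, 0.2, 0.1, 3.5, 3.9, 0.3]
print(count_electrons(signal, threshold=1.0))  # -> 2
```

The actual method in the paper additionally corrects scan artifacts using response parameters extracted from the raw image itself; this sketch only shows why grouping smeared pixels is what turns an intensity trace into electron counts.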

  14. Composition of the mitochondrial electron transport chain in Acanthamoeba castellanii: structural and evolutionary insights.

    PubMed

    Gawryluk, Ryan M R; Chisholm, Kenneth A; Pinto, Devanand M; Gray, Michael W

    2012-11-01

    The mitochondrion, derived in evolution from an α-proteobacterial progenitor, plays a key metabolic role in eukaryotes. Mitochondria house the electron transport chain (ETC) that couples oxidation of organic substrates and electron transfer to proton pumping and synthesis of ATP. The ETC comprises several multiprotein enzyme complexes, all of which have counterparts in bacteria. However, mitochondrial ETC assemblies from animals, plants and fungi are generally more complex than their bacterial counterparts, with a number of 'supernumerary' subunits appearing early in eukaryotic evolution. Little is known, however, about the ETC of unicellular eukaryotes (protists), which are key to understanding the evolution of mitochondria and the ETC. We present an analysis of the ETC proteome from Acanthamoeba castellanii, an ecologically, medically and evolutionarily important member of Amoebozoa (sister to Opisthokonta). Data obtained from tandem mass spectrometric (MS/MS) analyses of purified mitochondria as well as ETC complexes isolated via blue native polyacrylamide gel electrophoresis are combined with the results of bioinformatic queries of sequence databases. Our bioinformatic analyses have identified most of the ETC subunits found in other eukaryotes, confirming and extending previous observations. The assignment of proteins as ETC subunits by MS/MS provides important insights into the primary structures of ETC proteins and makes possible, through the use of sensitive profile-based similarity searches, the identification of novel constituents of the ETC along with the annotation of highly divergent but phylogenetically conserved ETC subunits. © 2012 Elsevier B.V. All rights reserved.

  15. The Application and Future of Big Database Studies in Cardiology: A Single-Center Experience.

    PubMed

    Lee, Kuang-Tso; Hour, Ai-Ling; Shia, Ben-Chang; Chu, Pao-Hsien

    2017-11-01

    As medical research techniques and quality have improved, it has become apparent that many cardiovascular questions could be better resolved through stricter experimental design. In practice, substantial time and resources must be expended to meet the requirements of high-quality studies, and many worthy ideas and hypotheses cannot be verified or proven because of ethical or economic limitations. In recent years, new and varied applications of databases have received increasing attention. With rigorous statistical methods, such databases can yield important information on issues including rare cardiovascular diseases, women's heart health, post-marketing analysis of different medications, and combinations of clinical and regional cardiac features. However, all databases have limitations. A key requirement for conducting and correctly interpreting this kind of research is a reliable process for analyzing these cardiology databases.

  16. [Establishment of database with standard 3D tooth crowns based on 3DS MAX].

    PubMed

    Cheng, Xiaosheng; An, Tao; Liao, Wenhe; Dai, Ning; Yu, Qing; Lu, Peijun

    2009-08-01

    A database of standard 3D tooth crowns lays the groundwork for a dental CAD/CAM system. In this paper, we design standard tooth crowns in 3DS MAX 9.0 and successfully create a database of these models. First, key lines are collected from standard tooth pictures; 3DS MAX 9.0 is then used to design the digital tooth model based on these lines, with the standard plaster tooth model as an important reference throughout the design process. Testing shows that the standard tooth models designed with this method are accurate and adaptable; furthermore, operations such as deformation and translation are easy to perform on the models. This method provides a new approach to building a database of standard 3D tooth crowns and a basis for dental CAD/CAM systems.

  17. Quebec Trophoblastic Disease Registry: how to make an easy-to-use dynamic database.

    PubMed

    Sauthier, Philippe; Breguet, Magali; Rozenholc, Alexandre; Sauthier, Michaël

    2015-05-01

    To create an easy-to-use dynamic database designed specifically for the Quebec Trophoblastic Disease Registry (RMTQ). It is now well established that much of the success in managing trophoblastic diseases comes from the development of national and regional reference centers. Computerized databases allow the optimal use of data stored in these centers. We have created an electronic data registration system by producing a database using FileMaker Pro 12. It uses 11 external tables associated with a unique identification number for each patient. Each table allows specific data to be recorded, incorporating demographics, diagnosis, automated staging, laboratory values, pathological diagnosis, and imaging parameters. From January 1, 2009, to December 31, 2013, we used our database to register 311 patients with 380 diseases and have seen a 39.2% increase in registrations each year between 2009 and 2012. This database allows the automatic generation of semilogarithmic curves, which take into account β-hCG values as a function of time, complete with graphic markers for applied treatments (chemotherapy, radiotherapy, or surgery). It generates a summary sheet for a synthetic vision in real time. We have created, at a low cost, an easy-to-use database specific to trophoblastic diseases that dynamically integrates staging and monitoring. We propose a 10-step procedure for a successful trophoblastic database. It improves patient care, research, and education on trophoblastic diseases in Quebec and leads to an opportunity for collaboration on a national Canadian registry.

  18. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative.

    PubMed

    Won, Brian; Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-03-16

    An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved.  We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between

  19. 21 CFR 1311.30 - Requirements for storing and using a private key for digitally signing orders.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Requirements for storing and using a private key... Digital Certificates for Electronic Orders § 1311.30 Requirements for storing and using a private key for... and private key. (b) The certificate holder must provide FIPS-approved secure storage for the private...

  20. Electronic plants

    PubMed Central

    Stavrinidou, Eleni; Gabrielsson, Roger; Gomez, Eliot; Crispin, Xavier; Nilsson, Ove; Simon, Daniel T.; Berggren, Magnus

    2015-01-01

    The roots, stems, leaves, and vascular circuitry of higher plants are responsible for conveying the chemical signals that regulate growth and functions. From a certain perspective, these features are analogous to the contacts, interconnections, devices, and wires of discrete and integrated electronic circuits. Although many attempts have been made to augment plant function with electroactive materials, plants’ “circuitry” has never been directly merged with electronics. We report analog and digital organic electronic circuits and devices manufactured in living plants. The four key components of a circuit have been achieved using the xylem, leaves, veins, and signals of the plant as the template and integral part of the circuit elements and functions. With integrated and distributed electronics in plants, one can envisage a range of applications including precision recording and regulation of physiology, energy harvesting from photosynthesis, and alternatives to genetic modification for plant optimization. PMID:26702448