Sample records for sources: electronic databases

  1. Identifying the effective evidence sources to use in developing Clinical Guidelines for Acute Stroke Management: lived experiences of the search specialist and project manager.

    PubMed

    Parkhill, Anne; Hill, Kelvin

    2009-03-01

    The Australian National Stroke Foundation appointed a search specialist to find the best available evidence for the second edition of its Clinical Guidelines for Acute Stroke Management. To identify the relative effectiveness of differing evidence sources for the guideline update. We searched and reviewed references from five valid evidence sources for clinical and economic questions: (i) electronic databases; (ii) reference lists of relevant systematic reviews, guidelines, and/or primary studies; (iii) table of contents of a number of key journals for the last 6 months; (iv) internet/grey literature; and (v) experts. Reference sources were recorded, quantified, and analysed. In the clinical portion of the guidelines document, there was a greater use of previous knowledge and sources other than electronic databases for evidence, while there was a greater use of electronic databases for the economic section. The results confirmed that searchers need to be aware of the context and range of sources for evidence searches. For best available evidence, searchers cannot rely solely on electronic databases and need to encompass many different media and sources.

  2. Pesticide Information Sources in the United States.

    ERIC Educational Resources Information Center

    Alston, Patricia Gayle

    1992-01-01

    Presents an overview of electronic and published sources on pesticides. Includes sources such as databases, CD-ROMs, books, journals, brochures, pamphlets, fact sheets, hotlines, courses, electronic mail, and electronic bulletin boards. (MCO)

  3. Survey on the use of information sources in the field of aging.

    PubMed

    Bird, G; Heekin, J M

    1994-01-01

    This article presents the results of a survey conducted over the summer of 1992 on the use of information sources by professionals in the field of aging. In particular, factors affecting the use of electronic information sources were investigated. The data provide a demographic profile of North American gerontologists, with a predictably wide range of disciplines and types of practice represented. Several factors were found to have an impact on the gerontologists' utilization of electronic information sources. Respondents who used a larger-than-average number of computer applications were found to make relatively more use of electronic sources, including online searches, CD-ROM indexes, library OPACs, and other databases searched by remote access. Attendance at library workshops was found to increase the amount of end-user searching but not the amount of library-mediated searching. Respondents also reported which databases they used and which they considered most important. MEDLINE was the most frequently mentioned database across all disciplines, including the health and social sciences. Computer databases were ranked least important out of six listed sources of information, and only 5% of respondents reported having used an electronic current awareness profile.

  4. DOE technology information management system database study report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  5. An Update on Electronic Information Sources.

    ERIC Educational Resources Information Center

    Ackerman, Katherine

    1987-01-01

    This review of new developments and products in online services discusses trends in travel-related services; full-text databases; statistical source databases; an emphasis on regional and international business news; and user-friendly systems. (Author/CLB)

  6. Environmental Information Resources and Electronic Research Systems (ERSs): Eco-Link as an Example of Future Tools.

    ERIC Educational Resources Information Center

    Weiskel, Timothy C.

    1991-01-01

    An online system designed to help global environmental research, the electronic research system called Eco-Link draws data from various electronic sources including online catalogs and databases, CD-ROMs, electronic news sources, and electronic data subscription services to produce briefing booklets on environmental issues. It can be accessed by…

  7. Where Field Staff Get Information. Approaching the Electronic Times.

    ERIC Educational Resources Information Center

    Shih, Win-Yuan; Evans, James F.

    1991-01-01

    Top 3 information sources identified in a survey of 109 extension agents were extension publications, specialists, and personal files. Electronic sources such as satellite programming and bibliographic databases were used infrequently, because of lack of access, user friendliness, and ready applicability of information. (SK)

  8. An Improved Model for Operational Specification of the Electron Density Structure up to Geosynchronous Heights

    DTIC Science & Technology

    2010-07-01

    http://www.iono.noa.gr/ElectronDensity/EDProfile.php The web service has been developed with the following open source tools: a) PHP, for the... MySQL for the database, which was based on the enhancement of the DIAS database. Below we present some screen shots to demonstrate the functionality

  9. Brief Report: Databases in the Asia-Pacific Region: The Potential for a Distributed Network Approach.

    PubMed

    Lai, Edward Chia-Cheng; Man, Kenneth K C; Chaiyakunapruk, Nathorn; Cheng, Ching-Lan; Chien, Hsu-Chih; Chui, Celine S L; Dilokthornsakul, Piyameth; Hardy, N Chantelle; Hsieh, Cheng-Yang; Hsu, Chung Y; Kubota, Kiyoshi; Lin, Tzu-Chieh; Liu, Yanfang; Park, Byung Joo; Pratt, Nicole; Roughead, Elizabeth E; Shin, Ju-Young; Watcharathanakij, Sawaeng; Wen, Jin; Wong, Ian C K; Yang, Yea-Huei Kao; Zhang, Yinghong; Setoguchi, Soko

    2015-11-01

    This study describes the availability and characteristics of databases in Asia-Pacific countries and assesses the feasibility of a distributed network approach in the region. A web-based survey was conducted among investigators using healthcare databases in the Asia-Pacific countries. Potential survey participants were identified through the Asian Pharmacoepidemiology Network. Investigators from a total of 11 databases participated in the survey. Database sources included four nationwide claims databases from Japan, South Korea, and Taiwan; two nationwide electronic health records from Hong Kong and Singapore; a regional electronic health record from western China; two electronic health records from Thailand; and cancer and stroke registries from Taiwan. We identified 11 databases with capabilities for distributed network approaches. Many country-specific coding systems and terminologies have been already converted to international coding systems. The harmonization of health expenditure data is a major obstacle for future investigations attempting to evaluate issues related to medical costs.

  10. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. This system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to an efficient validation of outcome assessment in drug safety database studies.

  11. Electronic and Optical Sources of Information for the Humanities: A Survey of Library Users, Products and Projects in Italy.

    ERIC Educational Resources Information Center

    Paci, Augusta Maria; And Others

    1990-01-01

    Discussion of electronic information products for the humanities focuses on users in Italy. Databases are discussed; a user survey of La Sapienza University Arts Faculty is described; an example of research using two databases, FRANCIS and Philosopher's Index, is given; and Italian products and projects for the humanities are reviewed. (15…

  12. Healthcare databases in Europe for studying medicine use and safety during pregnancy.

    PubMed

    Charlton, Rachel A; Neville, Amanda J; Jordan, Sue; Pierini, Anna; Damase-Michel, Christine; Klungsøyr, Kari; Andersen, Anne-Marie Nybo; Hansen, Anne Vinkel; Gini, Rosa; Bos, Jens H J; Puccini, Aurora; Hurault-Delarue, Caroline; Brooks, Caroline J; de Jong-van den Berg, Lolkje T W; de Vries, Corinne S

    2014-06-01

    The aim of this study was to describe a number of electronic healthcare databases in Europe in terms of the population covered, the source of the data captured and the availability of data on key variables required for evaluating medicine use and medicine safety during pregnancy. A sample of electronic healthcare databases that captured pregnancies and prescription data was selected on the basis of contacts within the EUROCAT network. For each participating database, a database inventory was completed. Eight databases were included, and the total population covered was 25 million. All databases recorded live births, seven captured stillbirths and five had full data available on spontaneous pregnancy losses and induced terminations. In six databases, data were usually available to determine the date of the woman's last menstrual period, whereas in the remainder, algorithms were needed to establish a best estimate for at least some pregnancies. In seven databases, it was possible to use data recorded in the databases to identify pregnancies where the offspring had a congenital anomaly. Information on confounding variables was more commonly available in databases capturing data recorded by primary-care practitioners. All databases captured maternal co-prescribing and a measure of socioeconomic status. This study suggests that within Europe, electronic healthcare databases may be valuable sources of data for evaluating medicine use and safety during pregnancy. The suitability of a particular database, however, will depend on the research question, the type of medicine to be evaluated, the prevalence of its use and any adverse outcomes of interest. © 2014 The Authors. Pharmacoepidemiology and Drug Safety published by John Wiley & Sons, Ltd.

  13. My Favorite Things Electronically Speaking, 1997 Edition.

    ERIC Educational Resources Information Center

    Glantz, Shelley

    1997-01-01

    Responding to an informal survey, 96 media specialists named favorite software, CD-ROMs, and online sites. This article lists automation packages, electronic encyclopedias, CD-ROMs, electronic magazine indexes, CD-ROM and online database services, electronic sources of current events, laser disks for grades 6-12, word processing programs for…

  14. Evaluation of Electronic Healthcare Databases for Post-Marketing Drug Safety Surveillance and Pharmacoepidemiology in China.

    PubMed

    Yang, Yu; Zhou, Xiaofeng; Gao, Shuangqing; Lin, Hongbo; Xie, Yanming; Feng, Yuji; Huang, Kui; Zhan, Siyan

    2018-01-01

    Electronic healthcare databases (EHDs) are used increasingly for post-marketing drug safety surveillance and pharmacoepidemiology in Europe and North America. However, few studies have examined the potential of these data sources in China. Three major types of EHDs in China (i.e., a regional community-based database, a national claims database, and an electronic medical records [EMR] database) were selected for evaluation. Forty core variables were derived based on the US Mini-Sentinel (MS) Common Data Model (CDM) as well as the data features in China that would be desirable to support drug safety surveillance. An email survey of these core variables and eight general questions as well as follow-up inquiries on additional variables was conducted. These 40 core variables across the three EHDs and all variables in each EHD along with those in the US MS CDM and Observational Medical Outcomes Partnership (OMOP) CDM were compared for availability and labeled based on specific standards. All of the EHDs' custodians confirmed their willingness to share their databases with academic institutions after appropriate approval was obtained. The regional community-based database contained 1.19 million people in 2015 with 85% of core variables. Resampled annually nationwide, the national claims database included 5.4 million people in 2014 with 55% of core variables, and the EMR database included 3 million inpatients from 60 hospitals in 2015 with 80% of core variables. Compared with MS CDM or OMOP CDM, the proportion of variables across the three EHDs available or able to be transformed/derived from the original sources are 24-83% or 45-73%, respectively. These EHDs provide potential value to post-marketing drug safety surveillance and pharmacoepidemiology in China. Future research is warranted to assess the quality and completeness of these EHDs or additional data sources in China.

  15. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods, such as single particle analysis, 2-D crystallography, helical reconstruction, and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy-to-use object-oriented database with a highly flexible infrastructure, originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full-featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over half a million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  16. Assessment of COPD-related outcomes via a national electronic medical record database.

    PubMed

    Asche, Carl; Said, Quayyim; Joish, Vijay; Hall, Charles Oaxaca; Brixner, Diana

    2008-01-01

    The technology and sophistication of healthcare utilization databases have expanded over the last decade to include results of lab tests, vital signs, and other clinical information. This review provides an assessment of the methodological and analytical challenges of conducting chronic obstructive pulmonary disease (COPD) outcomes research in a national electronic medical records (EMR) dataset and its potential application towards the assessment of national health policy issues, as well as a description of the challenges or limitations. An EMR database and its application to measuring outcomes for COPD are described. The ability to measure adherence to the COPD evidence-based practice guidelines, generated by the NIH and HEDIS quality indicators, in this database was examined. Case studies, before and after their publication, were used to assess the adherence to guidelines and gauge the conformity to quality indicators. EMR was the only source of information for pulmonary function tests, but low frequency in ordering by primary care was an issue. The EMR data can be used to explore impact of variation in healthcare provision on clinical outcomes. The EMR database permits access to specific lab data and biometric information. The richness and depth of information on "real world" use of health services for large population-based analytical studies at relatively low cost render such databases an attractive resource for outcomes research. Various sources of information exist to perform outcomes research. It is important to understand the desired endpoints of such research and choose the appropriate database source.

  17. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  18. NASA scientific and technical information for the 1990s

    NASA Technical Reports Server (NTRS)

    Cotter, Gladys A.

    1990-01-01

    Projections for NASA scientific and technical information (STI) in the 1990s are outlined. NASA STI for the 1990s will maintain a quality bibliographic and full-text database, emphasizing electronic input and products supplemented by networked access to a wide variety of sources, particularly numeric databases.

  19. Network meta-analyses could be improved by searching more sources and by involving a librarian.

    PubMed

    Li, Lun; Tian, Jinhui; Tian, Hongliang; Moher, David; Liang, Fuxiang; Jiang, Tongxiao; Yao, Liang; Yang, Kehu

    2014-09-01

    Network meta-analyses (NMAs) aim to rank the benefits (or harms) of interventions, based on all available randomized controlled trials. Thus, the identification of relevant data is critical. We assessed the conduct of the literature searches in NMAs. Published NMAs were retrieved by searching electronic bibliographic databases and other sources. Two independent reviewers selected studies and five trained reviewers abstracted data regarding literature searches, in duplicate. Search method details were examined using descriptive statistics. Two hundred forty-nine NMAs were included. Eight used previous systematic reviews to identify primary studies without further searching, and five did not report any literature searches. In the 236 studies that used electronic databases to identify primary studies, the median number of databases was 3 (interquartile range: 3-5). MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials were the most commonly used databases. The most common supplemental search methods included reference lists of included studies (48%), reference lists of previous systematic reviews (40%), and clinical trial registries (32%). None of these supplemental methods was conducted in more than 50% of the NMAs. Literature searches in NMAs could be improved by searching more sources, and by involving a librarian or information specialist. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. ERMes: Open Source Simplicity for Your E-Resource Management

    ERIC Educational Resources Information Center

    Doering, William; Chilton, Galadriel

    2009-01-01

    ERMes, the latest version of electronic resource management system (ERM), is a relational database; content in different tables connects to, and works with, content in other tables. ERMes requires Access 2007 (Windows) or Access 2008 (Mac) to operate as the database utilizes functionality not available in previous versions of Microsoft Access. The…

  1. Internet Database Review: The FDA BBS.

    ERIC Educational Resources Information Center

    Tomaiuolo, Nicholas G.

    1993-01-01

    Describes the electronic bulletin board system (BBS) of the Food and Drug Administration (FDA) that is accessible through the Internet. Highlights include how to gain access; the menu-driven software; other electronic sources of FDA information; and adding value. Examples of the FDA BBS menu and the help screen are included. (LRW)

  2. Atmospheric Sciences Information Resources in the United States--An Overview for Librarians.

    ERIC Educational Resources Information Center

    Layman, Mary; Smith, Shirley

    1993-01-01

    Presents an overview of the types of information and information sources available in the field of atmospheric sciences. Included are major library collections; organizations; government programs, including air pollution control regulations; electronic databases; and networking resources. Addresses are provided for all sources, and definitions of…

  3. Use of large healthcare databases for rheumatology clinical research.

    PubMed

    Desai, Rishi J; Solomon, Daniel H

    2017-03-01

    Large healthcare databases, which contain data collected during routinely delivered healthcare to patients, can serve as a valuable resource for generating actionable evidence to assist medical and healthcare policy decision-making. In this review, we summarize use of large healthcare databases in rheumatology clinical research. Large healthcare data are critical to evaluate medication safety and effectiveness in patients with rheumatologic conditions. Three major sources of large healthcare data are: first, electronic medical records, second, health insurance claims, and third, patient registries. Each of these sources offers unique advantages, but also has some inherent limitations. To address some of these limitations and maximize the utility of these data sources for evidence generation, recent efforts have focused on linking different data sources. Innovations such as randomized registry trials, which aim to facilitate design of low-cost randomized controlled trials built on existing infrastructure provided by large healthcare databases, are likely to make clinical research more efficient in coming years. Harnessing the power of information contained in large healthcare databases, while paying close attention to their inherent limitations, is critical to generate a rigorous evidence-base for medical decision-making and ultimately enhancing patient care.

  4. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective: To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background: Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods: We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results: Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion: Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining asthma definitions with optimal validity. PMID:29238227

  5. A systematic review of administrative and clinical databases of infants admitted to neonatal units.

    PubMed

    Statnikov, Yevgeniy; Ibrahim, Buthaina; Modi, Neena

    2017-05-01

    High quality information, increasingly captured in clinical databases, is a useful resource for evaluating and improving newborn care. We conducted a systematic review to identify neonatal databases, and define their characteristics. We followed a preregistered protocol using MesH terms to search MEDLINE, EMBASE, CINAHL, Web of Science and OVID Maternity and Infant Care Databases for articles identifying patient level databases covering more than one neonatal unit. Full-text articles were reviewed and information extracted on geographical coverage, criteria for inclusion, data source, and maternal and infant characteristics. We identified 82 databases from 2037 publications. Of the country-specific databases there were 39 regional and 39 national. Sixty databases restricted entries to neonatal unit admissions by birth characteristic or insurance cover; 22 had no restrictions. Data were captured specifically for 53 databases; 21 administrative sources; 8 clinical sources. Two clinical databases hold the largest range of data on patient characteristics, USA's Pediatrix BabySteps Clinical Data Warehouse and UK's National Neonatal Research Database. A number of neonatal databases exist that have potential to contribute to evaluating neonatal care. The majority is created by entering data specifically for the database, duplicating information likely already captured in other administrative and clinical patient records. This repetitive data entry represents an unnecessary burden in an environment where electronic patient records are increasingly used. Standardisation of data items is necessary to facilitate linkage within and between countries. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  6. Intrinsic Radiation Source Generation with the ISC Package: Data Comparisons and Benchmarking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Clell J. Jr.

    The characterization of radioactive emissions from unstable isotopes (intrinsic radiation) is necessary for shielding and radiological-dose calculations involving radioactive materials. While most radiation transport codes, e.g., MCNP [X-5 Monte Carlo Team, 2003], provide the capability to input user-prescribed source definitions, such as radioactive emissions, they do not provide the capability to calculate the correct radioactive-source definition from the material compositions. Special modifications to MCNP have been developed in the past to allow the user to specify an intrinsic source, but these modifications have not been implemented into the primary source base [Estes et al., 1988]. To facilitate the description of the intrinsic radiation source from a material with a specific composition, the Intrinsic Source Constructor library (LIBISC) and MCNP Intrinsic Source Constructor (MISC) utility have been written. The combination of LIBISC and MISC is herein referred to as the ISC package. LIBISC is a statically linkable C++ library that provides the functionality needed to construct the intrinsic-radiation source generated by a material. Furthermore, LIBISC provides the ability to use different particle-emission, radioactive-decay, and natural-abundance databases, giving the user flexibility in the specification of the source if one database is preferred over others. LIBISC also provides functionality for aging materials and for producing a thick-target bremsstrahlung photon source approximation from the electron emissions. The MISC utility links to LIBISC and translates the description of intrinsic-radiation sources into a format directly usable with the MCNP transport code. Through a series of input keywords and arguments, the MISC user can specify the material, age the material if desired, and produce a source description of the radioactive emissions from the material in an MCNP-readable format.
Further details on using the MISC utility can be obtained from the user guide [Solomon, 2012]. The remainder of this report presents a discussion of the databases available to LIBISC and MISC, a discussion of the models employed by LIBISC, a comparison of the thick-target bremsstrahlung model employed, a benchmark comparison to plutonium and depleted-uranium spheres, and a comparison of the available particle-emission databases.
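The material aging that LIBISC performs amounts, in its simplest form, to first-order radioactive decay of each isotope in an inventory. The sketch below is an illustration only, not the actual LIBISC implementation (whose models are documented in the user guide); it assumes single-step decay and ignores daughter ingrowth (the full Bateman solution).

```python
import math

def age_inventory(atoms, half_lives_s, t_s):
    """Decay each isotope's atom count over elapsed time t_s (seconds).

    atoms: dict mapping isotope name -> number of atoms
    half_lives_s: dict mapping isotope name -> half-life in seconds
    Single-step decay only; daughter ingrowth is ignored in this sketch.
    """
    aged = {}
    for iso, n0 in atoms.items():
        lam = math.log(2.0) / half_lives_s[iso]  # decay constant lambda
        aged[iso] = n0 * math.exp(-lam * t_s)    # N(t) = N0 * exp(-lambda * t)
    return aged

# Example: tritium (half-life ~12.32 y) aged for exactly one half-life
YEAR = 3.156e7  # seconds per year (approximate)
aged = age_inventory({"H3": 1.0e20}, {"H3": 12.32 * YEAR}, 12.32 * YEAR)
```

Aging for one half-life halves the atom count, as expected; a real intrinsic-source constructor would additionally track decay daughters and the associated particle emissions.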

  7. Use of electronic healthcare records in large-scale simple randomized trials at the point of care for the documentation of value-based medicine.

    PubMed

    van Staa, T-P; Klungel, O; Smeeth, L

    2014-06-01

    A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use. © 2014 The Association for the Publication of the Journal of Internal Medicine.

  8. Symposium on electron linear accelerators in honor of Richard B. Neal's 80th birthday: Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siemann, R.H.

    The papers presented at the conference are: (1) the construction of SLAC and the role of R.B. Neal; (2) symposium speech; (3) lessons learned from the SLC; (4) alternate approaches to future electron-positron linear colliders; (5) the NLC technical program; (6) advanced electron linacs; (7) medical uses of linear accelerators; (8) linac-based, intense, coherent X-ray source using self-amplified spontaneous emission. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  9. Utilization of open source electronic health record around the world: A systematic review.

    PubMed

    Aminpour, Farzaneh; Sadoughi, Farahnaz; Ahamdi, Maryam

    2014-01-01

    Many projects developing Electronic Health Record (EHR) systems have been carried out in many countries. The current study was conducted to review the published data on the utilization of open source EHR systems in different countries around the world. Using free-text and keyword search techniques, six bibliographic databases were searched for related articles. The identified papers were screened and reviewed in a series of stages for relevance and validity. The findings showed that open source EHRs have been widely used in resource-limited regions on all continents, especially in Sub-Saharan Africa and South America. This creates opportunities to improve national healthcare, especially in developing countries with minimal financial resources. Open source technology is a solution to the high costs and inflexibility associated with proprietary health information systems.

  10. An HL7/CDA Framework for the Design and Deployment of Telemedicine Services

    DTIC Science & Technology

    2001-10-25

    schemes and prescription databases. Furthermore, interoperability with the Electronic Health Record (EHR) facilitates automatic retrieval of relevant...local EHR system or the integrated electronic health record (I-EHR) [9], which indexes all medical contacts of a patient in the regional network...suspected medical problem. Interoperability with middleware services of the HII and other data sources such as the local EHR system affects

  11. Non-thermal recombination - a neglected source of flare hard X-rays and fast electron diagnostics (Corrigendum)

    NASA Astrophysics Data System (ADS)

    Brown, J. C.; Mallik, P. C. V.; Badnell, N. R.

    2010-06-01

    Brown and Mallik (BM) recently claimed that non-thermal recombination (NTR) can be a dominant source of flare hard X-rays (HXRs) from hot coronal and chromospheric sources. However, major discrepancies between the thermal continua predicted by BM and by the Chianti database, as well as RHESSI flare data, led us to discover substantial errors in the heuristic expression used by BM to extend the Kramers expressions beyond the hydrogenic case. Here we present the relevant corrected expressions and show the key modified results. We conclude that, in most cases, NTR emission was overestimated by a factor of 1-8 by BM but is typically still large enough (as much as 20-30% of the total emission) to be very important for electron spectral inference and for the detection of electron spectral features such as low-energy cut-offs, since the recombination spectra contain sharp edges. For extreme temperature regimes, and/or if the Fe abundance were as high as some values claimed, NTR could even be the dominant source of flare HXRs, reducing the electron number and energy budget problems, such as those in the extreme coronal HXR source cases reported by, e.g., Krucker et al.

  12. Database and interactive monitoring system for the photonics and electronics of RPC Muon Trigger in CMS experiment

    NASA Astrophysics Data System (ADS)

    Wiacek, Daniel; Kudla, Ignacy M.; Pozniak, Krzysztof T.; Bunkowski, Karol

    2005-02-01

    The main task of the RPC (Resistive Plate Chamber) Muon Trigger monitoring system designed for the CMS (Compact Muon Solenoid) experiment (at the LHC at CERN, Geneva) is the visualization of data describing the structure of the electronic trigger system (e.g. geometry and imagery) and its processes, and the automatic generation of files with VHDL source code used for programming the FPGA matrices. In the near future, the system will enable the analysis of the condition, operation and efficiency of individual Muon Trigger elements, the registration of information about Muon Trigger devices, and the presentation of previously obtained results in an interactive presentation layer. A broad variety of database and programming concepts for the design of the Muon Trigger monitoring system is presented in this article. The structure and architecture of the system and its principle of operation are described. One of the ideas behind this system is the use of object-oriented programming and design techniques to describe real electronic systems through abstract object models stored in a database, with these models implemented in the Java language.

  13. Constraints on Biological Mechanism from Disease Comorbidity Using Electronic Medical Records and Database of Genetic Variants

    PubMed Central

    Bagley, Steven C.; Sirota, Marina; Chen, Richard; Butte, Atul J.; Altman, Russ B.

    2016-01-01

    Patterns of disease co-occurrence that deviate from statistical independence may represent important constraints on biological mechanism, which sometimes can be explained by shared genetics. In this work we study the relationship between disease co-occurrence and commonly shared genetic architecture of disease. Records of pairs of diseases were combined from two different electronic medical systems (Columbia, Stanford), and compared to a large database of published disease-associated genetic variants (VARIMED); data on 35 disorders were available across all three sources, which include medical records for over 1.2 million patients and variants from over 17,000 publications. Based on the sources in which they appeared, disease pairs were categorized as having predominant clinical, genetic, or both kinds of manifestations. Confounding effects of age on disease incidence were controlled for by only comparing diseases when they fall in the same cluster of similarly shaped incidence patterns. We find that disease pairs that are overrepresented in both electronic medical record systems and in VARIMED come from two main disease classes, autoimmune and neuropsychiatric. We furthermore identify specific genes that are shared within these disease groups. PMID:27115429

  14. Constraints on Biological Mechanism from Disease Comorbidity Using Electronic Medical Records and Database of Genetic Variants.

    PubMed

    Bagley, Steven C; Sirota, Marina; Chen, Richard; Butte, Atul J; Altman, Russ B

    2016-04-01

    Patterns of disease co-occurrence that deviate from statistical independence may represent important constraints on biological mechanism, which sometimes can be explained by shared genetics. In this work we study the relationship between disease co-occurrence and commonly shared genetic architecture of disease. Records of pairs of diseases were combined from two different electronic medical systems (Columbia, Stanford), and compared to a large database of published disease-associated genetic variants (VARIMED); data on 35 disorders were available across all three sources, which include medical records for over 1.2 million patients and variants from over 17,000 publications. Based on the sources in which they appeared, disease pairs were categorized as having predominant clinical, genetic, or both kinds of manifestations. Confounding effects of age on disease incidence were controlled for by only comparing diseases when they fall in the same cluster of similarly shaped incidence patterns. We find that disease pairs that are overrepresented in both electronic medical record systems and in VARIMED come from two main disease classes, autoimmune and neuropsychiatric. We furthermore identify specific genes that are shared within these disease groups.
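The deviation from statistical independence that this record describes can be quantified, in its simplest form, as the ratio of observed to expected co-occurrence. This is a hedged sketch of that basic idea only, not the authors' actual method, which additionally controls for age by comparing diseases within clusters of similarly shaped incidence patterns.

```python
def cooccurrence_ratio(n_a, n_b, n_ab, n_total):
    """Observed/expected co-occurrence ratio for two diseases.

    n_a, n_b: number of patients with disease A, disease B
    n_ab: number of patients with both diseases
    n_total: total patients in the record system
    Under independence, E[n_ab] = n_total * P(A) * P(B);
    a ratio > 1 suggests the pair co-occurs more often than chance.
    """
    expected = (n_a / n_total) * (n_b / n_total) * n_total
    return n_ab / expected

# Hypothetical example: 10,000 patients, 500 with A, 400 with B, 60 with both
ratio = cooccurrence_ratio(500, 400, 60, 10_000)
```

Here the expected co-occurrence under independence is 20 patients, so observing 60 gives a ratio of 3, flagging the pair for further scrutiny (e.g., against shared genetic variants).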

  15. Utilization of open source electronic health record around the world: A systematic review

    PubMed Central

    Aminpour, Farzaneh; Sadoughi, Farahnaz; Ahamdi, Maryam

    2014-01-01

    Many projects developing Electronic Health Record (EHR) systems have been carried out in many countries. The current study was conducted to review the published data on the utilization of open source EHR systems in different countries around the world. Using free-text and keyword search techniques, six bibliographic databases were searched for related articles. The identified papers were screened and reviewed in a series of stages for relevance and validity. The findings showed that open source EHRs have been widely used in resource-limited regions on all continents, especially in Sub-Saharan Africa and South America. This creates opportunities to improve national healthcare, especially in developing countries with minimal financial resources. Open source technology is a solution to the high costs and inflexibility associated with proprietary health information systems. PMID:24672566

  16. Developing Modern Information Systems and Services: Africa's Challenges for the Future.

    ERIC Educational Resources Information Center

    Chowdhury, G. G.

    1996-01-01

    Discusses the current state of information systems and services in Africa, examines future possibilities, and suggests areas for improvement. Topics include the lack of automation; CD-ROM databases for accessibility to information sources; developing low-cost electronic communication facilities; Internet connectivity; dependence on imported…

  17. Who Researches Functional Literacy?

    ERIC Educational Resources Information Center

    Shaw, Donita; Perry, Kristen H.; Ivanyuk, Lyudmyla; Tham, Sarah

    2017-01-01

    The purpose of our study was to discover who researches functional literacy. This study was situated within a larger systematic literature review. We searched seven electronic databases and identified 90 sources to answer our larger question regarding how functional literacy is defined and conceptualized as well as the specific question pertinent…

  18. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596

  19. [Active surveillance of adverse drug reaction in the era of big data: challenge and opportunity for control selection].

    PubMed

    Wang, S F; Zhan, S Y

    2016-07-01

    Electronic healthcare databases have become an important source for active surveillance of drug safety in the era of big data. The traditional epidemiology research designs are needed to confirm the association between drug use and adverse events based on these datasets, and the selection of the comparative control is essential to each design. This article aims to explain the principle and application of each type of control selection, introduce the methods and parameters for method comparison, and describe the latest achievements in the batch processing of control selection, which would provide important methodological reference for the use of electronic healthcare databases to conduct post-marketing drug safety surveillance in China.

  20. The Changing Role of a Professional Society Library.

    ERIC Educational Resources Information Center

    Lees, Nigel

    1997-01-01

    Describes developments in the United Kingdom's Royal Society of Chemistry's Library and Information Centre that has changed from a professional and learned society library into a business center. Development of a priced information service, electronic sources of information including online databases and the Internet, and marketing and promotion…

  1. Clinical records anonymisation and text extraction (CRATE): an open-source software system.

    PubMed

    Cardinal, Rudolf N

    2017-04-26

    Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
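The cryptographic mapping of patient identifiers to research pseudonyms (function 2 above) is commonly implemented with a keyed hash such as HMAC, so that pseudonyms are stable across records but cannot be reversed without the secret key. A minimal sketch assuming HMAC-SHA-256; CRATE's actual construction may differ in detail.

```python
import hashlib
import hmac

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Map a patient identifier to a stable research pseudonym.

    The same (key, id) pair always yields the same pseudonym, but the
    mapping cannot be inverted without knowledge of the secret key.
    """
    digest = hmac.new(secret_key, patient_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # truncation length is a local policy choice

# Hypothetical identifiers, for illustration only
key = b"site-specific-secret-key"
p1 = pseudonymise("PATIENT-1234567", key)
p2 = pseudonymise("PATIENT-1234567", key)
p3 = pseudonymise("PATIENT-7654321", key)
```

The keyed construction matters: a plain unkeyed hash of an identifier can be reversed by brute force over the identifier space, whereas the HMAC key confines re-identification to the key holder.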

  2. Gunshot identification system by integration of open source consumer electronics

    NASA Astrophysics Data System (ADS)

    López R., Juan Manuel; Marulanda B., Jose Ignacio

    2014-05-01

    This work presents a prototype of a low-cost gunshot identification system that uses consumer electronics to confirm the occurrence of gunshots and then classify them according to a previously established database. Implementing this tool in urban areas would create records that support forensics, thereby improving law enforcement, including in developing countries. An analysis of its effectiveness is presented in comparison with theoretical results obtained from numerical simulations.

  3. Disaster Debris Recovery Database - Recovery

    EPA Pesticide Factsheets

    The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 6,000 composting facilities, demolition contractors, transfer stations, landfills and recycling facilities for construction and demolition materials, electronics, household hazardous waste, metals, tires, and vehicles in the states of Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, North Dakota, Ohio, Pennsylvania, South Dakota, West Virginia and Wisconsin. In this update, facilities in the 7 states that border the EPA Region 5 states were added to assist interstate disaster debris management. Also, the datasets for composters, construction and demolition recyclers, demolition contractors, and metals recyclers were verified and source information added for each record using these sources: AGC, Biocycle, BMRA, CDRA, ISRI, NDA, USCC, FEMA Debris Removal Contractor Registry, EPA Facility Registry System, and State and local listings.

  4. Disaster Debris Recovery Database - Landfills

    EPA Pesticide Factsheets

    The US EPA Region 5 Disaster Debris Recovery Database includes public datasets of over 6,000 composting facilities, demolition contractors, transfer stations, landfills and recycling facilities for construction and demolition materials, electronics, household hazardous waste, metals, tires, and vehicles in the states of Illinois, Indiana, Iowa, Kentucky, Michigan, Minnesota, Missouri, North Dakota, Ohio, Pennsylvania, South Dakota, West Virginia and Wisconsin. In this update, facilities in the 7 states that border the EPA Region 5 states were added to assist interstate disaster debris management. Also, the datasets for composters, construction and demolition recyclers, demolition contractors, and metals recyclers were verified and source information added for each record using these sources: AGC, Biocycle, BMRA, CDRA, ISRI, NDA, USCC, FEMA Debris Removal Contractor Registry, EPA Facility Registry System, and State and local listings.

  5. FCDD: A Database for Fruit Crops Diseases.

    PubMed

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

    The Fruit Crops Diseases Database (FCDD) draws on a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles information on 162 fruit crop diseases, including disease type, causal organism, images, symptoms and their control. The FCDD contains 171 phytochemicals from 25 fruits, their 2D images and their 20 possible sequences. This information has been manually extracted and manually verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing information on fruit crop diseases, which will help in the discovery of potential drugs from one of the common bioresources: fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available. http://www.fruitcropsdd.com/

  6. Validation of asthma recording in electronic health records: protocol for a systematic review.

    PubMed

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-05-29

    Asthma is a common, heterogeneous disease with significant morbidity and mortality worldwide. It can be difficult to define in epidemiological studies using electronic health records, as the diagnosis is based on non-specific respiratory symptoms and spirometry, neither of which is routinely registered. Electronic health records can nonetheless be valuable for studying the epidemiology, management, healthcare use and control of asthma. For health databases to be useful sources of information, asthma diagnoses should ideally be validated. The primary objectives are to provide an overview of the methods used to validate asthma diagnoses in electronic health records and to summarise the results of the validation studies. EMBASE and MEDLINE will be systematically searched using appropriate search terms. The searches will cover all studies in these databases up to October 2016 with no start date and will yield studies that have validated algorithms or codes for the diagnosis of asthma in electronic health records. At least one test validation measure (sensitivity, specificity, positive predictive value, negative predictive value or other) is necessary for inclusion. In addition, we require the validated algorithms to be compared with an external gold standard, such as a manual review, a questionnaire or an independent second database. We will summarise key data including author, year of publication, country, time period, date, data source, population, case characteristics, clinical events, algorithms, gold standard and validation statistics in a uniform table. This study is a synthesis of previously published studies and, therefore, no ethical approval is required. The results will be submitted to a peer-reviewed journal for publication. Results from this systematic review can be used for outcome research on asthma and to identify case definitions for asthma. CRD42016041798.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
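The test validation measures this protocol requires for inclusion (sensitivity, specificity, positive and negative predictive values) are all derived from a 2x2 table comparing the database algorithm against the gold standard. A brief illustrative sketch with hypothetical counts:

```python
def validation_measures(tp, fp, fn, tn):
    """Standard validation measures from a 2x2 table.

    tp/fp/fn/tn: true positives, false positives, false negatives and
    true negatives of the algorithm versus the gold standard.
    """
    return {
        "sensitivity": tp / (tp + fn),  # cases the algorithm catches
        "specificity": tn / (tn + fp),  # non-cases correctly excluded
        "ppv": tp / (tp + fp),          # flagged records that are true cases
        "npv": tn / (tn + fn),          # unflagged records that are non-cases
    }

# Hypothetical: algorithm flags 90 of 100 true cases, with 10 false alarms
# among 900 gold-standard non-cases
m = validation_measures(tp=90, fp=10, fn=10, tn=890)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the source database, which is one reason validation studies must report their population.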

  7. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    National Institute of Standards and Technology Data Gateway

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase)   This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  8. INFOMAT: The international materials assessment and application centre's internet gateway

    NASA Astrophysics Data System (ADS)

    Branquinho, Carmen Lucia; Colodete, Leandro Tavares

    2004-08-01

    INFOMAT is an electronic directory structured to facilitate the search and retrieval of materials science and technology information sources. Linked to the homepage of the International Materials Assessment and Application Centre, INFOMAT presents descriptions of 392 proprietary databases with links to their host systems as well as direct links to over 180 public domain databases and over 2,400 web sites. Among the web sites are associations/unions, governmental and non-governmental institutions, industries, library holdings, market statistics, news services, on-line publications, standardization and intellectual property organizations, and universities/research groups.

  9. Impact of Orthodontic Treatment on Periodontal Tissues: A Narrative Review of Multidisciplinary Literature

    PubMed Central

    Gorbunkova, Angelina; Pagni, Giorgio; Brizhak, Anna; Farronato, Giampietro; Rasperini, Giulio

    2016-01-01

    The aim of this review is to describe the most commonly observed changes in periodontium caused by orthodontic treatment in order to facilitate specialists' collaboration and communication. An electronic database search was carried out using PubMed abstract and citation database and bibliographic material was then used in order to find other appropriate sources. Soft and hard periodontal tissues changes during orthodontic treatment and maintenance of the patients are discussed in order to provide an exhaustive picture of the possible interactions between these two interwoven disciplines. PMID:26904120

  10. Validation of chronic obstructive pulmonary disease (COPD) diagnoses in healthcare databases: a systematic review protocol.

    PubMed

    Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro

    2016-06-01

    Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system; the Read codes system or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. CRD42015029204.
Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  11. The Joy of Telecomputing: Everything You Need to Know about Going On-Line at Home.

    ERIC Educational Resources Information Center

    Pearlman, Dara

    1984-01-01

    Discusses advantages and pleasures of utilizing a personal computer at home to receive electronic mail; participate in online conferences, software exchanges, and game networks; do shopping and banking; and have access to databases storing volumes of information. Information sources for the services mentioned are included. (MBR)

  12. Use (and Misuse) of the Responsible Drinking Message in Public Health and Alcohol Advertising: A Review

    ERIC Educational Resources Information Center

    Barry, Adam E.; Goodson, Patricia

    2010-01-01

    The objective is to present a comparative analysis examining the alcohol industry's and scholarly researchers' use of the concept "responsible drinking." Electronic databases associated with health, education, sociology, psychology, and medicine were the date sources. Results were limited to English, peer-reviewed articles and commentaries…

  13. A Systematic Review of Narrative-Based Language Intervention with Children Who Have Language Impairment

    ERIC Educational Resources Information Center

    Petersen, Douglas B.

    2011-01-01

    This systematic review focuses on research articles published since 1980 that assess outcomes of narrative-based language intervention for preschool and school-age children with language impairment. The author conducted a comprehensive search of electronic databases and hand searches of other sources for studies using all research designs except…

  14. The 21st Century Writing Program: Collaboration for the Common Good

    ERIC Educational Resources Information Center

    Moberg, Eric

    2010-01-01

    The purpose of this report is to review the literature on theoretical frameworks, best practices, and conceptual models for the 21st century collegiate writing program. Methods include electronic database searches for recent and historical peer-reviewed scholarly literature on collegiate writing programs. The author analyzed over 65 sources from…

  15. Academic Citation Practice: A Sinking Sheep?

    ERIC Educational Resources Information Center

    Rekdal, Ole Bjørn

    2014-01-01

    An explosion in access to electronic databases and digital information is changing the way we view source citation. While the original purpose of referencing--showing the reader exactly where the author got his or her input--is clearly more important than ever, citation is increasingly taking on other roles, ones that have little to do with good…

  16. Electron Effective-Attenuation-Length Database

    National Institute of Standards and Technology Data Gateway

    SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge)   This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).

  17. Sources of Safety Data and Statistical Strategies for Design and Analysis: Postmarket Surveillance.

    PubMed

    Izem, Rima; Sanchez-Kam, Matilde; Ma, Haijun; Zink, Richard; Zhao, Yueqin

    2018-03-01

    Safety data are continuously evaluated throughout the life cycle of a medical product to accurately assess and characterize the risks associated with the product. The knowledge about a medical product's safety profile continually evolves as safety data accumulate. This paper discusses data sources and analysis considerations for safety signal detection after a medical product is approved for marketing. This manuscript is the second in a series of papers from the American Statistical Association Biopharmaceutical Section Safety Working Group. We share our recommendations for the statistical and graphical methodologies necessary to appropriately analyze, report, and interpret safety outcomes, and we discuss the advantages and disadvantages of safety data obtained from passive postmarketing surveillance systems compared to other sources. Signal detection has traditionally relied on spontaneous reporting databases that have been available worldwide for decades. However, current regulatory guidelines and ease of reporting have increased the size of these databases exponentially over the last few years. With such large databases, data-mining tools using disproportionality analysis and helpful graphics are often used to detect potential signals. Although the data sources have many limitations, analyses of these data have been successful at identifying safety signals postmarketing. Experience analyzing these dynamic data is useful in understanding the potential and limitations of analyses with new data sources such as social media, claims, or electronic medical records data.
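The disproportionality analyses mentioned above typically reduce a spontaneous-reporting database to a 2x2 table per drug-event pair and compute a statistic such as the proportional reporting ratio (PRR). A minimal sketch of the PRR with hypothetical counts; production signal-detection systems add shrinkage, thresholds and stratification beyond this.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for one drug-event pair.

    a: reports with the drug and the event
    b: reports with the drug and other events
    c: reports of the event with other drugs
    d: all other reports
    A PRR well above 1 flags a potential signal for clinical review;
    it does not by itself establish causation.
    """
    rate_drug = a / (a + b)    # event proportion among the drug's reports
    rate_other = c / (c + d)   # event proportion among all other reports
    return rate_drug / rate_other

# Hypothetical example: 20 event reports out of 1,020 reports for the drug,
# versus 100 event reports out of 100,100 reports for all other drugs
signal = prr(a=20, b=1000, c=100, d=100_000)
```

With these counts the event is reported roughly twenty times more often with the drug than without it, which in a real system would trigger further evaluation against other data sources such as claims or electronic medical records.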

  18. Upwelling to Outflowing Oxygen Ions at Auroral Latitudes during Quiet Times: Exploiting a New Satellite Database

    NASA Astrophysics Data System (ADS)

    Redmon, Robert J.

    The mechanisms by which thermal O+ escapes from the top of the ionosphere and into the magnetosphere are not fully understood even after 30 years of active research. This thesis introduces a new database, builds a simulation framework around a thermospheric model and exploits these tools to gain new insights into the study of O+ ion outflows. A dynamic auroral boundary identification system is developed using Defense Meteorological Satellite Program (DMSP) spacecraft observations at 850 km to build a database characterizing the oxygen source region. This database resolves the ambiguity of the expansion and contraction of the auroral zone. Mining this new dataset reveals new understanding. We describe the statistical trajectory of the cleft ion fountain return flows over the polar cap as a function of activity and the orientation of the interplanetary magnetic field y-component. A substantial peak in upward moving O+ in the morning hours is discovered. Using published high altitude data we demonstrate that between 850 and 6000 km altitude, O+ is energized predominantly through transverse heating; and acceleration in this altitude region is relatively more important in the cusp than at midnight. We compare data with a thermospheric model to study the effects of solar irradiance, electron precipitation and neutral wind on the distribution of upward O+ at auroral latitudes. EUV irradiance is shown to play a dominant role in establishing a dawn-focused source population of upwelling O+ that is responsible for a pre-noon feature in escaping O+ fluxes. This feature has been corroborated by observations on platforms including the Dynamics Explorer 1 (DE-1), Polar, and Fast Auroral SnapshoT (FAST) spacecraft. During quiet times our analysis shows that the neutral wind is more important than electron precipitation in establishing the dayside O+ upwelling distribution. 
Electron precipitation is found to play a relatively modest role in controlling dayside, and a critical role in controlling nightside, upwelling O+. This thesis provides a new database, and insights into the study of oxygen ion outflows during quiet times. These results and tools will be essential for researchers working on topics involving magnetosphere-ionosphere interactions.

  19. Semiempirical studies of atomic structure. Progress report, 1 July 1991--1 October 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, L.J.

    1993-10-01

    Atomic structure/properties of highly ionized many-electron systems are studied using sensitive semiempirical data systematization, experiment, and theory. Measurements are made using fast ion beams, combined with data from laser- and tokamak-produced plasmas, astrophysical sources, and light sources. Results during this three-year period are discussed under the following headings: invited review article (decay rates in systems from negative ions to very heavy one-electron ions), fast ion beam lifetime measurements (Pt sequence, neutral carbon, Na sequence), multiplexed decay curve measurements (lifetimes of alkali-like resonance transitions, spin-forbidden intercombination lines), lifetimes in the Ne sequence, lifetimes for the H and He sequences, data-based semiempirical formulations, calculations, and accelerator studies.

  20. Electron Inelastic-Mean-Free-Path Database

    National Institute of Standards and Technology Data Gateway

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  1. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

    To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. DATABASE OF METEOROLOGICAL AND RADIATION MEASUREMENTS MADE IN BELARUS DURING THE FIRST THREE MONTHS FOLLOWING THE CHERNOBYL ACCIDENT

    PubMed Central

    Drozdovitch, Vladimir; Zhukova, Olga; Germenchuk, Maria; Khrutchinsky, Arkady; Kukhta, Tatiana; Luckyanov, Nickolas; Minenko, Victor; Podgaiskaya, Marina; Savkin, Mikhail; Vakulovsky, Sergey; Voillequé, Paul; Bouville, André

    2012-01-01

    Results of all available meteorological and radiation measurements that were performed in Belarus during the first three months after the Chernobyl accident were collected from various sources and incorporated into a single database. Meteorological information such as precipitation, wind speed and direction, and temperature in localities were obtained from meteorological station facilities. Radiation measurements include gamma-exposure rate in air, daily fallout, concentration of different radionuclides in soil, grass, cow’s milk and water as well as total beta-activity in cow’s milk. Considerable efforts were made to evaluate the reliability of the measurements that were collected. The electronic database can be searched according to type of measurement, date, and location. The main purpose of the database is to provide reliable data that can be used in the reconstruction of thyroid doses resulting from the Chernobyl accident. PMID:23103580

  3. Electronic surveillance and using administrative data to identify healthcare associated infections.

    PubMed

    Gastmeier, Petra; Behnke, Michael

    2016-08-01

    Traditional surveillance of healthcare associated infections (HCAI) is time-consuming and error-prone. We have analysed literature from the past year to look at new developments in this field. It is divided into three parts: new algorithms for electronic surveillance, the use of administrative data for surveillance of HCAI, and the definition of new endpoints of surveillance, in accordance with an automatic surveillance approach. Most studies investigating electronic surveillance of HCAI have concentrated on bloodstream infection or surgical site infection. However, the lack of important parameters in hospital databases can lead to misleading results. The accuracy of administrative coding data was poor at identifying HCAI. New endpoints should be defined for automatic detection, with the most crucial step being to win clinicians' acceptance. Electronic surveillance with conventional endpoints is a successful method when hospital information systems have implemented key changes and enhancements. One requirement is access to hospital administration systems and clinical databases. Although the primary source of data for HCAI surveillance is not administrative coding data, these are important components of a hospital-wide programme of automated surveillance. The implementation of new endpoints for surveillance is an approach which needs to be discussed further.

  4. Transport modeling of L- and H-mode discharges with LHCD on EAST

    NASA Astrophysics Data System (ADS)

    Li, M. H.; Ding, B. J.; Imbeaux, F.; Decker, J.; Zhang, X. J.; Kong, E. H.; Zhang, L.; Wei, W.; Shan, J. F.; Liu, F. K.; Wang, M.; Xu, H. D.; Yang, Y.; Peysson, Y.; Basiuk, V.; Artaud, J.-F.; Yuynh, P.; Wan, B. N.

    2013-04-01

    High-confinement (H-mode) discharges with lower hybrid current drive (LHCD) as the only heating source are obtained on EAST. In this paper, an empirical transport model of mixed Bohm/gyro-Bohm for electron and ion heat transport was first calibrated against a database of 3 L-mode shots on EAST. The electron and ion temperature profiles are well reproduced in the predictive modeling with the calibrated model coupled to the suite of codes CRONOS. CRONOS calculations with experimental profiles are also performed for electron power balance analysis. In addition, the time evolutions of LHCD are calculated by the C3PO/LUKE code involving current diffusion, and the results are compared with experimental observations.

  5. Designing Semiconductor Heterostructures Using Digitally Accessible Electronic-Structure Data

    NASA Astrophysics Data System (ADS)

    Shapera, Ethan; Schleife, Andre

    Semiconductor sandwich structures, so-called heterojunctions, are at the heart of modern applications with tremendous societal impact: Light-emitting diodes shape the future of lighting and solar cells are promising for renewable energy. However, their computer-based design is hampered by the high cost of electronic structure techniques used to select materials based on alignment of valence and conduction bands and to evaluate excited state properties. We describe, validate, and demonstrate an open source Python framework which rapidly screens existing online databases and user-provided data to find combinations of suitable, previously fabricated materials for optoelectronic applications. The branch point energy aligns valence and conduction bands of different materials, requiring only the bulk density functional theory band structure. We train machine learning algorithms to predict the dielectric constant, electron mobility, and hole mobility with material descriptors available in online databases. Using CdSe and InP as emitting layers for LEDs and CH3NH3PbI3 and nanoparticle PbS as absorbers for solar cells, we demonstrate our broadly applicable, automated method.
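The branch point energy mentioned in this record is commonly approximated as a Brillouin-zone average over the lowest conduction bands and highest valence bands of a bulk band structure. A minimal sketch of that average; the band energies here are synthetic, and the number of bands averaged and any weighting in the paper's actual framework may differ:

```python
def branch_point_energy(cb, vb):
    """Approximate branch point energy (eV) as the k-point average of
    (mean of the selected lowest conduction bands +
     mean of the selected highest valence bands) / 2.

    cb, vb: lists (one entry per k-point) of lists of band energies in eV."""
    acc = 0.0
    for cb_k, vb_k in zip(cb, vb):
        acc += 0.5 * (sum(cb_k) / len(cb_k) + sum(vb_k) / len(vb_k))
    return acc / len(cb)

# Synthetic two-k-point band structure: one conduction and one valence
# band per k-point (eV, valence band maximum near 0).
cb = [[2.0], [3.0]]
vb = [[0.0], [-1.0]]
print(branch_point_energy(cb, vb))
```

Aligning two materials then amounts to placing their band edges on a common scale by setting their branch point energies equal, which is what makes screening of database band structures cheap.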

  6. Food and nutrient availability in New Zealand: an analysis of supermarket sales data.

    PubMed

    Hamilton, Sally; Mhurchu, Cliona Ni; Priest, Patricia

    2007-12-01

    To examine food and nutrient availability in New Zealand using supermarket sales data in conjunction with a brand-specific supermarket food composition database (SFD). The SFD was developed by selecting the top-selling supermarket food products and linking them to food composition data from a variety of sources, before merging with individualised sales data. Supermarket food and nutrient data were then compared with data from national nutrition and household budget/economic surveys. A supermarket in Wellington, New Zealand. Eight hundred and eighty-two customers (73% female; mean age 38 years) who shopped regularly at the participating supermarket store and for whom electronic sales data were available for the period February 2004-January 2005. Top-selling supermarket food products included full-fat milk, white bread, sugary soft drinks and butter. Key food sources of macronutrients were similar between the supermarket sales database and national nutrition surveys. For example, bread was the major source of energy and contributed 12-13% of energy in all three data sources. Proportional expenditure on fruit, vegetables, meat, poultry, fish, farm products and oils, and cereal products recorded in the Household Economic Survey and supermarket sales data were within 2% of each other. Electronic supermarket sales data can be used to evaluate a number of important aspects of food and nutrient availability. Many of our findings were broadly comparable with national nutrition and food expenditure survey data, and supermarket sales have the advantage of being an objective, convenient, up-to-date and cost-effective measure of household food purchases.
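The kind of food-to-nutrient aggregation described in this record (e.g., bread contributing 12-13% of energy) reduces to linking sales volumes to a food composition table and computing shares. A toy sketch with invented products and energy values, not the study's actual SFD data:

```python
def energy_share(sales, kj_per_unit):
    """Percent of total energy sold contributed by each product.

    sales: units sold per product; kj_per_unit: energy per unit (kJ),
    as would be linked from a food composition database."""
    energy = {p: n * kj_per_unit[p] for p, n in sales.items()}
    total = sum(energy.values())
    return {p: 100.0 * e / total for p, e in energy.items()}

# Hypothetical weekly sales linked to hypothetical composition data.
sales = {"white bread": 120, "full-fat milk": 200, "soft drink": 150}
kj_per_unit = {"white bread": 6500, "full-fat milk": 5400, "soft drink": 1800}
shares = energy_share(sales, kj_per_unit)
```

Summing such shares over food groups is what allows the sales database to be compared against national nutrition survey percentages.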

  7. Standardizing terminology and definitions of medication adherence and persistence in research employing electronic databases.

    PubMed

    Raebel, Marsha A; Schmittdiel, Julie; Karter, Andrew J; Konieczny, Jennifer L; Steiner, John F

    2013-08-01

    To propose a unifying set of definitions for prescription adherence research utilizing electronic health record prescribing databases, prescription dispensing databases, and pharmacy claims databases and to provide a conceptual framework to operationalize these definitions consistently across studies. We reviewed recent literature to identify definitions in electronic database studies of prescription-filling patterns for chronic oral medications. We then develop a conceptual model and propose standardized terminology and definitions to describe prescription-filling behavior from electronic databases. The conceptual model we propose defines 2 separate constructs: medication adherence and persistence. We define primary and secondary adherence as distinct subtypes of adherence. Metrics for estimating secondary adherence are discussed and critiqued, including a newer metric (New Prescription Medication Gap measure) that enables estimation of both primary and secondary adherence. Terminology currently used in prescription adherence research employing electronic databases lacks consistency. We propose a clear, consistent, broadly applicable conceptual model and terminology for such studies. The model and definitions facilitate research utilizing electronic medication prescribing, dispensing, and/or claims databases and encompasses the entire continuum of prescription-filling behavior. Employing conceptually clear and consistent terminology to define medication adherence and persistence will facilitate future comparative effectiveness research and meta-analytic studies that utilize electronic prescription and dispensing records.
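To make the secondary-adherence metrics discussed here concrete, a proportion-of-days-covered (PDC) style calculation over dispensing records can be sketched as follows. The records and field layout are hypothetical, and the paper's New Prescription Medication Gap measure is defined differently; this is only the general shape of such metrics:

```python
from datetime import date, timedelta

def days_covered(fills, start, end):
    """Count days in [start, end] covered by at least one dispensing.

    Each fill is (dispense_date, days_supply). Days covered by more than
    one fill are counted once (overlap is not carried forward)."""
    covered = set()
    for dispense_date, days_supply in fills:
        for i in range(days_supply):
            day = dispense_date + timedelta(days=i)
            if start <= day <= end:
                covered.add(day)
    return len(covered)

def pdc(fills, start, end):
    """Proportion of days covered over the observation window."""
    total = (end - start).days + 1
    return days_covered(fills, start, end) / total

# Hypothetical dispensing history: two 30-day fills with a 10-day gap.
fills = [(date(2013, 1, 1), 30), (date(2013, 2, 10), 30)]
print(pdc(fills, date(2013, 1, 1), date(2013, 3, 11)))
```

Primary adherence, by contrast, is a yes/no question of whether a new prescription was ever dispensed at all, which is why it requires prescribing data and not just claims.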

  8. LexisNexis

    EPA Pesticide Factsheets

    LexisNexis provides access to electronic legal and non-legal research databases to the Agency's attorneys, administrative law judges, law clerks, investigators, and certain non-legal staff (e.g. staff in the Office of Public Affairs). The agency requires access to the following types of electronic databases: Legal databases, Non-legal databases, Public Records databases, and Financial databases.

  9. Extending Marine Species Distribution Maps Using Non-Traditional Sources

    PubMed Central

    Moretzsohn, Fabio; Gibeaut, James

    2015-01-01

    Background: Traditional sources of species occurrence data such as peer-reviewed journal articles and museum-curated collections are included in species databases after rigorous review by species experts and evaluators. The distribution maps created in this process are an important component of species survival evaluations, and are used to adapt, extend and sometimes contract polygons used in the distribution mapping process. New information: During an IUCN Red List Gulf of Mexico Fishes Assessment Workshop held at The Harte Research Institute for Gulf of Mexico Studies, a session included an open discussion on the topic of including other sources of species occurrence data. During the last decade, advances in portable electronic devices and applications enable 'citizen scientists' to record images, location and data about species sightings, and submit that data to larger species databases. These applications typically generate point data. Attendees of the workshop expressed an interest in how that data could be incorporated into existing datasets, how best to ascertain the quality and value of that data, and what other alternate data sources are available. This paper addresses those issues, and provides recommendations to ensure quality data use. PMID:25941453

  10. Electron-Impact Ionization Cross Section Database

    National Institute of Standards and Technology Data Gateway

    SRD 107 Electron-Impact Ionization Cross Section Database (Web, free access)   This is a database primarily of total ionization cross sections of molecules by electron impact. The database also includes cross sections for a small number of atoms and energy distributions of ejected electrons for H, He, and H2. The cross sections were calculated using the Binary-Encounter-Bethe (BEB) model, which combines the Mott cross section with the high-incident energy behavior of the Bethe cross section. Selected experimental data are included.
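The BEB model named in this record has a closed per-orbital form. A sketch under the standard BEB formula; the orbital constants used in the example are illustrative and are not values taken from the database:

```python
import math

A0 = 5.29177e-11   # Bohr radius, m
RYD = 13.6057      # Rydberg energy, eV

def beb_orbital(T, B, U, N):
    """BEB electron-impact ionization cross section (m^2) for one orbital.

    T: incident electron energy (eV), B: orbital binding energy (eV),
    U: orbital kinetic energy (eV), N: orbital occupation number.
    The molecular total is the sum of this quantity over occupied orbitals."""
    if T <= B:
        return 0.0  # below the ionization threshold for this orbital
    t, u = T / B, U / B
    S = 4.0 * math.pi * A0**2 * N * (RYD / B)**2
    return (S / (t + u + 1.0)) * (
        (math.log(t) / 2.0) * (1.0 - 1.0 / t**2)   # Bethe (dipole) term
        + 1.0 - 1.0 / t                            # Mott-like term
        - math.log(t) / (t + 1.0)                  # interference term
    )

# Illustrative hydrogen-like orbital: B = U = 13.6 eV, N = 1, at T = 100 eV.
sigma = beb_orbital(100.0, 13.6057, 13.6057, 1)
```

The resulting value is on the order of 10^-21 m^2 (a fraction of a square angstrom), the typical scale of atomic ionization cross sections near their peak.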

  11. Integration of a clinical trial database with a PACS

    NASA Astrophysics Data System (ADS)

    van Herk, M.

    2014-03-01

    Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data are augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates over HTTP with a gateway server inside the hospital's firewall; 2) on this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip, and transmit the selected data to a central trial server; 4) there, the data are stored in a DICOM archive which allows authorized ECRF users to view and download the anonymous images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides whether to use the gateway in passive (receiving) mode or in an active mode going out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
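The "returning anonymized identifiers" step in a gateway flow like the one this record describes can be sketched as a keyed one-way mapping from DICOM UIDs to pseudonyms. The key, the UID value, and the pseudonym format below are all hypothetical; the record only says the PACS administrator configures the actual anonymization script:

```python
import hashlib
import hmac

# Hypothetical per-hospital secret, kept on the gateway inside the firewall.
SITE_KEY = b"per-site-secret"

def pseudonymize_uid(dicom_uid: str) -> str:
    """One-way, site-keyed pseudonym for a DICOM identifier, so the central
    trial server never receives the original patient/study UIDs while the
    same UID always maps to the same pseudonym for longitudinal linkage."""
    digest = hmac.new(SITE_KEY, dicom_uid.encode(), hashlib.sha256).hexdigest()
    return "ANON-" + digest[:16]

study_uid = "1.2.840.113619.2.55.3.1234"  # hypothetical StudyInstanceUID
print(pseudonymize_uid(study_uid))
```

A keyed HMAC rather than a plain hash is the usual design choice here: without the site key, the mapping cannot be reproduced by anyone outside the hospital, yet it stays deterministic for re-identification-free follow-up queries.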

  12. NASA Aeroelasticity Handbook Volume 2: Design Guides Part 2

    NASA Technical Reports Server (NTRS)

    Ramsey, John K. (Editor)

    2006-01-01

    The NASA Aeroelasticity Handbook comprises a database (in three formats) of NACA and NASA aeroelasticity flutter data through 1998 and a collection of aeroelasticity design guides. The Microsoft Access format provides the capability to search for specific data, retrieve it, and present it in a tabular or graphical form unique to the application. The full-text NACA and NASA documents from which the data originated are provided in portable document format (PDF), and these are hyperlinked to their respective data records. This provides full access to all available information from the data source. Two other electronic formats, one delimited by commas and the other by spaces, are provided for use with other software capable of reading text files. To the best of the author's knowledge, this database represents the most extensive collection of NACA and NASA flutter data in electronic form compiled to date by NASA. Volume 2 of the handbook contains a convenient collection of aeroelastic design guides covering fixed wings, turbomachinery, propellers and rotors, panels, and model scaling. This handbook provides an interactive database and design guides for use in the preliminary aeroelastic design of aerospace systems and can also be used in validating or calibrating flutter-prediction software.

  13. Impact of the mass media on calls to the CDC National AIDS Hotline.

    PubMed

    Fan, D P

    1996-06-01

    This paper considers new computer methodologies for assessing the impact of different types of public health information. The example used public service announcements (PSAs) and mass media news to predict the volume of attempts to call the CDC National AIDS Hotline from December 1992 through to the end of 1993. The analysis relied solely on data from electronic databases. Newspaper stories and television news transcripts were obtained from the NEXIS electronic database and were scored by machine for AIDS coverage. The PSA database was generated by computer monitoring of advertising distributed by the Centers for Disease Control and Prevention (CDC) and by others. The volume of call attempts was collected automatically by the public branch exchange (PBX) of the Hotline telephone system. The call attempts, the PSAs and the news story data were related to each other using both a standard time series method and the statistical model of ideodynamics. The analysis indicated that the only significant explanatory variable for the call attempts was PSAs produced by the CDC. One possible explanation was that these commercials all included the Hotline telephone number while the other information sources did not.

  14. Security Techniques for the Electronic Health Records.

    PubMed

    Kruse, Clemens Scott; Smith, Brenna; Vanderlinden, Hannah; Nealand, Alexandra

    2017-08-01

    The privacy of patients and the security of their information is the most imperative barrier to entry when considering the adoption of electronic health records in the healthcare industry. Considering current legal regulations, this review seeks to analyze and discuss prominent security techniques for healthcare organizations seeking to adopt a secure electronic health records system. Additionally, the researchers sought to establish a foundation for further research for security in the healthcare industry. The researchers utilized the Texas State University Library to gain access to three online databases: PubMed (MEDLINE), CINAHL, and ProQuest Nursing and Allied Health Source. These sources were used to conduct searches on literature concerning security of electronic health records containing several inclusion and exclusion criteria. Researchers collected and analyzed 25 journals and reviews discussing security of electronic health records, 20 of which mentioned specific security methods and techniques. The most frequently mentioned security measures and techniques are categorized into three themes: administrative, physical, and technical safeguards. The sensitive nature of the information contained within electronic health records has prompted the need for advanced security techniques that are able to put these worries at ease. It is imperative for security techniques to cover the vast threats that are present across the three pillars of healthcare.

  15. Preparing Nursing Home Data from Multiple Sites for Clinical Research – A Case Study Using Observational Health Data Sciences and Informatics

    PubMed Central

    Boyce, Richard D.; Handler, Steven M.; Karp, Jordan F.; Perera, Subashan; Reynolds, Charles F.

    2016-01-01

    Introduction: A potential barrier to nursing home research is the limited availability of research-quality data in electronic form. We describe a case study of converting electronic health data from five skilled nursing facilities to a research-quality longitudinal dataset by means of open-source tools produced by the Observational Health Data Sciences and Informatics (OHDSI) collaborative. Methods: The Long-Term Care Minimum Data Set (MDS), drug dispensing, and fall incident data from five SNFs were extracted, translated, and loaded into version 4 of the OHDSI common data model. Quality assurance involved identifying errors using the Achilles data characterization tool and comparing both quality measures and drug exposures in the new database for concordance with externally available sources. Findings: Records for a total of 4,519 patients (95.1%) were included in the final database. Achilles identified 10 different types of errors that were addressed in the final dataset. Drug exposures based on dispensing were generally accurate when compared with medication administration data from the pharmacy services provider. Quality measures were generally concordant between the new database and Nursing Home Compare for measures with a prevalence ≥ 10%. Fall data recorded in MDS was found to be more complete than data from fall incident reports. Conclusions: The new dataset is ready to support observational research on topics of clinical importance in the nursing home including patient-level prediction of falls. The extraction, translation, and loading process enabled the use of OHDSI data characterization tools that improved the quality of the final dataset. PMID:27891528

  16. 76 FR 4072 - Registration of Claims of Copyright

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-24

    ... registration of automated databases that predominantly consist of photographs, and applications for group... to submit electronic applications to register copyrights of such photographic databases or of groups... automated databases, an electronic application for group registration of an automated database that consists...

  17. Constructing Effective Search Strategies for Electronic Searching.

    ERIC Educational Resources Information Center

    Flanagan, Lynn; Parente, Sharon Campbell

    Electronic databases have grown tremendously in both number and popularity since their development during the 1960s. Access to electronic databases in academic libraries was originally offered primarily through mediated search services by trained librarians; however, the advent of CD-ROM and end-user interfaces for online databases has shifted the…

  18. Information literacy for evidence-based practice in perianesthesia nurses: readiness for evidence-based practice.

    PubMed

    Ross, Jacqueline

    2010-04-01

    Information literacy, the recognition of information required, and the development of skills for locating, evaluating, and effectively using relevant evidence is needed for evidence-based practice (EBP). The purpose of this study was to examine perianesthesia nurses' perception of searching skills and access to evidence sources. The design was a descriptive, exploratory survey. The sample consisted of ASPAN members (n = 64) and nonmembers (n = 64). The Information Literacy for Evidence-Based Nursing Practice instrument was used. Findings were that ASPAN members read more journal articles, were more proficient with computers, and used Cumulative Index to Nursing and Allied Health Literature (CINAHL) more frequently than nonmembers. The three top barriers to use of research were: lack of understanding of organization or structure of electronic databases, lack of skills to critique and/or synthesize the literature, and difficulty in accessing research materials. In conclusion, education is needed for critiquing literature and understanding electronic databases and research articles to promote EBP in perianesthesia areas. Copyright 2010. Published by Elsevier Inc.

  19. Developing Surveillance Methodology for Agricultural and Logging Injury in New Hampshire Using Electronic Administrative Data Sets.

    PubMed

    Scott, Erika E; Hirabayashi, Liane; Krupa, Nicole L; Sorensen, Julie A; Jenkins, Paul L

    2015-08-01

    Agriculture and logging rank among industries with the highest rates of occupational fatality and injury. Establishing a nonfatal injury surveillance system is a top priority in the National Occupational Research Agenda. Sources of data such as patient care reports (PCRs) and hospitalization data have recently transitioned to electronic databases. Using narrative and location codes from PCRs, along with International Classification of Diseases, 9th Revision, external cause of injury codes (E-codes) in hospital data, researchers are designing a surveillance system to track farm and logging injury. A total of 357 true agricultural or logging cases were identified. These data indicate that it is possible to identify agricultural and logging injury events in PCR and hospital data. Multiple data sources increase catchment; nevertheless, limitations in methods of identification of agricultural and logging injury contribute to the likely undercount of injury events.

  20. 8 CFR 338.11 - Execution and issuance of certificate of naturalization by clerk of court.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the petitioner. If the court maintains naturalization records on an electronic database then only the... and maintained in the court's electronic database. (b) The certificate shall show under “former..., or if using automation equipment, ensure it is part of the electronic database record. The clerk of...

  1. 8 CFR 338.11 - Execution and issuance of certificate of naturalization by clerk of court.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... the petitioner. If the court maintains naturalization records on an electronic database then only the... and maintained in the court's electronic database. (b) The certificate shall show under “former..., or if using automation equipment, ensure it is part of the electronic database record. The clerk of...

  2. Systematic Review of Health Economic Evaluation Studies Developed in Brazil from 1980 to 2013.

    PubMed

    Decimoni, Tassia Cristina; Leandro, Roseli; Rozman, Luciana Martins; Craig, Dawn; Iglesias, Cynthia P; Novaes, Hillegonda Maria Dutilh; de Soárez, Patrícia Coelho

    2018-01-01

    Brazil has sought to use economic evaluation to support healthcare decision-making processes. While a number of health economic evaluations (HEEs) have been conducted, no study has systematically reviewed the quality of Brazilian HEEs. The objective of this systematic review was to provide an overview of the state of HEE research and to evaluate the number, characteristics, and quality of reporting of published HEE studies conducted in a Brazilian setting. We systematically searched electronic databases (MEDLINE, EMBASE, Latin American and Caribbean Literature on Health Sciences Database, Scientific Electronic Library Online, NHS Economic Evaluation Database, Health Technology Assessment Database, Bireme, and Biblioteca Virtual em Saúde Economia da Saúde); citation indexes (SCOPUS, Web of Science); and the Sistema de Informação da Rede Brasileira de Avaliação de Tecnologia em Saúde. Partial and full HEEs published between 1980 and 2013 that referred to a Brazilian setting were considered for inclusion. In total, 535 studies were included in the review, 36.8% of which were full HEEs. The healthcare technologies most frequently assessed were procedures (34.8%) and drugs (28.8%), most often with treatment as the objective (72.1%). Forty-four percent of the studies reported their funding source and 36% reported a conflict of interest. Overall, the quality of reporting of full HEEs was satisfactory, but some items were generally poorly reported and significant improvement is required: (1) methods used to estimate healthcare resource use quantities and unit costs, (2) methods used to estimate utility values, (3) sources of funding, and (4) conflicts of interest. A steady number of HEEs have been published in Brazil since 1980. To improve their contribution to informing national healthcare policy, efforts need to be made to enhance the quality of reporting of HEEs and to promote improvements in the way HEEs are designed, implemented (i.e., using sound methods), and reported.

  3. 19 CFR 351.304 - Establishing business proprietary treatment of information.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...

  4. 19 CFR 351.304 - Establishing business proprietary treatment of information.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...

  5. 19 CFR 351.304 - Establishing business proprietary treatment of information.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...

  6. Computer Cataloging of Electronic Journals in Unstable Aggregator Databases: The Hong Kong Baptist University Library Experience.

    ERIC Educational Resources Information Center

    Li, Yiu-On; Leung, Shirley W.

    2001-01-01

    Discussion of aggregator databases focuses on a project at the Hong Kong Baptist University library to integrate full-text electronic journal titles from three unstable aggregator databases into its online public access catalog (OPAC). Explains the development of the electronic journal computer program (EJCOP) to generate MARC records for…

  7. Estimating Causal Effects in Observational Studies using Electronic Health Data: Challenges and (Some) Solutions

    PubMed Central

    Stuart, Elizabeth A.; DuGoff, Eva; Abrams, Michael; Salkever, David; Steinwachs, Donald

    2013-01-01

    Electronic health data sets, including electronic health records (EHR) and other administrative databases, are rich data sources that have the potential to help answer important questions about the effects of clinical interventions as well as policy changes. However, analyses using such data are almost always non-experimental, leading to concerns that those who receive a particular intervention are likely different from those who do not in ways that may confound the effects of interest. This paper outlines the challenges in estimating causal effects using electronic health data and offers some solutions, with particular attention paid to propensity score methods that help ensure comparisons between similar groups. The methods are illustrated with a case study describing the design of a study using Medicare and Medicaid administrative data to estimate the effect of the Medicare Part D prescription drug program on individuals with serious mental illness. PMID:24921064
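The propensity score methods the abstract highlights can be illustrated with a small, self-contained sketch (not taken from the study; the function and toy data below are invented). Given pre-estimated propensity scores, inverse-probability-of-treatment weighting (IPTW) reweights treated and control units so the two groups resemble each other on measured covariates:

```python
# Hypothetical illustration of inverse-probability-of-treatment weighting.
# Propensity scores p = P(treated | covariates) are assumed to have been
# estimated already (e.g., by logistic regression); here they are given.

def iptw_effect(outcomes, treated, propensity):
    """Estimate the average treatment effect with IPTW weights."""
    num_t = den_t = num_c = den_c = 0.0
    for y, t, p in zip(outcomes, treated, propensity):
        if t:
            w = 1.0 / p          # weight treated units by 1/p
            num_t += w * y
            den_t += w
        else:
            w = 1.0 / (1.0 - p)  # weight controls by 1/(1-p)
            num_c += w * y
            den_c += w
    return num_t / den_t - num_c / den_c

# Toy data: treated units tend to have higher propensity scores.
y = [3.0, 2.5, 2.0, 1.0, 1.5, 0.5]
t = [1, 1, 1, 0, 0, 0]
p = [0.8, 0.6, 0.7, 0.3, 0.4, 0.2]
print(round(iptw_effect(y, t, p), 3))
```

The weighted difference in means is an estimate of the average treatment effect, valid only under the usual no-unmeasured-confounding assumption the paper discusses.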

  8. X-ray Photoelectron Spectroscopy Database (Version 4.1)

    National Institute of Standards and Technology Data Gateway

    SRD 20 X-ray Photoelectron Spectroscopy Database (Version 4.1) (Web, free access)   The NIST XPS Database gives access to energies of many photoelectron and Auger-electron spectral lines. The database contains over 22,000 line positions, chemical shifts, doublet splittings, and energy separations of photoelectron and Auger-electron lines.

  9. Academic impact of a public electronic health database: bibliometric analysis of studies using the general practice research database.

    PubMed

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. A total of 749 studies published between 1995 and 2009 with 'General Practice Research Database' as their topic, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited on average 2.7 times by successive studies. Moreover, the total number of GPRD studies increased rapidly and was expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies appears to accumulate among a small group of researchers. The GPRD was used mainly in, but not limited to, three study fields: "Pharmacology and Pharmacy", "General and Internal Medicine", and "Public, Environmental and Occupational Health". The UK and United States were the two most active regions of GPRD research. One-third of GPRD studies were internationally co-authored. A public electronic health database such as the GPRD can promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and make data more available for research.

  10. Evaluation of five full-text drug databases by pharmacy students, faculty, and librarians: do the groups agree?

    PubMed

    Kupferberg, Natalie; Jones Hartel, Lynda

    2004-01-01

    The purpose of this study is to assess the usefulness of five full-text drug databases as evaluated by medical librarians, pharmacy faculty, and pharmacy students at an academic health center. Study findings and recommendations are offered as guidance to librarians responsible for purchasing decisions. Four pharmacy students, four pharmacy faculty members, and four medical librarians answered ten drug information questions using the databases AHFS Drug Information (STAT!Ref); DRUGDEX (Micromedex); eFacts (Drug Facts and Comparisons); Lexi-Drugs Online (Lexi-Comp); and the PDR Electronic Library (Micromedex). Participants noted whether each database contained answers to the questions and evaluated each database on ease of navigation, screen readability, overall satisfaction, and product recommendation. While each study group found that DRUGDEX provided the most direct answers to the ten questions, faculty members gave Lexi-Drugs the highest overall rating. Students favored eFacts. The faculty and students found the PDR least useful. Librarians ranked DRUGDEX the highest and AHFS the lowest. The comments of pharmacy faculty and students show that these groups preferred concise, easy-to-use sources; librarians focused on the comprehensiveness, layout, and supporting references of the databases. This study demonstrates the importance of consulting with primary clientele before purchasing databases. Although there are many online drug databases to consider, present findings offer strong support for eFacts, Lexi-Drugs, and DRUGDEX.

  11. Evaluation of five full-text drug databases by pharmacy students, faculty, and librarians: do the groups agree?

    PubMed Central

    Kupferberg, Natalie; Hartel, Lynda Jones

    2004-01-01

    Objectives: The purpose of this study is to assess the usefulness of five full-text drug databases as evaluated by medical librarians, pharmacy faculty, and pharmacy students at an academic health center. Study findings and recommendations are offered as guidance to librarians responsible for purchasing decisions. Methods: Four pharmacy students, four pharmacy faculty members, and four medical librarians answered ten drug information questions using the databases AHFS Drug Information (STAT!Ref); DRUGDEX (Micromedex); eFacts (Drug Facts and Comparisons); Lexi-Drugs Online (Lexi-Comp); and the PDR Electronic Library (Micromedex). Participants noted whether each database contained answers to the questions and evaluated each database on ease of navigation, screen readability, overall satisfaction, and product recommendation. Results: While each study group found that DRUGDEX provided the most direct answers to the ten questions, faculty members gave Lexi-Drugs the highest overall rating. Students favored eFacts. The faculty and students found the PDR least useful. Librarians ranked DRUGDEX the highest and AHFS the lowest. The comments of pharmacy faculty and students show that these groups preferred concise, easy-to-use sources; librarians focused on the comprehensiveness, layout, and supporting references of the databases. Conclusion: This study demonstrates the importance of consulting with primary clientele before purchasing databases. Although there are many online drug databases to consider, present findings offer strong support for eFacts, Lexi-Drugs, and DRUGDEX. PMID:14762464

  12. Event Recording Data Acquisition System and Experiment Data Management System for Neutron Experiments at MLF, J-PARC

    NASA Astrophysics Data System (ADS)

    Nakatani, T.; Inamura, Y.; Moriyama, K.; Ito, T.; Muto, S.; Otomo, T.

    Neutron scattering can be a powerful probe in the investigation of many phenomena in the materials and life sciences. The Materials and Life Science Experimental Facility (MLF) at the Japan Proton Accelerator Research Complex (J-PARC) is a leading center of experimental neutron science and boasts one of the most intense pulsed neutron sources in the world. The MLF currently has 18 experimental instruments in operation that support a wide variety of users from across a range of research fields. The instruments include optical elements, sample environment apparatus and detector systems that are controlled and monitored electronically throughout an experiment. Signals from these components and those from the neutron source are converted into a digital format by the data acquisition (DAQ) electronics and recorded as time-tagged event data in the DAQ computers using "DAQ-Middleware". Operating in event mode, the DAQ system produces extremely large data files (~GB) under various measurement conditions. Simultaneously, the measurement meta-data indicating each measurement condition is recorded in XML format by the MLF control software framework "IROHA". These measurement event data and meta-data are collected in the MLF common storage and cataloged by the MLF Experimental Database (MLF EXP-DB) based on a commercial XML database. The system provides a web interface for users to manage and remotely analyze experimental data.
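The event-mode recording described above can be sketched in miniature: instead of storing pre-binned histograms, each detection is kept as a time-tagged record that can be re-binned after the fact under any measurement condition. The tuple layout, field names, and bin width below are invented for illustration and are not the MLF on-disk format:

```python
# Minimal sketch of event-mode data handling, under assumed field names:
# each event is a (time_tag_in_microseconds, detector_pixel_id) tuple.

def histogram_events(events, bin_width_us, n_bins):
    """Re-bin time-tagged events into a time-of-flight histogram."""
    hist = [0] * n_bins
    for time_us, _pixel in events:
        b = int(time_us // bin_width_us)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

events = [(120.0, 7), (130.5, 7), (455.2, 3), (800.9, 12)]
print(histogram_events(events, bin_width_us=250.0, n_bins=4))  # → [2, 1, 0, 1]
```

The point of event mode is exactly this late binding: the same event file can be histogrammed with any bin width, pixel selection, or chopper-phase filter long after the measurement.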

  13. Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin

    How to analyze and identify the characteristics of radiation sources, and to estimate their threat level by detecting, intercepting and locating them, has been the central issue of electronic support in electronic warfare, and communication signal recognition is one of the keys to solving it. Aiming at accurately extracting the individual characteristics of a radiation source in an increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for individual characteristics of communication radiation sources, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the environmental noise conditions, fractal dimension features of different complexity are used to characterize the subtle features of the signal and to build a feature database; different broadcasting stations are then identified using grey relational analysis. The simulation results demonstrate that the algorithm achieves a recognition rate of 94% even at an SNR of -10 dB, which provides an important theoretical basis for the accurate identification of subtle signal features at low SNR in the field of information confrontation.
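As a hedged illustration of the fractal-complexity idea (the abstract does not specify which dimension estimator the authors use), the Katz fractal dimension is one standard measure that separates smooth from irregular waveforms:

```python
import math

# Illustrative only: Katz fractal dimension of a sampled signal, one of
# several fractal-complexity features that could feed a recognition system.

def katz_fd(signal):
    """Katz fractal dimension: D = log10(n) / (log10(n) + log10(d / L))."""
    n = len(signal) - 1                                          # step count
    L = sum(abs(signal[i + 1] - signal[i]) for i in range(n))    # curve length
    d = max(abs(signal[i] - signal[0]) for i in range(1, len(signal)))  # extent
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# A jagged signal yields a higher dimension than a smooth ramp.
smooth = [0.1 * i for i in range(100)]
noisy = [0.1 * i + (0.5 if i % 2 else -0.5) for i in range(100)]
print(katz_fd(smooth), katz_fd(noisy))
```

In a full recognition pipeline such features would be computed per emitter, stored in the feature database, and matched against unknown signals (the paper uses grey relational analysis for that matching step).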

  14. Mobile Source Observation Database (MSOD)

    EPA Pesticide Factsheets

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  15. [The use of bibliographic information resources and Web 2.0 by neuropaediatricians].

    PubMed

    González de Dios, Javier; Camino-León, Rafael; Ramos-Lizana, Julio

    2011-06-16

    To determine the state of knowledge and use of the main sources of bibliographic information and Web 2.0 resources in a sample of pediatricians professionally linked to child neurology. Anonymous opinion survey of 44 pediatricians (36 neuropediatric staff and 8 residents) with two sections: sources of bibliographic information (25 questions) and Web 2.0 resources (14 questions). The most consulted journals are Revista de Neurología and Anales de Pediatría. All respondents use the PubMed database, and less frequently Índice Médico Español (40%) and Embase (27%); fewer than 20% use other international and national databases. 81% of respondents used the Cochrane Library, and less frequently other sources of evidence-based medicine: Tripdatabase (39%), National Guideline Clearinghouse (37%), Excelencia Clínica (12%) and Sumsearch (3%). 45% regularly receive an e-TOC (electronic table of contents) from biomedical journals, but only 7% reported having used RSS (Really Simple Syndication) feeds. The usual starting points for information searches are PubMed (55%) and Google (23%). The four most used Web 2.0 resources are YouTube (73%), Facebook (43%), Picasa (27%) and blogs (25%). No differences in responses were found between the groups aged 34 years or younger and 35 years or older. Knowledge of the patterns of use of information databases and Web 2.0 resources can identify limitations and opportunities for improvement in training and information in the field of pediatric neurology.

  16. [Concordance in the registry of dementia among the main sources of clinical information].

    PubMed

    Marta-Moreno, Javier; Obón-Azuara, Blanca; Gimeno-Felíu, Luis; Achkar-Tuglaman, Nesib Nicolás; Poblador-Plou, Beatriz; Calderón-Larrañaga, Amaia; Prados-Torres, Alexandra

    2016-01-01

    The objective of this work was to analyse the concordance in the registry of dementia among the main sources of clinical information, with the aim of determining their usefulness for epidemiological and clinical research. Descriptive study of patients assigned to the Aragon Health Service in 2010 (n=1,344,891), comparing three data sources: (i) the pharmacy billing database (n=9,392); (ii) Primary Care electronic health records (EHR) (n=9,471); and (iii) the hospital minimum basic data set (n=3,289). When studying the concordance of the databases, the group of patients with a specific treatment for dementia (i.e., acetylcholinesterase inhibitors and/or memantine) was taken as the reference. The diagnosis in Primary Care was missing for 47.3% of patients taking anti-dementia drugs. The same occurred with 38.3% of dementia patients admitted to hospital during the study year. Among patients with a diagnosis of dementia in the EHR, only half (52.3%) were under treatment for this condition. This percentage decreased to 34.4% in patients with the diagnosis registered in the hospital database. The weak concordance in the registry of the dementia diagnosis between the main health information systems makes their use and analysis more complex, and supports the need to include all available health data sources in order to gain a global picture of the epidemiological and clinical reality of this health condition. Copyright © 2015 SEGG. Published by Elsevier España, S.L.U. All rights reserved.

  17. NOAO observing proposal processing system

    NASA Astrophysics Data System (ADS)

    Bell, David J.; Gasson, David; Hartman, Mia

    2002-12-01

    Since going electronic in 1994, NOAO has continued to refine and enhance its observing proposal handling system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form or via Gemini's downloadable Phase-I Tool. NOAO staff can use online interfaces for administrative tasks, technical reviews, telescope scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available online. The system, now known as ANDES, is designed as a thin-client architecture (web pages are now used for almost all database functions) built using open source tools (FreeBSD, Apache, MySQL, Perl, PHP) to process descriptively-marked (LaTeX, XML) proposal documents.

  18. Online Databases in Physics.

    ERIC Educational Resources Information Center

    Sievert, MaryEllen C.; Verbeck, Alison F.

    1984-01-01

    This overview of 47 online sources for physics information available in the United States--including sub-field databases, transdisciplinary databases, and multidisciplinary databases--notes content, print source, language, time coverage, and databank. Two discipline-specific databases (SPIN and PHYSICS BRIEFS) are also discussed. (EJS)

  19. Concierge: Personal Database Software for Managing Digital Research Resources

    PubMed Central

    Sakai, Hiroyuki; Aoyama, Toshihiro; Yamaji, Kazutsuna; Usui, Shiro

    2007-01-01

    This article introduces a desktop application, named Concierge, for managing personal digital research resources. Using simple operations, it enables storage of various types of files and indexes them based on content descriptions. A key feature of the software is a high level of extensibility. By installing optional plug-ins, users can customize and extend the usability of the software based on their needs. In this paper, we also introduce a few optional plug-ins: literature management, electronic laboratory notebook, and XooNIps client plug-ins. XooNIps is a content management system developed to share digital research resources among neuroscience communities. It has been adopted as the standard database system in Japanese neuroinformatics projects. Concierge, therefore, offers comprehensive support from management of personal digital research resources to their sharing in open-access neuroinformatics databases such as XooNIps. This interaction between personal and open-access neuroinformatics databases is expected to enhance the dissemination of digital research resources. Concierge is developed as an open source project; Mac OS X and Windows XP versions have been released at the official site (http://concierge.sourceforge.jp). PMID:18974800

  20. Systematic review of scope and quality of electronic patient record data in primary care

    PubMed Central

    Thiru, Krish; Hassey, Alan; Sullivan, Frank

    2003-01-01

    Objective To systematically review measures of data quality in electronic patient records (EPRs) in primary care. Design Systematic review of English language publications, 1980-2001. Data sources Bibliographic searches of medical databases, specialist medical informatics databases, conference proceedings, and institutional contacts. Study selection Studies selected according to a predefined framework for categorising review papers. Data extraction Reference standards and measurements used to judge quality. Results Bibliographic searches identified 4589 publications. After primary exclusions 174 articles were classified, 52 of which met the inclusion criteria for review. Selected studies were primarily descriptive surveys. Variability in methods prevented meta-analysis of results. Forty-eight publications were concerned with diagnostic data, 37 studies measured data quality, and 15 scoped EPR quality. Reliability of data was assessed with rate comparison. Measures of sensitivity were highly dependent on the element of EPR data being investigated, while the positive predictive value was consistently high, indicating good validity. Prescribing data were generally of better quality than diagnostic or lifestyle data. Conclusion The lack of standardised methods for assessment of quality of data in electronic patient records makes it difficult to compare results between studies. Studies should present data quality measures with clear numerators, denominators, and confidence intervals. Ambiguous terms such as “accuracy” should be avoided unless precisely defined. PMID:12750210
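The review's two validity measures can be made concrete with a toy calculation (the counts below are invented): sensitivity is the share of reference-standard diagnoses actually recorded in the EPR, and positive predictive value (PPV) is the share of recorded diagnoses confirmed by the reference standard:

```python
# Illustrative sketch of the data-quality measures discussed in the review,
# from true-positive (TP), false-positive (FP) and false-negative (FN) counts.

def sensitivity_ppv(tp, fp, fn):
    """Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP)."""
    return tp / (tp + fn), tp / (tp + fp)

# e.g. 90 diagnoses correctly recorded in the EPR, 5 recorded but absent from
# the reference standard, 30 in the reference standard but missing from the EPR:
sens, ppv = sensitivity_ppv(tp=90, fp=5, fn=30)
print(round(sens, 2), round(ppv, 2))  # → 0.75 0.95
```

This mirrors the review's finding: PPV can be high (what is recorded is usually right) even when sensitivity is modest (much goes unrecorded), which is why both numerators and denominators should be reported explicitly.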

  1. A voxel-based mouse for internal dose calculations using Monte Carlo simulations (MCNP).

    PubMed

    Bitar, A; Lisbona, A; Thedrez, P; Sai Maurel, C; Le Forestier, D; Barbet, J; Bardies, M

    2007-02-21

    Murine models are useful for targeted radiotherapy pre-clinical experiments. These models can help to assess the potential interest of new radiopharmaceuticals. In this study, we developed a voxel-based mouse for dosimetric estimates. A female nude mouse (30 g) was frozen and cut into slices. High-resolution digital photographs were taken directly on the frozen block after each section. Images were segmented manually. Monoenergetic photon or electron sources were simulated using the MCNP4c2 Monte Carlo code for each source organ, in order to give tables of S-factors (in Gy Bq-1 s-1) for all target organs. Results obtained from monoenergetic particles were then used to generate S-factors for several radionuclides of potential interest in targeted radiotherapy. Thirteen source and 25 target regions were considered in this study. For each source region, 16 photon and 16 electron energies were simulated. Absorbed fractions, specific absorbed fractions and S-factors were calculated for 16 radionuclides of interest for targeted radiotherapy. The results obtained generally agree well with data published previously. For electron energies ranging from 0.1 to 2.5 MeV, the self-absorbed fraction varies from 0.98 to 0.376 for the liver, and from 0.89 to 0.04 for the thyroid. Electrons cannot be considered as 'non-penetrating' radiation for energies above 0.5 MeV for mouse organs. This observation can be generalized to radionuclides: for example, the beta self-absorbed fraction for the thyroid was 0.616 for I-131; absorbed fractions for Y-90 for left kidney-to-left kidney and for left kidney-to-spleen were 0.486 and 0.058, respectively. Our voxel-based mouse allowed us to generate a dosimetric database for use in preclinical targeted radiotherapy experiments.
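For orientation, the S-factors tabulated in this study follow the standard MIRD formalism (stated here as background; the paper's own notation may differ): for a source region s and target region t,

```latex
S(t \leftarrow s) \;=\; \frac{1}{m_t} \sum_i \Delta_i \, \phi_i(t \leftarrow s),
\qquad \Delta_i = n_i E_i,
```

where \(m_t\) is the target mass, \(n_i\) and \(E_i\) are the yield and mean energy of emission \(i\) per decay, and \(\phi_i\) is the absorbed fraction for that emission, giving S in Gy Bq⁻¹ s⁻¹. This is why the monoenergetic photon and electron simulations suffice: S-factors for any radionuclide follow by summing over its emission spectrum.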

  2. Biological particle identification apparatus

    DOEpatents

    Salzman, Gary C.; Gregg, Charles T.; Grace, W. Kevin; Hiebert, Richard D.

    1989-01-01

    An apparatus and method for making multiparameter light scattering measurements from suspensions of biological particles is described. Fourteen of the sixteen Mueller matrix elements describing the particles under investigation can be substantially individually determined as a function of scattering angle and probing radiation's wavelength, eight elements simultaneously for each of two apparatus configurations, using an apparatus which includes, in its simplest form, two polarization modulators each operating at a chosen frequency, one polarizer, a source of monochromatic electromagnetic radiation, a detector sensitive to the wavelength of radiation employed, eight phase-sensitive detectors, and appropriate electronics. A database of known biological particle suspensions can be assembled, and unknown samples can be quickly identified once measurements are performed on them according to the teachings of the subject invention, and a comparison is made with the database.
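As background (standard polarimetry, not specific to this patent's claims): light scattering by a particle suspension is described by a 4×4 Mueller matrix acting on the Stokes vector of the incident beam,

```latex
\mathbf{S}_{\mathrm{out}} \;=\; \mathbf{M}(\theta)\,\mathbf{S}_{\mathrm{in}},
\qquad \mathbf{S} = (I,\, Q,\, U,\, V)^{T},
```

where \(I, Q, U, V\) encode intensity and polarization state and \(\theta\) is the scattering angle. Modulating the input polarization and demodulating the detected signal at the modulator frequencies is what lets the apparatus isolate individual elements of \(\mathbf{M}(\theta)\) for comparison against the database.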

  3. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  4. 77 FR 21618 - 60-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... DEPARTMENT OF STATE [Public Notice 7843] 60-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096... Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB Control Number: 1405-0168...

  5. 17 CFR 38.552 - Elements of an acceptable audit trail program.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...

  6. 77 FR 47690 - 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... DEPARTMENT OF STATE [Public Notice 7976] 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096.... Title of Information Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB...

  7. 17 CFR 38.552 - Elements of an acceptable audit trail program.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... of the order shall also be captured. (b) Transaction history database. A designated contract market's audit trail program must include an electronic transaction history database. An adequate transaction history database includes a history of all trades executed via open outcry or via entry into an electronic...

  8. Mobile Source Observation Database (MSOD)

    EPA Pesticide Factsheets

    The Mobile Source Observation Database (MSOD) is a relational database being developed by the Assessment and Standards Division (ASD) of the US Environmental Protection Agency Office of Transportation and Air Quality (formerly the Office of Mobile Sources). The MSOD contains emission test data from in-use mobile air pollution sources such as cars, trucks, and engines from trucks and nonroad vehicles. Data in the database have been collected from 1982 to the present. The data are intended to be representative of in-use vehicle emissions in the United States.

  9. Inside a VAMDC data node—putting standards into practical software

    NASA Astrophysics Data System (ADS)

    Regandell, Samuel; Marquart, Thomas; Piskunov, Nikolai

    2018-03-01

    Access to molecular and atomic data is critical for many forms of remote sensing analysis across different fields. Many atomic and molecular databases are, however, highly specialised for their intended application, complicating querying and combining data across sources. The Virtual Atomic and Molecular Data Centre, VAMDC, is an electronic infrastructure that allows each database to register as a ‘node’. Through services such as VAMDC’s portal website, users can then access and query all nodes in a homogenised way. Today, all major atomic and molecular databases are attached to VAMDC. This article describes the software tools we developed to help data providers create and manage a VAMDC node. It gives an overview of the VAMDC infrastructure and of the various standards it uses. The article then discusses the development choices made and how the standards are implemented in practice. It concludes with a full example of implementing a VAMDC node using a real-life case, as well as future plans for the node software.

  10. Mouse Genome Database: From sequence to phenotypes and disease models

    PubMed Central

    Richardson, Joel E.; Kadin, James A.; Smith, Cynthia L.; Blake, Judith A.; Bult, Carol J.

    2015-01-01

    Summary The Mouse Genome Database (MGD, www.informatics.jax.org) is the international scientific database for genetic, genomic, and biological data on the laboratory mouse to support the research requirements of the biomedical community. To accomplish this goal, MGD provides broad data coverage, serves as the authoritative standard for mouse nomenclature for genes, mutants, and strains, and curates and integrates many types of data from literature and electronic sources. Among the key data sets MGD supports are: the complete catalog of mouse genes and genome features, comparative homology data for mouse and vertebrate genes, the authoritative set of Gene Ontology (GO) annotations for mouse gene functions, a comprehensive catalog of mouse mutations and their phenotypes, and a curated compendium of mouse models of human diseases. Here, we describe the data acquisition process, specifics about MGD's key data areas, methods to access and query MGD data, and outreach and user help facilities. genesis 53:458–473, 2015. © 2015 The Authors. Genesis Published by Wiley Periodicals, Inc. PMID:26150326

  11. Freshwater Biological Traits Database (Data Sources)

    EPA Science Inventory

    When EPA released the final report, Freshwater Biological Traits Database, it referenced the numerous data sources included below. The Traits Database report covers the development of a database of freshwater biological traits with additional traits that are relevan...

  12. Comet: an open-source MS/MS sequence database search tool.

    PubMed

    Eng, Jimmy K; Jahan, Tahmina A; Hoopmann, Michael R

    2013-01-01

    Proteomics research routinely involves identifying peptides and proteins via MS/MS sequence database search. Thus the database search engine is an integral tool in many proteomics research groups. Here, we introduce the Comet search engine to the existing landscape of commercial and open-source database search tools. Comet is open source, freely available, and based on one of the original sequence database search tools that has been widely used for many years. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Development of an electronic database for Acute Pain Service outcomes

    PubMed Central

    Love, Brandy L; Jensen, Louise A; Schopflocher, Donald; Tsui, Ban CH

    2012-01-01

    BACKGROUND: Quality assurance is increasingly important in the current health care climate. An electronic database can be used for tracking patient information and as a research tool to provide quality assurance for patient care. OBJECTIVE: An electronic database was developed for the Acute Pain Service, University of Alberta Hospital (Edmonton, Alberta) to record patient characteristics, identify at-risk populations, compare treatment efficacies and guide practice decisions. METHOD: Steps in the database development involved identifying the goals for use, relevant variables to include, and a plan for data collection, entry and analysis. Protocols were also created for data cleaning quality control. The database was evaluated with a pilot test using existing data to assess data collection burden, accuracy and functionality of the database. RESULTS: A literature review resulted in an evidence-based list of demographic, clinical and pain management outcome variables to include. Time to assess patients and collect the data was 20 min to 30 min per patient. Limitations were primarily software related, although initial data collection completion was only 65% and accuracy of data entry was 96%. CONCLUSIONS: The electronic database was found to be relevant and functional for the identified goals of data storage and research. PMID:22518364

  14. Minimum reaction network necessary to describe Ar/CF4 plasma etch

    NASA Astrophysics Data System (ADS)

    Helpert, Sofia; Chopra, Meghali; Bonnecaze, Roger T.

    2018-03-01

    Predicting the etch and deposition profiles created using plasma processes is challenging due to the complexity of plasma discharges and plasma-surface interactions. Volume-averaged global models allow for efficient prediction of important processing parameters and provide a means to quickly determine the effect of a variety of process inputs on the plasma discharge. However, global models are limited by the simplifying assumptions used to describe the chemical reaction network. Here, a database of 128 reactions is compiled for an Ar/CF4 plasma, with the corresponding rate constants collected from 24 sources, using the platform RODEo (Recipe Optimization for Deposition and Etching). Six reaction sets, employing from 12 up to all 128 reactions, were tested to evaluate the impact of the reaction database on particle species densities and electron temperature. Because many of the reactions in our database had conflicting rate constants reported in the literature, we also present a method for handling those uncertainties when constructing the model, which includes weighting each reaction rate and filtering outliers. By analyzing the link between a reaction's rate constant and its impact on the predicted plasma densities and electron temperatures, we determine the conditions under which a reaction is deemed necessary to the plasma model. The results of this study provide a foundation for determining the minimal set of reactions that must be included in the reaction set of the plasma model.
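
    The volume-averaged ("global") modelling described above reduces the chemistry to coupled rate equations for species densities. As a minimal sketch (not RODEo's actual reaction set; the two reactions and rate constants below are invented for illustration), an ionization/recombination pair can be marched toward steady state like this:

```python
# Minimal 0-D (volume-averaged) rate-equation sketch for a toy 2-reaction set.
# Rate constants are hypothetical; a real Ar/CF4 model needs 12-128 reactions.
def step(n, k_ion, k_rec, dt):
    """Advance densities one explicit-Euler step.
    n = {'e': electrons, 'Ar': neutrals, 'Ar+': ions} in m^-3.
    R1: e + Ar  -> 2e + Ar+   (ionization,     rate k_ion * n_e * n_Ar)
    R2: e + Ar+ -> Ar         (recombination,  rate k_rec * n_e * n_Ar+)
    """
    r1 = k_ion * n['e'] * n['Ar']
    r2 = k_rec * n['e'] * n['Ar+']
    return {
        'e':   n['e']   + dt * (r1 - r2),
        'Ar':  n['Ar']  + dt * (r2 - r1),
        'Ar+': n['Ar+'] + dt * (r1 - r2),
    }

# March in time: ionization gain gradually balances recombination loss.
n = {'e': 1e15, 'Ar': 1e20, 'Ar+': 1e15}
for _ in range(20000):
    n = step(n, k_ion=1e-16, k_rec=1e-13, dt=1e-9)
```

    In a full model each reaction contributes one such source/sink term per species, which is why the choice of reaction set changes the predicted densities and electron temperature.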

  15. 47 CFR 101.1523 - Sharing and coordination among non-government licensees and between non-government and government...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...

  16. 47 CFR 101.1523 - Sharing and coordination among non-government licensees and between non-government and government...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...

  17. 47 CFR 101.1523 - Sharing and coordination among non-government licensees and between non-government and government...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...

  18. 47 CFR 101.1523 - Sharing and coordination among non-government licensees and between non-government and government...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...

  19. 47 CFR 101.1523 - Sharing and coordination among non-government licensees and between non-government and government...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Wireless Telecommunications Bureau announces by public notice the implementation of a third-party database...) Provide an electronic copy of an interference analysis to the third-party database manager which...-party database managers shall receive and retain the interference analyses electronically and make them...

  20. [Antidotes: use guidelines and minimum stock in an emergency department].

    PubMed

    García-Martín, A; Torres Santos-Olmos, R

    2012-01-01

    Our objective was to develop a guide for antidotes and other medications used to counteract poisoning, and to define their stock in an emergency department, as a safety priority for the part-time pharmacist assigned to the unit. We searched specialist databases and the web portals of the Spanish Society of Toxicology and the British National Poisons Information Service, as well as toxicology databases (TOXICONET), information from other hospitals, tertiary sources, Micromedex and Medline. The Guide contains 42 active ingredients and is accessible to the Pharmacy and Emergency departments in electronic format. A minimum emergency stock was agreed based on the daily treatment of a 100 kg patient. This information, including updated expiry dates, is available at the emergency department antidote stock facilities and in electronic format. On a monthly basis, the pharmacist reviews the need to replace any drugs, due to their expiry date or lack of use. The lack of evidence from high-quality antidote studies, the variability caused by the difficulty of keeping sources up to date, and geographical differences in antidote use can make decision-making difficult. It would be useful to have minimum quantity recommendations from societies of toxicology, regulatory agencies and organisations such as the Joint Commission on the Accreditation of Healthcare Organisations. It would also be useful to have a suprahospital risk assessment to optimise management and ensure the availability of antidotes which are expensive, have a limited shelf life, or for which demand is difficult to forecast. Copyright © 2011 SEFH. Published by Elsevier Espana. All rights reserved.

  1. Argonne Geothermal Geochemical Database v2.0

    DOE Data Explorer

    Harto, Christopher

    2013-05-22

    A database of geochemical data from potential geothermal sources aggregated from multiple sources as of March 2010. The database contains fields for the location, depth, temperature, pH, total dissolved solids concentration, chemical composition, and date of sampling. A separate tab contains data on non-condensible gas compositions. The database contains records for over 50,000 wells, although many entries are incomplete. Current versions of source documentation are listed in the dataset.

  2. Academic Impact of a Public Electronic Health Database: Bibliometric Analysis of Studies Using the General Practice Research Database

    PubMed Central

    Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas

    2011-01-01

    Background Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact of a single public health database. Methodology and Findings A total of 749 studies published between 1995 and 2009 with ‘General Practice Research Database’ as their topic, defined as GPRD studies, were extracted from Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times on average by successive studies. Moreover, the total number of GPRD studies increased rapidly and is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies may accumulate. The GPRD was used mainly in, but not limited to, the three study fields of “Pharmacology and Pharmacy”, “General and Internal Medicine”, and “Public, Environmental and Occupational Health”. The UK and United States were the two most active regions of GPRD studies. One-third of GPRD studies were internationally co-authored. Conclusions A public electronic health database such as the GPRD can promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and make data more available for research. PMID:21731733

  3. An ontology-based method for secondary use of electronic dental record data.

    PubMed

    Schleyer, Titus Kl; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P; Liu, Kaihong; Hernandez, Pedro

    2013-01-01

    A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial, ontology-based, method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance.
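
    The SPARQL queries described above operate as pattern matches joined over subject-predicate-object triples. The following toy in-memory triple store (the class and property names are invented stand-ins, not actual OHD identifiers) shows the shape of such a two-step join in plain Python:

```python
# Toy triple store illustrating the kind of pattern query run in SPARQL over
# an ontology-backed knowledge base. All identifiers here are hypothetical.
triples = [
    ("patient1", "rdf:type", "ohd:patient"),
    ("tooth3",   "ohd:is_part_of", "patient1"),
    ("resto7",   "rdf:type", "ohd:amalgam_restoration"),
    ("resto7",   "ohd:restoration_of", "tooth3"),
]

def match(pattern, store):
    """Return triples matching one (s, p, o) pattern; None acts as a variable."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which restorations were placed on teeth of patient1?" -- a two-step join.
teeth = [s for s, _, _ in match((None, "ohd:is_part_of", "patient1"), triples)]
restorations = [s for tooth in teeth
                for s, _, _ in match((None, "ohd:restoration_of", tooth), triples)]
```

    A real SPARQL engine performs the same joins declaratively, with the ontology supplying class hierarchies that let one query cover many restoration types at once.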

  4. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

    In the present work an effort has been made to estimate the plasma parameters simultaneously like—electron density, electron temperature, ground state atom density, ground state ion density and metastable state density from the observed visible spectra of penning plasma discharge (PPD) source using least square fitting. The analysis is performed for the prominently observed neutral helium lines. The atomic data and analysis structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficients (PECs) values under the optical thin plasma condition in the analysis. With this condition the estimated plasma temperature from the PPD is found rather high. It is seen that the inclusion of opacity in the observed spectral lines through PECs and addition of diffusion of neutrals and metastable state species in the CR-model code analysis improves the electron temperature estimation in the simultaneous measurement.

  5. Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient.

    PubMed

    Brossier, David; El Taani, Redha; Sauthier, Michael; Roumeliotis, Nadia; Emeriaud, Guillaume; Jouvet, Philippe

    2018-04-01

    Our objective was to construct a prospective high-quality and high-frequency database combining patient therapeutics and clinical variables in real time, automatically fed by the information system and network architecture available through fully electronic charting in our PICU. The purpose of this article is to describe the data acquisition process from bedside to the research electronic database. Descriptive report and analysis of a prospective database. A 24-bed PICU, medical ICU, surgical ICU, and cardiac ICU in a tertiary care free-standing maternal child health center in Canada. All patients less than 18 years old were included at admission to the PICU. None. Between May 21, 2015, and December 31, 2016, 1,386 consecutive PICU stays from 1,194 patients were recorded in the database. Data were prospectively collected from admission to discharge, every 5 seconds from monitors and every 30 seconds from mechanical ventilators and infusion pumps. These data were linked to the patient's electronic medical record. The database total volume was 241 GB. The patients' median age was 2.0 years (interquartile range, 0.0-9.0). Data were available for all mechanically ventilated patients (n = 511; recorded duration, 77,678 hr), and respiratory failure was the most frequent reason for admission (n = 360). The complete pharmacologic profile was synced to the database for all PICU stays. Following this implementation, a validation phase is in process and several research projects are ongoing using this high-fidelity database. Using the existing bedside information system and network architecture of our PICU, we implemented an ongoing high-fidelity prospectively collected electronic database, preventing the continuous loss of scientific information. This offers the opportunity to develop research on, for example, clinical decision support systems and computational models of cardiorespiratory physiology.
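
    Combining streams sampled at different rates (5-second monitors, 30-second ventilators and pumps) requires aligning them on a common clock. One common approach, sketched here with invented field contents (the article does not describe its merge logic at this level of detail), is to carry the last slower-stream value forward onto the faster stream's timestamps:

```python
# Sketch: align a 5 s monitor stream with a 30 s ventilator stream by carrying
# the most recent ventilator value forward. Field meanings are made up.
def merge_streams(monitor, vent):
    """monitor/vent: lists of (t_seconds, value), each sorted by time.
    Returns (t, monitor_value, last_vent_value) rows on the monitor clock."""
    merged, vi, last_vent = [], 0, None
    for t, hr in monitor:
        while vi < len(vent) and vent[vi][0] <= t:
            last_vent = vent[vi][1]     # most recent slow-stream sample
            vi += 1
        merged.append((t, hr, last_vent))
    return merged

monitor = [(t, 100 + t % 7) for t in range(0, 60, 5)]   # e.g. heart rate, every 5 s
vent = [(0, 20), (30, 22)]                              # e.g. set rate, every 30 s
rows = merge_streams(monitor, vent)
```

    This "last observation carried forward" join keeps every fast sample while remaining consistent with the slower devices' update rate.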

  6. An open-source wireless sensor stack: from Arduino to SDI-12 to Water One Flow

    NASA Astrophysics Data System (ADS)

    Hicks, S.; Damiano, S. G.; Smith, K. M.; Olexy, J.; Horsburgh, J. S.; Mayorga, E.; Aufdenkampe, A. K.

    2013-12-01

    Implementing a large-scale streaming environmental sensor network has previously been limited by the high cost of the datalogging and data communication infrastructure. The Christina River Basin Critical Zone Observatory (CRB-CZO) is overcoming the obstacles to large near-real-time data collection networks by using Arduino, an open source electronics platform, in combination with XBee ZigBee wireless radio modules. These extremely low-cost and easy-to-use open source electronics are at the heart of the new DIY movement and have provided solutions to countless projects by over half a million users worldwide. However, their use in environmental sensing is in its infancy. At present, a primary limitation to widespread deployment of open-source electronics for environmental sensing is the lack of a simple, open-source software stack to manage streaming data from heterogeneous sensor networks. Here we present a functioning prototype software stack that receives sensor data over a self-meshing ZigBee wireless network from over a hundred sensors, stores the data locally and serves it on demand as a CUAHSI Water One Flow (WOF) web service. We highlight a few new, innovative components, including: (1) a versatile open data logger design based on the Arduino electronics platform and ZigBee radios; (2) a software library implementing the SDI-12 communication protocol between any Arduino platform and SDI-12-enabled sensors without the need for additional hardware (https://github.com/StroudCenter/Arduino-SDI-12); and (3) 'midStream', a light-weight set of Python code that receives streaming sensor data, appends it with metadata on the fly by querying a relational database structured on an early version of the Observations Data Model version 2.0 (ODM2), and uses the WOFpy library to serve the data as WaterML via SOAP and REST web services.
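
    The SDI-12 library above runs on the Arduino itself; on the receiving side, an SDI-12 data response is a compact ASCII line consisting of one address character followed by signed decimal values. A minimal parser sketch (the example response string is invented, and real deployments must also handle CRCs and multi-part responses):

```python
import re

# Sketch of parsing an SDI-12 data response line (e.g. the reply to "0D0!").
# Per the SDI-12 spec, the line is one address character followed by signed
# decimal values such as "+3.29" or "-0.2".
def parse_sdi12(response):
    """'0+3.29+17.5-0.2' -> ('0', [3.29, 17.5, -0.2])"""
    address, body = response[0], response[1:]
    values = [float(v) for v in re.findall(r"[+-]\d+(?:\.\d+)?", body)]
    return address, values
```

    Keeping the parsing this simple is part of what makes SDI-12 attractive for low-power open-hardware loggers.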

  7. [Electronic poison information management system].

    PubMed

    Kabata, Piotr; Waldman, Wojciech; Kaletha, Krystian; Sein Anand, Jacek

    2013-01-01

    We describe deployment of electronic toxicological information database in poison control center of Pomeranian Center of Toxicology. System was based on Google Apps technology, by Google Inc., using electronic, web-based forms and data tables. During first 6 months from system deployment, we used it to archive 1471 poisoning cases, prepare monthly poisoning reports and facilitate statistical analysis of data. Electronic database usage made Poison Center work much easier.

  8. An ab initio electronic transport database for inorganic materials.

    PubMed

    Ricci, Francesco; Chen, Wei; Aydemir, Umut; Snyder, G Jeffrey; Rignanese, Gian-Marco; Jain, Anubhav; Hautier, Geoffroy

    2017-07-04

    Electronic transport in materials is governed by a series of tensorial properties such as conductivity, Seebeck coefficient, and effective mass. These quantities are paramount to the understanding of materials in many fields from thermoelectrics to electronics and photovoltaics. Transport properties can be calculated from a material's band structure using the Boltzmann transport theory framework. We present here the largest computational database of electronic transport properties based on a large set of 48,000 materials originating from the Materials Project database. Our results were obtained through the interpolation approach developed in the BoltzTraP software, assuming a constant relaxation time. We present the workflow to generate the data, the data validation procedure, and the database structure. Our aim is to target the large community of scientists developing materials selection strategies and performing studies involving transport properties.
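
    The constant relaxation-time approximation mentioned above makes the conductivity proportional to a Brillouin-zone sum of the squared group velocity weighted by the Fermi-window factor -df/dE. A toy one-dimensional cosine band in reduced units (e = hbar = lattice constant = 1; a sketch of the structure of the calculation, not the BoltzTraP implementation) looks like this:

```python
import math

# Constant relaxation-time sketch: sigma ~ tau * <v_k^2 * (-df/dE)> over the
# Brillouin zone, for a toy 1-D band E(k) = -2t cos(k). Reduced units.
def fermi_deriv(E, mu, kT):
    """-df/dE for the Fermi-Dirac distribution."""
    x = (E - mu) / kT
    if abs(x) > 40:                     # window factor is ~0 far from mu
        return 0.0
    return math.exp(x) / (kT * (1.0 + math.exp(x)) ** 2)

def sigma(mu, kT=0.05, t=1.0, tau=1.0, nk=2000):
    total = 0.0
    for i in range(nk):
        k = -math.pi + 2 * math.pi * (i + 0.5) / nk
        E = -2 * t * math.cos(k)        # band energy
        v = 2 * t * math.sin(k)         # group velocity dE/dk
        total += v * v * fermi_deriv(E, mu, kT)
    return tau * total / nk             # Brillouin-zone average
```

    Sweeping mu through the band reproduces the familiar picture: conductivity peaks where the Fermi level sits in a region of high group velocity and vanishes once mu leaves the band.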

  9. EBMPracticeNet: A Bilingual National Electronic Point-Of-Care Project for Retrieval of Evidence-Based Clinical Guideline Information and Decision Support

    PubMed Central

    2013-01-01

    Background In Belgium, the construction of a national electronic point-of-care information service, EBMPracticeNet, was initiated in 2011 to optimize quality of care by promoting evidence-based decision-making. The collaboration of the government, health care providers, evidence-based medicine (EBM) partners, and vendors of electronic health records (EHR) is unique to this project. All Belgian health care professionals get free access to an up-to-date database of validated Belgian and nearly 1000 international guidelines, incorporated in a portal that also provides EBM information from other sources than guidelines, including computerized clinical decision support that is integrated in the EHRs. Objective The objective of this paper was to describe the development strategy, the overall content, and the management of EBMPracticeNet which may be of relevance to other health organizations creating national or regional electronic point-of-care information services. Methods Several candidate providers of comprehensive guideline solutions were evaluated and one database was selected. Translation of the guidelines to Dutch and French was done with translation software, post-editing by translators and medical proofreading. A strategy is determined to adapt the guideline content to the Belgian context. Acceptance of the computerized clinical decision support tool has been tested and a randomized controlled trial is planned to evaluate the effect on process and patient outcomes. Results Currently, EBMPracticeNet is in "work in progress" state. Reference is made to the results of a pilot study and to further planned research including a randomized controlled trial. Conclusions The collaboration of government, health care providers, EBM partners, and vendors of EHRs is unique. The potential value of the project is great. 
The link between all the EHRs from different vendors and a national database held on a single platform that is controlled by all EBM organizations in Belgium are the strengths of EBMPracticeNet. PMID:23842038

  10. Emission Database for Global Atmospheric Research (EDGAR).

    ERIC Educational Resources Information Center

    Olivier, J. G. J.; And Others

    1994-01-01

    Presents the objective and methodology chosen for the construction of a global emissions source database called EDGAR and the structural design of the database system. The database estimates on a regional and grid basis, 1990 annual emissions of greenhouse gases, and of ozone depleting compounds from all known sources. (LZ)

  11. Online Sources of Competitive Intelligence.

    ERIC Educational Resources Information Center

    Wagers, Robert

    1986-01-01

    Presents an approach to using online sources of information for competitor intelligence (i.e., monitoring industry and tracking activities of competitors); identifies principal sources; and suggests some ways of making use of online databases. Types and sources of information and sources and database charts are appended. Eight references are…

  12. An ab initio electronic transport database for inorganic materials

    DOE PAGES

    Ricci, Francesco; Chen, Wei; Aydemir, Umut; ...

    2017-07-04

    Electronic transport in materials is governed by a series of tensorial properties such as conductivity, Seebeck coefficient, and effective mass. These quantities are paramount to the understanding of materials in many fields from thermoelectrics to electronics and photovoltaics. Transport properties can be calculated from a material’s band structure using the Boltzmann transport theory framework. We present here the largest computational database of electronic transport properties based on a large set of 48,000 materials originating from the Materials Project database. Our results were obtained through the interpolation approach developed in the BoltzTraP software, assuming a constant relaxation time. We present the workflow to generate the data, the data validation procedure, and the database structure. In conclusion, our aim is to target the large community of scientists developing materials selection strategies and performing studies involving transport properties.

  13. An ab initio electronic transport database for inorganic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricci, Francesco; Chen, Wei; Aydemir, Umut

    Electronic transport in materials is governed by a series of tensorial properties such as conductivity, Seebeck coefficient, and effective mass. These quantities are paramount to the understanding of materials in many fields from thermoelectrics to electronics and photovoltaics. Transport properties can be calculated from a material’s band structure using the Boltzmann transport theory framework. We present here the largest computational database of electronic transport properties based on a large set of 48,000 materials originating from the Materials Project database. Our results were obtained through the interpolation approach developed in the BoltzTraP software, assuming a constant relaxation time. We present the workflow to generate the data, the data validation procedure, and the database structure. In conclusion, our aim is to target the large community of scientists developing materials selection strategies and performing studies involving transport properties.

  14. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR DATABASE TREE AND DATA SOURCES (UA-D-41.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the database storage organization, as well as describe the sources of data for each database used during the Arizona NHEXAS project and the "Border" study. Keywords: data; database; organization.

    The National Human Exposure Assessment Sur...

  15. 77 FR 31237 - Electronic Filing in the Copyright Office of Notices of Intention To Obtain a Section 115...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-25

    ... multiple nondramatic musical works may be submitted electronically as XML files. Electronically submitted Notices will be maintained in a database that can be searched using any of the included fields of... the Licensing Division for a search of the database during the interim period. As such, the Office...

  16. Electron and Positron Stopping Powers of Materials

    National Institute of Standards and Technology Data Gateway

    SRD 7 NIST Electron and Positron Stopping Powers of Materials (PC database for purchase)   The EPSTAR database provides rapid calculations of stopping powers (collisional, radiative, and total), CSDA ranges, radiation yields and density effect corrections for incident electrons or positrons with kinetic energies from 1 keV to 10 GeV, and for any chemically defined target material.
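
    A CSDA (continuous-slowing-down-approximation) range of the kind EPSTAR reports follows from the total stopping power by quadrature: R(E0) is the integral of dE / S(E) up to the incident energy. The sketch below uses a made-up smooth stopping-power function purely to show the integral; it does not reproduce the ICRU-based data behind the database:

```python
# Sketch of how a CSDA range follows from a stopping-power table:
#   R(E0) = integral from ~0 to E0 of dE / S(E).
# toy_S is an invented smooth function, NOT the evaluated data in EPSTAR.
def csda_range(E0, stopping_power, E_min=1e-3, n=4000):
    """Trapezoid-rule integral of 1/S from E_min to E0 (energies in MeV)."""
    h = (E0 - E_min) / n
    total = 0.5 * (1 / stopping_power(E_min) + 1 / stopping_power(E0))
    for i in range(1, n):
        total += 1 / stopping_power(E_min + i * h)
    return total * h

toy_S = lambda E: 2.0 * E ** -0.75 + 0.2 * E   # MeV cm^2/g, illustrative only
r1 = csda_range(1.0, toy_S)     # toy range at 1 MeV
r10 = csda_range(10.0, toy_S)   # range grows with incident energy
```

    In the real database, the collisional and radiative components are tabulated separately and summed before this integration.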

  17. Sensory overload: A concept analysis.

    PubMed

    Scheydt, Stefan; Müller Staub, Maria; Frauenfelder, Fritz; Nielsen, Gunnar H; Behrens, Johann; Needham, Ian

    2017-04-01

    In the context of mental disorders, sensory overload is a widely described phenomenon used in conjunction with psychiatric interventions such as removal from stimuli. However, the theoretical foundation of sensory overload as addressed in the literature is insufficient and fragmentary. To date, the concept of sensory overload has not been sufficiently specified or analyzed. The aim of the study was to analyze the concept of sensory overload in mental health care. A literature search was undertaken using specific electronic databases, specific journals and websites, hand searches, specific library catalogues, and electronic publishing databases. Walker and Avant's method of concept analysis was used to analyze the sources included in the analysis. All aspects of the method of Walker and Avant were covered in this concept analysis. The conceptual understanding has become more focused; the defining attributes, influencing factors and consequences are described and empirical referents identified. The concept analysis is a first step in the development of a middle-range descriptive theory of sensory overload based on social scientific and stress-theoretical approaches. This specification may serve as a foundation for further research, for the development of a nursing diagnosis or for guidelines. © 2017 Australian College of Mental Health Nurses Inc.

  18. Academic health sciences library Website navigation: an analysis of forty-one Websites and their navigation tools.

    PubMed

    Brower, Stewart M

    2004-10-01

    The analysis included forty-one academic health sciences library (HSL) Websites as captured in the first two weeks of January 2001. Home pages and persistent navigational tools (PNTs) were analyzed for layout, technology, and links, and other general site metrics were taken. Websites were selected based on rank in the National Network of Libraries of Medicine, with regional and resource libraries given preference on the basis that these libraries are recognized as leaders in their regions and would be the most reasonable source of standards for best practice. A three-page evaluation tool was developed based on previous similar studies. All forty-one sites were evaluated in four specific areas: library general information, Website aids and tools, library services, and electronic resources. Metrics taken for electronic resources included orientation of bibliographic databases alphabetically by title or by subject area and with links to specifically named databases. Based on the results, a formula for determining obligatory links was developed, listing items that should appear on all academic HSL Web home pages and PNTs. These obligatory links demonstrate a series of best practices that may be followed in the design and construction of academic HSL Websites.

  19. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, William P.; Hartmann-Siantar, Christine L.; Rathkopf, James A.

    1999-01-01

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media.

  20. Calculation of radiation therapy dose using all particle Monte Carlo transport

    DOEpatents

    Chandler, W.P.; Hartmann-Siantar, C.L.; Rathkopf, J.A.

    1999-02-09

    The actual radiation dose absorbed in the body is calculated using three-dimensional Monte Carlo transport. Neutrons, protons, deuterons, tritons, helium-3, alpha particles, photons, electrons, and positrons are transported in a completely coupled manner, using this Monte Carlo All-Particle Method (MCAPM). The major elements of the invention include: computer hardware, user description of the patient, description of the radiation source, physical databases, Monte Carlo transport, and output of dose distributions. This facilitated the estimation of dose distributions on a Cartesian grid for neutrons, photons, electrons, positrons, and heavy charged-particles incident on any biological target, with resolutions ranging from microns to centimeters. Calculations can be extended to estimate dose distributions on general-geometry (non-Cartesian) grids for biological and/or non-biological media. 57 figs.
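
    At its core, Monte Carlo transport of the kind used in these patents samples free-path lengths from the exponential attenuation law and tallies energy deposition on a grid. The following drastically simplified one-dimensional photon sketch (uniform slab, full local absorption at the first interaction; nothing like the patent's coupled all-particle physics) illustrates only the sampling-and-tally idea:

```python
import math, random

# Toy 1-D Monte Carlo photon transport: sample exponential free paths and
# deposit energy in depth bins. mu is a made-up attenuation coefficient
# (per cm); E0 is the photon energy; the slab is `depth` cm thick.
def deposit_dose(n_photons, mu=0.5, E0=1.0, depth=10.0, nbins=20, seed=1):
    random.seed(seed)
    bins = [0.0] * nbins
    for _ in range(n_photons):
        # free path to first interaction, s ~ Exponential(mu)
        s = -math.log(1.0 - random.random()) / mu
        if s < depth:                          # interacts inside the slab
            bins[int(s / depth * nbins)] += E0 # deposit all energy locally
    return bins

dose = deposit_dose(10000)
```

    The real method instead scatters particles into new directions and energies at each interaction, couples secondary particle types, and tallies on a 3-D patient grid, but each step still reduces to sampling distributions like the one above.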

  1. Computational Thermochemistry of Jet Fuels and Rocket Propellants

    NASA Technical Reports Server (NTRS)

    Crawford, T. Daniel

    2002-01-01

    The design of new high-energy density molecules as candidates for jet and rocket fuels is an important goal of modern chemical thermodynamics. The NASA Glenn Research Center is home to a database of thermodynamic data for over 2000 compounds related to this goal, in the form of least-squares fits of heat capacities, enthalpies, and entropies as functions of temperature over the range of 300 - 6000 K. The chemical equilibrium with applications (CEA) program, written and maintained by researchers at NASA Glenn over the last fifty years, makes use of this database for modeling the performance of potential rocket propellants. During its long history, the NASA Glenn database has been developed based on experimental results and data published in the scientific literature, such as the standard JANAF tables. The recent development of efficient computational techniques based on quantum chemical methods provides an alternative source of information for expansion of such databases. For example, it is now possible to model dissociation or combustion reactions of small molecules to high accuracy using techniques such as coupled cluster theory or density functional theory. Unfortunately, the current applicability of reliable computational models is limited to relatively small molecules containing only around a dozen (non-hydrogen) atoms. We propose to extend the applicability of coupled cluster theory, often referred to as the 'gold standard' of quantum chemical methods, to molecules containing 30-50 non-hydrogen atoms. The centerpiece of this work is the concept of local correlation, in which the description of the electron interactions (known as electron correlation effects) is reduced to only its most important localized components. Such an advance has the potential to greatly expand the current reach of computational thermochemistry and thus to have a significant impact on the theoretical study of jet and rocket propellants.
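
    Least-squares fits of the kind the database stores are commonly expressed as "NASA polynomials". The sketch below evaluates the classic 7-coefficient form for Cp, H, and S (the current Glenn database uses a 9-term variant, and the coefficients here are placeholders, not a fit to any real species):

```python
import math

# 7-coefficient NASA polynomial form for one temperature interval:
#   Cp/R    = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4
#   H/(R*T) = a1 + a2*T/2 + a3*T^2/3 + a4*T^3/4 + a5*T^4/5 + a6/T
#   S/R     = a1*ln(T) + a2*T + a3*T^2/2 + a4*T^3/3 + a5*T^4/4 + a7
R = 8.314462618  # J/(mol K)

def cp(T, a):
    return R * (a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4)

def enthalpy(T, a):
    return R * T * (a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4
                    + a[4]*T**4/5 + a[5]/T)

def entropy(T, a):
    return R * (a[0]*math.log(T) + a[1]*T + a[2]*T**2/2
                + a[3]*T**3/3 + a[4]*T**4/4 + a[6])

# Placeholder coefficients (a1..a7), roughly diatomic-like; NOT a real fit.
a_toy = (3.5, 1e-4, 0.0, 0.0, 0.0, -1000.0, 4.0)
```

    A useful consistency check on any such fit is thermodynamic: dH/dT must equal Cp, which the integrated form above satisfies by construction.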

  2. The STEP (Safety and Toxicity of Excipients for Paediatrics) database: part 2 - the pilot version.

    PubMed

    Salunke, Smita; Brandys, Barbara; Giacoia, George; Tuleu, Catherine

    2013-11-30

    The screening and careful selection of excipients is a critical step in paediatric formulation development, as certain excipients acceptable in adult formulations may not be appropriate for paediatric use. While there are extensive toxicity data that could help in better understanding and highlighting the gaps in toxicity studies, the data are often scattered across information sources and saddled with incompatible data types and formats. This paper is the second in a series that presents an update on the Safety and Toxicity of Excipients for Paediatrics ("STEP") database being developed by the Eu-US PFIs, and describes the architecture, data fields, and functions of the database. The STEP database is a user-designed resource that compiles the safety and toxicity data of excipients scattered over various sources and presents it in one freely accessible source. Currently, the pilot database holds data from over 2000 references on 10 excipients, presenting preclinical, clinical, and regulatory information and toxicological reviews, with references and source links. The STEP database allows searching "FOR" excipients and "BY" excipients. This dual nature of the STEP database, in which toxicity and safety information can be searched in both directions, makes it unique among existing sources. If the pilot is successful, the aim is to increase the number of excipients in the existing database so that a database large enough to be of practical research use will be available. It is anticipated that this source will prove to be a useful platform for data management and data exchange of excipient safety information. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Between power and powerlessness: a meta-ethnography of sources of resilience in young refugees.

    PubMed

    Sleijpen, Marieke; Boeije, Hennie R; Kleber, Rolf J; Mooren, Trudy

    2016-01-01

    This article reviews available qualitative studies that report young refugees' ways of dealing with adversity to address their sources of resilience. We searched five electronic databases. Twenty-six empirical studies were included in the review. A meta-ethnography approach was used to synthesize these qualitative studies. Six sources of resilience emerged: (1) social support, (2) acculturation strategies, (3) education, (4) religion, (5) avoidance, and (6) hope. These sources indicated social as well as personal factors that confer resilience in young refugees, but most of them also had counterproductive aspects. The results, from an ecological developmental perspective, stressed the interplay between protective and risk processes in the mental health of young refugees who had resettled in Western countries, and they emphasized the variability as well as the universality of resilience-promoting processes. Further research is needed to explore the cultural shape of resilience and the long-term consequences of war and migration on young refugees.

  4. Characterization of thin films on the nanometer scale by Auger electron spectroscopy and X-ray photoelectron spectroscopy

    NASA Astrophysics Data System (ADS)

    Powell, C. J.; Jablonski, A.; Werner, W. S. M.; Smekal, W.

    2005-01-01

    We describe two NIST databases that can be used to characterize thin films from Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS) measurements. First, the NIST Electron Effective-Attenuation-Length Database provides values of effective attenuation lengths (EALs) for user-specified materials and measurement conditions. The EALs differ from the corresponding inelastic mean free paths on account of elastic scattering of the signal electrons. The database supplies "practical" EALs that can be used to determine overlayer-film thicknesses. Practical EALs are plotted as a function of film thickness, and an average value is shown for a user-selected thickness. The average practical EAL can be utilized as the "lambda parameter" to obtain film thicknesses from simple equations in which the effects of elastic scattering are neglected. A single average practical EAL can generally be employed for a useful range of film thicknesses and for electron emission angles of up to about 60°. For larger emission angles, the practical EAL should be found for the particular conditions. Second, we describe a new NIST database for the Simulation of Electron Spectra for Surface Analysis (SESSA) to be released in 2004. This database provides data for many parameters needed in quantitative AES and XPS (e.g., excitation cross-sections, electron-scattering cross-sections, lineshapes, fluorescence yields, and backscattering factors). Relevant data for a user-specified experiment are automatically retrieved by a small expert system. In addition, Auger electron and photoelectron spectra can be simulated for layered samples. The simulated spectra, for layer compositions and thicknesses specified by the user, can be compared with measured spectra. The layer compositions and thicknesses can then be adjusted to find maximum consistency between simulated and measured spectra and thus provide more detailed characterizations of multilayer thin-film materials. SESSA can also provide practical EALs, and we compare values provided by the NIST EAL database and SESSA for hafnium dioxide. Differences of up to 10% were found for film thicknesses less than 20 Å due to the use of different physical models in each database.
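
    The "lambda parameter" role of the average practical EAL can be made concrete with the simple exponential-attenuation relation in which elastic scattering is neglected. A hedged sketch (the function name and the 20 Å EAL value are illustrative, not values from the databases):

```python
import math

def film_thickness(intensity_ratio, eal, emission_angle_deg):
    """Overlayer thickness from substrate-signal attenuation:
    I/I0 = exp(-t / (L * cos(theta)))  =>  t = -L * cos(theta) * ln(I/I0),
    where L is the (practical) effective attenuation length."""
    theta = math.radians(emission_angle_deg)
    return -eal * math.cos(theta) * math.log(intensity_ratio)

if __name__ == "__main__":
    # A substrate signal halved by the overlayer, normal emission, L = 20 A:
    print(f"t ~ {film_thickness(0.5, 20.0, 0.0):.2f} A")
```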

  5. Chemotion ELN: an Open Source electronic lab notebook for chemists in academia.

    PubMed

    Tremouilhac, Pierre; Nguyen, An; Huang, Yu-Chieh; Kotov, Serhii; Lütjohann, Dominic Sebastian; Hübsch, Florian; Jung, Nicole; Bräse, Stefan

    2017-09-25

    The development of an electronic lab notebook (ELN) for researchers working in the field of chemical sciences is presented. The web-based application is available as Open Source software that offers modern solutions for chemical researchers. The Chemotion ELN is equipped with the basic functionalities necessary for the acquisition and processing of chemical data, in particular work with molecular structures and calculations based on molecular properties. The ELN supports planning, description, storage, and management for the routine work of organic chemists. It also provides tools for communicating and sharing the recorded research data among colleagues. Meeting the requirements of a state-of-the-art research infrastructure, the ELN allows the search for molecules and reactions not only within the user's data but also in conventional external sources as provided by SciFinder and PubChem. The presented development makes allowance for the growing dependency of scientific activity on the availability of digital information by providing Open Source instruments to record and reuse research data. The current version of the ELN has been in use for over half a year in our chemistry research group, serves as a common infrastructure for chemistry research, and enables chemistry researchers to build their own databases of digital information as a prerequisite for the detailed, systematic investigation and evaluation of chemical reactions and mechanisms.

  6. The South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLAM BRC) case register: development and descriptive data.

    PubMed

    Stewart, Robert; Soremekun, Mishael; Perera, Gayan; Broadbent, Matthew; Callard, Felicity; Denis, Mike; Hotopf, Matthew; Thornicroft, Graham; Lovestone, Simon

    2009-08-12

    Case registers have been used extensively in mental health research. Recent developments in electronic medical records, and in computer software to search and analyse these in anonymised form, have the potential to revolutionise this research tool. We describe the development of the South London and Maudsley NHS Foundation Trust (SLAM) Biomedical Research Centre (BRC) Case Register Interactive Search tool (CRIS), which allows research-accessible datasets to be derived from SLAM, the largest provider of secondary mental healthcare in Europe. All clinical data, including free text, are available for analysis in the form of anonymised datasets. Development involved both the building of the system and the setting in place of the necessary security (with both functional and procedural elements). Descriptive data are presented for the Register database as of October 2008. The database at that point included 122,440 cases, 35,396 of whom were receiving active case management under the Care Programme Approach. In terms of gender and ethnicity, the database was reasonably representative of the source population. The most common assigned primary diagnoses were within the ICD mood disorders category (n = 12,756), followed by schizophrenia and related disorders (8158), substance misuse (7749), neuroses (7105) and organic disorders (6414). The SLAM BRC Case Register represents a 'new generation' of this research design, built on a long-running system of fully electronic clinical records and allowing in-depth secondary analysis of numerical, string, and free text data, whilst preserving anonymity through technical and procedural safeguards.

  7. Cross-Matching Source Observations from the Palomar Transient Factory (PTF)

    NASA Astrophysics Data System (ADS)

    Laher, Russ; Grillmair, C.; Surace, J.; Monkewitz, S.; Jackson, E.

    2009-01-01

    Over the four-year lifetime of the PTF project, approximately 40 billion instances of astronomical-source observations will be extracted from the image data. The instances will correspond to the same astronomical objects being observed at roughly 25-50 different times, and so a very large catalog containing important object-variability information will be the chief PTF product. Organizing astronomical-source catalogs is conventionally done by dividing the catalog into declination zones and sorting by right ascension within each zone (e.g., the USNO-A star catalog), in order to facilitate catalog searches. This method was reincarnated as the "zones" algorithm in a SQL-Server database implementation (Szalay et al., MSR-TR-2004-32), with corrections given by Gray et al. (MSR-TR-2006-52). The primary advantage of this implementation is that all of the work is done entirely on the database server and client/server communication is eliminated. We implemented the methods outlined in Gray et al. for a PostgreSQL database, programming them as database functions in the PL/pgSQL procedural language. The cross-matching is currently based on source positions, but we intend to extend it to use both positions and positional uncertainties to form a chi-square statistic for optimal thresholding. The database design includes three main tables, plus a handful of internal tables. The Sources table stores the SExtractor source extractions taken at various times; the MergedSources table stores statistics about the astronomical objects, which are the result of cross-matching records in the Sources table; and the Merges table associates cross-matched primary keys in the Sources table with primary keys in the MergedSources table. Besides judicious database indexing, we have also internally partitioned the Sources table by declination zone, in order to speed up the population of Sources records and make the database more manageable. The catalog will be accessible to the public after the proprietary period through IRSA (irsa.ipac.caltech.edu).
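
    The zones idea described above can be sketched outside the database as well. A simplified Python version (the 30-arcsec zone height, the flat-sky distance approximation, and the neglect of RA wraparound are all illustrative shortcuts; the PTF implementation lives in server-side PL/pgSQL functions):

```python
import math
from collections import defaultdict

ZONE_HEIGHT = 30.0 / 3600.0  # 30-arcsec declination zones, in degrees

def zone_of(dec):
    return int(math.floor(dec / ZONE_HEIGHT))

def build_zones(sources):
    """Bucket (ra, dec) pairs by declination zone; sort each zone by RA."""
    zones = defaultdict(list)
    for ra, dec in sources:
        zones[zone_of(dec)].append((ra, dec))
    for z in zones.values():
        z.sort()
    return zones

def cross_match(zones, ra, dec, radius):
    """Return catalog sources within `radius` degrees of (ra, dec),
    scanning only the zones the search circle can touch."""
    matches = []
    for z in range(zone_of(dec - radius), zone_of(dec + radius) + 1):
        for cra, cdec in zones.get(z, []):
            if cra < ra - radius:   # RA-sorted: skip until the window starts
                continue
            if cra > ra + radius:   # past the window: done with this zone
                break
            # Flat-sky approximation; a real matcher uses great-circle distance.
            if math.hypot((cra - ra) * math.cos(math.radians(dec)),
                          cdec - dec) <= radius:
                matches.append((cra, cdec))
    return matches
```

    Partitioning by declination zone is what lets both the sketch and the database version touch only a few sorted slices of the catalog per match, instead of the full 40 billion rows.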

  8. The establishment and use of the point source catalog database of the 2MASS near infrared survey

    NASA Astrophysics Data System (ADS)

    Gao, Y. F.; Shan, H. G.; Cheng, D.

    2003-02-01

    The 2MASS near infrared survey project is introduced briefly. The 2MASS point source catalog (2MASS PSC) database and a network query system were established using the PHP hypertext preprocessor and the MySQL database server. Using the system, one can not only query information about sources listed in the catalog but also draw related plots. Moreover, after the 2MASS data are assessed, some research fields that could benefit from this database are suggested.
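
    The kind of positional box query such a catalog service answers can be sketched with an in-memory SQL table (the original system used PHP and MySQL; the table layout below is illustrative, not the actual 2MASS PSC schema):

```python
import sqlite3

# Stand-in for a point-source catalog table: position plus one magnitude.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE psc (ra REAL, dec REAL, j_mag REAL)")
conn.executemany("INSERT INTO psc VALUES (?, ?, ?)",
                 [(83.82, -5.39, 8.1),   # two sources near one field...
                  (83.86, -5.41, 12.4),
                  (10.0, 41.2, 9.9)])    # ...and one far away

def box_query(ra0, dec0, half_width):
    """Return catalog sources inside a square box centred on (ra0, dec0)."""
    cur = conn.execute(
        "SELECT ra, dec, j_mag FROM psc "
        "WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
        (ra0 - half_width, ra0 + half_width,
         dec0 - half_width, dec0 + half_width))
    return cur.fetchall()
```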

  9. Design of a control system for the LECR3

    NASA Astrophysics Data System (ADS)

    Zhou, Wen-Xiong; Wang, Yan-Yu; Zhou, De-Tai; Lin, Fu-Yuan; Luo, Jin-Fu; Yu, Yan-Juan; Feng, Yu-Cheng; Lu, Wang

    2013-11-01

    The Lanzhou Electron Cyclotron Resonance Ion Source No. 3 (LECR3) plays an important role in supplying many kinds of ion beams to the Heavy Ion Research Facility in Lanzhou (HIRFL). In this paper, we provide a detailed description of a new remote control system for the LECR3 that we designed and implemented. This system uses typical distributed control for both the LECR3 and the newly-built Lanzhou All Permanent Magnet ECR Ion Source No. 1 (LAPECR1). The entire project, including the construction of the hardware and the software, was completed in September 2012. The hardware consists of an industrial PC (IPC), an intranet built around a switch, and various controllers with Ethernet access. The software is written in C++ and is used to control all of the respective equipment through the intranet, ensuring that useful information is stored in a database for later analysis. The entire system can efficiently acquire the necessary data from the respective equipment three times per second, after which the data are stored in the database. The system can also complete the interlock protection and alarm process within one second.
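
    The acquire-and-store cycle described above (poll each controller, archive the readings, raise alarms on out-of-range values) can be sketched as follows; the device name and limit values are hypothetical, not taken from the LECR3 system, and the real software is C++:

```python
import time

POLL_INTERVAL = 1.0 / 3.0  # ~3 acquisitions per second, as in the paper

def poll_cycle(devices, archive, limits):
    """Read every device once, archive the readings with a timestamp,
    and return alarm messages for any interlock-limit violations."""
    alarms = []
    stamp = time.time()
    for name, read_fn in devices.items():
        value = read_fn()
        archive.append((stamp, name, value))  # stands in for the database
        lo, hi = limits[name]
        if not lo <= value <= hi:
            alarms.append(f"{name} out of range: {value}")
    return alarms
```

    A driver loop would call `poll_cycle` every `POLL_INTERVAL` seconds and forward any alarm messages to the operator console.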

  10. The implications of starvation induced psychological changes for the ethical treatment of hunger strikers

    PubMed Central

    Fessler, D

    2003-01-01

    Design: Electronic databases were searched for (a) editorials and ethical proclamations on hunger strikers and their treatment; (b) studies of voluntary and involuntary starvation, and (c) legal cases pertaining to hunger striking. Additional studies were gathered in a snowball fashion from the published material cited in these databases. Material was included if it (a) provided ethical or legal guidelines; (b) shed light on psychological changes accompanying starvation, or (c) illustrated the practice of hunger striking. Authors' observations, opinions, and conclusions were noted. Conclusions: Although the heterogeneous nature of the sources precluded statistical analysis, starvation appears to be accompanied by marked psychological changes. Some changes clearly impair competence, in which case physicians are advised to follow advance directives obtained early in the hunger strike. More problematic are increases in impulsivity and aggressivity, changes which, while not impairing competence, enhance the likelihood that patients will starve themselves to death. PMID:12930863

  11. Locating grey literature on communication disorders.

    PubMed

    Shpilko, Inna

    2005-01-01

    This article provides an overview of selected Web-based resources containing grey literature in the area of communication disorders. It is geared to practitioners, researchers, students, and consumers seeking reliable, freely available scientific information. Grey (or gray) literature has been defined as "that which is produced on all levels of government, academics, business, and industry in print and electronic formats, but which is not controlled by commercial publishers."1 This paper reviews various general reference sources potentially containing grey literature on communication disorders. This review includes identification of the methods specialists in this field use to obtain this valuable, yet often overlooked, literature. Access points and search tools for identifying grey literature on communication disorders are recommended. Commercial databases containing grey literature are not included. Conclusions presented in this article are considered complementary to traditionally published information resources on communication disorders, such as scholarly journals, online databases, etc.

  12. Acupuncture for patients with Alzheimer's disease: a systematic review protocol

    PubMed Central

    Zhou, Jing; Peng, Weina; Li, Wang; Liu, Zhishun

    2014-01-01

    Introduction The aim of this protocol is to provide the methods used to assess the effectiveness and safety of acupuncture for the treatment of patients with Alzheimer's disease. Methods and analysis We will search the following electronic databases: The Cochrane Library, PubMed, Medline, Embase, PsycINFO, Chinese Biomedical Literature Database, Chinese Medical Current Contents and China National Knowledge Infrastructure, without restriction on language or publication status. Other sources such as Chinese acupuncture journals and the reference lists of selected studies will also be searched. After screening the studies, a meta-analysis of randomised controlled trials will be conducted, if possible. Results, expressed as risk ratios for dichotomous data and standardised or weighted mean differences for continuous data, will be used for data synthesis. Dissemination The protocol of this systematic review will be disseminated in a peer-reviewed journal and presented at a relevant conference. Trial registration number PROSPERO CRD42014009619 PMID:25142265
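
    The two effect measures named in the protocol are simple to compute per trial before pooling. A minimal sketch using the standard definitions (the function names are ours, not code from the review):

```python
def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio for a dichotomous outcome:
    (events in treatment / treatment n) / (events in control / control n)."""
    return (events_t / n_t) / (events_c / n_c)

def standardised_mean_difference(mean_t, mean_c, sd_pooled):
    """SMD for a continuous outcome: difference in means in pooled-SD units."""
    return (mean_t - mean_c) / sd_pooled

if __name__ == "__main__":
    print(risk_ratio(10, 100, 20, 100))                 # fewer events on treatment
    print(standardised_mean_difference(12.0, 10.0, 4.0))
```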

  13. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR DATABASE TREE AND DATA SOURCES (UA-D-41.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the database storage organization, and to describe the sources of data for each database used during the Arizona NHEXAS project and the Border study. Keywords: data; database; organization.

    The U.S.-Mexico Border Program is sponsored by t...

  14. Development of the Global Earthquake Model’s neotectonic fault database

    USGS Publications Warehouse

    Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert

    2015-01-01

    The Global Earthquake Model (GEM) aims to develop uniform, openly available standards, datasets, and tools for worldwide seismic risk assessment through global collaboration, transparent communication, and adoption of state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM's global hazard module projects. This paper describes GFE's development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing the observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic fault and fold observations, and the other for calculating potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world's approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose that the database structure be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.

  15. Effectiveness of Using Mobile Phone Image Capture for Collecting Secondary Data: A Case Study on Immunization History Data Among Children in Remote Areas of Thailand.

    PubMed

    Jandee, Kasemsak; Kaewkungwal, Jaranit; Khamsiriwatchara, Amnat; Lawpoolsri, Saranath; Wongwit, Waranya; Wansatid, Peerawat

    2015-07-20

    Entering data onto paper-based forms, then digitizing them, is a traditional data-management method that can result in poor data quality, especially when the secondary data are incomplete, illegible, or missing. Transcription errors from source documents to case report forms (CRFs) are common, and the errors subsequently pass from the CRFs to the electronic database. This study aimed to demonstrate the usefulness and evaluate the effectiveness of mobile phone camera applications in capturing health-related data, with respect to data quality and completeness as compared to the current routine practices exercised by government officials. In this study, the concept of "data entry via phone image capture" (DEPIC) was introduced and developed to capture data directly from source documents. This case study was based on immunization history data recorded in a mother and child health (MCH) logbook. The MCH logbooks (kept by parents) were updated whenever parents brought their children to health care facilities for immunization. Traditionally, health providers are supposed to record each child's immunization history twice: in the MCH logbook, which is returned to the parents, and on the individual immunization history card, which is kept at the health care unit to be subsequently entered into the electronic health care information system (HCIS). In this study, DEPIC utilized the photographic functionality of mobile phones to capture images of all immunization history records on logbook pages and to transcribe these records directly into the database using a data-entry screen corresponding to the logbook data records. DEPIC data were then compared with HCIS data points for quality, completeness, and consistency. As a proof-of-concept, DEPIC captured the immunization history records of 363 ethnic children living in remote areas from their MCH logbooks. Comparison of the 2 databases, DEPIC versus HCIS, revealed differences in the percentage of completeness and consistency of immunization history records. Comparing the records of each logbook in the DEPIC and HCIS databases, 17.3% (63/363) of children had complete immunization history records in the DEPIC database, but no complete records were reported in the HCIS database. Regarding the individual's actual vaccination dates, comparison of records taken from the MCH logbooks and those in the HCIS found that 24.2% (88/363) of the children's records were absolutely inconsistent. In addition, statistics derived from the DEPIC records showed higher immunization coverage and much greater compliance with the immunization schedule by age group when compared to records derived from the HCIS database. DEPIC, or the concept of collecting data via image capture directly from their primary sources, has proven to be a useful data collection method in terms of completeness and consistency. In this study, DEPIC was implemented in the data collection of a single survey. The DEPIC concept, however, can be easily applied in other types of survey research, for example, collecting data on changes or trends based on image evidence over time. With its image evidence and audit trail features, DEPIC has the potential for being used even in clinical studies, since it could generate improved data integrity and more reliable statistics for use in both health care and research settings.
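
    The completeness and consistency comparison described above can be sketched as a per-child check between the two databases. Field names, and the reading of "absolutely inconsistent" as "all recorded dates differ", are illustrative assumptions, not the study's code:

```python
def compare_records(depic, hcis, schedule):
    """Compare two immunization databases per child.
    Each database maps child_id -> {vaccine: date}; `schedule` lists the
    vaccines a complete record must contain. Returns (number of complete
    DEPIC records, number of children whose dates all differ between the two)."""
    complete_in_depic = sum(
        1 for rec in depic.values() if all(v in rec for v in schedule))
    inconsistent = sum(
        1 for cid in depic
        if cid in hcis and all(
            depic[cid].get(v) != hcis[cid].get(v) for v in schedule))
    return complete_in_depic, inconsistent
```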

  16. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications.

    PubMed

    Hartung, Daniel M; Zarin, Deborah A; Guise, Jeanne-Marie; McDonagh, Marian; Paynter, Robin; Helfand, Mark

    2014-04-01

    ClinicalTrials.gov requires reporting of result summaries for many drug and device trials. To evaluate the consistency of reporting of trials that are registered in the ClinicalTrials.gov results database and published in the literature. ClinicalTrials.gov results database and matched publications identified through ClinicalTrials.gov and a manual search of 2 electronic databases. 10% random sample of phase 3 or 4 trials with results in the ClinicalTrials.gov results database, completed before 1 January 2009, with 2 or more groups. One reviewer extracted data about trial design and results from the results database and matching publications. A subsample was independently verified. Of 110 trials with results, most were industry-sponsored, parallel-design drug studies. The most common inconsistency was the number of secondary outcome measures reported (80%). Sixteen trials (15%) reported the primary outcome description inconsistently, and 22 (20%) reported the primary outcome value inconsistently. Thirty-eight trials inconsistently reported the number of individuals with a serious adverse event (SAE); of these, 33 (87%) reported more SAEs in ClinicalTrials.gov. Among the 84 trials that reported SAEs in ClinicalTrials.gov, 11 publications did not mention SAEs, 5 reported them as zero or not occurring, and 21 reported a different number of SAEs. Among 29 trials that reported deaths in ClinicalTrials.gov, 28% differed from the matched publication. Small sample that included earliest results posted to the database. Reporting discrepancies between the ClinicalTrials.gov results database and matching publications are common. Which source contains the more accurate account of results is unclear, although ClinicalTrials.gov may provide a more comprehensive description of adverse events than the publication. Agency for Healthcare Research and Quality.

  17. International Data on Radiological Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martha Finck; Margaret Goldberg

    2010-07-01

    The mission of radiological dispersal device (RDD) nuclear forensics is to identify the provenance of nuclear and radiological materials used in RDDs and to aid law enforcement in tracking nuclear materials and routes. The application of databases to radiological forensics is to match RDD source material to a source model in the database, provide guidance regarding a possible second device, and aid the FBI by providing a short list of manufacturers and distributors, and ultimately the last legal owner of the source. The Argonne/Idaho National Laboratory RDD attribution database is a powerful technical tool in radiological forensics. The database (1267 unique vendors) includes all sealed sources and devices registered in the U.S., is complemented by data from the IAEA Catalogue, and is supported by rigorous in-lab characterization of selected sealed sources regarding physical form, radiochemical composition, and age-dating profiles. Close working relationships with global partners in the commercial sealed sources industry provide invaluable technical information and expertise in the development of signature profiles. These profiles are critical to the down-selection of potential candidates in either pre- or post-event RDD attribution. The down-selection process includes a match between an interdicted (or detonated) source and a model in the database linked to one or more manufacturers and distributors.

  18. Use of large electronic health record databases for environmental epidemiology studies.

    EPA Science Inventory

    Background: Electronic health records (EHRs) are a ubiquitous component of the United States healthcare system and capture nearly all data collected in a clinic or hospital setting. EHR databases are attractive for secondary data analysis as they may contain detailed clinical rec...

  19. CTEPP STANDARD OPERATING PROCEDURE FOR ENTERING OR IMPORTING ELECTRONIC DATA INTO THE CTEPP DATABASE (SOP-4.12)

    EPA Science Inventory

    This SOP describes the method used to automatically parse analytical data generated from gas chromatography/mass spectrometry (GC/MS) analyses into CTEPP summary spreadsheets and to electronically import the summary spreadsheets into the CTEPP study database.

  20. Antiepileptic drug use in seven electronic health record databases in Europe: a methodologic comparison.

    PubMed

    de Groot, Mark C H; Schuerch, Markus; de Vries, Frank; Hesse, Ulrik; Oliva, Belén; Gil, Miguel; Huerta, Consuelo; Requena, Gema; de Abajo, Francisco; Afonso, Ana S; Souverein, Patrick C; Alvarez, Yolanda; Slattery, Jim; Rottenkolber, Marietta; Schmiedl, Sven; Van Dijk, Liset; Schlienger, Raymond G; Reynolds, Robert; Klungel, Olaf H

    2014-05-01

    The annual prevalence of antiepileptic drug (AED) prescribing reported in the literature differs considerably among European countries due to the use of different types of data sources, time periods, and population distributions, as well as methodologic differences. This study aimed to measure the prevalence of AED prescribing across seven European routine health care databases in Spain, Denmark, The Netherlands, the United Kingdom, and Germany using a standardized methodology, and to investigate sources of variation. Analyses of the annual prevalence of AEDs were stratified by sex, age, and AED. Overall prevalences were standardized to the European 2008 reference population. Prevalence of any AED varied from 88 per 10,000 persons (The Netherlands) to 144 per 10,000 in Spain and Denmark in 2001. In all databases, prevalence increased linearly each year since 2001, from 6% in Denmark to 15% in Spain. This increase could be attributed entirely to an increase in "new," recently marketed AEDs, while the prevalence of AEDs that have been available since the mid-1990s hardly changed. AED use increased with age for both female and male patients up to the ages of 80 to 89 years and tended to be somewhat higher in female than in male patients between the ages of 40 and 70. No differences between databases in the number of AEDs used simultaneously by a patient were found. We showed that during the study period of 2001-2009, AED prescribing increased in five European Union (EU) countries and that this increase was due entirely to the newer AEDs marketed since the 1990s. Using a standardized methodology, we showed consistent trends across databases and countries over time. Differences in age and sex distribution explained only part of the variation between countries. Therefore, the remaining variation in AED use must originate from other differences in national health care systems. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.
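
    Direct standardization to a reference population, as used for the overall prevalences above, is a weighted sum of stratum-specific rates. A minimal sketch with made-up age strata and weights (illustrative values, not the European 2008 standard itself):

```python
def standardised_prevalence(stratum_prevalence, reference_weights):
    """Direct standardisation: weight each age-stratum prevalence by the
    reference population's share of that stratum, then sum."""
    return sum(stratum_prevalence[s] * reference_weights[s]
               for s in reference_weights)

# Hypothetical prevalences per 10,000 persons, by age band, and the
# reference population's share of each band (shares sum to 1).
prev = {"0-39": 50, "40-79": 150, "80+": 300}
weights = {"0-39": 0.5, "40-79": 0.4, "80+": 0.1}
```

    Weighting every database's strata by the same reference shares is what makes the resulting overall prevalences comparable across countries with different age structures.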

  1. A Semantic Transformation Methodology for the Secondary Use of Observational Healthcare Data in Postmarketing Safety Studies.

    PubMed

    Pacaci, Anil; Gonul, Suat; Sinaci, A Anil; Yuksel, Mustafa; Laleci Erturkmen, Gokce B

    2018-01-01

    Background: Utilization of the available observational healthcare datasets is key to complement and strengthen the postmarketing safety studies. Use of common data models (CDM) is the predominant approach in order to enable large scale systematic analyses on disparate data models and vocabularies. Current CDM transformation practices depend on proprietarily developed Extract-Transform-Load (ETL) procedures, which require knowledge both on the semantics and technical characteristics of the source datasets and target CDM. Purpose: In this study, our aim is to develop a modular but coordinated transformation approach in order to separate semantic and technical steps of transformation processes, which do not have a strict separation in traditional ETL approaches. Such an approach would discretize the operations to extract data from source electronic health record systems, alignment of the source, and target models on the semantic level and the operations to populate target common data repositories. Approach: In order to separate the activities that are required to transform heterogeneous data sources to a target CDM, we introduce a semantic transformation approach composed of three steps: (1) transformation of source datasets to Resource Description Framework (RDF) format, (2) application of semantic conversion rules to get the data as instances of ontological model of the target CDM, and (3) population of repositories, which comply with the specifications of the CDM, by processing the RDF instances from step 2. The proposed approach has been implemented on real healthcare settings where Observational Medical Outcomes Partnership (OMOP) CDM has been chosen as the common data model and a comprehensive comparative analysis between the native and transformed data has been conducted. Results: Health records of ~1 million patients have been successfully transformed to an OMOP CDM based database from the source database. 
Descriptive statistics obtained from the source and target databases present analogous and consistent results. Discussion and Conclusion: Our method goes beyond traditional ETL approaches by being more declarative and more rigorous: declarative because the use of RDF-based mapping rules makes each mapping transparent and understandable to humans while retaining logic-based computability, and rigorous because the mappings are based on computer-readable semantics that are amenable to validation through logic-based inference methods.
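    The three-step approach above can be sketched in miniature. The snippet below is an illustrative Python sketch only: RDF triples are modelled as plain tuples and the conversion rules as a dictionary (a real implementation would use a triple store and declarative mapping rules), and all identifiers, predicates, and the toy OMOP-like vocabulary are assumptions, not the paper's actual mappings.

```python
# Minimal sketch of the three-step semantic transformation, with RDF triples
# modelled as (subject, predicate, object) tuples. All names are illustrative.

# Step 1: lift a source EHR row into RDF-style triples.
def source_to_rdf(row):
    s = f"src:patient/{row['id']}"
    return [
        (s, "src:hasDiagnosisCode", row["icd10"]),
        (s, "src:hasBirthYear", row["birth_year"]),
    ]

# Step 2: apply declarative conversion rules mapping source predicates
# onto the target CDM's ontological model (a toy OMOP-like vocabulary here).
RULES = {
    "src:hasDiagnosisCode": "omop:condition_source_value",
    "src:hasBirthYear": "omop:year_of_birth",
}

def apply_rules(triples):
    return [(s, RULES[p], o) for (s, p, o) in triples if p in RULES]

# Step 3: populate a target repository (a dict standing in for CDM tables).
def populate(triples):
    repo = {}
    for s, p, o in triples:
        repo.setdefault(s, {})[p] = o
    return repo

row = {"id": 42, "icd10": "I63.9", "birth_year": 1950}
cdm = populate(apply_rules(source_to_rdf(row)))
print(cdm["src:patient/42"]["omop:condition_source_value"])  # I63.9
```

The appeal of the rule table is that it is data, not procedure: the same pipeline code serves any source model once its predicates are mapped.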

  2. An ontology-based method for secondary use of electronic dental record data

    PubMed Central

    Schleyer, Titus KL; Ruttenberg, Alan; Duncan, William; Haendel, Melissa; Torniai, Carlo; Acharya, Amit; Song, Mei; Thyvalikakath, Thankam P.; Liu, Kaihong; Hernandez, Pedro

    A key question for healthcare is how to operationalize the vision of the Learning Healthcare System, in which electronic health record data become a continuous information source for quality assurance and research. This project presents an initial ontology-based method for secondary use of electronic dental record (EDR) data. We defined a set of dental clinical research questions; constructed the Oral Health and Disease Ontology (OHD); analyzed data from a commercial EDR database; and created a knowledge base, with the OHD used to represent clinical data about 4,500 patients from a single dental practice. Currently, the OHD includes 213 classes and reuses 1,658 classes from other ontologies. We have developed an initial set of SPARQL queries to allow extraction of data about patients, teeth, surfaces, restorations and findings. Further work will establish a complete, open and reproducible workflow for extracting and aggregating data from a variety of EDRs for research and quality assurance. PMID:24303273

  3. Classification of Antibiotic Resistance Patterns of Indicator Bacteria by Discriminant Analysis: Use in Predicting the Source of Fecal Contamination in Subtropical Waters

    PubMed Central

    Harwood, Valerie J.; Whitlock, John; Withington, Victoria

    2000-01-01

    The antibiotic resistance patterns of fecal streptococci and fecal coliforms isolated from domestic wastewater and animal feces were determined using a battery of antibiotics (amoxicillin, ampicillin, cephalothin, chlortetracycline, oxytetracycline, tetracycline, erythromycin, streptomycin, and vancomycin) at four concentrations each. The sources of animal feces included wild birds, cattle, chickens, dogs, pigs, and raccoons. Antibiotic resistance patterns of fecal streptococci and fecal coliforms from known sources were grouped into two separate databases, and discriminant analysis of these patterns was used to establish the relationship between the antibiotic resistance patterns and the bacterial source. The fecal streptococcus and fecal coliform databases classified isolates from known sources with similar accuracies. The average rate of correct classification for the fecal streptococcus database was 62.3%, and that for the fecal coliform database was 63.9%. The sources of fecal streptococci and fecal coliforms isolated from surface waters were identified by discriminant analysis of their antibiotic resistance patterns. Both databases identified the source of indicator bacteria isolated from surface waters directly impacted by septic tank discharges as human. At sample sites selected for relatively low anthropogenic impact, the dominant sources of indicator bacteria were identified as various animals. The antibiotic resistance analysis technique promises to be a useful tool in assessing sources of fecal contamination in subtropical waters, such as those in Florida. PMID:10966379
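    The classification logic can be illustrated with a toy sketch. Note that this uses a simple nearest-centroid rule as a stand-in for the discriminant analysis applied in the study, and the resistance scores below are invented, not the paper's data.

```python
# Toy version of source classification from antibiotic resistance patterns.
# Each pattern is a vector of resistance scores (one per antibiotic/concentration);
# a nearest-centroid rule stands in for the study's discriminant analysis.

def centroid(patterns):
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda src: dist2(pattern, centroids[src]))

# Known-source library (illustrative scores, not real data).
known = {
    "human":  [(3, 3, 0, 1), (3, 2, 1, 1), (2, 3, 0, 0)],
    "cattle": [(0, 1, 3, 3), (1, 0, 3, 2), (0, 0, 2, 3)],
}
cents = {src: centroid(pats) for src, pats in known.items()}

# Average rate of correct classification over the known-source isolates
# (resubstitution on the training set; the study used separate databases).
hits = sum(classify(p, cents) == src for src, pats in known.items() for p in pats)
total = sum(len(p) for p in known.values())
print(f"{hits}/{total} correctly classified")  # 6/6 on this separable toy set
```

An unknown water isolate is then assigned to whichever source class its pattern falls nearest, exactly as the surface-water isolates were assigned above.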

  4. Use of Electronic Health Records in sub-Saharan Africa: Progress and challenges

    PubMed Central

    Akanbi, Maxwell O.; Ocheke, Amaka N.; Agaba, Patricia A.; Daniyam, Comfort A.; Agaba, Emmanuel I.; Okeke, Edith N.; Ukoli, Christiana O.

    2012-01-01

    Background The Electronic Health Record (EHR) is a key component of medical informatics that is increasingly being utilized in industrialized nations to improve healthcare. There is limited information on the use of EHRs in sub-Saharan Africa. This paper reviews the availability of EHRs in sub-Saharan Africa. Methods Searches were performed on the PubMed and Google Scholar databases using the terms ‘Electronic Health Records OR Electronic Medical Records OR e-Health and Africa’. References from identified publications were reviewed. The inclusion criterion was documented use of an EHR in Africa. Results The search yielded 147 publications, of which 21 papers from 15 sub-Saharan African countries documented the use of EHRs in Africa and were reviewed. About 91% reported use of open-source healthcare software, with OpenMRS being the most widely used. Most reports were from HIV-related health centers. Barriers to adoption of EHRs include the high cost of procurement and maintenance, poor network infrastructure, and lack of comfort among health workers with electronic medical records. Conclusion There has been an increase in the use of EHRs in sub-Saharan Africa, largely driven by utilization by HIV treatment programs. Penetration, however, is still very low. PMID:25243111

  5. Development Of New Databases For Tsunami Hazard Analysis In California

    NASA Astrophysics Data System (ADS)

    Wilson, R. I.; Barberopoulou, A.; Borrero, J. C.; Bryant, W. A.; Dengler, L. A.; Goltz, J. D.; Legg, M.; McGuire, T.; Miller, K. M.; Real, C. R.; Synolakis, C.; Uslu, B.

    2009-12-01

    The California Geological Survey (CGS) has partnered with other tsunami specialists to produce two statewide databases to facilitate the evaluation of tsunami hazard products for both emergency response and land-use planning and development. A robust, State-run tsunami deposit database is being developed that complements and expands on existing databases from the National Geophysical Data Center (global) and the USGS (Cascadia). Whereas these existing databases focus on references or individual tsunami layers, the new State-maintained database concentrates on the location and contents of individual borings/trenches that sample tsunami deposits. These data provide an important observational benchmark for evaluating the results of tsunami inundation modeling. CGS is collaborating with and sharing the database entry form with other states to encourage its continued development beyond California’s coastline so that historic tsunami deposits can be evaluated on a regional basis. CGS is also developing an internet-based, tsunami source scenario database and forum where tsunami source experts and hydrodynamic modelers can discuss the validity of tsunami sources and their contribution to hazard assessments for California and other coastal areas bordering the Pacific Ocean. The database includes all distant and local tsunami sources relevant to California starting with the forty scenarios evaluated during the creation of the recently completed statewide series of tsunami inundation maps for emergency response planning. Factors germane to probabilistic tsunami hazard analyses (PTHA), such as event histories and recurrence intervals, are also addressed in the database and discussed in the forum. Discussions with other tsunami source experts will help CGS determine what additional scenarios should be considered in PTHA for assessing the feasibility of generating products of value to local land-use planning and development.

  6. 75 FR 70047 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-16

    ... to the Office of Management and Budget for approval. The Securities and Exchange Commission has begun the design of a new Electronic Data Collection System database (the Database) and invites comment on... Investor Education and Advocacy, Washington, DC 20549-0213. Electronic Data Collection System Notice is...

  7. THE CELL CENTERED DATABASE PROJECT: AN UPDATE ON BUILDING COMMUNITY RESOURCES FOR MANAGING AND SHARING 3D IMAGING DATA

    PubMed Central

    Martone, Maryann E.; Tran, Joshua; Wong, Willy W.; Sargis, Joy; Fong, Lisa; Larson, Stephen; Lamont, Stephan P.; Gupta, Amarnath; Ellisman, Mark H.

    2008-01-01

    Databases have become integral parts of data management, dissemination and mining in biology. At the Second Annual Conference on Electron Tomography, held in Amsterdam in 2001, we proposed that electron tomography data should be shared in a manner analogous to structural data at the protein and sequence scales. At that time, we outlined our progress in creating a database to bring together cell level imaging data across scales, The Cell Centered Database (CCDB). The CCDB was formally launched in 2002 as an on-line repository of high-resolution 3D light and electron microscopic reconstructions of cells and subcellular structures. It contains 2D, 3D and 4D structural and protein distribution information from confocal, multiphoton and electron microscopy, including correlated light and electron microscopy. Many of the data sets are derived from electron tomography of cells and tissues. In the five years since its debut, we have moved the CCDB from a prototype to a stable resource and expanded the scope of the project to include data management and knowledge engineering. Here we provide an update on the CCDB and how it is used by the scientific community. We also describe our work in developing additional knowledge tools, e.g., ontologies, for annotation and query of electron microscopic data. PMID:18054501

  8. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  9. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  10. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  11. 49 CFR 1104.3 - Copies.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... fully evaluate evidence, all spreadsheets must be fully accessible and manipulable. Electronic databases... Microsoft Open Database Connectivity (ODBC) standard. ODBC is a Windows technology that allows a database software package to import data from a database created using a different software package. We currently...

  12. Database Development for Electrical, Electronic, and Electromechanical (EEE) Parts for the International Space Station Alpha

    NASA Technical Reports Server (NTRS)

    Wassil-Grimm, Andrew D.

    1997-01-01

    More effective electronic communication processes are needed to transfer contractor and international partner data into NASA and prime contractor baseline database systems. It is estimated that the International Space Station Alpha (ISSA) parts database will contain up to one million parts, each of which may require approximately one thousand bytes of data. The resulting gigabyte-scale database must provide easy access to users who will be preparing multiple analyses and reports in order to verify as-designed, as-built, launch, on-orbit, and return configurations for up to 45 missions associated with the construction of the ISSA. Additionally, Internet access to this database is strongly indicated to allow multiple-user access from clients located in many foreign countries. This summer's project involved familiarization and evaluation of the ISSA Electrical, Electronic, and Electromechanical (EEE) Parts data and the process of electronically managing these data. Particular attention was devoted to improving the interfaces among the many elements of the ISSA information system and its global customers and suppliers. Additionally, prototype queries were developed to facilitate the identification of data changes in the database, verification that the designs used only approved parts, and certification that the flight hardware containing EEE parts was ready for flight. This project also resulted in specific recommendations to NASA for further development in the area of EEE parts database development and usage.
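    The sizing arithmetic is straightforward (10^6 parts × ~10^3 bytes/part ≈ 1 GB), and the approved-parts verification query described above can be sketched with an in-memory SQLite database. The schema and part numbers below are illustrative assumptions, not the actual ISSA database design.

```python
import sqlite3

# Sketch of the kind of prototype query described above: verify that a design
# baseline uses only approved EEE parts. Schema and values are illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE approved_parts (part_no TEXT PRIMARY KEY);
    CREATE TABLE design_parts  (assembly TEXT, part_no TEXT);
    INSERT INTO approved_parts VALUES ('EEE-1001'), ('EEE-1002');
    INSERT INTO design_parts VALUES ('PWR-A', 'EEE-1001'),
                                    ('PWR-A', 'EEE-9999');
""")

# Parts in the design baseline that are missing from the approved-parts list;
# an empty result would certify the assembly against the approved list.
rows = db.execute("""
    SELECT assembly, part_no FROM design_parts
    WHERE part_no NOT IN (SELECT part_no FROM approved_parts)
""").fetchall()
print(rows)  # [('PWR-A', 'EEE-9999')]
```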

  13. The Longhorn Array Database (LAD): An Open-Source, MIAME compliant implementation of the Stanford Microarray Database (SMD)

    PubMed Central

    Killion, Patrick J; Sherlock, Gavin; Iyer, Vishwanath R

    2003-01-01

    Background The power of microarray analysis can be realized only if data is systematically archived and linked to biological annotations as well as analysis algorithms. Description The Longhorn Array Database (LAD) is a MIAME compliant microarray database that operates on PostgreSQL and Linux. It is a fully open source version of the Stanford Microarray Database (SMD), one of the largest microarray databases. LAD is available at Conclusions Our development of LAD provides a simple, free, open, reliable and proven solution for storage and analysis of two-color microarray data. PMID:12930545

  14. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Electronically Filed Tariffs § 221.500... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  15. 76 FR 26776 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-09

    ... current collection of information to the Office of Management and Budget for approval. The Securities and Exchange Commission has begun the design of a new Electronic Data Collection System database (the Database..., Washington, DC 20549-0213. Extension: Electronic Data Collection System; OMB Control No. 3235-0672; SEC File...

  16. A theoretical-electron-density databank using a model of real and virtual spherical atoms.

    PubMed

    Nassour, Ayoub; Domagala, Slawomir; Guillot, Benoit; Leduc, Theo; Lecomte, Claude; Jelsch, Christian

    2017-08-01

    A database describing the electron density of common chemical groups using combinations of real and virtual spherical atoms is proposed, as an alternative to the multipolar atom modelling of the molecular charge density. Theoretical structure factors were computed from periodic density functional theory calculations on 38 crystal structures of small molecules and the charge density was subsequently refined using a density model based on real spherical atoms and additional dummy charges on the covalent bonds and on electron lone-pair sites. The electron-density parameters of real and dummy atoms present in a similar chemical environment were averaged on all the molecules studied to build a database of transferable spherical atoms. Compared with the now-popular databases of transferable multipolar parameters, the spherical charge modelling needs fewer parameters to describe the molecular electron density and can be more easily incorporated in molecular modelling software for the computation of electrostatic properties. The construction method of the database is described. In order to analyse to what extent this modelling method can be used to derive meaningful molecular properties, it has been applied to the urea molecule and to biotin/streptavidin, a protein/ligand complex.

  17. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database

    DOE PAGES

    Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.

    2017-07-01

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y–12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package—1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.

  18. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database.

    PubMed

    Scofield, Patricia A; Smith, Linda L; Johnson, David N

    2017-07-01

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y-12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package-1988 computer model files. This database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.
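    The combined-facility compliance check described in these records can be sketched as a simple aggregation of per-source doses against the 0.1 mSv annual limit. The facility names and dose values below are illustrative, not actual ORR emissions data.

```python
# Sketch of the ORR compliance logic: per-source annual doses are summed per
# facility and across the reservation, then compared with the 40 CFR 61
# Subpart H limit of 0.1 mSv/yr. All entries are illustrative.

LIMIT_MSV = 0.1

sources = [  # (facility, source_id, annual dose, mSv)
    ("ORNL", "stack-3039", 0.012),
    ("ORNL", "hood-217",   0.0004),
    ("Y-12", "stack-101",  0.008),
]

by_facility = {}
for facility, _src, dose in sources:
    by_facility[facility] = by_facility.get(facility, 0.0) + dose

total = sum(by_facility.values())
print(f"ORR total: {total:.4f} mSv, compliant: {total <= LIMIT_MSV}")
```

The point of the per-source inventory is exactly this roll-up: each facility's dose is reportable on its own, but compliance is judged on the reservation-wide sum.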

  19. Oak Ridge Reservation Environmental Protection Rad Neshaps Radionuclide Inventory Web Database and Rad Neshaps Source and Dose Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scofield, Patricia A.; Smith, Linda Lenell; Johnson, David N.

    The U.S. Environmental Protection Agency promulgated national emission standards for emissions of radionuclides other than radon from US Department of Energy facilities in Chapter 40 of the Code of Federal Regulations (CFR) 61, Subpart H. This regulatory standard limits the annual effective dose that any member of the public can receive from Department of Energy facilities to 0.1 mSv. As defined in the preamble of the final rule, all of the facilities on the Oak Ridge Reservation, i.e., the Y–12 National Security Complex, Oak Ridge National Laboratory, East Tennessee Technology Park, and any other U.S. Department of Energy operations on Oak Ridge Reservation, combined, must meet the annual dose limit of 0.1 mSv. At Oak Ridge National Laboratory, there are monitored sources and numerous unmonitored sources. To maintain radiological source and inventory information for these unmonitored sources, e.g., laboratory hoods, equipment exhausts, and room exhausts not currently venting to monitored stacks on the Oak Ridge National Laboratory campus, the Environmental Protection Rad NESHAPs Inventory Web Database was developed. This database is updated annually and is used to compile emissions data for the annual Radionuclide National Emission Standards for Hazardous Air Pollutants (Rad NESHAPs) report required by 40 CFR 61.94. It also provides supporting documentation for facility compliance audits. In addition, a Rad NESHAPs source and dose database was developed to import the source and dose summary data from Clean Air Act Assessment Package—1988 computer model files. As a result, this database provides Oak Ridge Reservation and facility-specific source inventory; doses associated with each source and facility; and total doses for the Oak Ridge Reservation dose.

  20. A database de-identification framework to enable direct queries on medical data for secondary use.

    PubMed

    Erdal, B S; Liu, J; Ding, J; Chen, J; Marsh, C B; Kamal, J; Clymer, B D

    2012-01-01

    To qualify the use of patient clinical records as non-human-subjects research, electronic medical record data must be de-identified so that there is minimal risk of exposing protected health information. This study demonstrated a robust framework for structured data de-identification that can be applied to any relational data source that needs to be de-identified. Using a real-world clinical data warehouse, a pilot implementation covering limited subject areas was used to demonstrate and evaluate this new de-identification process. Query results and performance were compared between the source and target systems to validate data accuracy and usability. The combination of hashing, pseudonyms, and a session-dependent randomizer provides a rigorous de-identification framework that guards against 1) source identifier exposure; 2) internal data analysts manually linking to source identifiers; and 3) identifier cross-linking among different researchers or across multiple query sessions by the same researcher. In addition, a query rejection option is provided to refuse queries that would return fewer than preset numbers of subjects and total records, preventing accidental subject identification from low-volume results. This framework does not prevent subject re-identification based on prior knowledge and sequence of events. It also does not address de-identification of medical free text, although text de-identification using natural language processing can be incorporated due to its modular design. We demonstrated a framework resulting in HIPAA-compliant databases that can be queried directly by researchers. This technique can be augmented to facilitate inter-institutional research data sharing through existing middleware such as caGrid.
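    A minimal sketch of the framework's de-identification elements, assuming illustrative record layouts, keys, and thresholds: a keyed hash (HMAC) produces pseudonyms, a per-session salt makes pseudonyms non-linkable across query sessions, and queries returning too few subjects are rejected.

```python
import hmac
import hashlib

SECRET_KEY = b"institutional-secret"   # held apart from the de-identified data
MIN_SUBJECTS = 3                       # query-rejection threshold (illustrative)

def pseudonym(source_id, session_salt):
    # Keyed hash of (identifier + session salt): stable within a session,
    # non-linkable across sessions because the salt changes.
    msg = (source_id + session_salt).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:12]

def run_query(records, session_salt):
    """Return de-identified rows, or refuse if too few subjects match."""
    if len({r["mrn"] for r in records}) < MIN_SUBJECTS:
        raise ValueError("query rejected: result set below subject threshold")
    return [{"pid": pseudonym(r["mrn"], session_salt), "dx": r["dx"]}
            for r in records]

records = [{"mrn": m, "dx": "I63.9"} for m in ("A1", "B2", "C3")]
rows_s1 = run_query(records, session_salt="session-1")
rows_s2 = run_query(records, session_salt="session-2")

# Session-dependent pseudonyms: the same patient maps to different IDs
# in different sessions, blocking cross-session identifier linking.
print(rows_s1[0]["pid"], rows_s2[0]["pid"])
```

As the abstract notes, this guards linkage and exposure but not re-identification from prior knowledge of events, which no identifier scheme alone can prevent.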

  1. Database Search Strategies & Tips. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 17 articles presenting strategies and tips for searching databases online appear in this collection, which is one in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  2. CDM analysis

    NASA Technical Reports Server (NTRS)

    Larson, Robert E.; Mcentire, Paul L.; Oreilly, John G.

    1993-01-01

    The C Data Manager (CDM) is an advanced tool for creating an object-oriented database and for processing queries related to objects stored in that database. The CDM source code was purchased and will be modified over the course of the Arachnid project. In this report, the modified CDM is referred to as MCDM. Using MCDM, a detailed series of experiments was designed and conducted on a Sun Sparcstation. The primary results and analysis of the CDM experiment are provided in this report. The experiments involved creating the Long-form Faint Source Catalog (LFSC) database and then analyzing it with respect to following: (1) the relationships between the volume of data and the time required to create a database; (2) the storage requirements of the database files; and (3) the properties of query algorithms. The effort focused on defining, implementing, and analyzing seven experimental scenarios: (1) find all sources by right ascension--RA; (2) find all sources by declination--DEC; (3) find all sources in the right ascension interval--RA1, RA2; (4) find all sources in the declination interval--DEC1, DEC2; (5) find all sources in the rectangle defined by--RA1, RA2, DEC1, DEC2; (6) find all sources that meet certain compound conditions; and (7) analyze a variety of query algorithms. Throughout this document, the numerical results obtained from these scenarios are reported; conclusions are presented at the end of the document.
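    The interval and rectangle query scenarios can be sketched against a small in-memory catalog. The entries below are illustrative, not LFSC data, and a production query would also handle right-ascension wrap-around at 0h/24h, which this sketch ignores.

```python
# Sketch of the experimental query scenarios above on an in-memory source
# catalog: RA/DEC interval queries and the RA/DEC rectangle query.

catalog = [  # (source_id, ra_deg, dec_deg) -- illustrative entries
    ("S1",  10.2, -5.0),
    ("S2",  11.7,  3.4),
    ("S3", 200.0, -5.0),
]

def in_interval(x, lo, hi):
    return lo <= x <= hi

# Scenarios 3/4: all sources in an RA (or DEC) interval.
ra_hits = [s for s, ra, dec in catalog if in_interval(ra, 10.0, 12.0)]

# Scenario 5: all sources in the rectangle (RA1, RA2) x (DEC1, DEC2),
# i.e. the conjunction of the two interval predicates.
rect_hits = [s for s, ra, dec in catalog
             if in_interval(ra, 10.0, 12.0) and in_interval(dec, -6.0, 0.0)]

print(ra_hits, rect_hits)  # ['S1', 'S2'] ['S1']
```

The timing experiments in the report amount to measuring how query structures like these scale as the catalog grows and as indexes are added.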

  3. Academic health sciences library Website navigation: an analysis of forty-one Websites and their navigation tools

    PubMed Central

    Brower, Stewart M.

    2004-01-01

    Background: The analysis included forty-one academic health sciences library (HSL) Websites as captured in the first two weeks of January 2001. Home pages and persistent navigational tools (PNTs) were analyzed for layout, technology, and links, and other general site metrics were taken. Methods: Websites were selected based on rank in the National Network of Libraries of Medicine, with regional and resource libraries given preference on the basis that these libraries are recognized as leaders in their regions and would be the most reasonable source of standards for best practice. A three-page evaluation tool was developed based on previous similar studies. All forty-one sites were evaluated in four specific areas: library general information, Website aids and tools, library services, and electronic resources. Metrics taken for electronic resources included whether bibliographic databases were organized alphabetically by title or by subject area and whether links to specifically named databases were provided. Results: Based on the results, a formula for determining obligatory links was developed, listing items that should appear on all academic HSL Web home pages and PNTs. Conclusions: These obligatory links demonstrate a series of best practices that may be followed in the design and construction of academic HSL Websites. PMID:15494756

  4. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.

  5. 76 FR 5106 - Deposit Requirements for Registration of Automated Databases That Predominantly Consist of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-28

    ... Registration of Automated Databases That Predominantly Consist of Photographs AGENCY: Copyright Office, Library... regarding electronic registration of automated databases that consist predominantly of photographs and group... applications for automated databases that consist predominantly of photographs. The proposed amendments would...

  6. A Web-based open-source database for the distribution of hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun

    2006-10-01

    With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need to store signatures and, more importantly, their metadata. Without properly organised metadata, the signatures themselves are of limited value. To facilitate redistribution of data, a database for the storage and distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software and can be used by the hyperspectral community to share data. Data are uploaded through a simple web-based interface, and the database recognizes the major file formats of ASD, GER and International Spectronics instruments. The database source code is available for download through the hyperspectral.info web domain, and we invite suggestions for additions and modifications through the online forums on the same website.
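    Format recognition on upload can be sketched as a lookup of leading magic bytes in the submitted file. The signature strings below are hypothetical placeholders, not the vendors' real file headers.

```python
# Sketch of spectrometer file-format detection on upload. The magic-byte
# prefixes here are illustrative placeholders, not actual vendor signatures.

SIGNATURES = {
    b"ASD": "ASD FieldSpec",
    b"GER": "GER signature file",
    b"SIG": "International Spectronics",
}

def detect_format(header_bytes):
    for magic, name in SIGNATURES.items():
        if header_bytes.startswith(magic):
            return name
    return "unknown"

print(detect_format(b"ASD\x00spectrum-data"))  # ASD FieldSpec
```

Detecting the format server-side, rather than trusting file extensions, lets the upload interface parse each vendor's layout with the right reader.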

  7. Preliminary geologic map of the Oat Mountain 7.5' quadrangle, Southern California: a digital database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, Russell H.

    1995-01-01

    This database, identified as "Preliminary Geologic Map of the Oat Mountain 7.5' Quadrangle, southern California: A Digital Database," has been approved for release and publication by the Director of the USGS. Although this database has been reviewed and is substantially complete, the USGS reserves the right to revise the data pursuant to further analysis and review. This database is released on condition that neither the USGS nor the U. S. Government may be held liable for any damages resulting from its use. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1993). More specific information about the units may be available in the original sources.

  8. The plant phenological online database (PPODB): an online database for long-term phenological data

    NASA Astrophysics Data System (ADS)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  9. The South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLAM BRC) case register: development and descriptive data

    PubMed Central

    Stewart, Robert; Soremekun, Mishael; Perera, Gayan; Broadbent, Matthew; Callard, Felicity; Denis, Mike; Hotopf, Matthew; Thornicroft, Graham; Lovestone, Simon

    2009-01-01

    Background Case registers have been used extensively in mental health research. Recent developments in electronic medical records, and in computer software to search and analyse these in anonymised format, have the potential to revolutionise this research tool. Methods We describe the development of the South London and Maudsley NHS Foundation Trust (SLAM) Biomedical Research Centre (BRC) Case Register Interactive Search tool (CRIS) which allows research-accessible datasets to be derived from SLAM, the largest provider of secondary mental healthcare in Europe. All clinical data, including free text, are available for analysis in the form of anonymised datasets. Development involved both the building of the system and setting in place the necessary security (with both functional and procedural elements). Results Descriptive data are presented for the Register database as of October 2008. The database at that point included 122,440 cases, 35,396 of whom were receiving active case management under the Care Programme Approach. In terms of gender and ethnicity, the database was reasonably representative of the source population. The most common assigned primary diagnoses were within the ICD mood disorders (n = 12,756) category followed by schizophrenia and related disorders (8158), substance misuse (7749), neuroses (7105) and organic disorders (6414). Conclusion The SLAM BRC Case Register represents a 'new generation' of this research design, built on a long-running system of fully electronic clinical records and allowing in-depth secondary analysis of numerical, string and free text data, whilst preserving anonymity through technical and procedural safeguards. PMID:19674459

  10. openBIS ELN-LIMS: an open-source database for academic laboratories.

    PubMed

    Barillari, Caterina; Ottoz, Diana S M; Fuentes-Serna, Juan Mariano; Ramakrishnan, Chandrasekhar; Rinn, Bernd; Rudolf, Fabian

    2016-02-15

    The open-source platform openBIS (open Biology Information System) offers an Electronic Laboratory Notebook and a Laboratory Information Management System (ELN-LIMS) solution suitable for academic life science laboratories. openBIS ELN-LIMS allows researchers to efficiently document their work, to describe materials and methods and to collect raw and analyzed data. The system comes with a user-friendly web interface where data can be added, edited, browsed and searched. The openBIS software, a user guide and a demo instance are available at https://openbis-eln-lims.ethz.ch. The demo instance contains some data from our laboratory as an example to demonstrate the possibilities of the ELN-LIMS (Ottoz et al., 2014). For rapid local testing, a VirtualBox image of the ELN-LIMS is also available. © The Author 2015. Published by Oxford University Press.

  11. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases.

    PubMed

    Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning

    2007-10-18

    Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. 
The PICR interface, documentation and code examples are available at http://www.ebi.ac.uk/Tools/picr.
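The REST access described above amounts to composing a query that names one source identifier and one or more target databases. The sketch below only builds such a query string; the method name (getUPIForAccession) and parameter names (accession, database) are assumptions for illustration, not confirmed details of the PICR API.

```python
from urllib.parse import urlencode

# Base URL taken from the abstract; the path segment and parameters below
# are hypothetical.
PICR_BASE = "http://www.ebi.ac.uk/Tools/picr/rest/getUPIForAccession"

def build_picr_query(accession, databases):
    """Compose a REST query mapping one protein identifier to target databases."""
    # A sequence of pairs lets the 'database' parameter repeat once per target.
    params = [("accession", accession)] + [("database", db) for db in databases]
    return PICR_BASE + "?" + urlencode(params)

url = build_picr_query("P29375", ["SWISSPROT", "ENSEMBL"])
print(url)
```

An application would then issue an HTTP GET on this URL and parse the returned cross-references.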

  12. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    PubMed Central

    Côté, Richard G; Jones, Philip; Martens, Lennart; Kerrien, Samuel; Reisinger, Florian; Lin, Quan; Leinonen, Rasko; Apweiler, Rolf; Hermjakob, Henning

    2007-01-01

    Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources or querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. 
The PICR interface, documentation and code examples are available at . PMID:17945017

  13. Volcanoes of the World: Reconfiguring a scientific database to meet new goals and expectations

    NASA Astrophysics Data System (ADS)

    Venzke, Edward; Andrews, Ben; Cottrell, Elizabeth

    2015-04-01

    The Smithsonian Global Volcanism Program's (GVP) database of Holocene volcanoes and eruptions, Volcanoes of the World (VOTW), originated in 1971, and was largely populated with content from the IAVCEI Catalog of Active Volcanoes of the World and some independent datasets. Volcanic activity reported by Smithsonian's Bulletin of the Global Volcanism Network and USGS/SI Weekly Activity Reports (and their predecessors), published research, and other varied sources has expanded the database significantly over the years. Three editions of the VOTW were published in book form, creating a catalog with new ways to display data that included regional directories, a gazetteer, and a 10,000-year chronology of eruptions. The widespread dissemination of the data in electronic media since the first GVP website in 1995 has created new challenges and opportunities for this unique collection of information. To better meet current and future goals and expectations, we have recently transitioned VOTW into a SQL Server database. This process included significant schema changes to the previous relational database, data auditing, and content review. We replaced a disparate, confusing, and changeable volcano numbering system with unique and permanent volcano numbers. We reconfigured structures for recording eruption data to allow greater flexibility in describing the complexity of observed activity, adding the ability to distinguish episodes within eruptions (in time and space) and to record dated events, rather than undated characteristics, that take place during an episode. We have added a reference link field in multiple tables to enable attribution of sources at finer levels of detail. 
We now store and connect synonyms and feature names in a more consistent manner, which will allow for morphological features to be given unique numbers and linked to specific eruptions or samples; if the designated overall volcano name is also a morphological feature, it is then also listed and described as that feature. One especially significant audit involved re-evaluating the categories of evidence used to include a volcano in the Holocene list, and reviewing in detail the entries in low-certainty categories. Concurrently, we developed a new data entry system that may in the future allow trusted users outside of Smithsonian to input data into VOTW. A redesigned website now provides new search tools and data download options. We are collaborating with organizations that manage volcano and eruption databases, physical sample databases, and geochemical databases to allow real-time connections and complex queries. VOTW serves the volcanological community by providing a clear and consistent core database of distinctly identified volcanoes and eruptions to advance goals in research, civil defense, and public outreach.
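The schema changes described above (permanent volcano numbers, eruptions subdivided into episodes, dated events within episodes, and per-row reference links) can be sketched as a small relational layout. All table and column names below, and the sample volcano number, are illustrative assumptions, not the actual VOTW schema.

```python
import sqlite3

# In-memory sketch of the hierarchy: volcano -> eruption -> episode -> event.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE volcano (
    volcano_number INTEGER PRIMARY KEY,  -- unique, permanent number
    name TEXT NOT NULL,
    reference_id INTEGER                 -- reference link for attribution
);
CREATE TABLE eruption (
    eruption_id INTEGER PRIMARY KEY,
    volcano_number INTEGER REFERENCES volcano(volcano_number),
    reference_id INTEGER
);
CREATE TABLE episode (
    episode_id INTEGER PRIMARY KEY,
    eruption_id INTEGER REFERENCES eruption(eruption_id),
    start_date TEXT,
    location TEXT                        -- episodes distinguished in time and space
);
CREATE TABLE event (
    event_id INTEGER PRIMARY KEY,
    episode_id INTEGER REFERENCES episode(episode_id),
    event_type TEXT,
    event_date TEXT                      -- events carry their own dates
);
""")
conn.execute("INSERT INTO volcano VALUES (283001, 'Krakatau', NULL)")
conn.execute("INSERT INTO eruption VALUES (1, 283001, NULL)")
conn.execute("INSERT INTO episode VALUES (1, 1, '1883-05-20', 'main vent')")
conn.execute("INSERT INTO event VALUES (1, 1, 'explosion', '1883-08-27')")

# Walk the full hierarchy from volcano name down to a dated event.
row = conn.execute("""
    SELECT v.name, e.event_date FROM volcano v
    JOIN eruption er ON er.volcano_number = v.volcano_number
    JOIN episode ep ON ep.eruption_id = er.eruption_id
    JOIN event e ON e.episode_id = ep.episode_id
""").fetchone()
print(row)
```

The point of the layering is that an event date attaches to an episode, not to the whole eruption, which is what gives the "greater flexibility" the abstract describes.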

  14. ELNET--The Electronic Library Database System.

    ERIC Educational Resources Information Center

    King, Shirley V.

    1991-01-01

    ELNET (Electronic Library Network), a Japanese language database, allows searching of index terms and free text terms from articles and stores the full text of the articles on an optical disc system. Users can order fax copies of the text from the optical disc. This article also explains online searching and discusses machine translation. (LRW)

  15. Successful Keyword Searching: Initiating Research on Popular Topics Using Electronic Databases.

    ERIC Educational Resources Information Center

    MacDonald, Randall M.; MacDonald, Susan Priest

    Students are using electronic resources more than ever before to locate information for assignments. Without the proper search terms, results are incomplete, and students are frustrated. Using the keywords, key people, organizations, and Web sites provided in this book and compiled from the most commonly used databases, students will be able to…

  16. Carbon Nanotube Electron Gun

    NASA Technical Reports Server (NTRS)

    Ribaya, Bryan P. (Inventor); Nguyen, Cattien V. (Inventor)

    2013-01-01

    An electron gun, an electron source for an electron gun, an extractor for an electron gun, and a respective method for producing the electron gun, the electron source and the extractor are disclosed. Embodiments provide an electron source utilizing a carbon nanotube (CNT) bonded to a substrate for increased stability, reliability, and durability. An extractor with an aperture in a conductive material is used to extract electrons from the electron source, where the aperture may substantially align with the CNT of the electron source when the extractor and electron source are mated to form the electron gun. The electron source and extractor may have alignment features for aligning the electron source and the extractor, thereby bringing the aperture and CNT into substantial alignment when assembled. The alignment features may provide and maintain this alignment during operation to improve the field emission characteristics and overall system stability of the electron gun.

  17. Carbon nanotube electron gun

    NASA Technical Reports Server (NTRS)

    Nguyen, Cattien V. (Inventor); Ribaya, Bryan P. (Inventor)

    2010-01-01

    An electron gun, an electron source for an electron gun, an extractor for an electron gun, and a respective method for producing the electron gun, the electron source and the extractor are disclosed. Embodiments provide an electron source utilizing a carbon nanotube (CNT) bonded to a substrate for increased stability, reliability, and durability. An extractor with an aperture in a conductive material is used to extract electrons from the electron source, where the aperture may substantially align with the CNT of the electron source when the extractor and electron source are mated to form the electron gun. The electron source and extractor may have alignment features for aligning the electron source and the extractor, thereby bringing the aperture and CNT into substantial alignment when assembled. The alignment features may provide and maintain this alignment during operation to improve the field emission characteristics and overall system stability of the electron gun.

  18. Using Commercially available Tools for multi-faceted health assessment: Data Integration Lessons Learned

    PubMed Central

    Wilamowska, Katarzyna; Le, Thai; Demiris, George; Thompson, Hilaire

    2013-01-01

    Health monitoring data collected from multiple available intake devices provide a rich resource to support older adult health and wellness. Though large amounts of data can be collected, there is currently a lack of understanding on integration of these various data sources using commercially available products. This article describes an inexpensive approach to integrating data from multiple sources from a recently completed pilot project that assessed older adult wellness, and demonstrates challenges and benefits in pursuing data integration using commercially available products. The data in this project were sourced from a) electronically captured participant intake surveys, and existing commercial software output for b) vital signs and c) cognitive function. All the software used for data integration in this project was freeware and was chosen because of its ease of comprehension by novice database users. The methods and results of this approach provide a model for researchers with similar data integration needs to easily replicate this effort at a low cost. PMID:23728444
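The integration task described above reduces to joining several per-participant tables on a shared identifier. The sketch below uses SQLite as a stand-in for the freeware database tools mentioned; all field names and values are hypothetical.

```python
import sqlite3

# Three toy data sources keyed on participant ID: intake survey, vital
# signs device output, and cognitive test scores.
surveys = [("p01", 78), ("p02", 83)]               # (participant_id, age)
vitals = [("p01", 128, 71), ("p02", 115, 64)]      # (id, systolic, diastolic)
cognitive = [("p01", 27), ("p02", 29)]             # (id, test score)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (pid TEXT PRIMARY KEY, age INTEGER)")
conn.execute("CREATE TABLE vitals (pid TEXT, systolic INTEGER, diastolic INTEGER)")
conn.execute("CREATE TABLE cognition (pid TEXT, score INTEGER)")
conn.executemany("INSERT INTO survey VALUES (?, ?)", surveys)
conn.executemany("INSERT INTO vitals VALUES (?, ?, ?)", vitals)
conn.executemany("INSERT INTO cognition VALUES (?, ?)", cognitive)

# One joined view per participant across all three sources.
merged = conn.execute("""
    SELECT s.pid, s.age, v.systolic, v.diastolic, c.score
    FROM survey s
    JOIN vitals v ON v.pid = s.pid
    JOIN cognition c ON c.pid = s.pid
    ORDER BY s.pid
""").fetchall()
print(merged)
```

In practice the hard part, as the article notes, is getting each commercial device's export into a loadable shape; the join itself is simple once a common identifier exists.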

  19. Dealing with an information overload of health science data: structured utilisation of libraries, distributed knowledge in databases and Web content.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Rieger, Joerg; Meyer, Michael

    2006-01-01

    The organizational structures of web contents and electronic information resources must adapt to the demands of a growing volume of information and user requirements. Otherwise the information society will be threatened by disinformation. The biomedical sciences are especially vulnerable in this regard, since they are strongly oriented toward text-based knowledge sources. Here sustainable improvement can only be achieved by using a comprehensive, integrated approach that not only includes data management but also specifically incorporates the editorial processes, including structuring information sources and publication. The technical resources needed to effectively master these tasks are already available in the form of the data standards and tools of the Semantic Web. They include Rich Site Summaries (RSS), which have become an established means of distributing and syndicating conventional news messages and blogs. They can also provide access to the contents of the previously mentioned information sources, which are conventionally classified as 'deep web' content.

  20. Role of data warehousing in healthcare epidemiology.

    PubMed

    Wyllie, D; Davies, J

    2015-04-01

    Electronic storage of healthcare data, including individual-level risk factors for both infectious and other diseases, is increasing. These data can be integrated at hospital, regional and national levels. Data sources that contain risk factor and outcome information for a wide range of conditions offer the potential for efficient epidemiological analysis of multiple diseases. Opportunities may also arise for monitoring healthcare processes. Integrating diverse data sources presents epidemiological, practical, and ethical challenges. For example, diagnostic criteria, outcome definitions, and ascertainment methods may differ across the data sources. Data volumes may be very large, requiring sophisticated computing technology. Given the large populations involved, perhaps the most challenging aspect is how informed consent can be obtained for the development of integrated databases, particularly when it is not easy to demonstrate their potential. In this article, we discuss some of the ups and downs of recent projects as well as the potential of data warehousing for antimicrobial resistance monitoring. Copyright © 2015. Published by Elsevier Ltd.

  1. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    PubMed

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases were potentially usable for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available in public sources. Our findings have shown that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA among the Asia-Pacific region is needed.

  2. Review and Comparison of the Search Effectiveness and User Interface of Three Major Online Chemical Databases

    ERIC Educational Resources Information Center

    Bharti, Neelam; Leonard, Michelle; Singh, Shailendra

    2016-01-01

    Online chemical databases are the largest source of chemical information and, therefore, the main resource for retrieving results from published journals, books, patents, conference abstracts, and other relevant sources. Various commercial, as well as free, chemical databases are available. SciFinder, Reaxys, and Web of Science are three major…

  3. A database of natural products and chemical entities from marine habitat

    PubMed Central

    Babu, Padavala Ajay; Puppala, Suma Sree; Aswini, Satyavarapu Lakshmi; Vani, Metta Ramya; Kumar, Chinta Narasimha; Prasanna, Tallapragada

    2008-01-01

    Marine compound database consists of marine natural products and chemical entities, collected from various literature sources, which are known to possess bioactivity against human diseases. The database is constructed using HTML. The 182 compounds, in 12 categories, are provided with the source, compound name, 2-dimensional structure, bioactivity and clinical trial information. The database is freely available online and can be accessed at http://www.progenebio.in/mcdb/index.htm PMID:19238254

  4. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
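The row structure described above (Basic Event linked to its source, plus the manipulations applied to the raw estimate) can be sketched in a few lines. All names, rates, and factors below are hypothetical, and the Bayesian step is shown as a simple exposure-weighted point update rather than any specific method from the paper.

```python
# One "row" of the spreadsheet database: the Basic Event, its traceable
# data source, and the manipulations to apply. Values are invented.
basic_events = {
    "VALVE-FAILS-OPEN": {
        "source": "handbook entry, p. 3-42",  # traceability link (hypothetical)
        "base_rate": 1.0e-6,                  # failures per hour, from source
        "stress_factor": 2.5,                 # duty-cycle/environment adjustment
    },
}

def stressed_rate(base_rate, stress_factor):
    """Apply a use/maintenance-cycle stress factor to a raw failure rate."""
    return base_rate * stress_factor

def bayes_update_rate(prior_rate, prior_exposure, failures, new_exposure):
    """Exposure-weighted point update of a failure rate from new experience."""
    return (prior_rate * prior_exposure + failures) / (prior_exposure + new_exposure)

ev = basic_events["VALVE-FAILS-OPEN"]
rate = stressed_rate(ev["base_rate"], ev["stress_factor"])
# Update with 1 observed failure over 100,000 hours of flight experience,
# treating the stressed estimate as a prior worth 1,000,000 hours.
updated = bayes_update_rate(rate, 1.0e6, 1, 1.0e5)
print(rate, updated)
```

The key design point is that `rate` and `updated` are derived inside the database row, so the model only ever references the final number while the raw source value stays traceable.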

  5. Comparison of Online Agricultural Information Services.

    ERIC Educational Resources Information Center

    Reneau, Fred; Patterson, Richard

    1984-01-01

    Outlines major online agricultural information services--agricultural databases, databases with agricultural services, educational databases in agriculture--noting services provided, access to the database, and costs. Benefits of online agricultural database sources (availability of agricultural marketing, weather, commodity prices, management…

  6. Improvement of medication event interventions through use of an electronic database.

    PubMed

    Merandi, Jenna; Morvay, Shelly; Lewe, Dorcas; Stewart, Barb; Catt, Char; Chanthasene, Phillip P; McClead, Richard; Kappeler, Karl; Mirtallo, Jay M

    2013-10-01

    Patient safety enhancements achieved through the use of an electronic Web-based system for responding to adverse drug events (ADEs) are described. A two-phase initiative was carried out at an academic pediatric hospital to improve processes related to "medication event huddles" (interdisciplinary meetings focused on ADE interventions). Phase 1 of the initiative entailed a review of huddles and interventions over a 16-month baseline period during which multiple databases were used to manage the huddle process and staff interventions were assigned via manually generated e-mail reminders. Phase 1 data collection included ADE details (e.g., medications and staff involved, location and date of event) and the types and frequencies of interventions. Based on the phase 1 analysis, an electronic database was created to eliminate the use of multiple systems for huddle scheduling and documentation and to automatically generate e-mail reminders on assigned interventions. In phase 2 of the initiative, the impact of the database during a 5-month period was evaluated; the primary outcome was the percentage of interventions documented as completed after database implementation. During the postimplementation period, 44.7% of assigned interventions were completed, compared with a completion rate of 21% during the preimplementation period, and interventions documented as incomplete decreased from 77% to 43.7% (p < 0.0001). Process changes, education, and medication order improvements were the most frequently documented categories of interventions. Implementation of a user-friendly electronic database improved intervention completion and documentation after medication event huddles.

  7. Reusable data in public health data-bases-problems encountered in Danish Children's Database.

    PubMed

    Høstgaard, Anna Marie; Pape-Haugaard, Louise

    2012-01-01

    Denmark has unique health informatics databases, e.g. "The Children's Database", which since 2009 has held data on all Danish children from birth until 17 years of age. In the current set-up a number of potential sources of error exist, both technical and human, which means that the data are flawed. This gives rise to erroneous statistics and makes the data unsuitable for research purposes. In order to make the data usable, it is necessary to develop new methods for validating the data generation process at the municipal/regional/national level. In the present ongoing research project, two research areas are combined: Public Health Informatics and Computer Science, and both ethnographic and system engineering research methods are used. The project is expected to generate new generic methods and knowledge about electronic data collection and transmission in different social contexts and by different social groups, and thus to be of international importance, since this is sparsely documented from the Public Health Informatics perspective. This paper presents the preliminary results, which indicate that the health information technology used ought to be redesigned, with a thorough insight into work practices as the point of departure.

  8. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  9. Third millennium ideal gas and condensed phase thermochemical database for combustion (with update from active thermochemical tables).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burcat, A.; Ruscic, B.; Chemistry

    2005-07-29

    The thermochemical database of species involved in combustion processes is and has been available for free use for over 25 years. It was first published in print in 1984, approximately 8 years after it was first assembled, and contained 215 species at the time. This is the 7th printed edition and most likely will be the last one in print in the present format, which involves substantial manual labor. The database currently contains more than 1300 species, specifically organic molecules and radicals, but also inorganic species connected to combustion and air pollution. Since 1991 this database has been freely available on the internet, at the Technion-IIT ftp server, and it is continuously expanded and corrected. The database is mirrored daily at an official mirror site, and at random at about a dozen unofficial mirror and 'finger' sites. The present edition contains numerous corrections and many recalculations of data of provisory type by the G3//B3LYP method, a high-accuracy composite ab initio calculation. About 300 species are newly calculated and are not yet published elsewhere. In anticipation of the full coupling, which is under development, the database started incorporating the available (as yet unpublished) values from Active Thermochemical Tables. The electronic version now also contains an XML file of the main database to allow transfer to other formats and ease finding specific information of interest. The database is used by scientists, educators, engineers and students at all levels, dealing primarily with combustion and air pollution, jet engines, rocket propulsion, fireworks, but also by researchers involved in upper atmosphere kinetics, astrophysics, abrasion metallurgy, etc. This introductory article contains explanations of the database and the means to use it, its sources, ways of calculation, and assessments of the accuracy of data.

  10. Freshwater Biological Traits Database (Traits)

    EPA Pesticide Factsheets

    The traits database was compiled for a project on climate change effects on river and stream ecosystems. The traits data, gathered from multiple sources, focused on information published or otherwise well-documented by trustworthy sources.

  11. The 3XMM spectral fit database

    NASA Astrophysics Data System (ADS)

    Georgantopoulos, I.; Corral, A.; Watson, M.; Carrera, F.; Webb, N.; Rosen, S.

    2016-06-01

    I will present the XMMFITCAT database, which is a spectral fit inventory of the sources in the 3XMM catalogue. Spectra are available from the XMM/SSC for all 3XMM sources which have more than 50 background-subtracted counts per module. This work is funded in the framework of the ESA Prodex project. The 3XMM catalog currently covers 877 sq. degrees and contains about 400,000 unique sources. Spectra are available for over 120,000 sources. Spectral fits have been performed with various spectral models. The results are available in the web page http://xraygroup.astro.noa.gr/ and also at the University of Leicester LEDAS database webpage ledas-www.star.le.ac.uk/. The database description as well as some science results in the joint area with SDSS are presented in two recent papers: Corral et al. 2015, A&A, 576, 61 and Corral et al. 2014, A&A, 569, 71. At least for extragalactic sources, the spectral fits will acquire added value when photometric redshifts become available. In the framework of a new Prodex project we have been funded to derive photometric redshifts for the 3XMM sources using machine learning techniques. I will present the techniques as well as the optical near-IR databases that will be used.

  12. Preliminary geologic map of the Piru 7.5' quadrangle, southern California: a digital database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, Russell H.

    1995-01-01

    This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1995). More specific information about the units may be available in the original sources.

  13. Molecule database framework: a framework for creating database applications with chemical structure search capability

    PubMed Central

    2013-01-01

    Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs such as chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore, software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and was created by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization) For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework can be used. I then benchmarked this example application to establish some basic performance expectations for chemical structure searches and for import and export of SD-files.
Conclusions By using a simple web application, it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, due in part to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762

  14. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    PubMed

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The scientists involved must be able to store this data and search it by chemical structure. There are commercial solutions for common needs such as chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore, software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and was created by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization) For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework can be used. I then benchmarked this example application to establish some basic performance expectations for chemical structure searches and for import and export of SD-files.
By using a simple web application, it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-file import and export into simple method calls. The framework offers good search performance on a standard laptop without any database tuning, due in part to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the project's page on Bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.
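    The framework's central abstraction (a structure search as a method call) ultimately resolves to a query against the Bingo cartridge. As a rough illustration only, in Python rather than the framework's Java, and with hypothetical `compounds`/`molfile` table and column names, a substructure search against the Bingo cartridge for PostgreSQL could be parameterized like this:

```python
# Illustrative sketch: the '@ (query, params)::bingo.sub' operator is the
# Bingo PostgreSQL cartridge's substructure-search syntax; the table and
# column names here are invented for the example.

def substructure_query(query_smiles, limit=50):
    """Build a parameterized SQL statement for a Bingo substructure search."""
    sql = (
        "SELECT id, molfile FROM compounds "
        "WHERE molfile @ (%s, '')::bingo.sub "
        "LIMIT %s"
    )
    return sql, (query_smiles, limit)

sql, params = substructure_query("c1ccccc1")  # everything containing a benzene ring
```

Executed through a driver such as psycopg2, the parameters would be bound server-side; the framework hides exactly this kind of statement behind a type-safe method call.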

  15. A multimedia perioperative record keeper for clinical research.

    PubMed

    Perrino, A C; Luther, M A; Phillips, D B; Levin, F L

    1996-05-01

    To develop a multimedia perioperative recordkeeper that provides: 1. synchronous, real-time acquisition of multimedia data; 2. on-line access to the patient's chart data; and 3. advanced data analysis capabilities through integrated multimedia database and analysis applications. To minimize cost and development time, the system design utilized industry-standard hardware components and graphical software development tools. The system was configured to use a Pentium PC complemented with a variety of hardware interfaces to external data sources. These sources included physiologic monitors with data in digital, analog, video, and audio as well as paper-based formats. The development process was guided by trials in over 80 clinical cases and by critiques from numerous users. As a result of this process, a suite of custom software applications was created to meet the design goals. The Perioperative Data Acquisition application manages data collection from a variety of physiological monitors. The Charter application provides for rapid creation of an electronic medical record from the patient's paper-based chart and the investigator's notes. The Multimedia Medical Database application provides a relational database for the organization and management of multimedia data. The Triscreen application provides an integrated data analysis environment with simultaneous, full-motion data display. With recent technological advances in PC power, data acquisition hardware, and software development tools, the clinical researcher now has the ability to collect and examine a more complete perioperative record. It is hoped that the description of the MPR and its development process will assist and encourage others to advance these tools for perioperative research.

  16. Comprehensive Routing Security Development and Deployment for the Internet

    DTIC Science & Technology

    2015-02-01

    feature enhancement and bug fixes. • MySQL: MySQL is a widely used and popular open-source database package. It was chosen for database support in the...RPSTIR depends on several other open source packages. • MySQL: MySQL is used for the local RPKI database cache. • OpenSSL: OpenSSL is used for...cryptographic libraries for X.509 certificates. • ODBC MySQL Connector: ODBC (Open Database Connectivity) is a standard programming interface (API) for

  17. Developing a Nursing Database System in Kenya

    PubMed Central

    Riley, Patricia L; Vindigni, Stephen M; Arudo, John; Waudo, Agnes N; Kamenju, Andrew; Ngoya, Japheth; Oywer, Elizabeth O; Rakuom, Chris P; Salmon, Marla E; Kelley, Maureen; Rogers, Martha; St Louis, Michael E; Marum, Lawrence H

    2007-01-01

    Objective To describe the development, initial findings, and implications of a national nursing workforce database system in Kenya. Principal Findings Creating a national electronic nursing workforce database provides more reliable information on nurse demographics, migration patterns, and workforce capacity. Data analyses are most useful for human resources for health (HRH) planning when workforce capacity data can be linked to worksite staffing requirements. As a result of establishing this database, the Kenya Ministry of Health has improved capability to assess its nursing workforce and document important workforce trends, such as out-migration. Current data identify the United States as the leading recipient country of Kenyan nurses. The overwhelming majority of Kenyan nurses who elect to out-migrate are among Kenya's most qualified. Conclusions The Kenya nursing database is a first step toward facilitating evidence-based decision making in HRH. This database is unique to developing countries in sub-Saharan Africa. Establishing an electronic workforce database requires long-term investment and sustained support by national and global stakeholders. PMID:17489921

  18. The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition

    NASA Astrophysics Data System (ADS)

    Fong, Joseph; Cheung, San Kuen

    In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics, and system analysts have had no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is then translated by the XSD-Translator, whose output is an XSD, our target, referred to as the object language.

  19. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Background Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Method Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g., name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Conclusion Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed. PMID:26560127

  20. Annual Review of Database Developments: 1993.

    ERIC Educational Resources Information Center

    Basch, Reva

    1993-01-01

    Reviews developments in the database industry for 1993. Topics addressed include scientific and technical information; environmental issues; social sciences; legal information; business and marketing; news services; documentation; databases and document delivery; electronic bulletin boards and the Internet; and information industry organizational…

  1. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of established specialist databases that may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing oneself with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is essential for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  2. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of established specialist databases that may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing oneself with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is essential for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  3. Untargeted metabolomic analysis using liquid chromatography quadrupole time-of-flight mass spectrometry for non-volatile profiling of wines.

    PubMed

    Arbulu, M; Sampedro, M C; Gómez-Caballero, A; Goicolea, M A; Barrio, R J

    2015-02-09

    The current study presents a method for comprehensive untargeted metabolomic fingerprinting of the non-volatile profile of the Graciano Vitis vinifera wine variety, using liquid chromatography/electrospray ionization quadrupole time-of-flight mass spectrometry (LC-ESI-QTOF). Pre-treatment of samples, chromatographic columns, mobile phases, elution gradients and ionization sources were evaluated for the extraction of the maximum number of metabolites in red wine. Putative compounds were extracted from the raw data using the Molecular Feature Extractor (MFE) algorithm. For metabolite identification, the WinMet database was designed based on electronic databases and literature research, and it includes only the putative metabolites reported to be present in oenological matrices. The results from WinMet were compared with those in the METLIN database to evaluate how much the databases overlap for performing identifications. The reproducibility of the analysis was assessed using manual processing following replicate injections of Vitis vinifera cv. Graciano wine spiked with external standards. In the present work, 411 different metabolites in Graciano Vitis vinifera red wine were identified, including primary wine metabolites such as sugars (4%), amino acids (23%), biogenic amines (4%), fatty acids (2%) and organic acids (32%), and secondary metabolites such as phenols (27%) and esters (8%). Significant differences between the Tempranillo and Graciano varieties were related to the presence of fifteen specific compounds. Copyright © 2014 Elsevier B.V. All rights reserved.
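    At its core, the putative-identification step described above (matching extracted features against WinMet or METLIN entries) is a mass lookup within a tolerance. The sketch below shows that matching logic; the three-compound mini-database and the 10 ppm tolerance are invented for illustration and do not reflect the actual WinMet content or the study's settings:

```python
# Hypothetical mini-database: name -> monoisotopic neutral mass (Da).
WINE_DB = {
    "tartaric acid": 150.0164,
    "malic acid": 134.0215,
    "gallic acid": 170.0215,
}

def annotate(mass, db, tol_ppm=10.0):
    """Return the names of database entries within tol_ppm of `mass`."""
    return [name for name, m in db.items()
            if abs(mass - m) / m * 1e6 <= tol_ppm]

print(annotate(150.0165, WINE_DB))  # -> ['tartaric acid']
```

A real pipeline would additionally use retention time and isotope patterns to discriminate between isobaric candidates, which a mass-only lookup cannot separate.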

  4. Identification of biomedical journals in Spain and Latin America.

    PubMed

    Bonfill, Xavier; Osorio, Dimelza; Posso, Margarita; Solà, Ivan; Rada, Gabriel; Torres, Ania; García Dieguez, Marcelo; Piña-Pozas, Maricela; Díaz-García, Luisa; Tristán, Mario; Gandarilla, Omar; Rincón-Valenzuela, David A; Martí, Arturo; Hidalgo, Ricardo; Simancas-Racines, Daniel; López, Luis; Correa, Ricardo; Rojas-De-Arias, Antonieta; Loza, César; Gianneo, Óscar; Pardo, Hector

    2015-12-01

    Journals in languages other than English that publish original clinical research are often not well covered in the main biomedical databases and therefore often not included in systematic reviews. This study aimed to identify Spanish language biomedical journals from Spain and Latin America and to describe their main features. Journals were identified in electronic databases, publishers' catalogues and local registries. Eligibility was determined by assessing data from these sources or the journals' websites, when available. A total of 2457 journals were initially identified; 1498 met inclusion criteria. Spain (27.3%), Mexico (16.0%), Argentina (15.1%) and Chile (11.9%) had the highest number of journals. Most (85.8%) are currently active; 87.8% have an ISSN. The median and mean length of publication were 22 and 29 years, respectively. A total of 66.0% were indexed in at least one database; 3.0% had an impact factor in 2012. A total of 845 journals had websites (56.4%), of which 700 (82.8%) were searchable and 681 (80.6%) free of charge. Most of the identified journals have no impact factor or are not indexed in any of the major databases. The list of identified biomedical journals can be a useful resource when conducting hand searching activities and identifying clinical trials that otherwise would not be retrieved. © 2015 Health Libraries Group.

  5. Electronic Publishing.

    ERIC Educational Resources Information Center

    Lancaster, F. W.

    1989-01-01

    Describes various stages involved in the applications of electronic media to the publishing industry. Highlights include computer typesetting, or photocomposition; machine-readable databases; the distribution of publications in electronic form; computer conferencing and electronic mail; collaborative authorship; hypertext; hypermedia publications;…

  6. Rapid development of entity-based data models for bioinformatics with persistence object-oriented design and structured interfaces.

    PubMed

    Ezra Tsur, Elishai

    2017-01-01

    Databases are imperative for research in bioinformatics and computational biology. Current challenges in database design include data heterogeneity and context-dependent interconnections between data entities. These challenges drove the development of unified data interfaces and specialized databases. The curation of specialized databases is an ever-growing challenge due to the introduction of new data sources and the emergence of new relational connections between established datasets. Here, an open-source framework for the curation of specialized databases is proposed. The framework supports user-designed models of data encapsulation, object persistency and structured interfaces to local and external data sources such as MalaCards, BioModels and the National Center for Biotechnology Information (NCBI) databases. The proposed framework was implemented using Java as the development environment, EclipseLink as the data persistency agent and Apache Derby as the database manager. Syntactic analysis was based on the J3D, jsoup, Apache Commons and w3c.dom open libraries. Finally, the construction of a specialized database for aneurysm-associated vascular diseases is demonstrated. This database contains 3-dimensional geometries of aneurysms, patients' clinical information, articles, biological models, related diseases and our recently published model of aneurysms' risk of rupture. The framework is available at http://nbel-lab.com.
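    The stack above is Java-based (EclipseLink for persistence, Apache Derby as the store). Purely to illustrate the entity-persistence idea in a few lines, here is a much smaller Python/sqlite3 sketch, not the framework itself, in which a hypothetical Aneurysm entity is mapped to a table and queried back by attribute:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Aneurysm:  # hypothetical entity, loosely mirroring the demo database
    patient_id: str
    location: str
    diameter_mm: float

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE aneurysm (patient_id TEXT, location TEXT, diameter_mm REAL)")

def save(a):
    """Persist one entity as a table row."""
    con.execute("INSERT INTO aneurysm VALUES (?, ?, ?)",
                (a.patient_id, a.location, a.diameter_mm))

def find_by_location(loc):
    """Retrieve entities back from the store by attribute value."""
    rows = con.execute(
        "SELECT patient_id, location, diameter_mm FROM aneurysm "
        "WHERE location = ?", (loc,))
    return [Aneurysm(*r) for r in rows]

save(Aneurysm("p001", "basilar", 7.2))
print(find_by_location("basilar"))  # one matching record
```

A persistence framework such as EclipseLink automates exactly this object-to-table mapping, so user code works with entities rather than SQL.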

  7. [Access control management in electronic health records: a systematic literature review].

    PubMed

    Carrión Señor, Inmaculada; Fernández Alemán, José Luis; Toval, Ambrosio

    2012-01-01

    This study presents the results of a systematic literature review of aspects related to access control in electronic health records systems, wireless security and privacy and security training for users. Information sources consisted of original articles found in Medline, ACM Digital Library, Wiley InterScience, IEEE Digital Library, Science@Direct, MetaPress, ERIC, CINAHL and Trip Database, published between January 2006 and January 2011. A total of 1,208 articles were extracted using a predefined search string and were reviewed by the authors. The final selection consisted of 24 articles. Of the selected articles, 21 dealt with access policies in electronic health records systems. Eleven articles discussed whether access to electronic health records should be granted by patients or by health organizations. Wireless environments were only considered in three articles. Finally, only four articles explicitly mentioned that technical training of staff and/or patients is required. Role-based access control is the preferred mechanism to deploy access policy by the designers of electronic health records. In most systems, access control is managed by users and health professionals, which promotes patients' right to control personal information. Finally, the security of wireless environments is not usually considered. However, one line of research is eHealth in mobile environments, called mHealth. Copyright © 2011 SESPAS. Published by Elsevier Espana. All rights reserved.

  8. A technique for routinely updating the ITU-R database using radio occultation electron density profiles

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno

    2013-09-01

    Well-credited and widely used ionospheric models, such as the International Reference Ionosphere (IRI) or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute them using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving NmF2 and hmF2 values, together with their error estimates, from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps, estimated by the Least Squares technique, is equivalent to about 7% of the estimated value for the F2-peak electron density, and from 2.0 to 5.6 km (about 2%) for the height.
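    To illustrate the re-weighted least-squares step, the sketch below fits a simple parabolic approximation of the F2 peak to synthetic profile samples and reads NmF2 and hmF2 off the vertex, down-weighting large residuals with a median-scaled, Huber-like rule. The paper's actual profile model and weighting scheme are not specified in this summary, so both choices here are assumptions made for illustration:

```python
def solve3(A, y):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [v] for row, v in zip(A, y)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def irls_peak(h, ne, iters=5):
    """Retrieve (hmF2, NmF2) from a re-weighted parabola fit near the peak."""
    h0 = sum(h) / len(h)                 # centre heights for numerical stability
    x = [hi - h0 for hi in h]
    w = [1.0] * len(h)
    for _ in range(iters):
        # weighted normal equations for Ne(x) = a + b*x + c*x^2
        A = [[sum(wi * xi ** (i + j) for wi, xi in zip(w, x)) for j in range(3)]
             for i in range(3)]
        rhs = [sum(wi * ni * xi ** i for wi, ni, xi in zip(w, ne, x))
               for i in range(3)]
        a, b, c = solve3(A, rhs)
        # down-weight samples whose residual exceeds the median residual
        res = [abs(ni - (a + b * xi + c * xi ** 2)) for xi, ni in zip(x, ne)]
        s = sorted(res)[len(res) // 2] or 1.0
        w = [1.0 if r <= s else s / r for r in res]
    xm = -b / (2 * c)                    # vertex of the fitted parabola
    return h0 + xm, a + b * xm + c * xm ** 2

h = [250, 270, 290, 300, 310, 330, 350]          # heights, km
ne = [1e12 - 1e8 * (hi - 300) ** 2 for hi in h]  # synthetic profile, el/m^3
ne[3] = 3e11                                     # one corrupted sample
hmF2, NmF2 = irls_peak(h, ne)                    # recovers ~300 km and ~1e12
```

The reweighting loop is what lets the fit shrug off the corrupted sample; a plain least-squares fit over the same data would be pulled well away from the true peak.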

  9. Cryogenic Information Center

    NASA Technical Reports Server (NTRS)

    Mohling, Robert A.; Marquardt, Eric D.; Fusilier, Fred C.; Fesmire, James E.

    2003-01-01

    The Cryogenic Information Center (CIC) is a not-for-profit corporation dedicated to preserving and distributing cryogenic information to government, industry, and academia. The heart of the CIC is a uniform source of cryogenic data including analyses, design, materials and processes, and test information traceable back to the Cryogenic Data Center of the former National Bureau of Standards. The electronic database is a national treasure containing over 146,000 specific bibliographic citations of cryogenic literature and thermophysical property data dating back to 1829. A new technical/bibliographic inquiry service can perform searches and technical analyses. The Cryogenic Material Properties (CMP) Program consists of computer codes using empirical equations to determine thermophysical material properties with emphasis on the 4-300K range. CMP's objective is to develop a user-friendly standard material property database using the best available data so government and industry can conduct more accurate analyses. The CIC serves to benefit researchers, engineers, and technologists in cryogenics and cryogenic engineering, whether they are new or experienced in the field.

  10. New Zealand's National Landslide Database

    NASA Astrophysics Data System (ADS)

    Rosser, B.; Dellow, S.; Haubrook, S.; Glassey, P.

    2016-12-01

    Since 1780, landslides have caused an average of about 3 deaths a year in New Zealand and have cost the economy an average of at least NZ$250M/a (0.1% GDP). To understand the risk posed by landslide hazards to society, a thorough knowledge of where, when and why different types of landslides occur is vital. The main objective for establishing the database was to provide a centralised national-scale, publically available database to collate landslide information that could be used for landslide hazard and risk assessment. Design of a national landslide database for New Zealand required consideration of both existing landslide data stored in a variety of digital formats, and future data, yet to be collected. Pre-existing databases were developed and populated with data reflecting the needs of the landslide or hazard project, and the database structures of the time. Bringing these data into a single unified database required a new structure capable of storing and delivering data at a variety of scales and accuracy and with different attributes. A "unified data model" was developed to enable the database to hold old and new landslide data irrespective of scale and method of capture. The database contains information on landslide locations and where available: 1) the timing of landslides and the events that may have triggered them; 2) the type of landslide movement; 3) the volume and area; 4) the source and debris tail; and 5) the impacts caused by the landslide. Information from a variety of sources including aerial photographs (and other remotely sensed data), field reconnaissance and media accounts has been collated and is presented for each landslide along with metadata describing the data sources and quality. There are currently nearly 19,000 landslide records in the database that include point locations, polygons of landslide source and deposit areas, and linear features. 
Several large datasets are awaiting upload, which will bring the total number of landslides to over 100,000. The geospatial database is publicly available via the Internet. Software components, including the underlying database (PostGIS), the web map server (GeoServer) and the web application, use open-source software. The hope is that others will add relevant information to the database as well as download the data contained in it.
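    Because the back end is PostGIS, spatial retrieval reduces to standard PostGIS SQL. As a sketch, a bounding-box query for landslide records could be parameterized as below; the `landslides`/`geom` table and column names are hypothetical, while ST_MakeEnvelope and ST_Intersects are standard PostGIS functions:

```python
# Build a parameterized PostGIS query for landslides intersecting a
# geographic bounding box (lon/lat, WGS 84 by default).

def landslides_in_bbox(xmin, ymin, xmax, ymax, srid=4326):
    """Return (sql, params) selecting records whose geometry hits the box."""
    sql = (
        "SELECT id, movement_type, volume_m3 FROM landslides "
        "WHERE ST_Intersects(geom, ST_MakeEnvelope(%s, %s, %s, %s, %s))"
    )
    return sql, (xmin, ymin, xmax, ymax, srid)

sql, params = landslides_in_bbox(172.5, -43.7, 173.0, -43.3)
```

With a spatial index on `geom`, such a query stays fast even at the 100,000-record scale mentioned above, which is the usual reason for choosing a PostGIS back end.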

  11. The Plant Organelles Database 3 (PODB3) update 2014: integrating electron micrographs and new options for plant organelle research.

    PubMed

    Mano, Shoji; Nakamura, Takanori; Kondo, Maki; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nagatani, Akira; Nishimura, Mikio

    2014-01-01

    The Plant Organelles Database 2 (PODB2), which was first launched in 2006 as PODB, provides static image and movie data of plant organelles, protocols for plant organelle research and external links to relevant websites. PODB2 has facilitated plant organellar research and the understanding of plant organelle dynamics. To provide comprehensive information on plant organelles in more detail, PODB2 was updated to PODB3 (http://podb.nibb.ac.jp/Organellome/). PODB3 contains two additional components: the electron micrograph database and the perceptive organelles database. Through the electron micrograph database, users can examine the subcellular and/or suborganellar structures in various organs of wild-type and mutant plants. The perceptive organelles database provides information on organelle dynamics in response to external stimuli. In addition to the extra components, the user interface for access has been enhanced in PODB3. The data in PODB3 are directly submitted by plant researchers and can be freely downloaded for use in further analysis. PODB3 contains all the information included in PODB2, and the volume of data and protocols deposited in PODB3 continue to grow steadily. We welcome contributions of data from all plant researchers to enhance the utility and comprehensiveness of PODB3.

  12. Is More Always Better?

    ERIC Educational Resources Information Center

    Bell, Steven J.

    2003-01-01

    Discusses full-text databases and whether existing aggregator databases are meeting user needs. Topics include the need for better search interfaces; concepts of quality research and information retrieval; information overload; full text in electronic journal collections versus aggregator databases; underrepresentation of certain disciplines; and…

  13. Nurses' perceptions of ethical issues in the care of older people.

    PubMed

    Rees, Jenny; King, Lindy; Schmitz, Karl

    2009-07-01

    The aim of this thematic literature review is to explore nurses' perceptions of ethical issues in the care of older people. Electronic databases were searched from September 1997 to September 2007 using specific key words with tight inclusion criteria, which revealed 17 primary research reports. The data analysis involved repeated reading of the findings and sorting of those findings into four themes. These themes are: sources of ethical issues for nurses; differences in perceptions between nurses and patients/relatives; nurses' personal responses to ethical issues; and the patient-nurse relationship. The findings reveal that ageism is one of the major sources of the ethical issues that arise for nurses caring for older people. Education and organizational change can combat ageist attitudes. Wider training is required in the care of older people, workplace skills, palliative care and pain management for older people. The demands of a changing global demography will necessitate further research in this field.

  14. Indian research on disaster and mental health

    PubMed Central

    Kar, Nilamadhab

    2010-01-01

    The primary source for this annotation on disaster mental health research is the Indian Journal of Psychiatry. Key words such as disasters, earthquake, cyclone, tsunami and flood were searched in its electronic database, and relevant articles are discussed. Cross-referenced articles and relevant research on disasters in India published elsewhere were the secondary sources of information. There have been many epidemiological studies but only a few interventional studies on disasters in India. Prevalence figures for psychiatric disorders varied considerably across studies, owing to the nature and severity of the disaster, the degree of loss, the support available, and probably also the study methodology. Suggestions for intervention included pre-disaster planning, training of disaster workers, utilization of community-level volunteers as counselors, and strengthening existing individual, social and spiritual coping strategies. There is a need for more longitudinal follow-up studies and interventional studies. PMID:21836696

  15. Utilisation and Impact of the Essential Electronic Agricultural Database (TEEAL) on Library Services in a Nigerian University of Agriculture

    ERIC Educational Resources Information Center

    Oduwole, A. A.; Sowole, A. O.

    2006-01-01

    Purpose: This study examined the utilisation of the Essential Electronic Agricultural Library database (TEEAL) at the University of Agriculture Library, Abeokuta, Nigeria. Design/methodology/approach: Data collection was by questionnaire following a purposive sampling technique. A total of 104 out of 150 (69.3 per cent) responses were received and…

  16. Herpes zoster surveillance using electronic databases in the Valencian Community (Spain)

    PubMed Central

    2013-01-01

    Background Epidemiologic data on Herpes Zoster (HZ) disease in Spain are scarce. The objective of this study was to assess the epidemiology of HZ in the Valencian Community (Spain), using outpatient and hospital electronic health databases. Methods Data from 2007 to 2010 were collected from computerized health databases covering a population of around 5 million inhabitants. Diagnoses were recorded by physicians using the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). A sample of medical records selected under different criteria was reviewed by a general practitioner to assess the reliability of codification. Results The average annual incidence of HZ was 4.60 per 1000 person-years (PY) for all ages (95% CI: 4.57-4.63); HZ was more frequent in women [5.32/1000 PY (95% CI: 5.28-5.37)] and was strongly age-related, with a peak incidence at 70-79 years. A total of 7.16/1000 cases of HZ required hospitalization. Conclusions The electronic health database used in the Valencian Community is a reliable electronic surveillance tool for HZ disease and will be useful to define trends in disease burden before and after HZ vaccine introduction. PMID:24094135
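
The incidence figures above are rates per 1,000 person-years. A minimal sketch of that calculation follows; the counts are invented to land near the reported overall rate of 4.60/1000 PY and are not the study's actual data.

```python
# Sketch: incidence per 1,000 person-years, as reported for herpes zoster.
# The case and person-year counts below are illustrative only.
def incidence_per_1000_py(cases, person_years):
    return 1000.0 * cases / person_years

rate = incidence_per_1000_py(cases=23000, person_years=5_000_000)
print(round(rate, 2))  # 4.6
```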

  17. Inferring pregnancy episodes and outcomes within a network of observational databases

    PubMed Central

    Ryan, Patrick; Fife, Daniel; Gifkins, Dina; Knoll, Chris; Friedman, Andrew

    2018-01-01

    Administrative claims and electronic health records are valuable resources for evaluating pharmaceutical effects during pregnancy. However, direct measures of gestational age are generally not available. Establishing a reliable approach to infer the duration and outcome of a pregnancy could improve pharmacovigilance activities. We developed and applied an algorithm to define pregnancy episodes in four observational databases: three US-based claims databases, Truven MarketScan® Commercial Claims and Encounters (CCAE), Truven MarketScan® Multi-state Medicaid (MDCD), and Optum ClinFormatics® (Optum); and one non-US database, the United Kingdom (UK)-based Clinical Practice Research Datalink (CPRD). Pregnancy outcomes were classified as live births, stillbirths, abortions and ectopic pregnancies. Start dates were estimated using a derived hierarchy of available pregnancy markers, including records such as last menstrual period and nuchal ultrasound dates. Validation included clinical adjudication of 700 electronic Optum and CPRD pregnancy episode profiles to assess the operating characteristics of the algorithm, and a comparison of the algorithm's Optum pregnancy start estimates to starts based on dates of assisted conception procedures. Distributions of pregnancy outcome types were similar across all four data sources, and pregnancy episode lengths were as expected for all outcomes, except term lengths in episodes that used amenorrhea and urine pregnancy tests for start estimation. Validation survey results showed the highest agreement between reviewer-chosen and algorithm-derived classifications for questions assessing pregnancy status and accuracy of outcome category, with 99-100% agreement for Optum and CPRD. Outcome date agreement within seven days in either direction ranged from 95-100%, while start date agreement within seven days in either direction ranged from 90-97%. 
In Optum validation sensitivity analysis, a total of 73% of algorithm estimated starts for live births were in agreement with fertility procedure estimated starts within two weeks in either direction; ectopic pregnancy 77%, stillbirth 47%, and abortion 36%. An algorithm to infer live birth and ectopic pregnancy episodes and outcomes can be applied to multiple observational databases with acceptable accuracy for further epidemiologic research. Less accuracy was found for start date estimations in stillbirth and abortion outcomes in our sensitivity analysis, which may be expected given the nature of the outcomes. PMID:29389968
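
The start-date estimation described above relies on a hierarchy of pregnancy markers. A toy sketch of that idea follows, assuming hypothetical marker names; the published algorithm's hierarchy is richer and derived from the data.

```python
# Sketch: pick the start-date estimate from the most reliable marker
# available for an episode (marker names here are hypothetical).
from datetime import date

HIERARCHY = [
    "last_menstrual_period",       # most direct marker
    "nuchal_ultrasound_derived",   # gestational-age-based estimate
    "outcome_minus_term",          # fallback: outcome date minus term length
]

def estimate_start(markers):
    """Return (marker_used, start_date) for the highest-priority marker."""
    for name in HIERARCHY:
        if name in markers:
            return name, markers[name]
    return None, None

episode = {"nuchal_ultrasound_derived": date(2015, 3, 2),
           "outcome_minus_term": date(2015, 3, 10)}
print(estimate_start(episode))  # the ultrasound-derived date wins
```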

  18. Modelling Conditions and Health Care Processes in Electronic Health Records: An Application to Severe Mental Illness with the Clinical Practice Research Datalink.

    PubMed

    Olier, Ivan; Springate, David A; Ashcroft, Darren M; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos

    2016-01-01

    The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness (SMI) as an example. We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. We describe a framework for researchers of Electronic Health Records databases for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists.
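
The stub-based lookup that pcdsearch performs can be sketched as follows. The code dictionary below uses toy Read-code-like entries, not real CPRD codes, and the actual commands are implemented in Stata and R rather than Python.

```python
# Sketch: match a clinical code dictionary against clinician-suggested
# word-stubs and code-stubs (entries below are invented, not real Read codes).
codes = {
    "E11..": "schizophrenic disorders",
    "E110.": "simple schizophrenia",
    "E13..": "paranoid states",
    "F20..": "unrelated example term",
}

def stub_search(codes, word_stubs=(), code_stubs=()):
    """Return entries whose code starts with a code-stub or whose term
    contains a word-stub; the output is then reviewed by clinicians."""
    hits = {}
    for code, term in codes.items():
        if any(code.startswith(s) for s in code_stubs) or \
           any(s in term for s in word_stubs):
            hits[code] = term
    return hits

print(stub_search(codes, word_stubs=["schizo"], code_stubs=["E13"]))
```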

  19. The Chandra Source Catalog: Storage and Interfaces

    NASA Astrophysics Data System (ADS)

    van Stone, David; Harbo, Peter N.; Tibbetts, Michael S.; Zografou, Panagoula; Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is part of the Chandra Data Archive (CDA) at the Chandra X-ray Center. The catalog contains source properties and associated data objects such as images, spectra, and lightcurves. The source properties are stored in relational databases, and the data objects are stored in files with their metadata stored in databases. The CDA supports different versions of the catalog: multiple fixed release versions and a live database version. There are several interfaces to the catalog: CSCview, a graphical interface for building and submitting queries and for retrieving data objects; a command-line interface for property and source searches using ADQL; and VO-compliant services discoverable through the VO registry. This poster describes the structure of the catalog and provides an overview of the interfaces.
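
As a rough illustration of the ADQL-based access mentioned above, the helper below builds a cone-search query string. The table and column names are assumptions based on common VO conventions, not the actual CSC schema.

```python
# Sketch: constructing an ADQL cone-search query of the general form
# VO services accept (table/column names here are hypothetical).
def cone_search_adql(table, ra, dec, radius_deg):
    return (
        f"SELECT name, ra, dec FROM {table} "
        f"WHERE 1=CONTAINS(POINT('ICRS', ra, dec), "
        f"CIRCLE('ICRS', {ra}, {dec}, {radius_deg}))"
    )

# Example: sources within 0.1 degrees of a position in Ophiuchus.
q = cone_search_adql("master_source", 246.8, -24.5, 0.1)
print(q)
```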

  20. Governance and oversight of researcher access to electronic health data: the role of the Independent Scientific Advisory Committee for MHRA database research, 2006-2015.

    PubMed

    Waller, P; Cassell, J A; Saunders, M H; Stevens, R

    2017-03-01

    In order to promote understanding of UK governance and assurance relating to electronic health records research, we present and discuss the role of the Independent Scientific Advisory Committee (ISAC) for MHRA database research in evaluating protocols proposing the use of the Clinical Practice Research Datalink. We describe the development of the Committee's activities between 2006 and 2015, alongside growth in data linkage and wider national electronic health records programmes, including the application and assessment processes and our approach to undertaking this work. Our model can provide independence, challenge and support to data providers such as the Clinical Practice Research Datalink database, which has been used for well over 1,000 medical research projects. ISAC's role in scientific oversight ensures that feasible and scientifically acceptable plans are in place, while its combination of lay and professional membership addresses governance issues in order to protect the integrity of the database and ensure that public confidence is maintained.

  1. California Fault Parameters for the National Seismic Hazard Maps and Working Group on California Earthquake Probabilities 2007

    USGS Publications Warehouse

    Wills, Chris J.; Weldon, Ray J.; Bryant, W.A.

    2008-01-01

    This report describes development of fault parameters for the 2007 update of the National Seismic Hazard Maps and the Working Group on California Earthquake Probabilities (WGCEP, 2007). These reference parameters are contained within a database intended to be a source of values for use by scientists interested in producing either seismic hazard or deformation models to better understand the current seismic hazards in California. These parameters include descriptions of the geometry and rates of movement of faults throughout the state. They are intended to provide a starting point for the development of more sophisticated deformation models that incorporate known rates of movement on faults as well as geodetic measurements of crustal movement and the rates of movement of the tectonic plates. The values will be used in developing the next generation of the time-independent National Seismic Hazard Maps and the time-dependent seismic hazard calculations being developed for the WGCEP. Due to the multiple uses of this information, development of these parameters has been coordinated between USGS, CGS and SCEC. SCEC provided the database development and editing tools, in consultation with USGS, Golden. This database has been implemented in Oracle and supports on-the-fly electronic access. A GUI-based application has also been developed to aid in populating the database. Both the continually updated 'living' version of this database and any locked-down official releases (e.g., used in a published model for calculating earthquake probabilities or seismic shaking hazards) are part of the USGS Quaternary Fault and Fold Database http://earthquake.usgs.gov/regional/qfaults/ . CGS has been primarily responsible for updating and editing the fault parameters, with extensive input from USGS and SCEC scientists.

  2. Fast in-database cross-matching of high-cadence, high-density source lists with an up-to-date sky model

    NASA Astrophysics Data System (ADS)

    Scheers, B.; Bloemen, S.; Mühleisen, H.; Schellart, P.; van Elteren, A.; Kersten, M.; Groot, P. J.

    2018-04-01

    Coming high-cadence wide-field optical telescopes will image hundreds of thousands of sources per minute. Besides inspecting the near real-time data streams for transient and variability events, the accumulated data archive is a wealthy laboratory for making complementary scientific discoveries. The goal of this work is to optimise column-oriented database techniques to enable the construction of a full-source and light-curve database for large-scale surveys that is accessible to the astronomical community. We adopted LOFAR's Transients Pipeline as the baseline and modified it to enable the processing of optical images that have much higher source densities. The pipeline adds new source lists to the archive database, while cross-matching them with the known catalogued sources in order to build a full light-curve archive. We investigated several techniques for indexing and partitioning the largest tables, allowing for faster positional source look-ups in the cross-matching algorithms. We monitored all query run times in long-term pipeline runs in which we processed a subset of IPHAS data with image source density peaks over 170,000 per field of view (500,000 per square degree). Our analysis demonstrates that horizontal table partitions of one-degree declination widths control the query run times. An index strategy in which the partitions are densely sorted by source declination yields a further improvement. Most queries run in sublinear time and a few (< 20%) run in linear time, because of dependencies on input source-list and result-set size. We observed that, for this logical database partitioning schema, the limiting cadence the pipeline achieved when processing IPHAS data is 25 s.
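
The one-degree declination partitioning described above can be sketched in miniature. A production system performs this inside the column-store database rather than in Python, and the matching radius and coordinates below are illustrative.

```python
# Sketch: one-degree declination partitioning to narrow positional
# cross-matching, mirroring the horizontal table partitions described above.
import math
from collections import defaultdict

def build_partitions(catalogue):
    parts = defaultdict(list)
    for src in catalogue:
        parts[math.floor(src["dec"])].append(src)  # 1-degree dec bins
    return parts

def crossmatch(detection, parts, radius=0.001):
    """Compare only against sources in the detection's bin and its
    neighbours, instead of scanning the whole catalogue."""
    band = math.floor(detection["dec"])
    matches = []
    for b in (band - 1, band, band + 1):  # neighbouring bins cover edges
        for src in parts.get(b, []):
            if abs(src["ra"] - detection["ra"]) < radius and \
               abs(src["dec"] - detection["dec"]) < radius:
                matches.append(src["id"])
    return matches

catalogue = [{"id": 7, "ra": 120.5004, "dec": 30.0002},
             {"id": 8, "ra": 200.1, "dec": -5.0}]
parts = build_partitions(catalogue)
print(crossmatch({"ra": 120.5, "dec": 30.0}, parts))  # [7]
```

A box test in RA/Dec stands in for the proper angular-distance test a real pipeline uses; the partitioning idea is the same either way.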

  3. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics.

    PubMed

    Deutsch, Eric W; Sun, Zhi; Campbell, David S; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S; Moritz, Robert L

    2016-11-04

    The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances, a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ∼20,000 primary isoforms plus contaminants to a very large database that includes almost all nonredundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. 
We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the discovered peptides against a more complex database. We have set up an automated system that downloads all the source databases on the first of each month and automatically generates a new set of search databases and makes them available for download at http://www.peptideatlas.org/thisp/ .
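
The two-pass strategy recommended above (identify peptides against a simple database, then check their uniqueness against a more complex one) can be sketched with toy sequences; real searches use dedicated engines, and the accessions and sequences below are invented.

```python
# Sketch: check whether a peptide identified against a small (Tier 1-like)
# database remains unique in a larger (Tier 4-like) one.
# Accessions and sequences are invented for illustration.
tier1 = {"ALBU_HUMAN": "MKWVTFISLLFLFSSAYS"}
tier4 = {"ALBU_HUMAN": "MKWVTFISLLFLFSSAYS",
         "ALBU_VARIANT": "MKWVTFISLLLLFSSAYS"}  # variant sharing a peptide

def proteins_containing(peptide, db):
    """All proteins whose sequence contains the peptide (substring match
    stands in for a proper enzymatic-digest lookup)."""
    return sorted(name for name, seq in db.items() if peptide in seq)

peptide = "MKWVTFISLL"
print(proteins_containing(peptide, tier1))  # unique in the small database
print(proteins_containing(peptide, tier4))  # shared in the large database
```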

  4. Tiered Human Integrated Sequence Search Databases for Shotgun Proteomics

    PubMed Central

    Deutsch, Eric W.; Sun, Zhi; Campbell, David S.; Binz, Pierre-Alain; Farrah, Terry; Shteynberg, David; Mendoza, Luis; Omenn, Gilbert S.; Moritz, Robert L.

    2016-01-01

    The results of analysis of shotgun proteomics mass spectrometry data can be greatly affected by the selection of the reference protein sequence database against which the spectra are matched. For many species there are multiple sources from which somewhat different sequence sets can be obtained. This can lead to confusion about which database is best in which circumstances, a problem especially acute in human sample analysis. All sequence databases are genome-based, with sequences for the predicted genes and their protein translation products compiled. Our goal is to create a set of primary sequence databases that comprise the union of sequences from many of the different available sources and make the result easily available to the community. We have compiled a set of four sequence databases of varying sizes, from a small database consisting of only the ~20,000 primary isoforms plus contaminants to a very large database that includes almost all non-redundant protein sequences from several sources. This set of tiered, increasingly complete human protein sequence databases suitable for mass spectrometry proteomics sequence database searching is called the Tiered Human Integrated Search Proteome set. In order to evaluate the utility of these databases, we have analyzed two different data sets, one from the HeLa cell line and the other from normal human liver tissue, with each of the four tiers of database complexity. The result is that approximately 0.8%, 1.1%, and 1.5% additional peptides can be identified for Tiers 2, 3, and 4, respectively, as compared with the Tier 1 database, at substantially increasing computational cost. This increase in computational cost may be worth bearing if the identification of sequence variants or the discovery of sequences that are not present in the reviewed knowledge base entries is an important goal of the study. 
We find that it is useful to search a data set against a simpler database, and then check the uniqueness of the discovered peptides against a more complex database. We have set up an automated system that downloads all the source databases on the first of each month and automatically generates a new set of search databases and makes them available for download at http://www.peptideatlas.org/thisp/. PMID:27577934

  5. NBIC: National Ballast Information Clearinghouse

    Science.gov Websites

    Cite NBIC Database as: National Ballast Information Clearinghouse 2016. NBIC Online Database. Electronic publication, Smithsonian Environmental Research Center &

  6. 49 CFR 535.8 - Reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... must submit information electronically through the EPA database system as the single point of entry for... agencies are not prepared to receive information through the EPA database system, manufacturers are... applications for certificates of conformity in accordance through the EPA database including both GHG emissions...

  7. 49 CFR 535.8 - Reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... information. (2) Manufacturers must submit information electronically through the EPA database system as the... year 2012 the agencies are not prepared to receive information through the EPA database system... applications for certificates of conformity in accordance through the EPA database including both GHG emissions...

  8. 49 CFR 535.8 - Reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... must submit information electronically through the EPA database system as the single point of entry for... agencies are not prepared to receive information through the EPA database system, manufacturers are... applications for certificates of conformity in accordance through the EPA database including both GHG emissions...

  9. 49 CFR 535.8 - Reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... must submit information electronically through the EPA database system as the single point of entry for... agencies are not prepared to receive information through the EPA database system, manufacturers are... applications for certificates of conformity in accordance through the EPA database including both GHG emissions...

  10. A systematic review on the prevalence of metabolic syndrome in Iranian children and adolescents.

    PubMed

    Kelishadi, Roya; Hovsepian, Silva; Djalalinia, Shirin; Jamshidi, Fahimeh; Qorbani, Mostafa

    2016-01-01

    Metabolic syndrome (MetS), a cluster of cardiovascular risk factors, is one of the most common metabolic disorders and leads to many chronic diseases. The link between childhood MetS and the occurrence of atherosclerosis and its sequels in adulthood is well documented. This study aims to systematically review the prevalence of MetS among Iranian children and adolescents. An electronic search was conducted on studies published from January 1990 to January 2015. The main international electronic data sources were PubMed and the NLM Gateway (for MEDLINE), Institute of Scientific Information (ISI), and SCOPUS. For Persian databases, we used domestic databases. We included all available population-based studies and national surveys conducted in the pediatric age group (aged 3-21 years). In this review, 2138 articles were identified (PubMed: 265; SCOPUS: 368; ISI: 465; Scientific Information Database: 189; IranMedex: 851; Irandoc: 46). After quality assessment, 13 qualified articles were evaluated. The total population and number of data points were 24,772 and 125, respectively. Regarding geographical distribution, we found 2 national, 6 provincial, and 5 district-level data points. The prevalence of MetS among children ranged from 1% to 22% using different definitions. The reported range of pediatric MetS by criterion was as follows: National Cholesterol Education Program-Adult Treatment Panel III, 3-16%; International Diabetes Federation, 0-8%; American Heart Association, 4-9.5%; National Health and Nutrition Examination Survey III, 1-18%; de Ferranti, 0-22%. MetS is a common metabolic disorder among Iranian children and adolescents, with increasing trends during the last decades. This finding provides useful baseline information for health policy makers to implement evidence-based health promotion for appropriate control of this growing health problem in the pediatric population.

  11. Prevalence of dyslipidemia in Iranian children and adolescents: A systematic review.

    PubMed

    Hovsepian, Silva; Kelishadi, Roya; Djalalinia, Shirin; Farzadfar, Farshad; Naderimagham, Shohreh; Qorbani, Mostafa

    2015-05-01

    Dyslipidemia is considered an important modifiable risk factor for cardiovascular disease (CVD). The link between childhood dyslipidemia and the occurrence of atherosclerosis and its sequels in adulthood is well documented. This study aimed to systematically review the prevalence of dyslipidemia among Iranian children and adolescents. An electronic search was conducted on studies published from January 1990 to January 2014. The main international electronic data sources were PubMed and the NLM Gateway (for MEDLINE), Institute of Scientific Information (ISI), and SCOPUS. For Persian databases, we used domestic databases with systematic search capability, including IranMedex, Irandoc, and the Scientific Information Database (SID). We included all available population-based studies and national surveys conducted in the pediatric age group (aged <21 years). In this review, 1772 articles were identified (PubMed: 1464; Scopus: 11; ISI: 58; SID: 90; IranMedex: 149; Irandoc: 57). After three refinement steps and removal of duplicates, 182 articles related to the study domain were selected. After quality assessment, 46 studies were selected for text appraisal, of which 26 qualified articles were evaluated at the final step. The prevalence ranges of hypercholesterolemia, hypertriglyceridemia, elevated low-density lipoprotein cholesterol, and low high-density lipoprotein cholesterol (HDL-C) were 3-48%, 3-50%, 5-20% and 5-88%, respectively. Low HDL-C and hypertriglyceridemia were the most prevalent lipid disorders in this population. Dyslipidemia is a common health problem among Iranian children and adolescents. Few data were available in preschool children. This finding provides useful information for health policy makers to implement action-oriented interventions for prevention and early control of this important CVD risk factor.

  12. Cohort profile of the South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) Case Register: current status and recent enhancement of an Electronic Mental Health Record-derived data resource.

    PubMed

    Perera, Gayan; Broadbent, Matthew; Callard, Felicity; Chang, Chin-Kuo; Downs, Johnny; Dutta, Rina; Fernandes, Andrea; Hayes, Richard D; Henderson, Max; Jackson, Richard; Jewell, Amelia; Kadra, Giouliana; Little, Ryan; Pritchard, Megan; Shetty, Hitesh; Tulloch, Alex; Stewart, Robert

    2016-03-01

    The South London and Maudsley National Health Service (NHS) Foundation Trust Biomedical Research Centre (SLaM BRC) Case Register and its Clinical Record Interactive Search (CRIS) application were developed in 2008, generating a research repository of real-time, anonymised, structured and open-text data derived from the electronic health record system used by SLaM, a large mental healthcare provider in southeast London. In this paper, we update this register's descriptive data, and describe the substantial expansion and extension of the data resource since its original development. Descriptive data were generated from the SLaM BRC Case Register on 31 December 2014. Currently, there are over 250,000 patient records accessed through CRIS. Since 2008, the most significant developments in the SLaM BRC Case Register have been the introduction of natural language processing to extract structured data from open-text fields, linkages to external sources of data, and the addition of a parallel relational database (Structured Query Language) output. Natural language processing applications to date have brought in new and hitherto inaccessible data on cognitive function, education, social care receipt, smoking, diagnostic statements and pharmacotherapy. In addition, through external data linkages, large volumes of supplementary information have been accessed on mortality, hospital attendances and cancer registrations. Coupled with robust data security and governance structures, electronic health records provide potentially transformative information on mental disorders and outcomes in routine clinical care. The SLaM BRC Case Register continues to grow as a database, with approximately 20,000 new cases added each year, in addition to extension of follow-up for existing cases. Data linkages and natural language processing present important opportunities to enhance this type of research resource further, achieving both volume and depth of data. 
However, research projects still need to be carefully tailored, so that they take into account the nature and quality of the source information.
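
The natural language processing mentioned above extracts structured fields from open text. The fragment below is a deliberately simplified, rule-based stand-in; the actual CRIS applications are more sophisticated, and the patterns and labels here are illustrative only.

```python
# Sketch: rule-based extraction of smoking status from clinical free text,
# a toy stand-in for the NLP applications described in the record.
import re

# Order matters: negated/past forms must be checked before the generic rule.
RULES = [
    (re.compile(r"\b(never smoked|non[- ]smoker)\b", re.I), "non-smoker"),
    (re.compile(r"\b(ex[- ]smoker|quit smoking)\b", re.I), "ex-smoker"),
    (re.compile(r"\bsmok(es|er|ing)\b", re.I), "current smoker"),
]

def smoking_status(note):
    for pattern, label in RULES:
        if pattern.search(note):
            return label
    return "unknown"

print(smoking_status("Patient is an ex-smoker, quit 2 years ago."))
```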

  13. Effects of implementing electronic medical records on primary care billings and payments: a before-after study.

    PubMed

    Jaakkimainen, R Liisa; Shultz, Susan E; Tu, Karen

    2013-09-01

    Several barriers to the adoption of electronic medical records (EMRs) by family physicians have been discussed, including the costs of implementation, impact on work flow and loss of productivity. We examined billings and payments received before and after implementation of EMRs among primary care physicians in the province of Ontario. We also examined billings and payments before and after switching from a fee-for-service to a capitation payment model, because EMR implementation coincided with primary care reform in the province. We used information from the Electronic Medical Record Administrative Data Linked Database (EMRALD) to conduct a retrospective before-after study. The EMRALD database includes EMR data extracted from 183 community-based family physicians in Ontario. We included EMRALD physicians who were eligible to bill the Ontario Health Insurance Plan at least 18 months before and after the date they started using EMRs and had completed a full 18-month period before Mar. 31, 2011, when the study stopped. The main outcome measures were physicians' monthly billings and payments for office visits and total annual payments received from all government sources. Two index dates were examined: the date physicians started using EMRs and were in a stable payment model (n = 64) and the date physicians switched from a fee-for-service to a capitation payment model (n = 42). Monthly billings and payments for office visits did not decrease after the implementation of EMRs. The overall weighted mean annual payment from all government sources increased by 27.7% after the start of EMRs among EMRALD physicians; an increase was also observed among all other primary care physicians in Ontario, but it was not as great (14.4%). There was a decline in monthly billings and payments for office visits after physicians changed payment models, but an increase in their overall annual government payments. 
Implementation of EMRs by primary care physicians did not result in decreased billings or government payments for office visits. Further economic analyses are needed to measure the effects of EMR implementation on productivity and the costs of implementing an EMR system, including the costs of nonclinical work by physicians and their staff.
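The before-after comparison at the heart of this study reduces to a percent change in mean payments around an index date. A minimal sketch in Python; the payment figures below are hypothetical, not EMRALD data:

```python
def percent_change(before, after):
    """Percent change in mean annual payments after an index date
    (e.g. EMR start) relative to the period before it."""
    mean_before = sum(before) / len(before)
    mean_after = sum(after) / len(after)
    return 100.0 * (mean_after - mean_before) / mean_before

# Hypothetical annual government payments (dollars) for one physician.
payments_before = [200_000, 210_000]
payments_after = [250_000, 265_000]
change = percent_change(payments_before, payments_after)  # positive: payments rose
```

The same calculation applied to the whole cohort versus a comparison group is what distinguishes the EMR effect (27.7%) from the background trend (14.4%) reported above.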

  14. Enhancing GADRAS Source Term Inputs for Creation of Synthetic Spectra.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, Steven M.; Harding, Lee

    The Gamma Detector Response and Analysis Software (GADRAS) team has enhanced the source term input for the creation of synthetic spectra. These enhancements include the following: allowing users to programmatically provide source information to GADRAS through memory, rather than through a string limited to 256 characters; allowing users to provide their own source decay database information; and updating the default GADRAS decay database to fix errors and include coincident gamma information.

  15. Data harmonization and federated analysis of population-based studies: the BioSHaRE project

    PubMed Central

    2013-01-01

    Background Individual-level data pooling of large population-based studies across research centres in international research projects faces many hurdles. The BioSHaRE (Biobank Standardisation and Harmonisation for Research Excellence in the European Union) project aims to address these issues by building a collaborative group of investigators and developing tools for data harmonization, database integration and federated data analyses. Methods Eight population-based studies in six European countries were recruited to participate in the BioSHaRE project. Through workshops, teleconferences and electronic communications, participating investigators identified a set of 96 variables targeted for harmonization to answer research questions of interest. Using each study's questionnaires, standard operating procedures, and data dictionaries, harmonization potential was assessed. Whenever harmonization was deemed possible, processing algorithms were developed and implemented in an open-source software infrastructure to transform study-specific data into the target (i.e. harmonized) format. Harmonized datasets located on servers in each research centre across Europe were interconnected through a federated database system to perform statistical analysis. Results Retrospective harmonization led to the generation of common-format variables for 73% of the matches considered (96 targeted variables across 8 studies). Authenticated investigators can now perform complex statistical analyses of harmonized datasets stored on distributed servers, without actually sharing individual-level data, using the DataSHIELD method. Conclusion New Internet-based networking technologies and database management systems are providing the means to support collaborative, multi-center research in an efficient and secure manner. 
The results from this pilot project show that, given a strong collaborative relationship between participating studies, it is possible to seamlessly co-analyse internationally harmonized research databases while allowing each study to retain full control over individual-level data. We encourage additional collaborative research networks in epidemiology, public health, and the social sciences to make use of the open source tools presented herein. PMID:24257327
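The processing algorithms mentioned above transform each study's local coding into a common target format. A minimal sketch of that idea, with entirely hypothetical coding rules (the real BioSHaRE algorithms are study- and variable-specific):

```python
# Hypothetical local codings of "current smoker" in two studies,
# each mapped to a common 0/1 target (harmonized) variable.
RULES = {
    "study_A": lambda v: 1 if v == "yes" else 0,    # study A stores "yes"/"no"
    "study_B": lambda v: 1 if v in (1, 2) else 0,   # study B codes 1=daily, 2=occasional, 3=never
}

def harmonize(study, value):
    """Transform a study-specific value into the target format."""
    return RULES[study](value)
```

In the federated setting, a transform like this runs on each study's own server, so only the harmonized values, never the raw individual-level data, are exposed to cross-study analysis.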

  16. Enriching Great Britain's National Landslide Database by searching newspaper archives

    NASA Astrophysics Data System (ADS)

    Taylor, Faith E.; Malamud, Bruce D.; Freeborough, Katy; Demeritt, David

    2015-11-01

    Our understanding of where landslide hazard and impact will be greatest is largely based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in Great Britain by searching an electronic archive of regional newspapers. In Great Britain, the British Geological Survey (BGS) is responsible for updating and maintaining records of landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of more than 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. We aim to supplement the richness of the NLD by (i) identifying additional landslide events, (ii) acting as an additional source of confirmation of events existing in the NLD and (iii) adding more detail to existing database entries. This is done by systematically searching the Nexis UK digital archive of 568 regional newspapers published in the UK. In this paper, we construct a robust Boolean search criterion by experimenting with landslide terminology for four training periods. We then apply this search to all articles published in 2006 and 2012, resulting in the addition of 111 records of landslide events to the NLD over those 2 years. We also find that we were able to obtain information about landslide impact for 60-90% of landslide events identified from newspaper articles. Spatial and temporal patterns of additional landslides identified from newspaper articles are broadly in line with those existing in the NLD, confirming that the NLD is a representative sample of landsliding in Great Britain. This method could now be applied to more time periods and/or other hazards to add richness to databases and thus improve our ability to forecast future events based on records of past events.
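A Boolean search criterion of the kind described can be sketched as a pair of include/exclude patterns. The terms below are illustrative stand-ins, not the paper's actual criterion, which was tuned over four training periods:

```python
import re

# Include: articles mentioning a landslide-related term.
# Exclude: figurative uses such as "landslide victory" in election coverage.
LANDSLIDE = re.compile(r"\b(landslide|landslip|mudslide|rockfall)\b", re.IGNORECASE)
EXCLUDE = re.compile(r"\blandslide (victory|win|defeat|majority)\b", re.IGNORECASE)

def matches(article_text):
    """True if the article satisfies the Boolean criterion."""
    return bool(LANDSLIDE.search(article_text)) and not EXCLUDE.search(article_text)
```

Candidate articles passing the filter would still be read manually before a record is added to the NLD, since keyword matching alone cannot confirm a genuine landslide event.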

  17. Adolescents' Help-Seeking Behavior and Intentions Following Adolescent Dating Violence: A Systematic Review.

    PubMed

    Bundock, Kerrie; Chan, Carmen; Hewitt, Olivia

    2018-01-01

    The review aimed to systematically identify and summarize empirical work examining adolescent victims' help-seeking behaviors and intentions in relation to their own experience of adolescent dating violence (ADV) and to critically evaluate the literature. Three main objectives were addressed: identify factors associated with help seeking, identify help-seeking sources (whom adolescents disclose to), and explore the barriers to and facilitators of help seeking. Results were separated into actual help seeking and help-seeking intentions. A systematic electronic search was conducted on February 10, 2017. Studies were identified by systematically searching the following electronic databases: Amed, BNI, CINAHL, EMBASE, Health Business Elite, HMIC, Medline, PsycINFO, and PubMed. Nineteen studies were included in the review. Adolescents were more likely to turn to informal sources of support, with friends being the most commonly reported source. The majority of studies found females were more likely than males to seek help; however, inconsistencies in gender differences emerged. The variation in the measurement and definition of ADV and help seeking across the included studies limits the conclusions of this review. Adolescents identify a number of barriers to help seeking for ADV. Emotional factors were identified as important barriers to seeking help; however, very little research in this review explored this area. Further research is required on age and cultural differences, use of the Internet, and preference for different sources for different types of abuse. There is a need for a greater focus on help seeking to ensure government campaigns are appropriately meeting the needs of young people experiencing ADV.

  18. A Source-based Measurement Database for Occupational Exposure Assessment of Electromagnetic Fields in the INTEROCC Study: A Literature Review Approach

    PubMed Central

    Vila, Javier; Bowman, Joseph D.; Richardson, Lesley; Kincl, Laurel; Conover, Dave L.; McLean, Dave; Mann, Simon; Vecchia, Paolo; van Tongeren, Martie; Cardis, Elisabeth

    2016-01-01

    Introduction: To date, occupational exposure assessment of electromagnetic fields (EMF) has relied on occupation-based measurements and exposure estimates. However, misclassification due to between-worker variability remains an unsolved challenge. A source-based approach, supported by detailed subject data on determinants of exposure, may allow for a more individualized exposure assessment. Detailed information on the use of occupational sources of exposure to EMF was collected as part of the INTERPHONE-INTEROCC study. To support a source-based exposure assessment effort within this study, this work aimed to construct a measurement database for the occupational sources of EMF exposure identified, assembling available measurements from the scientific literature. Methods: First, a comprehensive literature search was performed for published and unpublished documents containing exposure measurements for the EMF sources identified, both a priori and from study subjects' answers. Then, the measurements identified were assessed for quality and relevance to the study objectives. Finally, the selected measurements and complementary information were compiled into an Occupational Exposure Measurement Database (OEMD). Results: Currently, the OEMD contains 1624 sets of measurements (>3000 entries) for 285 sources of EMF exposure, organized by frequency band (0 Hz to 300 GHz) and dosimetry type. Ninety-five documents were selected from the literature (almost 35% of them unpublished technical reports) containing measurements considered informative and valid for our purpose. Measurement data and complementary information collected from these documents came from 16 different countries and cover the time period between 1974 and 2013. Conclusion: We have constructed a database with measurements and complementary information for the most common sources of exposure to EMF in the workplace, based on the responses to the INTERPHONE-INTEROCC study questionnaire. 
This database covers the entire EMF frequency range and represents the most comprehensive resource of information on occupational EMF exposure. It is available at www.crealradiation.com/index.php/en/databases. PMID:26493616

  19. Online bibliographic sources in hydrology

    USGS Publications Warehouse

    Wild, Emily C.; Havener, W. Michael

    2001-01-01

    Traditional commercial bibliographic databases and indexes provide some access to hydrology materials produced by the government; however, these sources do not provide comprehensive coverage of relevant hydrologic publications. This paper discusses bibliographic information available from the federal government and state geological surveys, water resources agencies, and depositories. In addition to information in these databases, the paper describes the scope, styles of citing, subject terminology, and the ways these information sources are currently being searched, formally and informally, by hydrologists. Information available from the federal and state agencies and from the state depositories might be missed by limiting searches to commercially distributed databases.

  20. Dietary intake and main sources of plant lignans in five European countries

    PubMed Central

    Tetens, Inge; Turrini, Aida; Tapanainen, Heli; Christensen, Tue; Lampe, Johanna W.; Fagt, Sisse; Håkansson, Niclas; Lundquist, Annamari; Hallund, Jesper; Valsta, Liisa M.

    2013-01-01

    Background Dietary intakes of plant lignans have been hypothesized to be inversely associated with the risk of developing cardiovascular disease and cancer. Earlier studies were based on a Finnish lignan database (Fineli®) with two lignan precursors, secoisolariciresinol (SECO) and matairesinol (MAT). More recently, a Dutch database, including SECO and MAT and the newly recognized lignan precursors lariciresinol (LARI) and pinoresinol (PINO), was compiled. The objective was to re-estimate and re-evaluate plant lignan intakes and to identify the main sources of plant lignans in five European countries using the Finnish and Dutch lignan databases, respectively. Methods Forty-two food groups known to contribute to the total lignan intake were selected and attributed a value for SECO and MAT from the Finnish lignan database (Fineli®) or for SECO, MAT, LARI, and PINO from the Dutch database. Total intake of lignans was estimated from food consumption data for adult men and women (19–79 years) from Denmark, Finland, Italy, Sweden, and the United Kingdom, and the contribution of aggregated food groups was calculated using the Dutch lignan database. Results Mean dietary lignan intakes estimated using the Dutch database ranged from 1 to 2 mg/day, which was approximately four-fold higher than the intakes estimated from the Fineli® database. When LARI and PINO were included in the estimation of the total lignan intakes, cereals, grain products, vegetables, fruit and berries were the most important dietary sources of lignans. Conclusion Total lignan intake was approximately four-fold higher when estimated with the Dutch lignan database, which includes the lignan precursors LARI and PINO, compared to estimates based on the Finnish database, which includes only SECO and MAT. The main sources of lignans according to the Dutch database in the five countries studied were cereals and grain products, vegetables, fruit, berries, and beverages. PMID:23766759
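The intake estimation described above is, at its core, a sum over food groups of consumption multiplied by lignan content. A minimal sketch with hypothetical content and consumption values (real values come from the Fineli® or Dutch databases and national consumption surveys):

```python
# Hypothetical lignan contents (µg per g of food) and one adult's
# daily consumption (g per day) for three food groups.
CONTENT = {"rye bread": 10.0, "berries": 15.0, "coffee": 0.5}
INTAKE = {"rye bread": 100, "berries": 50, "coffee": 400}

def total_lignans_ug(content, intake):
    """Total daily lignan intake in µg: consumption x content, summed over food groups."""
    return sum(grams * content.get(food, 0.0) for food, grams in intake.items())

daily_ug = total_lignans_ug(CONTENT, INTAKE)  # about 1.95 mg/day, within the 1-2 mg range above
```

Adding LARI and PINO to a database amounts to raising the per-food content values, which is why the Dutch estimates come out roughly four-fold higher than the SECO+MAT-only Finnish estimates.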

  1. Qualitative analysis of seized synthetic cannabinoids and synthetic cathinones by gas chromatography triple quadrupole tandem mass spectrometry.

    PubMed

    Gwak, Seongshin; Arroyo-Mora, Luis E; Almirall, José R

    2015-02-01

    Designer drugs are analogues or derivatives of illicit drugs whose chemical structure has been modified in order to circumvent current legislation for controlled substances. Designer drugs of abuse have increased dramatically in popularity all over the world over the past few years. Currently, qualitative seized-drug analysis is mainly performed by gas chromatography-electron ionization-mass spectrometry (GC-EI-MS), in which most of these emerging designer drug derivatives are extensively fragmented and do not present a molecular ion in their mass spectra. The absence of a molecular ion and/or similar fragmentation patterns among these derivatives may cause equivocal identification of unknown seized substances. In this study, the qualitative identification of 34 designer drugs, mainly synthetic cannabinoids and synthetic cathinones, was performed by gas chromatography-triple quadrupole-tandem mass spectrometry with two different ionization techniques, electron ionization (EI) and chemical ionization (CI), focusing only on qualitative seized-drug analysis rather than the toxicological point of view. The implementation of a CI source facilitates the determination of molecular mass and the identification of seized designer drugs. The multiple reaction monitoring (MRM) mode developed here may increase sensitivity and selectivity in the analysis of seized designer drugs. In addition, CI mass spectra and MRM mass spectra of these designer drug derivatives can be used as a potential supplemental database along with the EI mass spectral database. Copyright © 2014 John Wiley & Sons, Ltd.

  2. Examining database persistence of ISO/EN 13606 standardized electronic health record extracts: relational vs. NoSQL approaches.

    PubMed

    Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario

    2017-08-18

    The objective of this research is to compare relational and non-relational (NoSQL) database system approaches for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature were also considered. Relational and non-relational NoSQL database systems both show almost linear algorithmic complexity in query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. Visualization and editing of EHR extracts are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on the particular situation and specific problem.
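The study's central finding, near-linear growth of response time with database size but with very different slopes per system, can be quantified by fitting a least-squares slope to timing measurements. A sketch with illustrative numbers (not the study's measurements):

```python
def fit_slope(sizes, times):
    """Ordinary least-squares slope of query response time vs. database size."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(times) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
    den = sum((x - mean_x) ** 2 for x in sizes)
    return num / den

# Illustrative response times (seconds) at three database sizes (GB).
sizes = [10, 50, 100]
relational = [2.0, 10.5, 20.8]     # steeper slope
document_nosql = [0.5, 2.4, 4.9]   # shallower slope
```

Comparing the fitted slopes, rather than raw timings at a single size, is what makes the "much steeper" claim precise and lets it extrapolate to larger databases.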

  3. Elektronische Informationsdienste im Bildungswesen (Electronic Information Services in Education) Gesellschaft Information Bildung Conference (GIB) (2nd, Berlin, Germany, November 17-18, 1994).

    ERIC Educational Resources Information Center

    Diepold, Peter, Ed.; Rusch-Feja, Diann, Ed.

    These papers on educational technology were presented in three workshops at the second annual conference of the Gesellschaft Information Bildung (GIB). Discussion includes electronic networks, CD-ROMs, and online databases in education, the quality of educational software, database services and instructional methods, and the use of the Internet in…

  4. SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts

    NASA Astrophysics Data System (ADS)

    Howe, B.; Halperin, D.

    2014-12-01

    Relational databases are often perceived as a poor fit in science contexts: Rigid schemas, poor support for complex analytics, unpredictable performance, significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open source systems we are building to "jailbreak" the core technology of relational databases and adapt them for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
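The "upload, then query declaratively" workflow the talk describes can be illustrated with Python's built-in sqlite3 module. This is only a local sketch of the idea; SQLShare itself is a hosted service, and the table and values below are hypothetical:

```python
import sqlite3

# "Upload": create a small table of oceanographic casts and load rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE casts (station TEXT, depth REAL, temp REAL)")
conn.executemany("INSERT INTO casts VALUES (?, ?, ?)", [
    ("P1", 10.0, 12.4),
    ("P1", 50.0, 9.8),
    ("P2", 10.0, 13.1),
])

# "Query": a declarative aggregate replaces a hand-written script loop.
rows = conn.execute(
    "SELECT station, AVG(temp) FROM casts GROUP BY station ORDER BY station"
).fetchall()
```

The point of the sketch is the division of labor: the scientist states *what* is wanted (mean temperature per station) and the engine decides *how* to compute it, which is the relational abstraction the talk argues has outlived any particular database product.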

  5. Building Databases for Education. ERIC Digest.

    ERIC Educational Resources Information Center

    Klausmeier, Jane A.

    This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…

  6. Development of a data entry auditing protocol and quality assurance for a tissue bank database.

    PubMed

    Khushi, Matloob; Carpenter, Jane E; Balleine, Rosemary L; Clarke, Christine L

    2012-03-01

    Human transcription error is an acknowledged risk when extracting information from paper records for entry into a database. For a tissue bank, it is critical that accurate data are provided to researchers with approved access to tissue bank material. The challenges of tissue bank data collection include manual extraction of data from complex medical reports that are accessed from a number of sources and that differ in style and layout. As a quality assurance measure, the Breast Cancer Tissue Bank (http://www.abctb.org.au) has implemented an auditing protocol and in order to efficiently execute the process, has developed an open source database plug-in tool (eAuditor) to assist in auditing of data held in our tissue bank database. Using eAuditor, we have identified that human entry errors range from 0.01% when entering donor's clinical follow-up details, to 0.53% when entering pathological details, highlighting the importance of an audit protocol tool such as eAuditor in a tissue bank database. eAuditor was developed and tested on the Caisis open source clinical-research database; however, it can be integrated in other databases where similar functionality is required.
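The error percentages reported above come from comparing audited database entries against the source documents. A minimal sketch of that calculation (field names and values are hypothetical; eAuditor itself is a database plug-in, not this function):

```python
def error_rate(entered, source):
    """Percentage of audited fields where the database entry
    differs from the value in the source document."""
    assert len(entered) == len(source), "audit lists must align field-for-field"
    errors = sum(1 for e, s in zip(entered, source) if e != s)
    return 100.0 * errors / len(entered)

# Hypothetical audit: four transcribed pathology fields vs. the paper report.
rate = error_rate(["T2", "N0", "grade 3", "ER+"],
                  ["T2", "N0", "grade 2", "ER+"])  # one discrepancy
```

Tracking this rate separately per data category is what allows a bank to see, as here, that pathology fields (0.53%) are far more error-prone than follow-up fields (0.01%) and to target re-checking accordingly.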

  7. Historical reconstructions of California wildfires vary by data source

    USGS Publications Warehouse

    Syphard, Alexandra D.; Keeley, Jon E.

    2016-01-01

    Historical data are essential for understanding how fire activity responds to different drivers. It is important that the source of data is commensurate with the spatial and temporal scale of the question addressed, but fire history databases are derived from different sources with different restrictions. In California, a frequently used fire history dataset is the State of California Fire and Resource Assessment Program (FRAP) fire history database, which circumscribes fire perimeters at a relatively fine scale. It includes large fires on both state and federal lands but only covers fires that were mapped or had other spatially explicit data. A different database is the state and federal governments’ annual reports of all fires. They are more complete than the FRAP database but are only spatially explicit to the level of county (California Department of Forestry and Fire Protection – Cal Fire) or forest (United States Forest Service – USFS). We found substantial differences between the FRAP database and the annual summaries, with the largest and most consistent discrepancy being in fire frequency. The FRAP database missed the majority of fires and is thus a poor indicator of fire frequency or indicators of ignition sources. The FRAP database is also deficient in area burned, especially before 1950. Even in contemporary records, the huge number of smaller fires not included in the FRAP database account for substantial cumulative differences in area burned. Wildfires in California account for nearly half of the western United States fire suppression budget. Therefore, the conclusions about data discrepancies and the implications for fire research are of broad importance.

  8. Active fault databases: building a bridge between earthquake geologists and seismic hazard practitioners, the case of the QAFI v.3 database

    NASA Astrophysics Data System (ADS)

    García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João

    2017-08-01

    Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to maintain standard quality and uniform criteria. Desirably, active fault databases should somehow indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of the maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2, whether for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.

  9. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  10. A Bayesian Multivariate Receptor Model for Estimating Source Contributions to Particulate Matter Pollution using National Databases.

    PubMed

    Hackstadt, Amber J; Peng, Roger D

    2014-11-01

    Time series studies have suggested that air pollution can negatively impact health. These studies have typically focused on the total mass of fine particulate matter air pollution or the individual chemical constituents that contribute to it, and not source-specific contributions to air pollution. Source-specific contribution estimates are useful from a regulatory standpoint by allowing regulators to focus limited resources on reducing emissions from sources that are major contributors to air pollution and are also desired when estimating source-specific health effects. However, researchers often lack direct observations of the emissions at the source level. We propose a Bayesian multivariate receptor model to infer information about source contributions from ambient air pollution measurements. The proposed model incorporates information from national databases containing data on both the composition of source emissions and the amount of emissions from known sources of air pollution. The proposed model is used to perform source apportionment analyses for two distinct locations in the United States (Boston, Massachusetts and Phoenix, Arizona). Our results mirror previous source apportionment analyses that did not utilize the information from national databases and provide additional information about uncertainty that is relevant to the estimation of health effects.

  11. Catalogue of UV sources in the Galaxy

    NASA Astrophysics Data System (ADS)

    Beitia-Antero, L.; Gómez de Castro, A. I.

    2017-03-01

    The Galaxy Evolution Explorer (GALEX) ultraviolet (UV) database contains the largest photometric catalogue in the ultraviolet range; as a result, the GALEX photometric bands, the near-UV band (NUV) and the far-UV band (FUV), have become standards. Nevertheless, the GALEX catalogue does not include bright UV sources, owing to the high sensitivity of its detectors, nor sources in the Galactic plane. In order to extend the GALEX database for future UV missions, we have obtained synthetic FUV and NUV photometry using the database of UV spectra generated by the International Ultraviolet Explorer (IUE). This database contains 63,755 spectra in the low-dispersion mode (λ/δλ ≈ 300) obtained during its 18-year lifetime. For stellar sources in the IUE database, we have selected spectra with a high signal-to-noise ratio (SNR) and computed FUV and NUV magnitudes using the GALEX transmission curves along with the conversion equations between flux and magnitudes provided by the mission. In addition, we have performed variability tests to determine whether the sources were variable during the IUE observations. As a result, we have generated two different catalogues: one for non-variable stars and another for variable sources. The former contains FUV and NUV magnitudes, while the latter gives the basic information and the FUV magnitude for each observation. The consistency of the magnitudes has been tested using white dwarfs contained in both the GALEX and IUE samples. The catalogues are available through the Centre de Données Stellaires. The sources are distributed throughout the whole sky, with special coverage of the Galactic plane.
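Synthetic photometry of the kind described, folding an observed spectrum through a bandpass transmission curve and converting the mean flux to a magnitude, can be sketched generically. This is not the GALEX mission's own conversion equations, just the standard transmission-weighted-mean and AB-magnitude formulas, with made-up grid values:

```python
import math

def band_flux(wl, flux, trans):
    """Transmission-weighted mean flux over a bandpass, by trapezoidal
    integration. wl in Angstrom, flux in erg s-1 cm-2 A-1, trans dimensionless."""
    num = den = 0.0
    for i in range(len(wl) - 1):
        dw = wl[i + 1] - wl[i]
        num += 0.5 * (flux[i] * trans[i] + flux[i + 1] * trans[i + 1]) * dw
        den += 0.5 * (trans[i] + trans[i + 1]) * dw
    return num / den

def ab_mag(mean_flambda, pivot_wl):
    """Convert a mean f_lambda to an AB magnitude via f_nu = f_lambda * lambda^2 / c."""
    c = 2.998e18  # speed of light in Angstrom/s
    f_nu = mean_flambda * pivot_wl ** 2 / c
    return -2.5 * math.log10(f_nu) - 48.6
```

Applied per IUE spectrum with the FUV and NUV transmission curves, this yields one synthetic magnitude per band; repeating it over a star's multiple observations is what feeds the variability tests mentioned above.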

  12. Explore a Career in Health Sciences Information

    MedlinePlus

    ... tools that range from traditional print journals to electronic databases and the latest mobile devices, health sciences ... an expert search of the literature. connecting licensed electronic resources and decision tools into a patient's electronic ...

  13. DIMA quick start, database for inventory, monitoring and assessment

    USDA-ARS?s Scientific Manuscript database

    The Database for Inventory, Monitoring and Assessment (DIMA) is a highly-customized Microsoft Access database for collecting data electronically in the field and for organizing, storing and reporting those data for monitoring and assessment. While DIMA can be used for any number of different monito...

  14. Library Instruction and Online Database Searching.

    ERIC Educational Resources Information Center

    Mercado, Heidi

    1999-01-01

    Reviews changes in online database searching in academic libraries. Topics include librarians conducting all searches; the advent of end-user searching and the need for user instruction; compact disk technology; online public catalogs; the Internet; full text databases; electronic information literacy; user education and the remote library user;…

  15. Distribution System Upgrade Unit Cost Database

    DOE Data Explorer

    Horowitz, Kelsey

    2017-11-30

    This database contains unit cost information for different components that may be used to integrate distributed photovoltaic (D-PV) systems onto distribution systems. Some of these upgrades and costs may also apply to integration of other distributed energy resources (DER). Which components are required, and how many of each, is system-specific and should be determined by analyzing the effects of distributed PV at a given penetration level on the circuit of interest in combination with engineering assessments on the efficacy of different solutions to increase the ability of the circuit to host additional PV as desired. The current state of the distribution system should always be considered in these types of analysis. The data in this database was collected from a variety of utilities, PV developers, technology vendors, and published research reports. Where possible, we have included information on the source of each data point and relevant notes. In some cases where data provided is sensitive or proprietary, we were not able to specify the source, but provide other information that may be useful to the user (e.g. year, location where equipment was installed). NREL has carefully reviewed these sources prior to inclusion in this database. Additional information about the database, data sources, and assumptions is included in the "Unit_cost_database_guide.doc" file included in this submission. This guide provides important information on what costs are included in each entry. Please refer to this guide before using the unit cost database for any purpose.

  16. Modelling Conditions and Health Care Processes in Electronic Health Records: An Application to Severe Mental Illness with the Clinical Practice Research Datalink

    PubMed Central

    Olier, Ivan; Springate, David A.; Ashcroft, Darren M.; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos

    2016-01-01

    Background The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness (SMI) as an example. Methods We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. Results We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. Conclusion We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists. PMID:26918439
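    The word-stub/code-stub matching step described above can be sketched in Python. Note this is a simplified illustration, not the actual pcdsearch/Rpcdsearch commands (which are Stata/R); the code dictionary and stubs below are hypothetical.

```python
import re

# Hypothetical lookup table: Read code -> clinical term description
code_dict = {
    "E10..": "Schizophrenic disorders",
    "E11..": "Affective psychoses",
    "E110.": "Manic disorder, single episode",
    "F20..": "Schizophrenia",
    "H33..": "Asthma",
}

def stub_search(code_dict, word_stubs=(), code_stubs=()):
    """Return codes whose description matches any word-stub (case-insensitive
    substring) or whose code begins with any code-stub."""
    word_re = [re.compile(s, re.IGNORECASE) for s in word_stubs]
    hits = {}
    for code, term in code_dict.items():
        if any(r.search(term) for r in word_re) or \
           any(code.startswith(c) for c in code_stubs):
            hits[code] = term
    return hits

# Clinician-suggested stubs for severe mental illness (hypothetical)
matches = stub_search(code_dict, word_stubs=["schizo", "psychos"], code_stubs=["E11"])
print(sorted(matches))
```

    The returned candidate list would then be reviewed by clinicians before the final code-list is used to identify patients.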

  17. Database of potential sources for earthquakes larger than magnitude 6 in Northern California

    USGS Publications Warehouse

    ,

    1996-01-01

    The Northern California Earthquake Potential (NCEP) working group, composed of many contributors and reviewers in industry, academia and government, has pooled its collective expertise and knowledge of regional tectonics to identify potential sources of large earthquakes in northern California. We have created a map and database of active faults, both surficial and buried, that forms the basis for the northern California portion of the national map of probabilistic seismic hazard. The database contains 62 potential sources, including fault segments and areally distributed zones. The working group has integrated constraints from broadly based plate tectonic and VLBI models with local geologic slip rates, geodetic strain rate, and microseismicity. Our earthquake source database derives from a scientific consensus that accounts for conflicts in the diverse data. Our preliminary product, as described in this report, brings to light many gaps in the data, including a need for better information on the proportion of deformation in fault systems that is aseismic.

  18. [Exploration and construction of the full-text database of acupuncture literature in the Republic of China].

    PubMed

    Fei, Lin; Zhao, Jing; Leng, Jiahao; Zhang, Shujian

    2017-10-12

    The ALIPORC database is a specialized full-text database of acupuncture literature from the Republic of China period. Its construction began in 2015 and is ongoing, covering books, articles and advertising documents relevant to acupuncture that were completed or published during the Republic of China. The database aims to enable shared access to acupuncture medical literature of this period through diverse retrieval approaches and accurate content presentation, to facilitate scholarly exchange, to reduce the paper damage caused by page-turning, and to simplify retrieval of rare literature. The authors describe the database in terms of its sources, characteristics and current state of construction, and discuss improving its efficiency and integrity and deepening the study of acupuncture literature of the Republic of China.

  19. Third Annual Foreign Acquisitions Workshop: Improving Access to Foreign Gray Literature

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The theme of the Third Annual Foreign Acquisitions Workshop was the acquisition of and access to foreign (non-U.S.) gray literature. Individual presentations addressed general topics related to the value and scope of gray literature, specialized and foreign gray-literature sources, intellectual property issues, and U.S. Federal Agency activities. Additional topics focused on electronic access and evaluation techniques and the current and potential uses of networking technology. The workshop papers are presented in their entirety or in abstract or outline form. Appendices include a listing of databases that include foreign gray literature, a bibliography, and a report on U.S.-Japan cooperation in the use of scientific and technical information.

  20. Futurescapes: evidence expectations in the USA for comparative effectiveness research for drugs in 2020.

    PubMed

    Messner, Donna A; Mohr, Penny; Towse, Adrian

    2015-08-01

    Explore key factors influencing future expectations for the production of evidence from comparative effectiveness research for drugs in the USA in 2020 and construct three plausible future scenarios. Semistructured key informant interviews and three rounds of modified Delphi with systematic scenario-building methods. Most influential key factors were: health delivery system integration; electronic health record development; exploitation of very large databases and mixed data sources; and proactive patient engagement in research. The scenario deemed most likely entailed uneven development of large integrated health systems with pockets of increased provider risk for patient care, enhanced data collection systems, changing incentives to do comparative effectiveness research and new opportunities for evidence generation partnerships.

  1. PropeR revisited.

    PubMed

    van der Linden, Helma; Talmon, Jan; Tange, Huibert; Grimson, Jane; Hasman, Arie

    2005-03-01

    The PropeR EHR system (PropeRWeb) is an electronic health record (EHR) system for multidisciplinary use in extramural care of stroke patients. The system is built from existing open source components and is based on open standards. It is implemented as a web application using servlets and Java Server Pages (JSPs), with a CORBA connection to the database servers, which are based on the OMG HDTF specifications. PropeRWeb is a generic system that can be readily customized for use in a variety of clinical domains. The system proved to be stable and flexible, although some aspects (among others, user-friendliness) could be improved. These improvements are currently under development in a second version.

  2. Differentiating the Bishop ash bed and related tephra layers by elemental-based similarity coefficients of volcanic glass shards using solution inductively coupled plasma-mass spectrometry (S-ICP-MS)

    USGS Publications Warehouse

    Knott, J.R.; Sarna-Wojcicki, A. M.; Montanez, I.P.; Wan, E.

    2007-01-01

    Volcanic glass samples from the same volcanic center (intra-source) often have a similar major-element composition. Thus, it can be difficult to distinguish between individual tephra layers, particularly when using similarity coefficients calculated from electron microprobe major-element measurements. Minor/trace element concentrations in glass can be determined by solution inductively coupled plasma mass spectrometry (S-ICP-MS), but have not been shown as suitable for use in large tephrochronologic databases. Here, we present minor/trace-element concentrations measured by S-ICP-MS and compare these data by similarity coefficients, the method commonly used in large databases. Trial samples from the Bishop Tuff, the upper and lower tuffs of Glass Mountain and the tuffs of Mesquite Spring suites from eastern California, USA, which have an indistinguishable major-element composition, were analyzed using S-ICP-MS. The resulting minor/trace element similarity coefficients clearly separated the suites of tephra layers and, in most cases, individual tephra layers within each suite. Comparisons with previous instrumental neutron activation analysis (INAA) elemental measurements were marginally successful. This is an important step toward quantitative correlation in large tephrochronologic databases to achieve definitive identification of volcanic glass samples and for high-resolution age determinations. © 2007 Elsevier Ltd and INQUA.
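    The similarity coefficient commonly used in tephrochronologic correlation is a Borchardt-type coefficient: for each element shared between two samples, take the ratio of the smaller to the larger concentration, then average the ratios (identical compositions score 1.0). A minimal sketch, with hypothetical trace-element concentrations:

```python
def similarity_coefficient(sample_a, sample_b):
    """Borchardt-type similarity coefficient: mean over shared elements of
    min(a, b) / max(a, b). Identical compositions give 1.0."""
    shared = [e for e in sample_a
              if e in sample_b and sample_a[e] > 0 and sample_b[e] > 0]
    ratios = [min(sample_a[e], sample_b[e]) / max(sample_a[e], sample_b[e])
              for e in shared]
    return sum(ratios) / len(ratios)

# Hypothetical trace-element concentrations (ppm) for two glass samples
bishop = {"Rb": 150.0, "Sr": 20.0, "Y": 22.0, "Nb": 24.0}
glass_mtn = {"Rb": 165.0, "Sr": 18.0, "Y": 25.0, "Nb": 22.0}

sc = similarity_coefficient(bishop, glass_mtn)
print(round(sc, 3))
```

    In database practice a threshold on the coefficient (chosen empirically) separates "same layer" from "different layer" comparisons; the point of the study above is that trace-element coefficients discriminate where major-element ones cannot.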

  3. Inhibition of methicillin-resistant Staphylococcus aureus (MRSA) by antimicrobial peptides (AMPs) and plant essential oils.

    PubMed

    Zouhir, Abdelmajid; Jridi, Taoufik; Nefzi, Adel; Ben Hamida, Jeannette; Sebei, Khaled

    2016-12-01

    Drug-resistant bacterial infections cause considerable patient mortality and morbidity. The annual frequency of deaths from methicillin-resistant Staphylococcus aureus (MRSA) has surpassed those caused by human immunodeficiency virus/acquired immune deficiency syndrome. The antimicrobial peptides (AMPs), plant essential oils (EOs) and their combinations have proven to be quite effective in killing a wide selection of bacterial pathogens including MRSA. This review summarizes the studies on the use of AMPs, plant EOs and their combinations for coping with MRSA bacteria, and formulates new prospects for future studies on this topic. Sources included scientific literature retrieved through PubMed, library searches, Google Scholar and Science Direct, as well as electronic databases such as 'The Antimicrobial Peptide Database', 'Collection of Anti-Microbial Peptides' and 'YADAMP'. Physicochemical data of anti-MRSA peptides were determined by Scientific DataBase Maker software. Of the 118 peptides, 88 exhibited activity against MRSA, the most active showing the lowest minimum inhibitory concentration values. Various plant EOs have been effective against MRSA. Remarkably, lemongrass EOs completely inhibited all MRSA growth on the plate. Lemon myrtle, Mountain savory, Cinnamon bark and Melissa EOs showed a significant inhibition. Several of these AMPs, EOs and their combinations were effective against MRSA. Their activities have implications for the development of new drugs for medical use.

  4. 14 CFR 221.180 - Requirements for electronic filing of tariffs.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of Transportation, for the maintenance and security of the on-line tariff database. (b) No carrier or... to its on-line tariff database. The filer shall be responsible for the transportation, installation... installation or maintenance. (3) The filer shall provide public access to its on-line tariff database, at...

  5. 14 CFR 221.180 - Requirements for electronic filing of tariffs.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... of Transportation, for the maintenance and security of the on-line tariff database. (b) No carrier or... to its on-line tariff database. The filer shall be responsible for the transportation, installation... installation or maintenance. (3) The filer shall provide public access to its on-line tariff database, at...

  6. 14 CFR 221.180 - Requirements for electronic filing of tariffs.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... of Transportation, for the maintenance and security of the on-line tariff database. (b) No carrier or... to its on-line tariff database. The filer shall be responsible for the transportation, installation... installation or maintenance. (3) The filer shall provide public access to its on-line tariff database, at...

  7. 14 CFR 221.180 - Requirements for electronic filing of tariffs.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... of Transportation, for the maintenance and security of the on-line tariff database. (b) No carrier or... to its on-line tariff database. The filer shall be responsible for the transportation, installation... installation or maintenance. (3) The filer shall provide public access to its on-line tariff database, at...

  8. 14 CFR 221.180 - Requirements for electronic filing of tariffs.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... of Transportation, for the maintenance and security of the on-line tariff database. (b) No carrier or... to its on-line tariff database. The filer shall be responsible for the transportation, installation... installation or maintenance. (3) The filer shall provide public access to its on-line tariff database, at...

  9. Resources | Division of Cancer Prevention

    Cancer.gov

    Manual of Operations Version 3, 12/13/2012 (PDF, 162KB) Database Sources Consortium for Functional Glycomics databases Design Studies Related to the Development of Distributed, Web-based European Carbohydrate Databases (EUROCarbDB) |

  10. A systematic review of the physical and chemical characteristics of pollutants from biomass burning and combustion of fossil fuels and health effects in Brazil.

    PubMed

    Oliveira, Beatriz Fátima Alves de; Ignotti, Eliane; Hacon, Sandra S

    2011-09-01

    The aim of this study was to carry out a review of scientific literature published in Brazil between 2000 and 2009 on the characteristics of air pollutants from different emission sources, especially particulate matter (PM) and its effects on respiratory health. Using electronic databases, a systematic literature review was performed of all research related to air pollutant emissions. Publications were analyzed to identify the physical and chemical characteristics of pollutants from different emission sources and their related effects on the respiratory system. PM2.5 is composed predominantly of organic compounds, with 20% inorganic elements. Higher concentrations of metals were detected in metropolitan areas than in biomass burning regions. The relative risk of hospital admissions due to respiratory diseases in children was higher than in the elderly population. The results of studies of health effects of air pollution are specific to the region where the emissions occurred and should not be used to depict the situation in other areas with different emission sources.

  11. The Ins and Outs of USDA Nutrient Composition

    USDA-ARS?s Scientific Manuscript database

    The USDA National Nutrient Database for Standard Reference (SR) is the major source of food composition data in the United States, providing the foundation for most food composition databases in the public and private sectors. Sources of data used in SR include analytical studies, food manufacturer...

  12. Filling Terrorism Gaps: VEOs, Evaluating Databases, and Applying Risk Terrain Modeling to Terrorism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagan, Ross F.

    2016-08-29

    This paper aims to address three issues: the lack of literature differentiating terrorism and violent extremist organizations (VEOs), terrorism incident databases, and the applicability of Risk Terrain Modeling (RTM) to terrorism. Current open source literature and publicly available government sources do not differentiate between terrorism and VEOs; furthermore, they fail to define them. Addressing the lack of a comprehensive comparison of existing terrorism data sources, a matrix comparing a dozen terrorism databases is constructed, providing insight toward the array of data available. RTM, a method for spatial risk analysis at a micro level, has some applicability to terrorism research, particularly for studies looking at risk indicators of terrorism. Leveraging attack data from multiple databases, combined with RTM, offers one avenue for closing existing research gaps in terrorism literature.

  13. The active transport of histidine and its role in ATP production in Trypanosoma cruzi.

    PubMed

    Barisón, M J; Damasceno, F S; Mantilla, B S; Silber, A M

    2016-08-01

    Trypanosoma cruzi, the aetiological agent of Chagas disease, metabolizes glucose and, after its exhaustion, degrades amino acids as an energy source. Here, we investigate histidine uptake and its participation in energy metabolism. No putative genes for the histidine biosynthetic pathway have been identified in genome databases of T. cruzi, suggesting that its uptake from the extracellular medium is a requirement for the viability of the parasite. From this assumption, we characterized the uptake of histidine in T. cruzi, showing that this amino acid is incorporated through a single and saturable active system. We also show that histidine can be completely oxidised to CO2. This finding, together with the fact that genes encoding the putative enzymes for the histidine-glutamate degradation pathway were annotated, led us to infer its participation in the energy metabolism of the parasite. Here, we show that His is capable of restoring cell viability after long-term starvation. We confirm that as an energy source, His provides electrons to the electron transport chain, maintaining mitochondrial inner membrane potential and O2 consumption in a very efficient manner. Additionally, ATP biosynthesis from oxidative phosphorylation was found when His was the only oxidisable metabolite present, showing that this amino acid is involved in bioenergetics and parasite persistence within its invertebrate host.

  14. Optically pulsed electron accelerator

    DOEpatents

    Fraser, John S.; Sheffield, Richard L.

    1987-01-01

    An optically pulsed electron accelerator can be used as an injector for a free electron laser and comprises a pulsed light source, such as a laser, for providing discrete incident light pulses. A photoemissive electron source emits electron bursts having the same duration as the incident light pulses when impinged upon by same. The photoemissive electron source is located on an inside wall of a radio frequency powered accelerator cell which accelerates the electron burst emitted by the photoemissive electron source.

  15. Optically pulsed electron accelerator

    DOEpatents

    Fraser, J.S.; Sheffield, R.L.

    1985-05-20

    An optically pulsed electron accelerator can be used as an injector for a free electron laser and comprises a pulsed light source, such as a laser, for providing discrete incident light pulses. A photoemissive electron source emits electron bursts having the same duration as the incident light pulses when impinged upon by same. The photoemissive electron source is located on an inside wall of a radiofrequency-powered accelerator cell which accelerates the electron burst emitted by the photoemissive electron source.

  16. Reliability and validity assessment of administrative databases in measuring the quality of rectal cancer management.

    PubMed

    Corbellini, Carlo; Andreoni, Bruno; Ansaloni, Luca; Sgroi, Giovanni; Martinotti, Mario; Scandroglio, Ildo; Carzaniga, Pierluigi; Longoni, Mauro; Foschi, Diego; Dionigi, Paolo; Morandi, Eugenio; Agnello, Mauro

    2018-01-01

    Measurement and monitoring of the quality of care using a core set of quality measures are increasing in health service research. Although administrative databases include limited clinical data, they offer an attractive source for quality measurement. The purpose of this study, therefore, was to evaluate the completeness of different administrative data sources compared to a clinical survey in evaluating rectal cancer cases. Between May 2012 and November 2014, a clinical survey was done on 498 Lombardy patients who had rectal cancer and underwent surgical resection. These collected data were compared with the information extracted from administrative sources including Hospital Discharge Dataset, drug database, daycare activity data, fee-exemption database, and regional screening program database. The agreement evaluation was performed using a set of 12 quality indicators. Patient complexity was a difficult indicator to measure for lack of clinical data. Preoperative staging was another suboptimal indicator due to the frequent missing administrative registration of tests performed. The agreement between the 2 data sources regarding chemoradiotherapy treatments was high. Screening detection, minimally invasive techniques, length of stay, and unpreventable readmissions were detected as reliable quality indicators. Postoperative morbidity could be a useful indicator but its agreement was lower, as expected. Healthcare administrative databases are large and real-time collected repositories of data useful in measuring quality in a healthcare system. Our investigation reveals that the reliability of indicators varies between them. Ideally, a combination of data from both sources could be used in order to improve usefulness of less reliable indicators.

  17. Big Data, Predictive Analytics, and Quality Improvement in Kidney Transplantation: A Proof of Concept.

    PubMed

    Srinivas, T R; Taber, D J; Su, Z; Zhang, J; Mour, G; Northrup, D; Tripathi, A; Marsden, J E; Moran, W P; Mauldin, P D

    2017-03-01

    We sought proof of concept of a Big Data Solution incorporating longitudinal structured and unstructured patient-level data from electronic health records (EHR) to predict graft loss (GL) and mortality. For a quality improvement initiative, GL and mortality prediction models were constructed using baseline and follow-up data (0-90 days posttransplant; structured and unstructured for 1-year models; data up to 1 year for 3-year models) on adult solitary kidney transplant recipients transplanted during 2007-2015 as follows: Model 1: United Network for Organ Sharing (UNOS) data; Model 2: UNOS & Transplant Database (Tx Database) data; Model 3: UNOS, Tx Database & EHR comorbidity data; and Model 4: UNOS, Tx Database, EHR data, posttransplant trajectory data, and unstructured data. A 10% 3-year GL rate was observed among 891 patients (2007-2015). Layering of data sources improved model performance: Model 1: area under the curve (AUC), 0.66 (95% confidence interval [CI]: 0.60-0.72); Model 2: AUC, 0.68 (95% CI: 0.61-0.74); Model 3: AUC, 0.72 (95% CI: 0.66-0.77); Model 4: AUC, 0.84 (95% CI: 0.79-0.89). One-year GL (AUC, 0.87; Model 4) and 3-year mortality (AUC, 0.84; Model 4) models performed similarly. A Big Data approach significantly adds efficacy to GL and mortality prediction models and is EHR deployable to optimize outcomes. © 2016 The American Society of Transplantation and the American Society of Transplant Surgeons.
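    The AUC metric used above to compare the layered models has a simple rank-based interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one (the Mann-Whitney U statistic). A minimal sketch with hypothetical labels and risk scores (not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of positive/negative pairs where the positive outranks
    the negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical graft-loss labels (1 = graft loss) and model risk scores
labels = [1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1]
print(auc(labels, scores))
```

    An AUC of 0.5 is chance; the jump from 0.66 (registry data alone) to 0.84 (registry plus EHR and trajectory data) reported above is what "layering of data sources" buys.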

  18. MetaBar - a tool for consistent contextual data acquisition and standards compliant submission.

    PubMed

    Hankeln, Wolfgang; Buttigieg, Pier Luigi; Fink, Dennis; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver

    2010-06-30

    Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude: generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft Excel spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable exchange of data across disciplines and safeguarding of contextual data.

  19. MetaBar - a tool for consistent contextual data acquisition and standards compliant submission

    PubMed Central

    2010-01-01

    Background Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude: generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed to support consistent contextual data acquisition. Results MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated with their samples. A preconfigured Microsoft® Excel® spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising the DNA Data Bank of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance with the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). Conclusion The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable exchange of data across disciplines and safeguarding of contextual data. PMID:20591175

  20. Development of a mobile device optimized cross platform-compatible oral pathology and radiology spaced repetition system for dental education.

    PubMed

    Al-Rawi, Wisam; Easterling, Lauren; Edwards, Paul C

    2015-04-01

    Combining active recall testing with spaced repetition increases memory retention. The aim of this study was to evaluate and compare students' perception and utilization of an electronic spaced repetition oral pathology-radiology system in dental hygiene education and predoctoral dental education. The study employed an open-source suite of applications to create electronic "flashcards" that can be individually adjusted for frequency of repetition, depending on a user's assessment of difficulty. Accessible across multiple platforms (iOS, Android, Linux, OSX, Windows) as well as via any web-based browser, this framework was used to develop an oral radiology-oral pathology database of case-based questions. This system was introduced in two courses: sophomore oral pathology for dental students and sophomore radiology for dental hygiene students. Students were provided free software and/or mobile tablet devices as well as a database of 300 electronic question cards. Study participants were surveyed on frequency and extent of use. Perception-based surveys were used to evaluate their attitudes towards this technology. Of the eligible students, 12 of 22 (54.5%) dental hygiene and 49 of 107 (45.8%) dental students responded to the surveys. Adoption rates and student feedback were compared between the two groups. Among the respondents, acceptance of this technology with respect to educational usefulness was similar for the dental and dental hygiene students (median=5 on a five-point scale; dental hygiene interquartile range (IQR)=0; dental IQR=1). Only a minority of the survey respondents (25% dental, 33% dental hygiene) took advantage of one of the main benefits of this technology: automated spaced repetition.
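    The spacing behavior described above (review intervals stretched or reset according to the user's difficulty rating) is typically implemented with an SM-2-family scheduler. A minimal sketch under that assumption; this is illustrative, not the exact algorithm of the flashcard software used in the study:

```python
def sm2_update(interval_days, ease, quality):
    """One SM-2-style review update.
    quality: 0-5 self-rating; >= 3 counts as a successful recall.
    Returns (next_interval_days, new_ease)."""
    if quality < 3:
        return 1, ease          # lapse: reset to 1 day, keep ease factor
    # Adjust the ease factor up or down based on rating, floored at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days <= 1:
        return 6, ease          # second successful review jumps to 6 days
    return round(interval_days * ease), ease

# A card starting fresh, reviewed successfully three times
interval, ease = 1, 2.5
for q in (5, 4, 5):
    interval, ease = sm2_update(interval, ease, q)
print(interval)
```

    The key property is the geometric growth of intervals after successful recalls, which concentrates review effort on the cards a student rates as difficult.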

  1. Investigation of the Contribution of Lower Charge State Ar Ions to the Unknown Faint X-Ray Feature Found in the Stacked Spectrum of Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Gall, Amy

    Driven by the recent detection of an unidentified emission line previously reported at 3.55-3.57 keV in a stacked spectrum of galaxy clusters, in this work we investigated the resonant dielectronic recombination (DR) process in Li-like Ar as a possible source of, or contributor to, the emission line. The Li-like transition 1s²2l-1s2l3l′ was suggested to produce a 3.62 keV photon [1] near the unidentified line at 3.57 keV and was the primary focus of our study. The Electron Beam Ion Trap at NIST was used to produce and trap the highly-charged ions of argon. The energy of the quasi-monoenergetic electron beam was incremented in steps of 15 eV to scan over all of the Li-like Ar DR resonances. A Johann-type crystal spectrometer and a solid-state germanium detector were used to take x-ray measurements perpendicular to the electron beam. Our broadband results allowed us to identify the processes that produced specific spectral features, while our high-resolution spectra allowed the experimental separation of features that are less than 2 eV apart. We have used the collisional radiative model NOMAD [2] aided by atomic data calculations by FAC [3] to interpret our observations and account for corrections. Experimental results were compared to the atomic database AtomDB, used to fit the galaxy cluster spectra. We found a number of measured features due to DR in lower charge state Ar ions not included in the database, close in energy to the unidentified line at 3.57 keV, and suggest their inclusion for improved interpretation and diagnosis of other astrophysical spectra.

  2. Spirituality in childhood cancer care

    PubMed Central

    Lima, Nádia Nara Rolim; do Nascimento, Vânia Barbosa; de Carvalho, Sionara Melo Figueiredo; Neto, Modesto Leite Rolim; Moreira, Marcial Moreno; Brasil, Aline Quental; Junior, Francisco Telésforo Celestino; de Oliveira, Gislene Farias; Reis, Alberto Olavo Advíncula

    2013-01-01

    To deal with the suffering caused by childhood cancer, patients and their families use different coping strategies, among which spirituality appears as a way of minimizing possible damage. In this context, the purpose of the present study was to analyze the influence of spirituality in childhood cancer care, involving biopsychosocial aspects of the child, the family, and the health care team facing the disease. To accomplish this purpose, a nonsystematic review of literature of articles on national and international electronic databases (Scientific Electronic Library Online [SciELO], PubMed, and Latin American and Caribbean Health Sciences Literature [LILACS]) was conducted using the search terms "spirituality," "child psychology," "child," and "cancer," as well as on other available resources. After the search, 20 articles met the eligibility criteria and were included in the final sample. Our review showed that the relation between spirituality and health has lately become a subject of growing interest among researchers, as a positive influence of spirituality in people's welfare was noted. Studies that were retrieved using the mentioned search strategy in electronic databases, independently assessed by the authors according to the systematic review, showed that spirituality emerges as a driving force that helps pediatric patients and their families in coping with cancer. Health care workers have been increasingly attentive to this dimension of care. However, it is necessary to improve their knowledge regarding the subject. The search highlighted that spirituality is considered a source of comfort and hope, contributing to a better acceptance of his/her chronic condition by the child with cancer, as well as by the family. Further up-to-date studies facing the subject are, thus, needed. It is also necessary to better train health care practitioners, so as to provide humanized care to the child with cancer. PMID:24133371

  3. [Effects of soil data and map scale on assessment of total phosphorus storage in upland soils].

    PubMed

    Li, Heng Rong; Zhang, Li Ming; Li, Xiao di; Yu, Dong Sheng; Shi, Xue Zheng; Xing, Shi He; Chen, Han Yue

    2016-06-01

    Accurate assessment of total phosphorus storage in farmland soils is of great significance to sustainable agriculture and non-point source pollution control. However, previous studies have not considered the estimation errors introduced by mapping scales and by databases built from different sources of soil profile data. In this study, a total of 393×10⁴ hm² of upland in the 29 counties (or cities) of North Jiangsu was taken as a case study. We analyzed how four sources of soil profile data, namely "Soils of County", "Soils of Prefecture", "Soils of Province" and "Soils of China", and six map scales, i.e. 1:50000, 1:250000, 1:500000, 1:1000000, 1:4000000 and 1:10000000, used in the 24 soil databases established from these four data sources, affected the assessment of soil total phosphorus. Compared with the most detailed 1:50000 soil database, established with 983 upland soil profiles, the relative deviation of the estimates of soil total phosphorus density (STPD) and soil total phosphorus storage (STPS) from the other soil databases varied from 4.8% to 48.9% and from 1.6% to 48.4%, respectively. The estimates of STPD and STPS based on the 1:50000 database of "Soils of County" differed from most of the estimates based on the databases at each scale in "Soils of County" and "Soils of Prefecture", at significance levels of P<0.001 or P<0.05. Extremely significant differences (P<0.001) existed between the estimates based on the 1:50000 database of "Soils of County" and those based on the databases at each scale in "Soils of Province" and "Soils of China". This study demonstrates the importance of choosing appropriate soil data sources and mapping scales when estimating STPS.
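
    The deviation statistic used above is straightforward to reproduce. The sketch below computes relative deviation of coarser-scale storage estimates against a baseline; the numeric values are invented placeholders, not results from the study.

```python
# Relative deviation of a storage estimate against a reference estimate,
# as used to compare coarser-scale soil databases with the 1:50000 baseline.
# All numbers below are illustrative, not values from the study.

def relative_deviation(estimate: float, reference: float) -> float:
    """Return |estimate - reference| / reference as a percentage."""
    return abs(estimate - reference) / reference * 100.0

reference_stps = 4.20          # hypothetical baseline STPS (Tg P), 1:50000 "Soils of County"
coarser_estimates = {
    "1:250000": 4.35,
    "1:1000000": 3.60,
    "1:10000000": 2.90,
}

for scale, est in coarser_estimates.items():
    print(f"{scale}: {relative_deviation(est, reference_stps):.1f}% deviation")
```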

  4. A genotypic and phenotypic information source for marker-assisted selection of cereals: the CEREALAB database

    PubMed Central

    Milc, Justyna; Sala, Antonio; Bergamaschi, Sonia; Pecchioni, Nicola

    2011-01-01

    The CEREALAB database aims to store genotypic and phenotypic data obtained by the CEREALAB project and to integrate them with already existing data sources in order to create a tool for plant breeders and geneticists. The database can help them in unravelling the genetics of economically important phenotypic traits; in identifying and choosing molecular markers associated with key traits; and in choosing the desired parentals for breeding programs. The database is divided into three sub-schemas corresponding to the species of interest: wheat, barley and rice; each sub-schema is then divided into two sub-ontologies, regarding genotypic and phenotypic data, respectively. Database URL: http://www.cerealab.unimore.it/jws/cerealab.jnlp PMID:21247929
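
    The marker-trait lookup such a database supports can be sketched with a few relational tables. The schema, table names, and marker names below are illustrative inventions, not the actual CEREALAB schema.

```python
import sqlite3

# Hypothetical, minimal sketch of a marker-trait lookup of the kind the
# CEREALAB database enables; all names here are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE marker (id INTEGER PRIMARY KEY, name TEXT, species TEXT);
CREATE TABLE trait  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE marker_trait (marker_id INTEGER, trait_id INTEGER);
""")
conn.executemany("INSERT INTO marker VALUES (?, ?, ?)",
                 [(1, "Xgwm261", "wheat"), (2, "Bmag0211", "barley")])
conn.execute("INSERT INTO trait VALUES (1, 'plant height')")
conn.execute("INSERT INTO marker_trait VALUES (1, 1)")

# Which wheat markers are associated with plant height?
rows = conn.execute("""
    SELECT m.name FROM marker m
    JOIN marker_trait mt ON mt.marker_id = m.id
    JOIN trait t ON t.id = mt.trait_id
    WHERE t.name = 'plant height' AND m.species = 'wheat'
""").fetchall()
print(rows)   # [('Xgwm261',)]
```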

  5. New DMSP database of precipitating auroral electrons and ions

    NASA Astrophysics Data System (ADS)

    Redmon, Robert J.; Denig, William F.; Kilcommons, Liam M.; Knipp, Delores J.

    2017-08-01

    Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low Earth orbit. As the program evolved, so have the measurement capabilities such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) to F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information and the Coordinated Data Analysis Web. We describe how the new database is being applied to high-latitude studies of the colocation of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents, and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors from single observatory studies to coordinated system science investigations.
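
    The derived quantities mentioned above (total kinetic energy flux, characteristic energy) come from integrating the calibrated differential flux over the instrument's energy channels. The sketch below shows the idea with trapezoidal integration; the channel energies and fluxes are made-up values, not SSJ calibration data.

```python
# Illustrative sketch: total energy flux and characteristic energy from
# differential number flux measured in discrete energy channels, the kind
# of derivation applied (with proper calibration) to DMSP SSJ data.
# Channel energies and fluxes below are invented for illustration.

energies = [30.0, 100.0, 300.0, 1000.0, 3000.0, 10000.0]   # eV
diff_num_flux = [1e7, 8e6, 5e6, 2e6, 4e5, 2e4]  # particles / (cm^2 s sr eV)

def trapz(ys, xs):
    """Trapezoidal integral of ys sampled at xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
               for i in range(len(xs) - 1))

number_flux = trapz(diff_num_flux, energies)                   # / (cm^2 s sr)
energy_flux = trapz([e * f for e, f in zip(energies, diff_num_flux)],
                    energies)                                  # eV / (cm^2 s sr)
characteristic_energy = energy_flux / number_flux              # eV
print(f"characteristic energy ~ {characteristic_energy:.0f} eV")
```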

  6. [Automated anesthesia record systems].

    PubMed

    Heinrichs, W; Mönk, S; Eberle, B

    1997-07-01

    The introduction of electronic anaesthesia documentation systems was attempted as early as 1979, although their efficient application has become reality only in the past few years. The advantages of the electronic protocol are apparent: continuous high-quality documentation, comparability of data due to the availability of a data bank, reduction in the workload of the anaesthetist, and availability of additional data. Disadvantages of the electronic protocol have also been discussed in the literature. By going through the process of entering data on the course of the anaesthetic procedure on the protocol sheet, the information is mentally absorbed and evaluated by the anaesthetist. This information may, however, be lost when the data are recorded fully automatically, without active involvement on the part of the anaesthetist. Recent publications state that, by using intelligent alarms and/or integrated displays, manual record keeping is no longer necessary for anaesthesia vigilance. The technical design of automated anaesthesia records depends on an integration of network technology into the hospital. It will be appropriate to connect the systems to the internet, but safety requirements have to be followed strictly. Concerning the database, client-server architecture as well as language standards like SQL should be used. Object-oriented databases will be available in the near future. Another future goal of automated anaesthesia record systems will be the use of knowledge-based technologies within these systems. Drug interactions, disease-related anaesthetic techniques, and other information sources can be integrated. At this time, almost none of the commercially available systems has matured to a point where their purchase can be recommended without reservation. There is still a lack of standards for the subsequent exchange of data, and a solution to a number of ergonomic problems remains to be found. Nevertheless, electronic anaesthesia protocols will be required in the near future. The advantages of accurate documentation and quality control, given careful planning, far outweigh cost considerations.
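
    The article's recommendation of standard SQL over a relational database can be sketched with a minimal vitals table; the schema and values below are illustrative, not from any commercial system.

```python
import sqlite3

# Minimal sketch of an SQL-backed anaesthesia record store, following the
# suggestion of standard SQL over a relational database. Table and column
# names are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("""
CREATE TABLE vitals (
    case_id     TEXT NOT NULL,
    recorded_at TEXT NOT NULL,     -- ISO 8601 timestamp
    parameter   TEXT NOT NULL,     -- e.g. 'heart_rate', 'SpO2'
    value       REAL NOT NULL
)""")
db.executemany("INSERT INTO vitals VALUES (?, ?, ?, ?)", [
    ("case-001", "2024-05-01T08:00:00", "heart_rate", 72.0),
    ("case-001", "2024-05-01T08:01:00", "heart_rate", 75.0),
    ("case-001", "2024-05-01T08:00:00", "SpO2", 98.0),
])

# Continuous automated documentation makes quality-control queries trivial:
avg_hr = db.execute(
    "SELECT AVG(value) FROM vitals WHERE case_id = ? AND parameter = ?",
    ("case-001", "heart_rate")).fetchone()[0]
print(f"mean heart rate: {avg_hr:.1f} bpm")   # mean heart rate: 73.5 bpm
```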

  7. Analysis of commercial and public bioactivity databases.

    PubMed

    Tiikkainen, Pekka; Franke, Lutz

    2012-02-27

    Activity data for small molecules are invaluable in chemoinformatics. Various bioactivity databases exist containing detailed information on target proteins and quantitative binding data for small molecules extracted from journals and patents. In the current work, we have merged several public and commercial bioactivity databases into one bioactivity metabase. The molecular representations, target information, and activity data of the vendor databases were standardized. The main motivation of the work was to create a single relational database which allows fast and simple data retrieval by in-house scientists. Second, we wanted to quantify the overlap between the commercial and public databases, to see whether the former contain data complementing the latter. Third, we quantified the degree of inconsistency between data sources by comparing data points derived from the same scientific article cited by more than one vendor. We found that each data source contains unique data, owing to the different scientific articles cited by the vendors. When comparing data derived from the same article, we found that inconsistencies between the vendors are common. In conclusion, using databases from different vendors is still useful, since the data overlap is not complete. It should be noted that this can be partially explained by the inconsistencies and errors in the source data.
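
    The overlap and consistency checks described above reduce to set operations on records keyed by (compound, target, source article). The sketch below uses invented records and an arbitrary tolerance; none of it reflects the actual metabase.

```python
# Sketch of vendor-overlap and consistency analysis: activity records keyed
# by (compound, target, article), with values compared when two vendors cite
# the same article. All data are invented for illustration.

vendor_a = {
    ("CHEMBL25", "COX-1", "doi:10/xyz"): 6.3,    # pIC50
    ("CHEMBL25", "COX-2", "doi:10/xyz"): 5.1,
    ("CHEMBL112", "5-HT2A", "doi:10/abc"): 7.8,
}
vendor_b = {
    ("CHEMBL25", "COX-1", "doi:10/xyz"): 6.3,    # consistent duplicate
    ("CHEMBL25", "COX-2", "doi:10/xyz"): 5.9,    # inconsistent duplicate
    ("CHEMBL999", "DRD2", "doi:10/def"): 8.2,    # unique to vendor B
}

shared = vendor_a.keys() & vendor_b.keys()
unique_a = vendor_a.keys() - vendor_b.keys()
unique_b = vendor_b.keys() - vendor_a.keys()
inconsistent = {k for k in shared
                if abs(vendor_a[k] - vendor_b[k]) > 0.2}   # tolerance in log units

print(f"shared: {len(shared)}, unique A: {len(unique_a)}, "
      f"unique B: {len(unique_b)}, inconsistent: {len(inconsistent)}")
```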

  8. EPA’s SPECIATE 4.4 Database: Bridging Data Sources and Data Users

    EPA Science Inventory

    SPECIATE is the U.S. Environmental Protection Agency's (EPA) repository of volatile organic gas and particulate matter (PM) speciation profiles for air pollution sources. EPA released SPECIATE 4.4 in early 2014 and, in total, the SPECIATE 4.4 database includes 5,728 PM, VOC, total...

  9. Simulation of decay processes and radiation transport times in radioactivity measurements

    NASA Astrophysics Data System (ADS)

    García-Toraño, E.; Peyres, V.; Bé, M.-M.; Dulieu, C.; Lépy, M.-C.; Salvat, F.

    2017-04-01

    The Fortran subroutine package PENNUC, which simulates random decay pathways of radioactive nuclides, is described. The decay scheme of the active nuclide is obtained from the NUCLEIDE database, whose web application has been complemented with the option of exporting nuclear decay data (possible nuclear transitions, branching ratios, type and energy of emitted particles) in a format that is readable by the simulation subroutines. In the case of beta emitters, the initial energy of the electron or positron is sampled from the theoretical Fermi spectrum. De-excitation of the atomic electron cloud following electron capture and internal conversion is described using transition probabilities from the LLNL Evaluated Atomic Data Library and empirical or calculated energies of released X rays and Auger electrons. The time evolution of radiation showers is determined by considering the lifetimes of nuclear and atomic levels, as well as radiation propagation times. Although PENNUC is designed to operate independently, here it is used in conjunction with the electron-photon transport code PENELOPE, and both together allow the simulation of experiments with radioactive sources in complex material structures consisting of homogeneous bodies limited by quadric surfaces. The reliability of these simulation tools is demonstrated through comparisons of simulated and measured energy spectra from radionuclides with complex multi-gamma spectra, nuclides with metastable levels in their decay pathways, nuclides with two daughters, and beta plus emitters.
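
    The core idea of random decay-pathway sampling is branch selection by cumulative branching ratio. The toy scheme below is invented (loosely patterned after a two-level cascade), not NUCLEIDE data, and the sketch omits energies, lifetimes, and atomic relaxation.

```python
import random

# Minimal sketch of random decay-pathway sampling, the idea behind PENNUC:
# at each level, a branch is selected with probability equal to its
# branching ratio. The decay scheme below is a toy invention.

def sample_branch(branches, rng):
    """branches: list of (label, probability); probabilities sum to 1."""
    u = rng.random()
    cumulative = 0.0
    for label, prob in branches:
        cumulative += prob
        if u < cumulative:
            return label
    return branches[-1][0]   # guard against floating-point round-off

decay_scheme = {
    "parent": [("level_2", 0.85), ("level_1", 0.15)],   # beta branches
    "level_2": [("gamma_A", 1.0)],                      # single de-excitation
    "level_1": [("gamma_B", 1.0)],
}

rng = random.Random(42)       # fixed seed for reproducibility
counts = {"level_2": 0, "level_1": 0}
for _ in range(10_000):
    counts[sample_branch(decay_scheme["parent"], rng)] += 1
print(counts)   # roughly 85% / 15% split
```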

  10. Global Inventory of Gas Geochemistry Data from Fossil Fuel, Microbial and Burning Sources, version 2017

    NASA Astrophysics Data System (ADS)

    Sherwood, Owen A.; Schwietzke, Stefan; Arling, Victoria A.; Etiope, Giuseppe

    2017-08-01

    The concentration of atmospheric methane (CH4) has more than doubled over the industrial era. To help constrain global and regional CH4 budgets, inverse (top-down) models incorporate data on the concentration and stable carbon (δ13C) and hydrogen (δ2H) isotopic ratios of atmospheric CH4. These models depend on accurate δ13C and δ2H end-member source signatures for each of the main emissions categories. Compared with meticulous measurement and calibration of isotopic CH4 in the atmosphere, there has been relatively less effort to characterize globally representative isotopic source signatures, particularly for fossil fuel sources. Most global CH4 budget models have so far relied on outdated source signature values derived from globally nonrepresentative data. To correct this deficiency, we present a comprehensive, globally representative end-member database of the δ13C and δ2H of CH4 from fossil fuel (conventional natural gas, shale gas, and coal), modern microbial (wetlands, rice paddies, ruminants, termites, and landfills and/or waste) and biomass burning sources. Gas molecular compositional data for fossil fuel categories are also included with the database. The database comprises 10 706 samples (8734 fossil fuel, 1972 non-fossil) from 190 published references. Mean (unweighted) δ13C signatures for fossil fuel CH4 are significantly lighter than values commonly used in CH4 budget models, thus highlighting potential underestimation of fossil fuel CH4 emissions in previous CH4 budget models. This living database will be updated every 2-3 years to provide the atmospheric modeling community with the most complete CH4 source signature data possible. Database digital object identifier (DOI): https://doi.org/10.15138/G3201T.
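
    The unweighted mean source signatures discussed above are simple per-category averages. The δ13C values (‰, VPDB) below are invented placeholders, not entries from the published database.

```python
from statistics import mean

# Sketch of the unweighted mean source-signature calculation; the values
# are illustrative stand-ins, not data from the database.

d13c_by_category = {
    "conventional_gas": [-44.8, -42.1, -40.3, -46.5],
    "coal":             [-55.0, -60.2, -49.7],
    "wetlands":         [-61.2, -58.9, -64.0, -59.5],
}

signatures = {cat: mean(vals) for cat, vals in d13c_by_category.items()}
for cat, sig in signatures.items():
    print(f"{cat}: {sig:.1f} permil")
```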

  11. KaBOB: ontology-based semantic integration of biomedical databases.

    PubMed

    Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E

    2015-04-23

    The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in establishing shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats.
KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.
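
    Forward chaining over RDF-style triples, as used for the integration rules above, can be sketched in a few lines: rules fire until no new triple can be derived. The identifiers and the single hardcoded rule below are illustrative, not KaBOB's actual rule set.

```python
# Minimal sketch of forward chaining over RDF-style triples: derive new
# facts until a fixed point. Identifiers and the rule are illustrative.

triples = {
    ("uniprot:P04637", "skos:exactMatch", "ncbigene:7157"),
    ("ncbigene:7157", "rdfs:label", "TP53"),
}

def forward_chain(facts_in):
    """Apply one rule to a fixed point:
    (a exactMatch b) and (b label L)  =>  (a label L)."""
    facts = set(facts_in)
    changed = True
    while changed:
        changed = False
        for a, p, b in list(facts):
            if p != "skos:exactMatch":
                continue
            for s, q, o in list(facts):
                if s == b and q == "rdfs:label" and (a, q, o) not in facts:
                    facts.add((a, q, o))
                    changed = True
    return facts

closed = forward_chain(triples)
print(("uniprot:P04637", "rdfs:label", "TP53") in closed)   # True
```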

  12. CardioTF, a database of deconstructing transcriptional circuits in the heart system

    PubMed Central

    2016-01-01

    Background: Information on cardiovascular gene transcription is fragmented and far behind the present requirements of the systems biology field. To create a comprehensive source of data for cardiovascular gene regulation and to facilitate a deeper understanding of genomic data, the CardioTF database was constructed. The purpose of this database is to collate information on cardiovascular transcription factors (TFs), position weight matrices (PWMs), and enhancer sequences discovered using the ChIP-seq method. Methods: The Naïve-Bayes algorithm was used to classify literature and identify all PubMed abstracts on cardiovascular development. The natural language learning tool GNAT was then used to identify corresponding gene names embedded within these abstracts. Local Perl scripts were used to integrate and dump data from public databases into the MariaDB management system (MySQL). In-house R scripts were written to analyze and visualize the results. Results: Known cardiovascular TFs from humans and human homologs from fly, Ciona, zebrafish, frog, chicken, and mouse were identified and deposited in the database. PWMs from Jaspar, hPDI, and UniPROBE databases were deposited in the database and can be retrieved using their corresponding TF names. Gene enhancer regions from various sources of ChIP-seq data were deposited into the database and can be visualized as graphical output. Besides biocuration, mouse homologs of the 81 core cardiac TFs were selected using a Naïve-Bayes approach and then by intersecting four independent data sources: RNA profiling, expert annotation, PubMed abstracts and phenotype. Discussion: The CardioTF database can be used as a portal to construct a transcriptional network of cardiac development. Availability and Implementation: Database URL: http://www.cardiosignal.org/database/cardiotf.html. PMID:27635320

  13. CardioTF, a database of deconstructing transcriptional circuits in the heart system.

    PubMed

    Zhen, Yisong

    2016-01-01

    Information on cardiovascular gene transcription is fragmented and far behind the present requirements of the systems biology field. To create a comprehensive source of data for cardiovascular gene regulation and to facilitate a deeper understanding of genomic data, the CardioTF database was constructed. The purpose of this database is to collate information on cardiovascular transcription factors (TFs), position weight matrices (PWMs), and enhancer sequences discovered using the ChIP-seq method. The Naïve-Bayes algorithm was used to classify literature and identify all PubMed abstracts on cardiovascular development. The natural language learning tool GNAT was then used to identify corresponding gene names embedded within these abstracts. Local Perl scripts were used to integrate and dump data from public databases into the MariaDB management system (MySQL). In-house R scripts were written to analyze and visualize the results. Known cardiovascular TFs from humans and human homologs from fly, Ciona, zebrafish, frog, chicken, and mouse were identified and deposited in the database. PWMs from Jaspar, hPDI, and UniPROBE databases were deposited in the database and can be retrieved using their corresponding TF names. Gene enhancer regions from various sources of ChIP-seq data were deposited into the database and can be visualized as graphical output. Besides biocuration, mouse homologs of the 81 core cardiac TFs were selected using a Naïve-Bayes approach and then by intersecting four independent data sources: RNA profiling, expert annotation, PubMed abstracts and phenotype. The CardioTF database can be used as a portal to construct a transcriptional network of cardiac development. Database URL: http://www.cardiosignal.org/database/cardiotf.html.
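
    The Naïve-Bayes literature triage step described above can be sketched from scratch with word counts and Laplace smoothing. The training "abstracts" below are invented one-line stand-ins, not PubMed records.

```python
import math
from collections import Counter

# Toy Naive-Bayes relevance classifier in the spirit of the literature
# triage step; training texts are invented stand-ins for abstracts.

train = [
    ("cardiac transcription factor regulates heart development", "cardio"),
    ("enhancer activity in cardiomyocyte differentiation", "cardio"),
    ("soil bacteria community in forest ecosystems", "other"),
    ("photosynthesis efficiency in rice leaves", "other"),
]

# Word frequencies per class, plus class priors.
word_counts = {"cardio": Counter(), "other": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())
vocab = {w for c in word_counts.values() for w in c}

def predict(text):
    scores = {}
    for label in word_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))   # log prior
        for w in text.split():
            # Laplace-smoothed log likelihood
            score += math.log((word_counts[label][w] + 1)
                              / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("transcription factor in heart enhancer"))   # cardio
```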

  14. Introduction of the American Academy of Facial Plastic and Reconstructive Surgery FACE TO FACE Database.

    PubMed

    Abraham, Manoj T; Rousso, Joseph J; Hu, Shirley; Brown, Ryan F; Moscatello, Augustine L; Finn, J Charles; Patel, Neha A; Kadakia, Sameep P; Wood-Smith, Donald

    2017-07-01

    The American Academy of Facial Plastic and Reconstructive Surgery FACE TO FACE database was created to gather and organize patient data primarily from international humanitarian surgical mission trips, as well as local humanitarian initiatives. Similar to cloud-based Electronic Medical Records, this web-based user-generated database allows for more accurate tracking of provider and patient information and outcomes, regardless of site, and is useful when coordinating follow-up care for patients. The database is particularly useful on international mission trips, as different surgeons may provide care to patients on subsequent missions, and patients may visit more than one mission site. Ultimately, by pooling data across multiple sites and over time, the database has the potential to be a useful resource for population-based studies and outcome data analysis. The objective of this paper is to delineate the process involved in creating the AAFPRS FACE TO FACE database, to assess its functional utility, to draw comparisons to electronic medical records systems that are now widely implemented, and to explain the specific benefits and disadvantages of the use of the database as it was implemented on recent international surgical mission trips.

  15. 48 CFR 52.232-33 - Payment by Electronic Funds Transfer-System for Award Management.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... contained in the System for Award Management (SAM) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the SAM database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the SAM database is incorrect...

  16. 48 CFR 52.232-33 - Payment by Electronic Funds Transfer-System for Award Management.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... contained in the System for Award Management (SAM) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the SAM database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the SAM database is incorrect...

  17. 48 CFR 52.232-33 - Payment by Electronic Funds Transfer-Central Contractor Registration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... contained in the Central Contractor Registration (CCR) database. In the event that the EFT information changes, the Contractor shall be responsible for providing the updated information to the CCR database. (c... 210. (d) Suspension of payment. If the Contractor's EFT information in the CCR database is incorrect...

  18. Database Security: What Students Need to Know

    ERIC Educational Resources Information Center

    Murray, Meg Coffin

    2010-01-01

    Database security is a growing concern evidenced by an increase in the number of reported incidents of loss of or unauthorized exposure to sensitive data. As the amount of data collected, retained and shared electronically expands, so does the need to understand database security. The Defense Information Systems Agency of the US Department of…

  19. Library Micro-Computing, Vol. 2. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 19 articles pertaining to library microcomputing appear in this collection, the second of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…

  20. The Database Business: Managing Today--Planning for Tomorrow. Issues and Futures.

    ERIC Educational Resources Information Center

    Aitchison, T. M.; And Others

    1988-01-01

    Current issues and the future of the database business are discussed in five papers. Topics covered include aspects relating to the quality of database production; international ownership in the U.S. information marketplace; an overview of pricing strategies in the electronic information industry; and pricing issues from the viewpoints of online…

  1. Moderate pressure plasma source of nonthermal electrons

    NASA Astrophysics Data System (ADS)

    Gershman, S.; Raitses, Y.

    2018-06-01

    Plasma sources of electrons offer control of gas and surface chemistry without the need for complex vacuum systems. The plasma electron source presented here is based on a cold cathode glow discharge (GD) operating in a dc steady state mode in a moderate pressure range of 2–10 torr. Ion-induced secondary electron emission is the source of electrons accelerated to high energies in the cathode sheath potential. The source geometry is key to the availability and extraction of the nonthermal portion of the electron population. The source consists of a flat and a cylindrical electrode, 1 mm apart. Our estimates show that the length of the cathode sheath in the plasma source is commensurate (~0.5–1 mm) with the inter-electrode distance, so the GD operates in an obstructed regime without a positive column. Estimates of the electron energy relaxation confirm the non-local nature of this GD; hence the nonthermal portion of the electron population is available for extraction outside of the source. The use of a cylindrical anode presents a simple and promising method of extracting the high energy portion of the electron population. Langmuir probe measurements and optical emission spectroscopy confirm the presence of electrons with energies ~15 eV outside of the source. These electrons become available for surface modification and radical production outside of the source. The extraction of electrons of specific energies by varying the anode geometry opens exciting opportunities for future exploration.

  2. Femtosecond laser-electron x-ray source

    DOEpatents

    Hartemann, Frederic V.; Baldis, Hector A.; Barty, Chris P.; Gibson, David J.; Rupp, Bernhard

    2004-04-20

    A femtosecond laser-electron X-ray source. A high-brightness relativistic electron injector produces an electron beam pulse train. A system accelerates the electron beam pulse train. The femtosecond laser-electron X-ray source includes a high intra-cavity power, mode-locked laser and an x-ray optics system.

  3. 75 FR 4827 - Submission for OMB Review; Comment Request Clinical Trials Reporting Program (CTRP) Database (NCI)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-29

    ...; Comment Request Clinical Trials Reporting Program (CTRP) Database (NCI) Summary: Under the provisions of... Collection: Title: Clinical Trials Reporting Program (CTRP) Database. Type of Information Collection Request... Program (CTRP) Database, to serve as a single, definitive source of information about all NCI-supported...

  4. 49 CFR 234.315 - Electronic recordkeeping.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 4 2012-10-01 2012-10-01 false Electronic recordkeeping. 234.315 Section 234.315 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad adequately limits and controls accessibility to the records retained in its electronic database...

  5. 49 CFR 234.315 - Electronic recordkeeping.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 4 2014-10-01 2014-10-01 false Electronic recordkeeping. 234.315 Section 234.315 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad adequately limits and controls accessibility to the records retained in its electronic database...

  6. 49 CFR 234.315 - Electronic recordkeeping.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 4 2013-10-01 2013-10-01 false Electronic recordkeeping. 234.315 Section 234.315 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... railroad adequately limits and controls accessibility to the records retained in its electronic database...

  7. BioMart: a data federation framework for large collaborative projects.

    PubMed

    Zhang, Junjun; Haider, Syed; Baran, Joachim; Cros, Anthony; Guberman, Jonathan M; Hsu, Jack; Liang, Yong; Yao, Long; Kasprzyk, Arek

    2011-01-01

    BioMart is a freely available, open source, federated database system that provides a unified access to disparate, geographically distributed data sources. It is designed to be data agnostic and platform independent, such that existing databases can easily be incorporated into the BioMart framework. BioMart allows databases hosted on different servers to be presented seamlessly to users, facilitating collaborative projects between different research groups. BioMart contains several levels of query optimization to efficiently manage large data sets and offers a diverse selection of graphical user interfaces and application programming interfaces to ensure that queries can be performed in whatever manner is most convenient for the user. The software has now been adopted by a large number of different biological databases spanning a wide range of data types and providing a rich source of annotation available to bioinformaticians and biologists alike.
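
    The federation idea above — one query dispatched to several independent sources, results merged for the user — can be sketched with in-memory "marts" standing in for remote servers. The data and the merge policy below are illustrative, not BioMart's actual API.

```python
# Conceptual sketch of BioMart-style federation: a single query is
# dispatched to several sources and the results are merged seamlessly.
# The in-memory "marts" below stand in for remote servers.

mart_ensembl = [
    {"gene": "BRCA2", "chromosome": "13"},
    {"gene": "TP53", "chromosome": "17"},
]
mart_annotations = [
    {"gene": "TP53", "go_term": "DNA damage response"},
]

def federated_query(gene, marts):
    """Collect every attribute any mart holds for the gene."""
    merged = {"gene": gene}
    for mart in marts:
        for record in mart:
            if record["gene"] == gene:
                merged.update(record)
    return merged

result = federated_query("TP53", [mart_ensembl, mart_annotations])
print(result)
```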

  8. Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi

    A code system for the Accelerator Driven System (ADS) has been under development for analyzing the dynamic behavior of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, computed with PHITS, for investigating accelerator-related effects such as changes in beam energy, beam diameter, void generation, and target level. This analysis method may introduce some errors into the dynamics calculations, since the neutron source data derived from the database carry errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.

  9. [Data sources, the data used, and the modality for collection].

    PubMed

    Mercier, G; Costa, N; Dutot, C; Riche, V-P

    2018-03-01

    The hospital costing process requires access to various sources of data. Whether a micro-costing or a gross-costing approach is used, the choice of methodology is based on a compromise between the cost of data collection, data accuracy, and data transferability. This work describes the data sources available in France and their access modalities, as well as the main advantages and shortcomings of: (1) local unit costs, (2) hospital analytical accounting, (3) the Angers database, (4) the National Health Cost Studies, (5) the INTER CHR/U databases, (6) the Program for Medicalizing Information Systems, and (7) the public health insurance databases. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  10. Comparing Global Influence: China’s and U.S. Diplomacy, Foreign Aid, Trade, and Investment in the Developing World

    DTIC Science & Technology

    2008-08-15

    [Figure residue: charts of U.S. and China's exports of goods to the world, 1995-2007, in $billion. Source: United Nations, COMTRADE Database.]

  11. Integrated Electronic Health Record Database Management System: A Proposal.

    PubMed

    Schiza, Eirini C; Panos, George; David, Christiana; Petkov, Nicolai; Schizas, Christos N

    2015-01-01

    eHealth has attained significant importance as a new mechanism for health management and medical practice. However, the technological growth of eHealth is still limited by the technical expertise needed to develop appropriate products. Researchers are constantly in a process of developing and testing new software for building and handling Clinical Medical Records, now renamed Electronic Health Record (EHR) systems; EHRs take full advantage of technological developments and at the same time provide increased diagnostic and treatment capabilities to doctors. A step to be considered for facilitating this aim is to involve the doctor more actively in building the fundamental steps for creating the EHR system and database. A global clinical patient record database management system can be electronically created by simulating real-life medical practice health record taking and by utilizing and analyzing the recorded parameters. This proposed approach demonstrates the effective implementation of a universal classic medical record in electronic form, a procedure by which clinicians are led to utilize algorithms and intelligent systems for their differential diagnosis, final diagnosis, and treatment strategies.

  12. Improving Cardiac Surgical Site Infection Reporting and Prevention By Using Registry Data for Case Ascertainment.

    PubMed

    Nayar, Vaidehi; Kennedy, Andrea; Pappas, Janine; Atchley, Krista D; Field, Cynthia; Smathers, Sarah; Teszner, Eva E; Sammons, Julia S; Coffin, Susan E; Gerber, Jeffrey S; Spray, Thomas L; Steven, James M; Bell, Louis M; Forrer, Joan; Gonzalez, Fernando; Chi, Albert; Nieczpiel, William J; Martin, John N; Gaynor, J William

    2016-01-01

    The use of administrative data for surgical site infection (SSI) surveillance leads to inaccurate reporting of SSI rates [1]. A quality improvement (QI) initiative was conducted linking clinical registry and administrative databases to improve reporting and reduce the incidence of SSI [2]. At our institution, The Society of Thoracic Surgeons Congenital Heart Surgery Database (STS-CHSD) and infection surveillance database (ISD) were linked to the enterprise data warehouse containing electronic health record (EHR) billing data. A data visualization tool was created to (1) use the STS-CHSD for case ascertainment, (2) resolve discrepancies between the databases, and (3) assess impact of QI initiatives, including wound alert reports, bedside reviews, prevention bundles, and billing coder education. Over the 24-month study period, 1,715 surgical cases were ascertained according to the STS-CHSD clinical criteria, with 23 SSIs identified through the STS-CHSD, 20 SSIs identified through the ISD, and 32 SSIs identified through the billing database. The rolling 12-month STS-CHSD SSI rate decreased from 2.73% (21 of 769 as of January 2013) to 1.11% (9 of 813 as of December 2014). Thirty reporting discrepancies were reviewed to ensure accuracy. Workflow changes facilitated communication and improved adjudication of suspected SSIs. Billing coder education increased coding accuracy and narrowed variation between the 3 SSI sources. The data visualization tool demonstrated temporal relationships between QI initiatives and SSI rate reductions. Linkage of registry and infection control surveillance data with the EHR improves SSI surveillance. The visualization tool and workflow changes facilitated communication, SSI adjudication, and assessment of the QI initiatives. Implementation of these initiatives was associated with decreased SSI rates. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
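The rolling 12-month rate quoted above (2.73% falling to 1.11%) is a trailing-window proportion over the linked case records. A minimal sketch of that metric, with a hypothetical `(surgery_date, had_ssi)` record layout standing in for the linked registry/billing data:

```python
from datetime import date, timedelta

def rolling_ssi_rate(cases, as_of, window_days=365):
    """Trailing-window SSI rate (%) over linked surgical case records.

    `cases` is a list of (surgery_date, had_ssi) tuples -- a hypothetical
    layout standing in for records linked across registry and billing data.
    """
    start = as_of - timedelta(days=window_days)
    in_window = [had_ssi for day, had_ssi in cases if start < day <= as_of]
    return 100.0 * sum(in_window) / len(in_window) if in_window else 0.0

# Toy data mirroring the text: 769 cases in the window, 21 with an SSI.
cases = ([(date(2012, 6, 1), True)] * 21
         + [(date(2012, 6, 1), False)] * 748)
print(round(rolling_ssi_rate(cases, date(2013, 1, 31)), 2))  # 2.73
```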

  13. Materials Characterization at Utah State University: Facilities and Knowledge-base of Electronic Properties of Materials Applicable to Spacecraft Charging

    NASA Technical Reports Server (NTRS)

    Dennison, J. R.; Thomson, C. D.; Kite, J.; Zavyalov, V.; Corbridge, Jodie

    2004-01-01

    In an effort to improve the reliability and versatility of spacecraft charging models designed to assist spacecraft designers in accommodating and mitigating the harmful effects of charging on spacecraft, the NASA Space Environments and Effects (SEE) Program has funded development of facilities at Utah State University for the measurement of the electronic properties of both conducting and insulating spacecraft materials. We present here an overview of our instrumentation and capabilities, which are particularly well suited to study electron emission as related to spacecraft charging. These measurements include electron-induced secondary and backscattered yields, spectra, and angular resolved measurements as a function of incident energy, species and angle, plus investigations of ion-induced electron yields, photoelectron yields, sample charging and dielectric breakdown. Extensive surface science characterization capabilities are also available to fully characterize the samples in situ. Our measurements for a wide array of conducting and insulating spacecraft materials have been incorporated into the SEE Charge Collector Knowledge-base as a Database of Electronic Properties of Materials Applicable to Spacecraft Charging. This Database provides an extensive compilation of electronic properties, together with parameterization of these properties in a format that can be easily used with existing spacecraft charging engineering tools and with next generation plasma, charging, and radiation models. Tabulated properties in the Database include: electron-induced secondary electron yield, backscattered yield and emitted electron spectra; He, Ar and Xe ion-induced electron yields and emitted electron spectra; photoyield and solar emittance spectra; and materials characterization including reflectivity, dielectric constant, resistivity, arcing, optical microscopy images, scanning electron micrographs, scanning tunneling microscopy images, and Auger electron spectra. 
Further details of the instrumentation used for insulator measurements and representative measurements of insulating spacecraft materials are provided in other Spacecraft Charging Conference presentations. The NASA Space Environments and Effects Program, the Air Force Office of Scientific Research, the Boeing Corporation, NASA Graduate Research Fellowships, and the NASA Rocky Mountain Space Grant Consortium have provided support.

  14. Fast pulsed operation of a small non-radioactive electron source with continuous emission current control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochems, P.; Kirk, A. T.; Bunert, E.

    Non-radioactive electron sources are of great interest in any application requiring the emission of electrons at atmospheric pressure, as they offer better control over emission parameters than radioactive electron sources and are not subject to legal restrictions. Recently, we published a simple electron source consisting only of a vacuum housing, a filament, and a single control grid. In this paper, we present improved control electronics that utilize this control grid to focus and defocus the electron beam, thus pulsing the electron emission at atmospheric pressure. This allows short emission pulses and excellent stability of the emitted electron current due to continuous control, during both pulsed and continuous operation. As an application example, this electron source is coupled to an ion mobility spectrometer. Here, the pulsed electron source allows experiments on gas phase ion chemistry (e.g., ion generation and recombination kinetics) and can even remove the need for a traditional ion shutter.

  15. Focused electron and ion beam systems

    DOEpatents

    Leung, Ka-Ngo; Reijonen, Jani; Persaud, Arun; Ji, Qing; Jiang, Ximan

    2004-07-27

    An electron beam system is based on a plasma generator in a plasma ion source with an accelerator column. The electrons are extracted from a plasma cathode in a plasma ion source, e.g. a multicusp plasma ion source. The beam can be scanned in both the x and y directions, and the system can be operated with multiple beamlets. A compact focused ion or electron beam system has a plasma ion source and an all-electrostatic beam acceleration and focusing column. The ion source is a small chamber with the plasma produced by radio-frequency (RF) induction discharge. The RF antenna is wound outside the chamber and connected to an RF supply. Ions or electrons can be extracted from the source. A multi-beam system has several sources of different species and an electron beam source.

  16. Move Over, Word Processors--Here Come the Databases.

    ERIC Educational Resources Information Center

    Olds, Henry F., Jr.; Dickenson, Anne

    1985-01-01

    Discusses the use of beginning, intermediate, and advanced databases for instructional purposes. A table listing seven databases with information on ease of use, smoothness of operation, data capacity, speed, source, and program features is included. (JN)

  17. SPOT 5/HRS: A Key Source for Navigation Database

    DTIC Science & Technology

    2003-09-02

    [Report-documentation-page fragments only.] Subtitle: SPOT 5 / HRS: A Key Source for Navigation Database. Surviving slide text mentions producing navigation data from HRS in partnership with IGN (French …

  18. Database of Sources of Environmental Releases of Dioxin-Like Compounds in the United States

    EPA Science Inventory

    The Database of Sources of Environmental Releases of Dioxin-like Compounds in the United States (US)…

  JBioWH: an open-source Java framework for bioinformatics data integration

    PubMed Central

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh PMID:23846595

  19. JBioWH: an open-source Java framework for bioinformatics data integration.

    PubMed

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh.
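The cross-source querying the abstract describes can be illustrated with an in-memory relational sketch. Here `sqlite3` stands in for JBioWH's MySQL scheme, and the table and column names are invented for illustration, not JBioWH's actual schema:

```python
import sqlite3

# Two tables standing in for data parsed from different public sources
# (e.g. a gene list and a pathway membership table); names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE gene (id INTEGER PRIMARY KEY, symbol TEXT);
    CREATE TABLE pathway_gene (pathway TEXT, gene_id INTEGER);
    INSERT INTO gene VALUES (1, 'TP53'), (2, 'EGFR'), (3, 'BRCA1');
    INSERT INTO pathway_gene VALUES ('p53 signaling', 1),
                                    ('ErbB signaling', 2);
""")
# A streamlined, task-specific data set: genes joined to their pathways.
rows = con.execute("""
    SELECT g.symbol, p.pathway
    FROM gene g JOIN pathway_gene p ON p.gene_id = g.id
    ORDER BY g.symbol
""").fetchall()
print(rows)  # [('EGFR', 'ErbB signaling'), ('TP53', 'p53 signaling')]
```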

  1. Advanced Clinical Decision Support for Vaccine Adverse Event Detection and Reporting.

    PubMed

    Baker, Meghan A; Kaelber, David C; Bar-Shain, David S; Moro, Pedro L; Zambarano, Bob; Mazza, Megan; Garcia, Crystal; Henry, Adam; Platt, Richard; Klompas, Michael

    2015-09-15

    Reporting of adverse events (AEs) following vaccination can help identify rare or unexpected complications of immunizations and aid in characterizing potential vaccine safety signals. We developed an open-source, generalizable clinical decision support system called Electronic Support for Public Health-Vaccine Adverse Event Reporting System (ESP-VAERS) to assist clinicians with AE detection and reporting. ESP-VAERS monitors patients' electronic health records for new diagnoses, changes in laboratory values, and new allergies following vaccinations. When suggestive events are found, ESP-VAERS sends the patient's clinician a secure electronic message with an invitation to affirm or refute the message, add comments, and submit an automated, prepopulated electronic report to VAERS. High-probability AEs are reported automatically if the clinician does not respond. We implemented ESP-VAERS in December 2012 throughout the MetroHealth System, an integrated healthcare system in Ohio. We queried the VAERS database to determine MetroHealth's baseline reporting rates from January 2009 to March 2012 and then assessed changes in reporting rates with ESP-VAERS. In the 8 months following implementation, 91 622 vaccinations were given. ESP-VAERS sent 1385 messages to responsible clinicians describing potential AEs. Clinicians opened 1304 (94.2%) messages, responded to 209 (15.1%), and confirmed 16 for transmission to VAERS. An additional 16 high-probability AEs were sent automatically. Reported events included seizure, pleural effusion, and lymphocytopenia. The odds of a VAERS report submission during the implementation period were 30.2 (95% confidence interval, 9.52-95.5) times greater than the odds during the comparable preimplementation period. An open-source, electronic health record-based clinical decision support system can increase AE detection and reporting rates in VAERS. © The Author 2015. 
Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
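The detection step described above (flagging new diagnoses that appear after a vaccination) reduces to a risk-window check. A hedged sketch, in which the 42-day window, the field layout, and the diagnosis codes are illustrative assumptions rather than ESP-VAERS's actual rules:

```python
from datetime import date, timedelta

RISK_WINDOW = timedelta(days=42)  # illustrative, not ESP-VAERS's actual value

def flag_candidate_aes(vaccination_date, diagnoses):
    """Return diagnoses first recorded inside the post-vaccination risk window.

    `diagnoses` is a list of (code, onset_date) pairs -- a hypothetical
    EHR extract, not the system's real data model.
    """
    end = vaccination_date + RISK_WINDOW
    return [(code, onset) for code, onset in diagnoses
            if vaccination_date < onset <= end]

diagnoses = [("R56.9 (seizure)", date(2013, 1, 10)),
             ("J90 (pleural effusion)", date(2013, 4, 1))]
# Only the seizure, 7 days after a 2013-01-03 vaccination, is flagged.
print(flag_candidate_aes(date(2013, 1, 3), diagnoses))
```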

  2. The implications of starvation induced psychological changes for the ethical treatment of hunger strikers.

    PubMed

    Fessler, D M T

    2003-08-01

    To evaluate existing ethical guidelines for the treatment of hunger strikers in light of findings on psychological changes that accompany the cessation of food intake. Electronic databases were searched for (a) editorials and ethical proclamations on hunger strikers and their treatment; (b) studies of voluntary and involuntary starvation, and (c) legal cases pertaining to hunger striking. Additional studies were gathered in a snowball fashion from the published material cited in these databases. Material was included if it (a) provided ethical or legal guidelines; (b) shed light on psychological changes accompanying starvation, or (c) illustrated the practice of hunger striking. Authors' observations, opinions, and conclusions were noted. Although the heterogeneous nature of the sources precluded statistical analysis, starvation appears to be accompanied by marked psychological changes. Some changes clearly impair competence, in which case physicians are advised to follow advance directives obtained early in the hunger strike. More problematic are increases in impulsivity and aggressivity, changes which, while not impairing competence, enhance the likelihood that patients will starve themselves to death.

  3. Giving raw data a chance to talk: a demonstration of exploratory visual analytics with a pediatric research database using Microsoft Live Labs Pivot to promote cohort discovery, research, and quality assessment.

    PubMed

    Viangteeravat, Teeradache; Nagisetty, Naga Satya V Rao

    2014-01-01

    Secondary use of large and open data sets provides researchers with an opportunity to address high-impact questions that would otherwise be prohibitively expensive and time consuming to study. Despite the availability of data, generating hypotheses from huge data sets is often challenging, and the lack of complex analysis of data might lead to weak hypotheses. To overcome these issues and to assist researchers in building hypotheses from raw data, we are working on a visual and analytical platform called PRD Pivot. PRD Pivot is a de-identified pediatric research database designed to make secondary use of rich data sources, such as the electronic health record (EHR). The development of visual analytics using Microsoft Live Labs Pivot makes the process of data elaboration, information gathering, knowledge generation, and complex information exploration transparent to tool users and provides researchers with the ability to sort and filter by various criteria, which can lead to strong, novel hypotheses.

  4. Ontology-based data integration between clinical and research systems.

    PubMed

    Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich; Bürkle, Thomas; Ganslandt, Thomas

    2015-01-01

    Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it.
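The core idea (declarative transformation rules that are compiled into SQL rather than hand-written ETL jobs) can be sketched as follows; the rule structure and all table and column names are hypothetical, not the authors' actual ontology constructs:

```python
# One declarative mapping rule, reduced to a dict; a real system would store
# this in an ontology. All names here are hypothetical.
rule = {
    "target_table": "i2b2_observation",
    "source_table": "lab_results",
    "columns": {"concept_cd": "loinc_code", "nval_num": "value"},
    "filter": "value IS NOT NULL",
}

def generate_etl_sql(rule):
    """Compile one declarative transformation rule into an INSERT..SELECT."""
    cols = ", ".join(rule["columns"])            # target column names
    exprs = ", ".join(rule["columns"].values())  # source expressions
    return (f"INSERT INTO {rule['target_table']} ({cols}) "
            f"SELECT {exprs} FROM {rule['source_table']} "
            f"WHERE {rule['filter']}")

print(generate_etl_sql(rule))
```

The point of the abstraction is that the rule, not the SQL, is the maintained artifact; regenerating the ETL code after a schema or terminology change is then mechanical.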

  5. Consortia for Engineering, Science and Technology Libraries in India: A Case Study of INDEST Consortium

    NASA Astrophysics Data System (ADS)

    Pathak, S. K.; Deshpande, N. J.

    2007-10-01

    The present scenario of the INDEST Consortium among engineering, science and technology (including astronomy and astrophysics) libraries in India is discussed. The Indian National Digital Library in Engineering Sciences & Technology (INDEST) Consortium is a major initiative of the Ministry of Human Resource Development, Government of India. The INDEST Consortium provides access to 16 full text e-resources and 7 bibliographic databases for 166 institutions as members who are taking advantage of cost effective access to premier resources in engineering, science and technology, including astronomy and astrophysics. Member institutions can access over 6500 e-journals from 1092 publishers. Out of these, over 150 e-journals are exclusively for the astronomy and physics community. The current study also presents a comparative analysis of the key features of nine major services, viz. ACM Digital Library, ASCE Journals, ASME Journals, EBSCO Databases (Business Source Premier), Elsevier's Science Direct, Emerald Full Text, IEEE/IEE Electronic Library Online (IEL), ProQuest ABI/INFORM and Springer Verlag's Link. In this paper, the limitations of this consortium are also discussed.

  6. Giving Raw Data a Chance to Talk: A Demonstration of Exploratory Visual Analytics with a Pediatric Research Database Using Microsoft Live Labs Pivot to Promote Cohort Discovery, Research, and Quality Assessment

    PubMed Central

    Viangteeravat, Teeradache; Nagisetty, Naga Satya V. Rao

    2014-01-01

    Secondary use of large and open data sets provides researchers with an opportunity to address high-impact questions that would otherwise be prohibitively expensive and time consuming to study. Despite the availability of data, generating hypotheses from huge data sets is often challenging, and the lack of complex analysis of data might lead to weak hypotheses. To overcome these issues and to assist researchers in building hypotheses from raw data, we are working on a visual and analytical platform called PRD Pivot. PRD Pivot is a de-identified pediatric research database designed to make secondary use of rich data sources, such as the electronic health record (EHR). The development of visual analytics using Microsoft Live Labs Pivot makes the process of data elaboration, information gathering, knowledge generation, and complex information exploration transparent to tool users and provides researchers with the ability to sort and filter by various criteria, which can lead to strong, novel hypotheses. PMID:24808811

  7. Scale out databases for CERN use cases

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Lanza Garcia, Daniel; Surdy, Kacper

    2015-12-01

    Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.
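The workloads described (queries over large controls, logging and monitoring data sets) are what scan/aggregate engines such as Impala parallelize across data partitions. A toy sketch of the partial-aggregate-then-merge pattern, with an invented log schema:

```python
# Each partition holds (device, reading) rows; schema and names are made up,
# not CERN's actual accelerator log layout.
partitions = [
    [("magnet_1", 2.0), ("magnet_2", 3.5)],
    [("magnet_1", 4.0), ("magnet_2", 0.5)],
]

def partial_agg(rows):
    """Per-partition partial aggregate: (sum, count) per device."""
    out = {}
    for device, value in rows:
        s, c = out.get(device, (0.0, 0))
        out[device] = (s + value, c + 1)
    return out

def merge(partials):
    """Combine partial aggregates into global per-device averages."""
    total = {}
    for part in partials:
        for device, (s, c) in part.items():
            ts, tc = total.get(device, (0.0, 0))
            total[device] = (ts + s, tc + c)
    return {d: s / c for d, (s, c) in total.items()}

print(merge(map(partial_agg, partitions)))  # {'magnet_1': 3.0, 'magnet_2': 2.0}
```

In a real scale-out engine the `partial_agg` step runs on the node storing each partition, so the scan parallelizes with the data.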

  8. Making the pediatric perioperative surgical home come to life by leveraging existing health information technology.

    PubMed

    Leahy, Izabela C; Borczuk, Rachel; Ferrari, Lynne R

    2017-06-01

    To design a patient data dashboard for the Department of Anesthesiology, Perioperative and Pain Medicine at Boston Children's Hospital that supports care integration across the healthcare system as described by the pediatric perioperative surgical home (PPSH) initiative. By using 360 Technology, patient data were automatically pulled from all available Electronic Health Record sources from 2005 to the present. The PPSH dashboard described in this report provides a guide for implementation of PPSH Clinical Care Pathways. The dashboard integrates several databases to allow for visual longitudinal tracking of patient care, outcomes, and cost. The integration of electronic information provided the ability to display, compare, and analyze selected PPSH metrics in real time. By utilizing the PPSH dashboard format, an automated, integrated clinical and financial health data profile for a specific patient population may improve clinicians' ability to make a comprehensive assessment of all care elements. This more global clinical thinking has the potential to produce bottom-up, evidence-based healthcare reform. The experience with the PPSH dashboard provides solid evidence for the use of an integrated Electronic Health Record to improve patient outcomes and decrease cost.

  9. SU-E-I-43: Photoelectric Cross Section Revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haga, A; Nakagawa, K; Kotoku, J

    2015-06-15

    Purpose: The importance of precision in photoelectric cross-section values is increasing for recently developed technologies such as dual-energy computed tomography, in which some reconstruction algorithms require the energy dependence of the photo-absorption in each material composition of the human body. In this study, we revisited the photoelectric cross-section calculation with a self-consistent relativistic Hartree-Fock (HF) atomic model and compared it with the widely distributed "XCOM database" of the National Institute of Standards and Technology, which was evaluated with a local-density approximation for the electron-exchange (Fock) potential. Methods: The photoelectric cross section can be calculated from the electron wave functions in the initial atomic state (bound electron) and the final continuum state (photoelectron). These electron states were constructed with a self-consistent HF calculation, in which the repulsive Coulomb potential from the electron charge distribution (Hartree term) and the electron exchange potential with the full electromagnetic interaction (Fock term) were included for the electron-electron interaction. The photoelectric cross sections were evaluated for He (Z=2), Be (Z=4), C (Z=6), O (Z=8), and Ne (Z=10) in the energy range 10 keV to 1 MeV. The result was compared with the XCOM database. Results: The difference in the photoelectric cross section between the present calculation and the XCOM database was 8% at a maximum (at 10 keV for Be). The agreement tends to improve as the atomic number increases. The contribution from each atomic shell shows a considerable discrepancy with the XCOM database except for the K-shell. However, because the photoelectric cross section arising from the K-shell is dominant, the net photoelectric cross section was almost insensitive to the different handling of the Fock potential. Conclusion: The photoelectric cross-section program has been developed based on a fully self-consistent relativistic HF atomic model. 
Owing to the small effect of the Fock potential on K-shell electrons, the difference from the XCOM database was limited to 1% to 8% for low-Z elements in the 10 keV-1 MeV energy range. This work was partly supported by the JSPS Core-to-Core Program (No. 23003).
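The quantity being computed can be written schematically. The following is a standard length-gauge, dipole-approximation photoionization expression (atomic units) of the kind such Hartree-Fock calculations evaluate, shown only for orientation and not as the authors' exact working equations:

```latex
% Schematic only: Fermi's-golden-rule photoionization cross section in the
% dipole approximation (atomic units), with \psi_i the bound HF orbital and
% \psi_f the continuum photoelectron state computed in the same potential.
\sigma(\omega) \;=\; 4\pi^{2}\alpha\,\omega
  \sum_{f}\bigl|\langle \psi_{f}\,\vert\,\hat{\epsilon}\cdot\mathbf{r}\,\vert\,\psi_{i}\rangle\bigr|^{2}\,
  \delta\!\left(E_{f}-E_{i}-\omega\right)
```

The choice of exchange treatment (full Fock term versus a local-density approximation) enters through the potential in which both $\psi_i$ and $\psi_f$ are computed, which is where the present calculation and XCOM differ.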

  10. Electronic Strategies To Manage Key Relationships.

    ERIC Educational Resources Information Center

    Carr, Nora

    2003-01-01

    Describes how Charlotte-Mecklenburg (North Carolina) district used a relational database, e-mail, electronic newsletters, cable television, telecommunications, and the Internet to enhance communications with their constituencies. (PKP)

  11. OrChem - An open source chemistry search engine for Oracle(R).

    PubMed

    Rijnbeek, Mark; Steinbeck, Christoph

    2009-10-22

    Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net.
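Similarity searching with a cut-off, as described, is conventionally done by comparing molecular fingerprints with the Tanimoto coefficient. A minimal sketch using plain integers as toy fingerprints (real fingerprints, e.g. CDK's, hash substructure patterns into bitsets of roughly a thousand bits):

```python
def bitcount(x: int) -> int:
    """Population count of an int used as a fingerprint bitset."""
    return bin(x).count("1")

def tanimoto(fp_a: int, fp_b: int) -> float:
    """Tanimoto coefficient: shared bits / union bits."""
    union = bitcount(fp_a | fp_b)
    return bitcount(fp_a & fp_b) / union if union else 0.0

def similarity_search(query_fp, database, cutoff=0.7):
    """Return (id, score) pairs scoring at or above the cut-off, best first."""
    hits = [(cid, tanimoto(query_fp, fp)) for cid, fp in database.items()]
    return sorted((h for h in hits if h[1] >= cutoff),
                  key=lambda h: h[1], reverse=True)

# Toy 8-bit fingerprints and identifiers, invented for illustration.
db = {"mol-1": 0b1111_0000, "mol-2": 0b1110_0001, "mol-3": 0b0000_1111}
print(similarity_search(0b1111_0001, db, cutoff=0.5))
# -> [('mol-1', 0.8), ('mol-2', 0.8)]
```

The cut-off is what makes sub-second response feasible at scale: candidate pruning by bit counts can discard most of the database before any full comparison is made.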

  12. Central Appalachian basin natural gas database: distribution, composition, and origin of natural gases

    USGS Publications Warehouse

    Román Colón, Yomayra A.; Ruppert, Leslie F.

    2015-01-01

    The U.S. Geological Survey (USGS) has compiled a database consisting of three worksheets of central Appalachian basin natural gas analyses and isotopic compositions from published and unpublished sources of 1,282 gas samples from Kentucky, Maryland, New York, Ohio, Pennsylvania, Tennessee, Virginia, and West Virginia. The database includes field and reservoir names, well and State identification number, selected geologic reservoir properties, and the composition of natural gases (methane; ethane; propane; iso-butane [i-butane]; normal butane [n-butane]; iso-pentane [i-pentane]; normal pentane [n-pentane]; cyclohexane; and hexanes). In the first worksheet, location and American Petroleum Institute (API) numbers from public or published sources are provided for 1,231 of the 1,282 gas samples. A second worksheet of 186 gas samples was compiled from published sources and augmented with public location information and contains carbon, hydrogen, and nitrogen isotopic measurements of natural gas. The third worksheet is a key for all abbreviations in the database. The database can be used to better constrain the stratigraphic distribution, composition, and origin of natural gas in the central Appalachian basin.

  13. Missing Modality Transfer Learning via Latent Low-Rank Constraint.

    PubMed

    Ding, Zhengming; Shao, Ming; Fu, Yun

    2015-11-01

    Transfer learning is usually exploited to leverage previously well-learned source domain for evaluating the unknown target domain; however, it may fail if no target data are available in the training stage. This problem arises when the data are multi-modal. For example, the target domain is in one modality, while the source domain is in another. To overcome this, we first borrow an auxiliary database with complete modalities, then consider knowledge transfer across databases and across modalities within databases simultaneously in a unified framework. The contributions are threefold: 1) a latent factor is introduced to uncover the underlying structure of the missing modality from the known data; 2) transfer learning in two directions allows the data alignment between both modalities and databases, giving rise to a very promising recovery; and 3) an efficient solution with theoretical guarantees to the proposed latent low-rank transfer learning algorithm. Comprehensive experiments on multi-modal knowledge transfer with missing target modality verify that our method can successfully inherit knowledge from both auxiliary database and source modality, and therefore significantly improve the recognition performance even when test modality is inaccessible in the training stage.
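The latent low-rank machinery referred to builds on the latent low-rank representation (LatLRR) problem, shown schematically below, with the caveat that the paper's transfer variant adds cross-database and cross-modality coupling terms not reproduced here:

```latex
% Base latent low-rank representation (LatLRR) problem: X is the observed
% data matrix, Z the low-rank representation, L the latent (hidden-effect)
% component, E sparse noise, and \|\cdot\|_* the nuclear norm.
\min_{Z,\,L,\,E}\; \|Z\|_{*} + \|L\|_{*} + \lambda\|E\|_{1}
\quad \text{s.t.} \quad X = XZ + LX + E
```

The latent term $LX$ is what lets information about unobserved (missing-modality) structure be recovered from the observed data, which is the role the "latent factor" plays in the abstract's contribution (1).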

  14. ISC-GEM: Global Instrumental Earthquake Catalogue (1900-2009), I. Data collection from early instrumental seismological bulletins

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Harris, James; Villaseñor, Antonio; Storchak, Dmitry A.; Engdahl, E. Robert; Lee, William H. K.

    2015-02-01

    In order to produce a new global reference earthquake catalogue based on instrumental data covering the last 100+ years of global earthquakes, we collected, digitized and processed an unprecedented amount of printed early instrumental seismological bulletins with fundamental parametric data for relocating and reassessing the magnitude of earthquakes that occurred in the period between 1904 and 1970. This effort was necessary in order to produce an earthquake catalogue with locations and magnitudes as homogeneous as possible. The parametric data obtained and processed during this work fills a large gap in electronic bulletin data availability. This new dataset complements the data publicly available in the International Seismological Centre (ISC) Bulletin starting in 1964. With respect to the amplitude-period data necessary to re-compute magnitude, we searched through the global collection of printed bulletins stored at the ISC and entered relevant station parametric data into the database. As a result, over 110,000 surface and body-wave amplitude-period pairs for re-computing standard magnitudes MS and mb were added to the ISC database. To facilitate earthquake relocation, different sources have been used to retrieve body-wave arrival times. These were entered into the database using optical character recognition methods (International Seismological Summary, 1918-1959) or manually (e.g., British Association for the Advancement of Science, 1913-1917). In total, ∼1,000,000 phase arrival times were added to the ISC database for large earthquakes that occurred in the time interval 1904-1970. The selection of earthquakes for which data was added depends on time period and magnitude: for the early years of last century (until 1917) only very large earthquakes were selected for processing (M ⩾ 7.5), whereas in the periods 1918-1959 and 1960-2009 the magnitude thresholds are 6.25 and 5.5, respectively. 
Such a selection was mainly dictated by limitations in time and funding. Although the newly available parametric data is only a subset of the station data available in the printed bulletins, its electronic availability will be important for any future study of earthquakes that occurred during the early instrumental period.
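The period-dependent selection thresholds described in the abstract can be sketched as a small lookup function. This is purely illustrative (the function name and structure are invented here, not part of any ISC tool):

```python
# Sketch of the magnitude selection thresholds described above: the minimum
# magnitude for which bulletin data were digitized depends on the year of
# the earthquake. Hypothetical helper for illustration only.
def magnitude_threshold(year: int) -> float:
    """Minimum magnitude processed for a given year (1904-2009)."""
    if 1904 <= year <= 1917:
        return 7.5   # early instrumental period: only very large events
    if 1918 <= year <= 1959:
        return 6.25  # ISS era, arrivals digitized via OCR
    if 1960 <= year <= 2009:
        return 5.5   # overlaps the electronic ISC Bulletin (from 1964)
    raise ValueError("year outside the 1904-2009 selection window")

print(magnitude_threshold(1912))  # -> 7.5
```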

  15. AccuNet/AP (Associated Press) Multimedia Archive

    ERIC Educational Resources Information Center

    Young, Terrence E., Jr.

    2004-01-01

    The AccuNet/AP Multimedia Archive is an electronic library containing the AP's current photos and a selection of pictures from their enormous print and negative library, as well as text and graphic material. It is composed of two photo databases as well as graphics, text, and audio databases. The features of this database are briefly described in…

  16. The Wannabee Culture: Why No-One Does What They Used To.

    ERIC Educational Resources Information Center

    Dixon, Anne

    1998-01-01

    Electronic publishing has been an agent for change in not just how one publishes but in what one publishes. Describes HyperCite, a joint project with the Institution of Electrical Engineers (IEE) to create INSPEC database. Highlights include the database; the research phase (cross database searching and new interface); and what and how much was…

  17. Library Micro-Computing, Vol. 1. Reprints from the Best of "ONLINE" [and]"DATABASE."

    ERIC Educational Resources Information Center

    Online, Inc., Weston, CT.

    Reprints of 18 articles pertaining to library microcomputing appear in this collection, the first of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1) an integrated library…

  18. Patterns of Undergraduates' Use of Scholarly Databases in a Large Research University

    ERIC Educational Resources Information Center

    Mbabu, Loyd Gitari; Bertram, Albert; Varnum, Ken

    2013-01-01

    Authentication data was utilized to explore undergraduate usage of subscription electronic databases. These usage patterns were linked to the information literacy curriculum of the library. The data showed that out of the 26,208 enrolled undergraduate students, 42% of them accessed a scholarly database at least once in the course of the entire…

  19. The Research Potential of the Electronic OED Database at the University of Waterloo: A Case Study.

    ERIC Educational Resources Information Center

    Berg, Donna Lee

    1991-01-01

    Discusses the history and structure of the online database of the second edition of the Oxford English Dictionary (OED) and the software tools developed at the University of Waterloo to manipulate the unusually complex database. Four sample searches that indicate some types of problems that might be encountered are appended. (DB)

  20. Development of a validated algorithm for the diagnosis of paediatric asthma in electronic medical records

    PubMed Central

    Cave, Andrew J; Davey, Christina; Ahmadi, Elaheh; Drummond, Neil; Fuentes, Sonia; Kazemi-Bajestani, Seyyed Mohammad Reza; Sharpe, Heather; Taylor, Matt

    2016-01-01

    An accurate estimation of the prevalence of paediatric asthma in Alberta and elsewhere is hampered by uncertainty regarding disease definition and diagnosis. Electronic medical records (EMRs) provide a rich source of clinical data from primary-care practices that can be used in better understanding the occurrence of the disease. The Canadian Primary Care Sentinel Surveillance Network (CPCSSN) database includes cleaned data extracted from the EMRs of primary-care practitioners. The purpose of the study was to develop and validate a case definition of asthma in children 1–17 who consult family physicians, in order to provide primary-care estimates of childhood asthma in Alberta as accurately as possible. The validation involved the comparison of the application of a theoretical algorithm (to identify patients with asthma) to a physician review of records included in the CPCSSN database (to confirm an accurate diagnosis). The comparison yielded 87.4% sensitivity, 98.6% specificity and a positive and negative predictive value of 91.2% and 97.9%, respectively, in the age group 1–17 years. The algorithm was also run for ages 3–17 and 6–17 years, and was found to have comparable statistical values. Overall, the case definition and algorithm yielded strong sensitivity and specificity metrics and was found valid for use in research in CPCSSN primary-care practices. The use of the validated asthma algorithm may improve insight into the prevalence, diagnosis, and management of paediatric asthma in Alberta and Canada. PMID:27882997

  1. Development of a validated algorithm for the diagnosis of paediatric asthma in electronic medical records.

    PubMed

    Cave, Andrew J; Davey, Christina; Ahmadi, Elaheh; Drummond, Neil; Fuentes, Sonia; Kazemi-Bajestani, Seyyed Mohammad Reza; Sharpe, Heather; Taylor, Matt

    2016-11-24

    An accurate estimation of the prevalence of paediatric asthma in Alberta and elsewhere is hampered by uncertainty regarding disease definition and diagnosis. Electronic medical records (EMRs) provide a rich source of clinical data from primary-care practices that can be used in better understanding the occurrence of the disease. The Canadian Primary Care Sentinel Surveillance Network (CPCSSN) database includes cleaned data extracted from the EMRs of primary-care practitioners. The purpose of the study was to develop and validate a case definition of asthma in children 1-17 who consult family physicians, in order to provide primary-care estimates of childhood asthma in Alberta as accurately as possible. The validation involved the comparison of the application of a theoretical algorithm (to identify patients with asthma) to a physician review of records included in the CPCSSN database (to confirm an accurate diagnosis). The comparison yielded 87.4% sensitivity, 98.6% specificity and a positive and negative predictive value of 91.2% and 97.9%, respectively, in the age group 1-17 years. The algorithm was also run for ages 3-17 and 6-17 years, and was found to have comparable statistical values. Overall, the case definition and algorithm yielded strong sensitivity and specificity metrics and was found valid for use in research in CPCSSN primary-care practices. The use of the validated asthma algorithm may improve insight into the prevalence, diagnosis, and management of paediatric asthma in Alberta and Canada.
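The validation metrics reported in this abstract (sensitivity, specificity, PPV, NPV) all derive from a 2x2 confusion matrix comparing the algorithm's classification against the physician review. A minimal sketch of that calculation follows; the counts used are made up for demonstration, since the study's actual cell counts are not given in the abstract:

```python
# Illustrative computation of validation metrics from a 2x2 confusion
# matrix (tp/fp/fn/tn counts are hypothetical, not the study's data).
def validation_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # true asthma cases the algorithm finds
    specificity = tn / (tn + fp)  # non-asthma records correctly excluded
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = validation_metrics(tp=90, fp=5, fn=10, tn=895)
print(round(sens, 3), round(spec, 3))  # -> 0.9 0.994
```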

  2. Alaska IPASS database preparation manual.

    Treesearch

    P. McHugh; D. Olson; C. Schallau

    1989-01-01

    Describes the data, their sources, and the calibration procedures used in compiling a database for the Alaska IPASS (interactive policy analysis simulation system) model. Although this manual is for Alaska, it provides generic instructions for analysts preparing databases for other geographical areas.

  3. Multiple elastic scattering of electrons in condensed matter

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2017-01-01

Since the 1940s, much attention has been devoted to the problem of accurate theoretical description of electron transport in condensed matter. The needed information for describing different aspects of the electron transport is the angular distribution of electron directions after multiple elastic collisions. This distribution can be expanded into a series of Legendre polynomials with coefficients A_l. In the present work, a database of these coefficients for all elements up to uranium (Z=92) and a dense grid of electron energies varying from 50 to 5000 eV has been created. The database makes possible the following applications: (i) accurate interpolation of coefficients A_l for any element and any energy from the above range, (ii) fast calculations of the differential and total elastic-scattering cross sections, (iii) determination of the angular distribution of directions after multiple collisions, (iv) calculations of the probability of elastic backscattering from solids, and (v) calculations of the calibration curves for determination of the inelastic mean free paths of electrons. The last two applications provide data with comparable accuracy to Monte Carlo simulations, yet the running time is decreased by several orders of magnitude. All of the above applications are implemented in the Fortran program MULTI_SCATT. Numerous illustrative runs of this program are described. Despite a relatively large volume of the database of coefficients A_l, the program MULTI_SCATT can be readily run on personal computers.
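The Legendre expansion described in this abstract can be sketched as follows: given tabulated coefficients A_l, the angular distribution is f(θ) = Σ_l A_l P_l(cos θ). The coefficient values below are arbitrary placeholders for illustration, not values from the actual database:

```python
import numpy as np

# Reconstructing an angular distribution from Legendre expansion
# coefficients A_l (hypothetical values, not from the database):
#   f(theta) = sum_l A_l * P_l(cos(theta))
A_l = [1.0, 0.5, 0.25]       # placeholder expansion coefficients
theta = np.deg2rad(60.0)     # scattering angle
x = np.cos(theta)            # Legendre polynomials are evaluated at cos(theta)

# legval(x, c) computes sum_l c[l] * P_l(x)
f_theta = np.polynomial.legendre.legval(x, A_l)
print(f_theta)
```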

  4. Compact x-ray source and panel

    DOEpatents

    Sampayon, Stephen E [Manteca, CA

    2008-02-12

    A compact, self-contained x-ray source, and a compact x-ray source panel having a plurality of such x-ray sources arranged in a preferably broad-area pixelized array. Each x-ray source includes an electron source for producing an electron beam, an x-ray conversion target, and a multilayer insulator separating the electron source and the x-ray conversion target from each other. The multi-layer insulator preferably has a cylindrical configuration with a plurality of alternating insulator and conductor layers surrounding an acceleration channel leading from the electron source to the x-ray conversion target. A power source is connected to each x-ray source of the array to produce an accelerating gradient between the electron source and x-ray conversion target in any one or more of the x-ray sources independent of other x-ray sources in the array, so as to accelerate an electron beam towards the x-ray conversion target. The multilayer insulator enables relatively short separation distances between the electron source and the x-ray conversion target so that a thin panel is possible for compactness. This is due to the ability of the plurality of alternating insulator and conductor layers of the multilayer insulators to resist surface flashover when sufficiently high acceleration energies necessary for x-ray generation are supplied by the power source to the x-ray sources.

  5. MyMolDB: a micromolecular database solution with open source and free components.

    PubMed

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications; the open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, built from open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is implemented mainly in the scripting language Python, with a web-based interface for compound management and searching. Almost all searches are executed as pure SQL on the database, exploiting the high performance of the database engine; impressive search speed has thus been achieved on large data sets, since no CPU-intensive external language is involved in the key steps of a search. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
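The pure-SQL search strategy described in this abstract can be illustrated with a toy example: if each structure is stored with a canonical identifier, an exact-structure search reduces to a single indexed SQL lookup done entirely inside the database engine. The table and column names here are invented for this sketch, not MyMolDB's actual schema:

```python
import sqlite3

# Toy schema: compounds keyed by a canonical identifier (InChIKey used here
# as an example identifier; MyMolDB's real schema may differ).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compounds (id INTEGER PRIMARY KEY, name TEXT, inchikey TEXT)")
conn.execute("CREATE INDEX idx_key ON compounds(inchikey)")
conn.executemany(
    "INSERT INTO compounds (name, inchikey) VALUES (?, ?)",
    [("ethanol", "LFQSCWFLJHTTHZ-UHFFFAOYSA-N"),
     ("benzene", "UHOVQNZJYSORNB-UHFFFAOYSA-N")],
)

# Exact search: the query structure is canonicalized to the same identifier,
# then matched with a plain indexed SQL lookup.
row = conn.execute(
    "SELECT name FROM compounds WHERE inchikey = ?",
    ("UHOVQNZJYSORNB-UHFFFAOYSA-N",),
).fetchone()
print(row[0])  # -> benzene
```

Substructure and similarity searches require fingerprint columns and more elaborate SQL, but the principle of keeping the matching inside the engine is the same.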

  6. ECR ion source with electron gun

    DOEpatents

    Xie, Z.Q.; Lyneis, C.M.

    1993-10-26

An Advanced Electron Cyclotron Resonance ion source having an electron gun for introducing electrons into the plasma chamber of the ion source is described. The ion source has an injection enclosure and a plasma chamber tank. The plasma chamber is defined by a plurality of longitudinal magnets. The electron gun injects electrons axially into the plasma chamber such that ionization within the plasma chamber occurs in the presence of the additional electrons produced by the electron gun. The electron gun has a cathode for emitting electrons therefrom which is heated by current supplied from an AC power supply while bias potential is provided by a bias power supply. A concentric inner conductor and outer conductor carry heating current to a carbon chuck and carbon pusher which hold the cathode in place and also heat the cathode. In the Advanced Electron Cyclotron Resonance ion source, the electron gun replaces the conventional first stage used in prior electron cyclotron resonance ion generators. 5 figures.

  7. Assessing the number of fire fatalities in a defined population.

    PubMed

    Jonsson, Anders; Bergqvist, Anders; Andersson, Ragnar

    2015-12-01

Fire-related fatalities and injuries have become a growing governmental concern in Sweden, and a national vision zero strategy has been adopted stating that nobody should be killed or seriously injured by fire. There is considerable uncertainty, however, regarding the numbers of both deaths and injuries due to fires. Different national sources present different numbers, even on deaths, which obstructs reliable surveillance of the problem over time. We assume the situation is similar in other countries. This study seeks to assess the true number of fire-related deaths in Sweden by combining sources, and to verify the coverage of each individual source. By doing so, we also wish to demonstrate the possibilities of improved surveillance practices. Data from three national sources were collected and matched: a special database on fatal fires held by the Swedish Civil Contingencies Agency (nationally responsible for fire prevention), a database on forensic medical examinations held by the National Board of Forensic Medicine, and the cause of death register held by the Swedish National Board of Health and Welfare. The results disclose considerable underreporting in the individual sources. The national database on fatal fires, which serves as the principal source for policy making on fire prevention matters, underestimates the true situation by 20%. Its coverage of residential fires appears to be better than that of other fires. Systematic safety work and informed policy-making presuppose access to correct and reliable numbers. By combining several different sources, as suggested in this study, the national database on fatal fires is now considerably improved and includes regular matching with complementary sources.
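The source-matching exercise described in this abstract can be sketched as a set union: each source contributes a set of case identifiers, the union approximates the combined count, and each source's coverage is its share of that union. The identifiers and counts below are invented for demonstration:

```python
# Simplified sketch of combining three national sources (hypothetical
# case identifiers, chosen so the first source covers 80% of the union).
fatal_fires_db = {"a", "b", "c", "d", "e", "f", "g", "h"}   # prevention agency
forensic_db    = {"b", "c", "d", "e", "f", "g", "h", "i"}   # forensic medicine
cause_of_death = {"a", "c", "d", "e", "f", "h", "i", "j"}   # death register

combined = fatal_fires_db | forensic_db | cause_of_death
coverage = len(fatal_fires_db) / len(combined)
print(f"{coverage:.0%}")  # -> 80%
```

In practice the matching step itself (linking records across registers by personal identifiers, dates, and circumstances) is the hard part; the union-and-coverage arithmetic above only illustrates how an undercount like the 20% figure is derived once records are linked.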

  8. Teaching Chemical Information in a Liberal Arts Curriculum

    NASA Astrophysics Data System (ADS)

    Ricker, Alison Scott; Thompson, Robert Q.

    1999-11-01

    We first offered Chemical Information as a one-credit, semester-long course in 1993 and have continued to team-teach it each fall. We offer this summary of our course as a model that might be adapted in other settings, acknowledging that no single course can adequately prepare chemists for the many challenges involved in finding, evaluating, and utilizing chemical information. The focus on information retrieval, evaluation, and presentation in a separate course has worked well for us, successfully integrating concepts of information literacy in a chemical context. We cover a wide array of topics, beginning with print and electronic resources on our campus and moving quickly to databases and other sources on the Internet. Searching CA Online via STN Express and STN Easy is emphasized more than any other single source. We have described the course in some detail elsewhere and give here a synopsis of our current approach and significant changes in the course over the last two years.

  9. The design and implementation of hydrographical information management system (HIMS)

    NASA Astrophysics Data System (ADS)

    Sui, Haigang; Hua, Li; Wang, Qi; Zhang, Anming

    2005-10-01

With the development of hydrographical work and information techniques, a large variety of hydrographical information, including electronic charts, documents and other materials, is widely used, and the traditional management mode and techniques are unsuitable for the development of the Chinese Marine Safety Administration Bureau (CMSAB). How to manage all kinds of hydrographical information has become an important and urgent problem. A number of advanced techniques, including GIS, RS, spatial database management and VR, are introduced to solve these problems. Design principles and key techniques of the HIMS are illustrated in detail, including a mixed mode based on B/S, C/S and stand-alone computer modes; multi-source and multi-scale data organization and management; multi-source data integration and diverse visualization of digital charts; and efficient security control strategies. Based on the above ideas and strategies, an integrated system named the Hydrographical Information Management System (HIMS) was developed. The HIMS has been applied in the Shanghai Marine Safety Administration Bureau and has received good evaluations.

  10. An Aeroacoustic Study of a Leading Edge Slat Configuration

    NASA Technical Reports Server (NTRS)

    Mendoza, J. M.; Brooks, T. F.; Humphreys, W. M., Jr.

    2002-01-01

    Aeroacoustic evaluations of high-lift devices have been carried out in the Quiet Flow Facility of the NASA Langley Research Center. The present paper describes detailed flow and acoustic measurements that have been made in order to better understand the noise generated from airflow over a wing leading edge slat configuration, and to possibly predict and reduce this noise source. The acoustic database is obtained by a moveable Small Aperture Directional Array of microphones designed to electronically steer to different portions of models under study. The slat is shown to be a uniform distributed noise source. The data was processed such that spectra and directivity were determined with respect to a one-foot span of slat. The spectra are normalized in various fashions to demonstrate slat noise character. In order to equate portions of the spectra to different slat noise components, trailing edge noise predictions using measured slat boundary layer parameters as inputs are compared to the measured slat noise spectra.

  11. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, requirement of reliable and robust image analysis algorithms, effective quality control of imaging facilities, and those related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges related to molecular imaging, we develop a flexible, transparent, and secure infrastructure, named MIRA, which stands for Molecular Imaging Repository and Analysis, primarily using the Python programming language, and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, image file format normalization, and storing and viewing different types of documents and multimedia files make MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, the centralized approach for MIRA facilitates on-the-fly access to all its features remotely through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. 
MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform, supporting rapid advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Electronic Approval: Another Step toward a Paperless Office.

    ERIC Educational Resources Information Center

    Blythe, Kenneth C.; Morrison, Dennis L.

    1992-01-01

    Pennsylvania State University's award-winning electronic approval system allows administrative documents to be electronically generated, approved, and updated in the university's central database. Campus business can thus be conducted faster, less expensively, more accurately, and with greater security than with traditional paper approval…

  13. 17 CFR 37.205 - Audit trail.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... trading; and (iv) Identification of each account to which fills are allocated. (3) Electronic analysis capability. A swap execution facility's audit trail program shall include electronic analysis capability with respect to all audit trail data in the transaction history database. Such electronic analysis capability...

  14. Measuring health system resource use for economic evaluation: a comparison of data sources.

    PubMed

    Pollicino, Christine; Viney, Rosalie; Haas, Marion

    2002-01-01

A key challenge for evaluators and health system planners is the identification, measurement and valuation of resource use for economic evaluation. Accurately capturing all significant resource use is particularly difficult in the Australian context, where there is no comprehensive database from which researchers can draw. Evaluators and health system planners need to consider different approaches to data collection for estimating resource use for economic evaluation, and the relative merits of the different data sources available. This paper illustrates the issues that arise with different data sources, drawing on a sub-sample of the data being collected for an economic evaluation. Specifically, it compares the use of Australia's largest administrative database on resource use, the Health Insurance Commission database, with the use of patient-supplied data. The extent of agreement and discrepancies between the two data sources is investigated. Findings from this study and recommendations as to how to deal with different data sources are presented.

  15. BAO Plate Archive Project: Digitization, Electronic Database and Research Programmes

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.; Abrahamyan, H. V.; Andreasyan, H. R.; Azatyan, N. M.; Farmanyan, S. V.; Gigoyan, K. S.; Gyulzadyan, M. V.; Khachatryan, K. G.; Knyazyan, A. V.; Kostandyan, G. R.; Mikayelyan, G. A.; Nikoghosyan, E. H.; Paronyan, G. M.; Vardanyan, A. V.

    2016-06-01

Astronomical plate archives, created on the basis of numerous observations at many observatories, are the most important part of the astronomical observational heritage. The Byurakan Astrophysical Observatory (BAO) plate archive consists of 37,000 photographic plates and films obtained at the 2.6m telescope, the 1m and 0.5m Schmidt-type telescopes and other smaller telescopes during 1947-1991. In 2002-2005, the 1874 plates of the famous Markarian Survey (also called the First Byurakan Survey, FBS) were digitized and the Digitized FBS (DFBS) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. A large project on the digitization of the whole BAO Plate Archive, the creation of an electronic database and its scientific usage was started in 2015. A Science Program Board has been created to evaluate the observing material, investigate new possibilities and propose new projects based on the combined usage of these observations together with other world databases. The Executing Team consists of 11 astronomers and 2 computer scientists and will use two EPSON Perfection V750 Pro scanners for the digitization; the Armenian Virtual Observatory (ArVO) database will be used to accommodate all new data. The project will run for three years, in 2015-2017, and the final result will be an electronic database and an online interactive sky map to be used for further research projects, mainly including high proper motion stars, variable objects and Solar System bodies.

  16. BAO Plate Archive digitization, creation of electronic database and its scientific usage

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg M.

    2015-08-01

Astronomical plate archives created on the basis of numerous observations at many observatories are an important part of the astronomical heritage. The Byurakan Astrophysical Observatory (BAO) plate archive consists of 37,500 photographic plates and films obtained at the 2.6m telescope, the 1m and 0.5m Schmidt telescopes and other smaller ones during 1947-1991. In 2002-2005, the 2000 plates of the famous Markarian Survey (First Byurakan Survey, FBS) were digitized and the Digitized FBS (DFBS, http://www.aras.am/Dfbs/dfbs.html) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. In 2015, we started a project on the digitization of the whole BAO Plate Archive, the creation of an electronic database and its scientific usage. A Science Program Board has been created to evaluate the observing material, investigate new possibilities and propose new projects based on the combined usage of these observations together with other world databases. The Executing Team consists of 9 astronomers and 3 computer scientists and will use two EPSON Perfection V750 Pro scanners for the digitization, as well as the Armenian Virtual Observatory (ArVO) database to accommodate all new data. The project will run for three years, in 2015-2017, and the final result will be an electronic database and an online interactive sky map to be used for further research projects.

  17. New DMSP Database of Precipitating Auroral Electrons and Ions.

    PubMed

    Redmon, Robert J; Denig, William F; Kilcommons, Liam M; Knipp, Delores J

    2017-08-01

Since the mid-1970s, the Defense Meteorological Satellite Program (DMSP) spacecraft have operated instruments for monitoring the space environment from low earth orbit. As the program evolved, so too have the measurement capabilities, such that modern DMSP spacecraft include a comprehensive suite of instruments providing estimates of precipitating electron and ion fluxes, cold/bulk plasma composition and moments, the geomagnetic field, and optical emissions in the far and extreme ultraviolet. We describe the creation of a new public database of precipitating electrons and ions from the Special Sensor J (SSJ) instrument, complete with original counts, calibrated differential fluxes adjusted for penetrating radiation, estimates of the total kinetic energy flux and characteristic energy, uncertainty estimates, and accurate ephemerides. These are provided in a common and self-describing format that covers 30+ years of DMSP spacecraft from F06 (launched in 1982) through F18 (launched in 2009). This new database is accessible at the National Centers for Environmental Information (NCEI) and the Coordinated Data Analysis Web (CDAWeb). We describe how the new database is being applied to high-latitude studies of the co-location of kinetic and electromagnetic energy inputs, ionospheric conductivity variability, field-aligned currents and auroral boundary identification. We anticipate that this new database will support a broad range of space science endeavors, from single-observatory studies to coordinated system science investigations.

  18. Potential use of routine databases in health technology assessment.

    PubMed

    Raftery, J; Roderick, P; Stevens, A

    2005-05-01

    To develop criteria for classifying databases in relation to their potential use in health technology (HT) assessment and to apply them to a list of databases of relevance in the UK. To explore the extent to which prioritized databases could pick up those HTs being assessed by the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the extent to which these databases have been used in HT assessment. To explore the validation of the databases and their cost. Electronic databases. Key literature sources. Experienced users of routine databases. A 'first principles' examination of the data necessary for each type of HT assessment was carried out, supplemented by literature searches and a historical review. The principal investigators applied the criteria to the databases. Comments of the 'keepers' of the prioritized databases were incorporated. Details of 161 topics funded by the NHS R&D Health Technology Assessment (HTA) programme were reviewed iteratively by the principal investigators. Uses of databases in HTAs were identified by literature searches, which included the title of each prioritized database as a keyword. Annual reports of databases were examined and 'keepers' queried. The validity of each database was assessed using criteria based on a literature search and involvement by the authors in a national academic network. The costs of databases were established from annual reports, enquiries to 'keepers' of databases and 'guesstimates' based on cost per record. For assessing effectiveness, equity and diffusion, routine databases were classified into three broad groups: (1) group I databases, identifying both HTs and health states, (2) group II databases, identifying the HTs, but not a health state, and (3) group III databases, identifying health states, but not an HT. Group I datasets were disaggregated into clinical registries, clinical administrative databases and population-oriented databases. 
Group III were disaggregated into adverse event reporting, confidential enquiries, disease-only registers and health surveys. Databases in group I can be used not only to assess effectiveness but also to assess diffusion and equity. Databases in group II can only assess diffusion. Group III has restricted scope for assessing HTs, except for analysis of adverse events. For use in costing, databases need to include unit costs or prices. Some databases included unit cost as well as a specific HT. A list of around 270 databases was identified at the level of UK, England and Wales or England (over 1000 including Scotland, Wales and Northern Ireland). Allocation of these to the above groups identified around 60 databases with some potential for HT assessment, roughly half to group I. Eighteen clinical registers were identified as having the greatest potential although the clinical administrative datasets had potential mainly owing to their inclusion of a wide range of technologies. Only two databases were identified that could directly be used in costing. The review of the potential capture of HTs prioritized by the UK's NHS R&D HTA programme showed that only 10% would be captured in these databases, mainly drugs prescribed in primary care. The review of the use of routine databases in any form of HT assessment indicated that clinical registers were mainly used for national comparative audit. Some databases have only been used in annual reports, usually time trend analysis. A few peer-reviewed papers used a clinical register to assess the effectiveness of a technology. Accessibility is suggested as a barrier to using most databases. Clinical administrative databases (group Ib) have mainly been used to build population needs indices and performance indicators. A review of the validity of used databases showed that although internal consistency checks were common, relatively few had any form of external audit. 
Some comparative audit databases have data scrutinised by participating units. Issues around coverage and coding have, in general, received little attention. NHS funding of databases has been mainly for 'Central Returns' for management purposes, which excludes those databases with the greatest potential for HT assessment. Funding sources varied, and some databases are unfunded, relying on goodwill. The total cost of databases in group I, plus selected databases from groups II and III, was estimated at £50 million, or around 0.1% of annual NHS spend. A few databases with limited potential for HT assessment account for the bulk of spending. Suggestions for policy include clarification of responsibility for the strategic development of databases, improved resourcing, and attention to issues around coding, confidentiality, ownership and access, maintenance of clinical support, optimal use of information technology, filling gaps and remedying deficiencies. Recommendations for researchers include closer policy links between routine data and R&D, and selective investment in the more promising databases. Recommended research topics include optimal capture and coding of the range of HTs, international comparisons of the role, funding and use of routine data in healthcare systems, and the use of routine databases in trials and in modelling. Independent evaluations are recommended for information strategies (such as those around the National Service Frameworks and various collaborations) and for electronic patient and health records.
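
    The three-way grouping described above is essentially a two-flag decision rule. Purely as an illustration (the flags and the example databases below are invented, not taken from the report), it can be sketched as:

```python
# Hedged sketch of the report's three-way classification of routine
# databases for health technology (HT) assessment. Group definitions
# follow the abstract; the example records are invented.

def classify_database(captures_ht: bool, captures_health_state: bool) -> str:
    """Assign a database to group I, II or III (or none)."""
    if captures_ht and captures_health_state:
        return "I"    # can assess effectiveness, diffusion and equity
    if captures_ht:
        return "II"   # can assess diffusion only
    if captures_health_state:
        return "III"  # restricted scope, e.g. adverse-event analysis
    return "none"     # no direct potential for HT assessment

examples = {
    "clinical register":     (True, True),
    "prescribing database":  (True, False),
    "disease-only register": (False, True),
}

for name, (ht, state) in examples.items():
    print(f"{name}: group {classify_database(ht, state)}")
```

    The same rule explains the abstract's conclusions: only group I databases support effectiveness studies, which is why the clinical registers were judged to have the greatest potential.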

  19. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  20. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  1. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  2. 14 CFR 221.500 - Transmission of electronic tariffs to subscribers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...

  3. The Use of Telecommunications in Australian Education.

    ERIC Educational Resources Information Center

    Hammond, Morrison F.

    1986-01-01

    Discusses telecommunications services used in Australian education. They include Minerva (electronic mail), Midas (database accessing), Viatel (interactive videotext), and Telememo (electronic mail used to exchange information between schools). (JN)

  4. ePORT, NASA's Computer Database Program for System Safety Risk Management Oversight (Electronic Project Online Risk Tool)

    NASA Technical Reports Server (NTRS)

    Johnson, Paul W.

    2008-01-01

    ePORT (electronic Project Online Risk Tool) provides a systematic approach to using an electronic database program to manage program/project risk management processes. This presentation briefly covers standard risk management procedures, then covers NASA's risk management tool, ePORT, in depth. ePORT is a web-based risk management program that provides a common framework to capture and manage risks, independent of a program/project's size and budget. By providing standardized evaluation criteria for common management reporting, ePORT improves Product Line, Center and Corporate Management insight, simplifies program/project manager reporting, and maintains an archive of data for historical reference.

  5. Dental insurance: A systematic review.

    PubMed

    Garla, Bharath Kumar; Satish, G; Divya, K T

    2014-12-01

    To review uses of finance in dentistry. A search of 25 electronic databases and the World Wide Web was conducted. Relevant journals were hand searched and further information was requested from authors. Inclusion criteria were a predefined hierarchy of evidence and objectives. Study validity was assessed with checklists. Two reviewers independently screened sources, extracted data, and assessed validity. Insurance has come of age and become the mainstay of payment in many developed countries, so much so that the alternative forms of payment that originated as alternatives to fee for service now depend on insurance at one point or another. Fee for service is still the major form of payment in many developing countries, including India. It is preferred in many instances since payment is made immediately.

  6. CROPPER: a metagene creator resource for cross-platform and cross-species compendium studies.

    PubMed

    Paananen, Jussi; Storvik, Markus; Wong, Garry

    2006-09-22

    Current genomic research methods provide researchers with enormous amounts of data. Combining data from different high-throughput research technologies commonly available in biological databases can lead to novel findings and increase research efficiency. However, combining data from different heterogeneous sources is often a very arduous task. These sources can be different microarray technology platforms, genomic databases, or experiments performed on various species. Our aim was to develop a software program that could facilitate the combining of data from heterogeneous sources, and thus allow researchers to perform genomic cross-platform/cross-species studies and to use existing experimental data for compendium studies. We have developed a web-based software resource, called CROPPER that uses the latest genomic information concerning different data identifiers and orthologous genes from the Ensembl database. CROPPER can be used to combine genomic data from different heterogeneous sources, allowing researchers to perform cross-platform/cross-species compendium studies without the need for complex computational tools or the requirement of setting up one's own in-house database. We also present an example of a simple cross-platform/cross-species compendium study based on publicly available Parkinson's disease data derived from different sources. CROPPER is a user-friendly and freely available web-based software resource that can be successfully used for cross-species/cross-platform compendium studies.

  7. Implementation of the Regulatory Authority Information System in Egypt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carson, S.D.; Schetnan, R.; Hasan, A.

    2006-07-01

    As part of the implementation of a bar-code-based system to track radioactive sealed sources (RSS) in Egypt, the Regulatory Authority Information System Personal Digital Assistant (RAIS PDA) Application was developed to extend the functionality of the International Atomic Energy Agency's (IAEA's) RAIS database by allowing users to download RSS data from the database to a portable PDA equipped with a bar-code scanner. [1, 4] The system allows users in the field to verify radioactive sealed source data, gather radioactive sealed source audit information, and upload that data to the RAIS database. This paper describes the development of the RAIS PDA Application, its features, and how it will be implemented in Egypt. (authors)

  8. A digital library for medical imaging activities

    NASA Astrophysics Data System (ADS)

    dos Santos, Marcelo; Furuie, Sérgio S.

    2007-03-01

    This work presents the development of an electronic infrastructure to make available a free, online, multipurpose and multimodality medical image database. The proposed infrastructure implements a distributed architecture for medical image database, authoring tools, and a repository for multimedia documents. Also it includes a peer-reviewed model that assures quality of dataset. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has been used as an electronic teaching system in Radiology as well.

  9. Critical Care Health Informatics Collaborative (CCHIC): Data, tools and methods for reproducible research: A multi-centre UK intensive care database.

    PubMed

    Harris, Steve; Shi, Sinan; Brealey, David; MacCallum, Niall S; Denaxas, Spiros; Perez-Suarez, David; Ercole, Ari; Watkinson, Peter; Jones, Andrew; Ashworth, Simon; Beale, Richard; Young, Duncan; Brett, Stephen; Singer, Mervyn

    2018-04-01

    To build and curate a linkable multi-centre database of high resolution longitudinal electronic health records (EHR) from adult Intensive Care Units (ICU). To develop a set of open-source tools to make these data 'research ready' while protecting patients' privacy, with a particular focus on anonymisation. We developed a scalable EHR processing pipeline for extracting, linking, normalising, curating and anonymising EHR data. Patient and public involvement was sought from the outset, and approval to hold these data was granted by the NHS Health Research Authority's Confidentiality Advisory Group (CAG). The data are held in a certified Data Safe Haven. We followed sustainable software development principles throughout, and defined and populated a common data model that links to other clinical areas. Longitudinal EHR data were loaded into the CCHIC database from eleven adult ICUs at 5 UK teaching hospitals. From January 2014 to January 2017, this amounted to 21,930 admissions (18,074 unique patients). Typical admissions have 70 data-items pertaining to admission and discharge, and a median of 1030 (IQR 481-2335) time-varying measures. Training datasets were made available through virtual machine images emulating the data processing environment. An open source R package, cleanEHR, was developed and released that transforms the data into a square table readily analysable by most statistical packages. A simple language-agnostic configuration file allows the user to select and clean variables, and impute missing data. An audit trail makes clear the provenance of the data at all times. Making health care data available for research is problematic. CCHIC is a unique multi-centre longitudinal and linkable resource that prioritises patient privacy through the highest standards of data security, but also provides tools to clean, organise, and anonymise the data. 
We believe the development of such tools is essential if we are to meet the twin requirements of respecting patient privacy and working for patient benefit. The CCHIC database is now in use by health care researchers from academia and industry. The 'research ready' suite of data preparation tools has facilitated access, and linkage to national databases of secondary care is underway. Copyright © 2018 Elsevier B.V. All rights reserved.
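
    cleanEHR itself is an R package; purely as a language-neutral illustration, the idea of a configuration-driven transform from long-format EHR events to a 'square' one-row-per-admission table can be sketched in Python. All field names and values here are invented:

```python
# Illustrative Python analogue (cleanEHR itself is R) of reshaping
# long-format EHR events into one row per admission, guided by a small
# config that selects variables and fills in missing values.

long_records = [
    {"admission": 1, "item": "heart_rate", "value": 82},
    {"admission": 1, "item": "lactate",    "value": 1.4},
    {"admission": 2, "item": "heart_rate", "value": 110},
]

config = {
    "select": ["heart_rate", "lactate"],   # variables to keep
    "fill":   {"lactate": None},           # value used when missing
}

def to_square_table(records, config):
    table = {}
    for rec in records:
        if rec["item"] in config["select"]:
            table.setdefault(rec["admission"], {})[rec["item"]] = rec["value"]
    # impute missing variables per admission
    for row in table.values():
        for var in config["select"]:
            row.setdefault(var, config["fill"].get(var))
    return table

square = to_square_table(long_records, config)
print(square)
```

    The point of the 'square' form is that each admission becomes one row with a fixed set of columns, which most statistical packages can analyse directly.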

  10. An Algorithm for Building an Electronic Database.

    PubMed

    Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P

    2016-01-01

    We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to be completed after each case, and a member of the health care team then entered the case information into the database. By using straightforward, HIPAA-compliant data input fields, we made data collection and transcription easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, and the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.
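
    A minimal sketch of the same design outside Microsoft Access: standardized drop-down fields amount to fixed choice lists, which can be enforced at the database layer, here with SQLite CHECK constraints. The table, field names, and choice lists below are invented for illustration:

```python
# Hedged sketch: a case database whose "drop-down" fields are enforced
# as fixed choice lists via SQLite CHECK constraints (standing in for
# Microsoft Access drop-down menus). All fields are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE cases (
        id INTEGER PRIMARY KEY,
        surgeon TEXT NOT NULL,
        procedure TEXT CHECK (procedure IN ('flap', 'graft', 'revision')),
        complication TEXT CHECK (complication IN ('none', 'minor', 'major'))
    )
""")
conn.execute("INSERT INTO cases (surgeon, procedure, complication) "
             "VALUES (?, ?, ?)", ("A", "flap", "none"))
# A value outside the choice list is rejected by the constraint,
# keeping entries standardized just as a drop-down menu would.
try:
    conn.execute("INSERT INTO cases (surgeon, procedure, complication) "
                 "VALUES (?, ?, ?)", ("B", "amputation", "none"))
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```

    Enforcing the choice lists in the schema, rather than only in the entry form, keeps later retrospective analyses free of free-text variants.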

  11. Microsecond Electron Beam Source with Electron Energy up to 400 keV and Plasma Anode

    NASA Astrophysics Data System (ADS)

    Abdullin, É. N.; Basov, G. F.; Shershnev, S.

    2017-12-01

    A new high-power electron source with a plasma anode for producing high-current microsecond electron beams with electron energy up to 400 keV has been developed, manufactured, and put into operation. To increase the cross section and pulse duration of the beam, a multipoint explosive emission cathode is used in the electron beam source, and the beam is formed in an applied external guiding magnetic field. A Marx generator with vacuum insulation is used as the high-voltage source. Electron beams with electron energy up to 300-400 keV, current of 5-15 kA, duration of 1.5-3 μs, energy up to 4 kJ, and cross section up to 150 cm2 have been produced. Operating modes of the electron beam source are realized in which the applied voltage has only a weak influence on the beam current. The possibility of applying the source to the melting of metal surfaces is demonstrated.

  12. Manual Gene Ontology annotation workflow at the Mouse Genome Informatics Database

    PubMed Central

    Drabkin, Harold J.; Blake, Judith A.

    2012-01-01

    The Mouse Genome Database, the Gene Expression Database and the Mouse Tumor Biology database are integrated components of the Mouse Genome Informatics (MGI) resource (http://www.informatics.jax.org). The MGI system presents both a consensus view and an experimental view of the knowledge concerning the genetics and genomics of the laboratory mouse. From genotype to phenotype, this information resource integrates information about genes, sequences, maps, expression analyses, alleles, strains and mutant phenotypes. Comparative mammalian data are also presented particularly in regards to the use of the mouse as a model for the investigation of molecular and genetic components of human diseases. These data are collected from literature curation as well as downloads of large datasets (SwissProt, LocusLink, etc.). MGI is one of the founding members of the Gene Ontology (GO) and uses the GO for functional annotation of genes. Here, we discuss the workflow associated with manual GO annotation at MGI, from literature collection to display of the annotations. Peer-reviewed literature is collected mostly from a set of journals available electronically. Selected articles are entered into a master bibliography and indexed to one of eight areas of interest such as ‘GO’ or ‘homology’ or ‘phenotype’. Each article is then either indexed to a gene already contained in the database or funneled through a separate nomenclature database to add genes. The master bibliography and associated indexing provide information for various curator-reports such as ‘papers selected for GO that refer to genes with NO GO annotation’. Once indexed, curators who have expertise in appropriate disciplines enter pertinent information. MGI makes use of several controlled vocabularies that ensure uniform data encoding, enable robust analysis and support the construction of complex queries. These vocabularies range from pick-lists to structured vocabularies such as the GO. 
All data associations are supported with statements of evidence as well as access to source publications. PMID:23110975

  13. Manual Gene Ontology annotation workflow at the Mouse Genome Informatics Database.

    PubMed

    Drabkin, Harold J; Blake, Judith A

    2012-01-01

    The Mouse Genome Database, the Gene Expression Database and the Mouse Tumor Biology database are integrated components of the Mouse Genome Informatics (MGI) resource (http://www.informatics.jax.org). The MGI system presents both a consensus view and an experimental view of the knowledge concerning the genetics and genomics of the laboratory mouse. From genotype to phenotype, this information resource integrates information about genes, sequences, maps, expression analyses, alleles, strains and mutant phenotypes. Comparative mammalian data are also presented particularly in regards to the use of the mouse as a model for the investigation of molecular and genetic components of human diseases. These data are collected from literature curation as well as downloads of large datasets (SwissProt, LocusLink, etc.). MGI is one of the founding members of the Gene Ontology (GO) and uses the GO for functional annotation of genes. Here, we discuss the workflow associated with manual GO annotation at MGI, from literature collection to display of the annotations. Peer-reviewed literature is collected mostly from a set of journals available electronically. Selected articles are entered into a master bibliography and indexed to one of eight areas of interest such as 'GO' or 'homology' or 'phenotype'. Each article is then either indexed to a gene already contained in the database or funneled through a separate nomenclature database to add genes. The master bibliography and associated indexing provide information for various curator-reports such as 'papers selected for GO that refer to genes with NO GO annotation'. Once indexed, curators who have expertise in appropriate disciplines enter pertinent information. MGI makes use of several controlled vocabularies that ensure uniform data encoding, enable robust analysis and support the construction of complex queries. These vocabularies range from pick-lists to structured vocabularies such as the GO. 
All data associations are supported with statements of evidence as well as access to source publications.

  14. Sankofa pediatric HIV disclosure intervention cyber data management: building capacity in a resource-limited setting and ensuring data quality.

    PubMed

    Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R; Paintsil, Elijah

    2015-01-01

    Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure ( https://hubzero.org ), an open source software platform. The hub database components support: (1) data management - the "databases" component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection - the "forms" component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration - the "dataviewer" component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child-caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. 
It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team.
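
    As an illustration only (this is not HUBzero or Sankofa code, and the field names and rules are invented), the "form submit processing" step of error checking and validation before data are stored can be sketched as:

```python
# Hypothetical sketch of case-report-form submit processing: validate a
# submitted form and collect errors before anything reaches the database.

def validate_submission(form):
    errors = []
    if not form.get("participant_id"):
        errors.append("participant_id is required")
    age = form.get("child_age")
    if age is None or not (0 <= age <= 17):
        errors.append("child_age must be between 0 and 17")
    return errors  # store the record only if this list is empty

good = {"participant_id": "GH-001", "child_age": 9}
bad  = {"participant_id": "", "child_age": 25}
print(validate_submission(good))
print(validate_submission(bad))
```

    Rejecting invalid submissions at entry time, rather than cleaning data afterwards, is what makes the collected data complete and accurate from the start.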

  15. Genome databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  16. Overview of Historical Earthquake Document Database in Japan and Future Development

    NASA Astrophysics Data System (ADS)

    Nishiyama, A.; Satake, K.

    2014-12-01

    In Japan, damage and disasters from historical large earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not used effectively by researchers, because of contamination by low-reliability historical records and the difficulty of keyword searching by characters and dates. To overcome these problems and to promote historical earthquake studies in Japan, construction of text databases started in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed. Its authors investigated the source books or original texts of historical literature, emended the descriptions, and assigned a reliability to each historical document on the basis of its written age. Another project compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast in Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century) and constructed text and seismic intensity databases. These are now published on the web (in Japanese only). However, only about 9% of the earthquake source books have been digitized so far. We therefore plan to digitize all of the remaining historical documents under a research program that started in 2014. The specification of the database will be similar to that of the previous ones. We also plan to combine this database with a liquefaction trace database, to be constructed by another research program, by adding the location information described in historical documents. 
The constructed database will be used to estimate the distributions of seismic intensities and tsunami heights.

  17. Global economic burden of schizophrenia: a systematic review

    PubMed Central

    Chong, Huey Yi; Teoh, Siew Li; Wu, David Bin-Chia; Kotirum, Surachai; Chiou, Chiun-Fang; Chaiyakunapruk, Nathorn

    2016-01-01

    Background Schizophrenia was one of the top 25 leading causes of disability worldwide in 2013. Despite its low prevalence, its health, social, and economic burden has been tremendous, not only for patients but also for families, caregivers, and the wider society. The magnitude of disease burden investigated in an economic burden study is an important source for policymakers in decision making. This study aims to systematically identify studies focusing on the economic burden of schizophrenia, describe the methods and data sources used, and summarize their findings. Methods A systematic review was performed for economic burden studies in schizophrenia using four electronic databases (Medline, EMBASE, PsycINFO, and EconLit) from inception to August 31, 2014. Results A total of 56 articles were included in this review. More than 80% of the studies were conducted in high-income countries. Most studies used a retrospective, prevalence-based study design. The bottom-up approach was commonly employed to determine costs, while the human capital method was used for indirect cost estimation. Databases and the literature were the most commonly used data sources for cost estimation in high-income countries, while chart review and interviews were the main data sources in low- and middle-income countries. Annual costs for the schizophrenia population of a country ranged from US$94 million to US$102 billion. Indirect costs contributed 50%–85% of the total costs associated with schizophrenia. The economic burden of schizophrenia was estimated to range from 0.02% to 1.65% of the gross domestic product. Conclusion The enormous economic burden of schizophrenia suggests inadequate provision of health care services to these patients. An informed decision is achievable with the increasing recognition among the public and policymakers that schizophrenia is burdensome. 
This results in better resource allocation and the development of policy-oriented research for this highly disabling yet under-recognized mental health disease. PMID:26937191

  18. Global economic burden of schizophrenia: a systematic review.

    PubMed

    Chong, Huey Yi; Teoh, Siew Li; Wu, David Bin-Chia; Kotirum, Surachai; Chiou, Chiun-Fang; Chaiyakunapruk, Nathorn

    2016-01-01

    Schizophrenia was one of the top 25 leading causes of disability worldwide in 2013. Despite its low prevalence, its health, social, and economic burden has been tremendous, not only for patients but also for families, caregivers, and the wider society. The magnitude of disease burden investigated in an economic burden study is an important source for policymakers in decision making. This study aims to systematically identify studies focusing on the economic burden of schizophrenia, describe the methods and data sources used, and summarize their findings. A systematic review was performed for economic burden studies in schizophrenia using four electronic databases (Medline, EMBASE, PsycINFO, and EconLit) from inception to August 31, 2014. A total of 56 articles were included in this review. More than 80% of the studies were conducted in high-income countries. Most studies used a retrospective, prevalence-based study design. The bottom-up approach was commonly employed to determine costs, while the human capital method was used for indirect cost estimation. Databases and the literature were the most commonly used data sources for cost estimation in high-income countries, while chart review and interviews were the main data sources in low- and middle-income countries. Annual costs for the schizophrenia population of a country ranged from US$94 million to US$102 billion. Indirect costs contributed 50%-85% of the total costs associated with schizophrenia. The economic burden of schizophrenia was estimated to range from 0.02% to 1.65% of the gross domestic product. The enormous economic burden of schizophrenia suggests inadequate provision of health care services to these patients. An informed decision is achievable with the increasing recognition among the public and policymakers that schizophrenia is burdensome. 
This results in better resource allocation and the development of policy-oriented research for this highly disabling yet under-recognized mental health disease.

  19. Implementation of an interactive database interface utilizing HTML, PHP, JavaScript, and MySQL in support of water quality assessments in the Northeastern North Carolina Pasquotank Watershed

    NASA Astrophysics Data System (ADS)

    Guion, A., Jr.; Hodgkins, H.

    2015-12-01

    The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program gathering water quality data for local waterways. The data had been compiled manually with pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, this project pursued an electronic method of entering and manipulating the water quality data. It focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere Internet access is available. The project also explored overlaying these data on Google Maps to provide labels and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database Team members included the authors plus Derek Morris Jr., Kathryne Burton and Mr. Jeff Wood as mentor.
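
    As a hedged sketch of the entry form's storage layer (SQLite stands in for MySQL here, and all column names and readings are invented), readings could be stored with coordinates so that results can later be labeled on a map:

```python
# Minimal sketch of a water-quality readings table: each reading carries
# site coordinates, so a query can feed map-marker labels directly.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    site TEXT, lat REAL, lon REAL, ph REAL, collected TEXT)""")
rows = [
    ("Pasquotank R. dock", 36.30, -76.22, 7.1, "2015-06-10"),
    ("Charles Creek",      36.29, -76.21, 6.8, "2015-06-11"),
]
db.executemany("INSERT INTO readings VALUES (?, ?, ?, ?, ?)", rows)

# Fetch (site, lat, lon) tuples, e.g. to place labeled map markers.
markers = db.execute(
    "SELECT site, lat, lon FROM readings ORDER BY collected").fetchall()
print(markers)
```

    Keeping location columns alongside each measurement is the design choice that makes the Google Maps overlay a simple SELECT rather than a separate data set.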

  20. Care plan program reduces the number of visits for challenging psychiatric patients in the ED.

    PubMed

    Abello, Arthur; Brieger, Ben; Dear, Kim; King, Ben; Ziebell, Chris; Ahmed, Atheer; Milling, Truman J

    2012-09-01

    A small number of patients representing a significant demand on emergency department (ED) services present regularly for a variety of reasons, including psychiatric or behavioral complaints and lack of access to other services. A care plan program was created as a database of ED high users and patients of concern, as identified by ED staff and approved by program administrators, to improve care and mitigate ED strain. A list of medical record numbers was assembled by searching the care plan program database for adult patients initially enrolled between November 1, 2006, and October 21, 2007. Inclusion criteria were the occurrence of a psychiatric International Classification of Diseases, Ninth Revision (ICD-9), code in the medical record and a care plan level implying a serious psychiatric disorder causing harmful behavior. Additional data about these patients were acquired using an indigent care tracking database and electronic medical records. Variables collected from these sources were analyzed for changes before and after program enrollment. Of 501 patients in the database in the period studied, 48 patients fulfilled the criteria for the cohort. There was a significant reduction in the number of visits to the ED from the year before program enrollment to the year after enrollment (8.9, before; 5.9, after; P < .05). There was also an increase in psychiatric hospital visits (2%, before; 25%, after; P < .05). An alert program that identifies challenging ED patients with psychiatric conditions and creates a care plan appears to reduce visits and lead to more appropriate use of other resources. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. PAGER 2.0: an update to the pathway, annotated-list and gene-signature electronic repository for Human Network Biology

    PubMed Central

    Yue, Zongliang; Zheng, Qi; Neylon, Michael T; Yoo, Minjae; Shin, Jimin; Zhao, Zhiying; Tan, Aik Choon

    2018-01-01

    Abstract Integrative Gene-set, Network and Pathway Analysis (GNPA) is a powerful data analysis approach developed to help interpret high-throughput omics data. In PAGER 1.0, we demonstrated that researchers can gain unbiased and reproducible biological insights with the introduction of PAGs (Pathways, Annotated-lists and Gene-signatures) as the basic data representation elements. In PAGER 2.0, we improve the utility of integrative GNPA by significantly expanding the coverage of PAGs and PAG-to-PAG relationships in the database, defining a new metric to quantify PAG data qualities, and developing new software features to simplify online integrative GNPA. Specifically, we included 84 282 PAGs spanning 24 different data sources that cover human diseases, published gene-expression signatures, drug–gene, miRNA–gene interactions, pathways and tissue-specific gene expressions. We introduced a new normalized Cohesion Coefficient (nCoCo) score to assess the biological relevance of genes inside a PAG, and RP-score to rank genes and assign gene-specific weights inside a PAG. The companion web interface contains numerous features to help users query and navigate the database content. The database content can be freely downloaded and is compatible with third-party Gene Set Enrichment Analysis tools. We expect PAGER 2.0 to become a major resource in integrative GNPA. PAGER 2.0 is available at http://discovery.informatics.uab.edu/PAGER/. PMID:29126216

  2. Effects of implementing electronic medical records on primary care billings and payments: a before–after study

    PubMed Central

    Shultz, Susan E.; Tu, Karen

    2013-01-01

    Background Several barriers to the adoption of electronic medical records (EMRs) by family physicians have been discussed, including the costs of implementation, impact on work flow and loss of productivity. We examined billings and payments received before and after implementation of EMRs among primary care physicians in the province of Ontario. We also examined billings and payments before and after switching from a fee-for-service to a capitation payment model, because EMR implementation coincided with primary care reform in the province. Methods We used information from the Electronic Medical Record Administrative Data Linked Database (EMRALD) to conduct a retrospective before–after study. The EMRALD database includes EMR data extracted from 183 community-based family physicians in Ontario. We included EMRALD physicians who were eligible to bill the Ontario Health Insurance Plan at least 18 months before and after the date they started using EMRs and had completed a full 18-month period before Mar. 31, 2011, when the study stopped. The main outcome measures were physicians’ monthly billings and payments for office visits and total annual payments received from all government sources. Two index dates were examined: the date physicians started using EMRs and were in a stable payment model (n = 64) and the date physicians switched from a fee-for-service to a capitation payment model (n = 42). Results Monthly billings and payments for office visits did not decrease after the implementation of EMRs. The overall weighted mean annual payment from all government sources increased by 27.7% after the start of EMRs among EMRALD physicians; an increase was also observed among all other primary care physicians in Ontario, but it was not as great (14.4%). There was a decline in monthly billings and payments for office visits after physicians changed payment models, but an increase in their overall annual government payments. 
Interpretation Implementation of EMRs by primary care physicians did not result in decreased billings or government payments for office visits. Further economic analyses are needed to measure the effects of EMR implementation on productivity and the costs of implementing an EMR system, including the costs of nonclinical work by physicians and their staff. PMID:25077111

  3. ACToR Chemical Structure processing using Open Source ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high-quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Also included are data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and physico-chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The structure-processing workflow integrates web services (PubChem and NIH NCI Cactus) to d
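    The abstract mentions that the structure-processing workflow integrates the PubChem and NIH NCI Cactus web services. As a hedged sketch of the kind of call such a workflow might make, the snippet below builds a request URL for the public Cactus chemical structure resolver; it is illustrative only and not the EPA's actual code.

```python
from urllib.parse import quote

CACTUS_BASE = "https://cactus.nci.nih.gov/chemical/structure"

def cactus_url(identifier: str, representation: str = "smiles") -> str:
    """Build a request URL for the NIH NCI Cactus structure resolver.

    The resolver accepts chemical names, CAS numbers, InChI strings, etc.,
    and returns the requested representation as plain text.
    """
    return f"{CACTUS_BASE}/{quote(identifier)}/{representation}"

url = cactus_url("50-00-0")  # CAS registry number for formaldehyde
```

    A workflow would fetch such URLs (e.g. with urllib.request) for each CAS number and store the returned structure string; error handling for unresolvable identifiers is omitted here.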

  4. Health Information Needs and Reliability of Sources Among Nondegree Health Sciences Students: A Prerequisite for Designing eHealth Literacy.

    PubMed

    Haruna, Hussein; Tshuma, Ndumiso; Hu, Xiao

    Understanding health information needs and health-seeking behavior is a prerequisite for developing an electronic health information literacy (EHIL) or eHealth literacy program for nondegree health sciences students. At present, interest in researching health information needs and reliable-source paradigms has gained momentum in many countries. However, most studies focus on health professionals and students in higher education institutions. The present study was aimed at providing new insight and filling the existing gap by examining health information needs and reliability of sources among nondegree health sciences students in Tanzania. A cross-sectional study was conducted in 15 conveniently selected health training institutions, in which 403 health sciences students participated. Thirty health sciences students were purposively and conveniently chosen from each health training institution. The selected students were pursuing nursing and midwifery, clinical medicine, dentistry, environmental health sciences, pharmacy, and medical laboratory sciences courses. Involved students were in their first, second, or third year of study. Health sciences students' health information needs focus on their educational requirements, clinical practice, and personal information. They use print, human, and electronic sources of health information. They lack eHealth research skills for navigating health information resources and face insufficient facilities for accessing eHealth information, a shortage of health information specialists, high subscription costs for electronic information, and unawareness of freely available Internet resources and other online health-related databases. This study found that nondegree health sciences students have limited EHIL skills. Thus, designing and incorporating EHIL skills programs into the curriculum of nondegree health sciences students is vital. 
EHIL is a requirement common to all health settings, learning environments, and levels of study. Our future intention is to design EHIL to support nondegree health sciences students to retrieve and use available health information resources on the Internet. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  5. Implementing and maintaining a researchable database from electronic medical records: a perspective from an academic family medicine department.

    PubMed

    Stewart, Moira; Thind, Amardeep; Terry, Amanda L; Chevendra, Vijaya; Marshall, J Neil

    2009-11-01

    Electronic medical records (EMRs) are posited as a tool for improving practice, policy and research in primary healthcare. This paper describes the Deliver Primary Healthcare Information (DELPHI) Project at the Department of Family Medicine at the University of Western Ontario, focusing on its development, current status and research potential in order to share experiences with researchers in similar contexts. The project progressed through four stages: (a) participant recruitment, (b) EMR software modification and implementation, (c) database creation and (d) data quality assessment. Currently, the DELPHI database holds more than two years of high-quality, de-identified data from 10 practices, with 30,000 patients and nearly a quarter of a million encounters.

  6. Integration of multiple DICOM Web servers into an enterprise-wide Web-based electronic medical record

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Langer, Steven G.; Martin, Kelly P.

    1999-07-01

    The purpose of this paper is to integrate multiple DICOM image web servers into the currently existing enterprise-wide web-browsable electronic medical record. Over the last six years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A character-cell-based view of this data, called the Mini Medical Record (MMR), has been available for four years. MINDscape, unlike the text-based MMR, provides a platform-independent, dynamic, web browser view of the MIND database that can be easily linked with medical knowledge resources on the network, like PubMed and the Federated Drug Reference. There are over 10,000 MINDscape user accounts at the University of Washington Academic Medical Centers. The weekday average number of hits to MINDscape is 35,302 and the weekday average number of individual users is 1252. DICOM images from multiple web servers are now being viewed through the MINDscape electronic medical record.

  7. Environmental health impacts of tobacco farming: a review of the literature.

    PubMed

    Lecours, Natacha; Almeida, Guilherme E G; Abdallah, Jumanne M; Novotny, Thomas E

    2012-03-01

    To review the literature on environmental health impacts of tobacco farming and to summarise the findings and research gaps in this field. A standard literature search was performed using multiple electronic databases for identification of peer-reviewed articles. The internet and organisational databases were also used to find other types of documents (eg, books and reports). The reference lists of identified relevant documents were reviewed to find additional sources. The selected studies documented many negative environmental impacts of tobacco production at the local level, often linking them with associated social and health problems. The common agricultural practices related to tobacco farming, especially in low-income and middle-income countries, lead to deforestation and soil degradation. Agrochemical pollution and deforestation in turn lead to ecological disruptions that cause a loss of ecosystem services, including land resources, biodiversity and food sources, which negatively impact human health. Multinational tobacco companies' policies and practices contribute to environmental problems related to tobacco leaf production. Development and implementation of interventions against the negative environmental impacts of tobacco production worldwide are necessary to protect the health of farmers, particularly in low-income and middle-income countries. Transitioning these farmers out of tobacco production is ultimately the resolution to this environmental health problem. In order to inform policy, however, further research is needed to better quantify the health impacts of tobacco farming and evaluate the potential alternative livelihoods that may be possible for tobacco farmers globally.

  8. Development of expert systems for analyzing electronic documents

    NASA Astrophysics Data System (ADS)

    Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.

    2018-05-01

    The paper analyses a Database Management System (DBMS). Expert systems, databases, and database technology have become essential components of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment, consisting of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at Tomsk State University at the Faculty of Innovative Technology.

  9. ECR ion source with electron gun

    DOEpatents

    Xie, Zu Q.; Lyneis, Claude M.

    1993-01-01

    An Advanced Electron Cyclotron Resonance ion source (10) having an electron gun (52) for introducing electrons into the plasma chamber (18) of the ion source (10). The ion source (10) has an injection enclosure (12) and a plasma chamber tank (14). The plasma chamber (18) is defined by a plurality of longitudinal magnets (16). The electron gun (52) injects electrons axially into the plasma chamber (18) such that ionization within the plasma chamber (18) occurs in the presence of the additional electrons produced by the electron gun (52). The electron gun (52) has a cathode (116) for emitting electrons, which is heated by current supplied from an AC power supply (96) while bias potential is provided by a bias power supply (118). A concentric inner conductor (60) and outer conductor (62) carry heating current to a carbon chuck (104) and carbon pusher (114), which hold the cathode (116) in place and also heat the cathode (116). In the Advanced Electron Cyclotron Resonance ion source (10), the electron gun (52) replaces the conventional first stage used in prior-art electron cyclotron resonance ion generators.

  10. Exposure to benzodiazepines (anxiolytics, hypnotics and related drugs) in seven European electronic healthcare databases: a cross-national descriptive study from the PROTECT-EU Project.

    PubMed

    Huerta, Consuelo; Abbing-Karahagopian, Victoria; Requena, Gema; Oliva, Belén; Alvarez, Yolanda; Gardarsdottir, Helga; Miret, Montserrat; Schneider, Cornelia; Gil, Miguel; Souverein, Patrick C; De Bruin, Marie L; Slattery, Jim; De Groot, Mark C H; Hesse, Ulrik; Rottenkolber, Marietta; Schmiedl, Sven; Montero, Dolores; Bate, Andrew; Ruigomez, Ana; García-Rodríguez, Luis Alberto; Johansson, Saga; de Vries, Frank; Schlienger, Raymond G; Reynolds, Robert F; Klungel, Olaf H; de Abajo, Francisco José

    2016-03-01

    Studies on drug utilization usually do not allow direct cross-national comparisons because of differences in the applied methods. This study aimed to compare time trends in benzodiazepine (BZD) prescribing by applying a common protocol and analysis plan in seven European electronic healthcare databases. Crude and standardized prevalence rates of drug prescribing from 2001 to 2009 were calculated in databases from Spain, the United Kingdom (UK), The Netherlands, Germany and Denmark. Prevalence was stratified by age, sex, BZD type (using ATC codes: BZD-anxiolytics, BZD-hypnotics, BZD-related drugs and clomethiazole), indication and number of prescriptions. Crude prevalence rates of BZD prescribing ranged from 570 to 1700 per 10,000 person-years over the study period. Standardization by age and sex did not substantially change the differences. Standardized prevalence rates increased in the Spanish (+13%) and UK databases (+2% and +8%) over the study period, while they decreased in the Dutch databases (-4% and -22%), the German (-12%) and the Danish (-26%) database. Prevalence of anxiolytics outweighed that of hypnotics in the Spanish, Dutch and Bavarian databases, but the reverse was observed in the UK and Danish databases. Prevalence rates consistently increased with age and were two-fold higher in women than in men in all databases. A median of 18% of users received 10 or more prescriptions in 2008. Although similar methods were applied, the prevalence of BZD prescribing varied considerably across the different populations. Clinical factors related to BZDs and characteristics of the databases may explain these differences. Copyright © 2015 John Wiley & Sons, Ltd.
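    Direct standardization by age and sex, as used in the abstract, weights each stratum-specific rate by the standard population's share of that stratum. A minimal sketch with made-up numbers (not the study's data):

```python
# Hypothetical strata: (users, person_years) in the study population plus the
# standard population's weight for that stratum. All values are illustrative.
strata = {
    "18-44": {"users": 200, "person_years": 10_000, "std_weight": 0.50},
    "45-64": {"users": 450, "person_years": 9_000,  "std_weight": 0.30},
    "65+":   {"users": 600, "person_years": 6_000,  "std_weight": 0.20},
}

def crude_rate(strata, per=10_000):
    """Crude prevalence: total users over total person-years."""
    users = sum(s["users"] for s in strata.values())
    person_years = sum(s["person_years"] for s in strata.values())
    return per * users / person_years

def standardized_rate(strata, per=10_000):
    """Direct standardization: weight each stratum-specific rate by the
    standard population's share of that stratum."""
    return per * sum(
        s["std_weight"] * s["users"] / s["person_years"]
        for s in strata.values()
    )

crude = crude_rate(strata)              # 500 per 10,000 person-years
standardized = standardized_rate(strata)  # 450 per 10,000 person-years
```

    Standardizing this way removes differences that are due only to the age and sex composition of each database's population, which is what makes the cross-national trends comparable.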

  11. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases for QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of these limitations is the general lack of complete, computer-parsable descriptions of assay methodology in these databases that would allow one to determine the mix-and-matchability of result sets at the assay level.
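    The paper's key finding is that training sets restricted to a single assay method are the most predictive. A toy sketch of that compilation step, with illustrative records rather than real ChEMBL or Integrity data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical activity records (compound, assay type, pIC50); the field
# names are illustrative and not the schema of ChEMBL or Integrity.
records = [
    ("cmpd-1", "enzymatic", 7.2),
    ("cmpd-1", "enzymatic", 7.3),
    ("cmpd-1", "cell-based", 5.1),   # widely diverging result from another assay
    ("cmpd-2", "enzymatic", 6.0),
]

def single_assay_training_set(records, assay_type):
    """Keep only one assay type and average replicate values per compound,
    mirroring the finding that single-method training sets give the most
    predictive models."""
    by_compound = defaultdict(list)
    for compound, assay, value in records:
        if assay == assay_type:
            by_compound[compound].append(value)
    return {compound: mean(values) for compound, values in by_compound.items()}

training = single_assay_training_set(records, "enzymatic")
```

    Aggregating by target only, by contrast, would average the 7.2-7.3 enzymatic values together with the diverging 5.1 cell-based value, introducing exactly the inconsistency the paper warns about.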

  12. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases

    PubMed Central

    Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-01-01

    Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Excluded patients and observations were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria using the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
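    The mapping step behind the reported 90% to 99% rates pairs each source code with a standard-vocabulary concept. A minimal sketch with invented codes and concept IDs (not the OMOP vocabulary itself):

```python
# Toy source-to-concept map in the spirit of the OMOP standard vocabulary;
# the codes and concept IDs below are made up for illustration.
source_to_concept = {
    ("ICD9CM", "250.00"): 201826,   # hypothetical standard concept ID
    ("ICD9CM", "401.9"): 320128,    # hypothetical standard concept ID
}

source_records = [
    ("ICD9CM", "250.00"),
    ("ICD9CM", "401.9"),
    ("ICD9CM", "V99.9"),  # unmappable local code, excluded from the CDM
]

# Map each source record; None marks records lost in standardization.
mapped = [source_to_concept.get(record) for record in source_records]
mapping_rate = sum(concept is not None for concept in mapped) / len(source_records)
```

    Tracking the mapping rate per record type is how a transformation like this quantifies information loss.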

  13. New UV-source catalogs, UV spectral database, UV variables and science tools from the GALEX surveys

    NASA Astrophysics Data System (ADS)

    Bianchi, Luciana; de la Vega, Alexander; Shiao, Bernard; Bohlin, Ralph

    2018-03-01

    We present a new, expanded and improved catalog of Ultraviolet (UV) sources from the GALEX All-Sky Imaging survey: GUVcat_AIS (Bianchi et al. in Astrophys. J. Suppl. Ser. 230:24, 2017). The catalog includes 83 million unique sources (duplicate measurements and rim artifacts are removed) measured in far-UV and near-UV. With respect to previous versions (Bianchi et al. in Mon. Not. R. Astron. Soc. 411:2770, 2011a; Adv. Space Res. 53:900-991, 2014), GUVcat_AIS covers a slightly larger area, 24,790 square degrees, and includes critical corrections and improvements, as well as new tags, in particular to identify sources in the footprint of extended objects, where pipeline source detection may fail and custom photometry may be necessary. The UV unique-source catalog facilitates studies of source density and matching of the UV samples with databases at other wavelengths. We also present first results from two ongoing projects, addressing respectively UV variability searches on time scales from seconds to years by mining the GALEX photon archive, and the construction of a database of ˜120,000 GALEX UV spectra (range ˜1300-3000 Å), including quality and calibration assessment and classification of the grism (hence serendipitous) spectral sources.

  14. Development of Electronic Resources across Networks in Thailand.

    ERIC Educational Resources Information Center

    Ratchatavorn, Phandao

    2002-01-01

    Discusses the development of electronic resources across library networks in Thailand to meet user needs, particularly electronic journals. Topics include concerns about journal access; limited budgets for library acquisitions of journals; and sharing resources through a centralized database system that allows Web access to journals via Internet…

  15. Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Houlden, Heather P.; Favaregh, Amber L.

    2010-01-01

    A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.
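    The abstract names three error sources for the database. A common way to roll up independent error sources into a total uncertainty is root-sum-square combination; the sketch below uses illustrative numbers, and the report's exact method may differ.

```python
import math

# Root-sum-square (RSS) combination of independent error sources, a common
# approach for aerodynamic database uncertainty roll-ups. The magnitudes
# below are illustrative, not the actual Ares I A106 values.
error_sources = {
    "experimental_repeatability": 0.003,
    "database_modeling": 0.004,
    "database_interpolation": 0.000,
}

total_uncertainty = math.sqrt(sum(e ** 2 for e in error_sources.values()))
```

    RSS is appropriate only when the component errors are independent; correlated errors would instead be summed directly or combined with covariance terms.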

  16. Development of a database for the verification of trans-ionospheric remote sensing systems

    NASA Astrophysics Data System (ADS)

    Leitinger, R.

    2005-08-01

    Remote sensing systems need verification by means of in-situ data or by means of model data. In the case of ionospheric occultation inversion, ionosphere tomography and other imaging methods on the basis of satellite-to-ground or satellite-to-satellite electron content, the availability of in-situ data with adequate spatial and temporal co-location is a very rare case, indeed. Therefore the method of choice for verification is to produce artificial electron content data with realistic properties, subject these data to the inversion/retrieval method, compare the results with model data and apply a suitable type of “goodness of fit” classification. Inter-comparison of inversion/retrieval methods should be done with sets of artificial electron contents in a “blind” (or even “double blind”) way. The setup of a relevant database for the COST 271 Action is described. One part of the database will be made available to everyone interested in testing of inversion/retrieval methods. The artificial electron content data are calculated by means of large-scale models that are “modulated” in a realistic way to include smaller scale and dynamic structures, like troughs and traveling ionospheric disturbances.
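    The abstract describes modulating large-scale model output with smaller-scale dynamic structures such as traveling ionospheric disturbances (TIDs). A hedged sketch of that idea, with illustrative amplitudes and wavelengths rather than the COST 271 model values:

```python
import math

# Sketch of "modulating" a smooth background electron-content profile with a
# wave-like TID perturbation. Amplitude and wavelength are illustrative.
def synthetic_tec(lat_deg, background=30.0, tid_amplitude=0.05,
                  tid_wavelength_deg=3.0):
    """Background vertical TEC (in TEC units) times a small TID modulation."""
    perturbation = tid_amplitude * math.sin(
        2 * math.pi * lat_deg / tid_wavelength_deg
    )
    return background * (1.0 + perturbation)

# Artificial electron-content profile along a latitude slice.
profile = [synthetic_tec(lat) for lat in range(40, 60)]
```

    An inversion/retrieval method under test would be fed such artificial data, and its output compared against the known background plus perturbation.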

  17. Improved Dust Forecast Products for Southwest Asia Forecasters through Dust Source Database Advancements

    NASA Astrophysics Data System (ADS)

    Brooks, G. R.

    2011-12-01

    Dust storm forecasting is a critical part of military theater operations in Afghanistan and Iraq as well as other strategic areas of the globe. The Air Force Weather Agency (AFWA) has been using the Dust Transport Application (DTA) as a forecasting tool since 2001. Initially developed by The Johns Hopkins University Applied Physics Laboratory (JHUAPL), output products include dust concentration and reduction of visibility due to dust. The performance of the products depends on several factors including the underlying dust source database, treatment of soil moisture, parameterization of dust processes, and validity of the input atmospheric model data. Over many years of analysis, seasonal dust forecast biases of the DTA have been observed and documented. As these products are unique and indispensable for U.S. and NATO forces, amendments were required to provide the best forecasts possible. One of the quickest ways to scientifically address the dust concentration biases noted over time was to analyze the weaknesses in, and adjust, the dust source database. Dust source database strengths and weaknesses, the satellite analysis and adjustment process, and tests which confirmed the resulting improvements in the final dust concentration and visibility products will be shown.

  18. [Health and quality of life of medical residents].

    PubMed

    Lourenção, Luciano Garcia; Moscardini, Airton Camacho; Soler, Zaida Aurora Sperli Geraldes

    2010-01-01

    This article highlights the relationship between health and quality of life among resident medical staff. A review was carried out to analyze the content of the relationship under study. Sources for this search were the Virtual Health Library (VHL) by BIREME (the Latin American and Caribbean Center on Health Sciences Information), the electronic databases MEDLINE (Medical Literature Analysis and Retrieval System On-Line), LILACS (Latin American and Caribbean Health Sciences Literature) and SciELO (Scientific Electronic Library Online), and the website scholar.google.com.br. Descriptors used were: Quality of life, Burnout, Internship and Residency. Planning and analysis of the scientific literature were performed to evaluate and discuss issues presented in the studies related to the subject, considering the distribution of publications according to country of origin, date of publication, source and title, focus of study and main conclusions. Published studies point to high rates of burnout, stress, depression, fatigue and insomnia among medical residents; moreover, a lack of coping strategies and the relationship between workload and quality of life point to the need for changes in medical legislation regarding work-based learning. Studies have shown that an adequate training program is needed not only to increase professional qualification and personal quality of life, but also to provide safety during patient treatment. It is known that residency training is stressful; it is nevertheless a process required to prepare for a solid career and the personal growth of young medical staff.

  19. DEVELOPMENT AND APPLICATION OF THE DORIAN (DOSE-RESPONSE INFORMATION ANALYSIS) SYSTEM

    EPA Science Inventory

    • Migration of ArrayTrack from the proprietary Oracle database to an open-source Postgres database.
    • Making the public version of the ebKB available with provisions for soliciting input from collaborators and outside users.
    • Continued development ...

  20. ICRP Publication 107. Nuclear decay data for dosimetric calculations.

    PubMed

    Eckerman, K; Endo, A

    2008-01-01

    In this report, the Commission provides an electronic database of the physical data needed in calculations of radionuclide-specific protection and operational quantities. This database supersedes the data of Publication 38 (ICRP, 1983), and will be used in future ICRP publications of dose coefficients for the intake of or exposure to radionuclides in the workplace and the environment. The database contains information on the half-lives, decay chains, and yields and energies of radiations emitted in nuclear transformations of 1252 radionuclides of 97 elements. The CD accompanying the publication provides electronic access to complete tables of the emitted radiations, as well as the beta and neutron spectra. The database has been constructed such that user-developed software can extract the data needed for further calculations of a radionuclide of interest. A Windows-based application is provided to display summary information on a user-specified radionuclide, as well as the general characterisation of the nuclides contained in the database. In addition, the application provides a means by which the user can export the emissions of a specified radionuclide for use in subsequent calculations.

  1. A structured vocabulary for indexing dietary supplements in databases in the United States

    PubMed Central

    Saldanha, Leila G; Dwyer, Johanna T; Holden, Joanne M; Ireland, Jayne D.; Andrews, Karen W; Bailey, Regan L; Gahche, Jaime J.; Hardy, Constance J; Møller, Anders; Pilch, Susan M.; Roseland, Janet M

    2011-01-01

    Food composition databases are critical to assess and plan dietary intakes. Dietary supplement databases are also needed because dietary supplements make significant contributions to total nutrient intakes. However, no uniform system exists for classifying dietary supplement products and indexing their ingredients in such databases. Differing approaches to classifying these products make it difficult to retrieve or link information effectively. A consistent approach to classifying information within food composition databases led to the development of LanguaL™, a structured vocabulary. LanguaL™ is being adapted as an interface tool for classifying and retrieving product information in dietary supplement databases. This paper outlines proposed changes to the LanguaL™ thesaurus for indexing dietary supplement products and ingredients in databases. The choice of 12 of the original 14 LanguaL™ facets pertinent to dietary supplements, modifications to their scopes, and applications are described. The 12 chosen facets are: Product Type; Source; Part of Source; Physical State, Shape or Form; Ingredients; Preservation Method, Packing Medium, Container or Wrapping; Contact Surface; Consumer Group/Dietary Use/Label Claim; Geographic Places and Regions; and Adjunct Characteristics of food. PMID:22611303

  2. SIMS: addressing the problem of heterogeneity in databases

    NASA Astrophysics Data System (ADS)

    Arens, Yigal

    1997-02-01

    The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.
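The mediator idea can be illustrated with a toy sketch (this is not the actual SIMS planner or its modeling language; sources, attributes, and data are invented): the mediator records which attributes each source covers, splits a high-level query into per-source sub-queries, and joins the partial results on a shared key:

```python
# Invented source models: which attributes each database can answer.
SOURCE_MODELS = {
    "hospital_db": {"patient_id", "diagnosis"},
    "logistics_db": {"patient_id", "supply_depot"},
}

# Invented source contents.
SOURCE_DATA = {
    "hospital_db": [{"patient_id": 1, "diagnosis": "flu"}],
    "logistics_db": [{"patient_id": 1, "supply_depot": "north"}],
}

def plan(query_attrs):
    """Assign each requested attribute to a source that models it."""
    subqueries = {}
    for attr in query_attrs:
        for source, attrs in SOURCE_MODELS.items():
            if attr in attrs:
                subqueries.setdefault(source, set()).add(attr)
                break
        else:
            raise ValueError(f"no source covers {attr!r}")
    return subqueries

def execute(query_attrs, join_key="patient_id"):
    """Run the per-source sub-queries and join partial rows on the key."""
    merged = {}
    for source, attrs in plan(set(query_attrs) | {join_key}).items():
        for row in SOURCE_DATA[source]:
            merged.setdefault(row[join_key], {}).update(
                {a: row[a] for a in attrs if a in row})
    return list(merged.values())

print(execute(["diagnosis", "supply_depot"]))
```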

  3. Domain Regeneration for Cross-Database Micro-Expression Recognition

    NASA Astrophysics Data System (ADS)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). TSRG re-generates the samples from the target micro-expression database so that the re-generated target samples share the same or similar feature distributions as the original source samples. A classifier learned on the labeled source samples can then accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the proposed method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, TSRG achieves more promising results.

  4. From patient care to research: a validation study examining the factors contributing to data quality in a primary care electronic medical record database.

    PubMed

    Coleman, Nathan; Halas, Gayle; Peeler, William; Casaclang, Natalie; Williamson, Tyler; Katz, Alan

    2015-02-05

    Electronic Medical Records (EMRs) are increasingly used in the provision of primary care and have been compiled into databases which can be utilized for surveillance, research and informing practice. The primary purpose of these records is the provision of individual patient care; validation and examination of underlying limitations are crucial before use for research and data quality improvement. This study examines and describes the validity of chronic disease case definition algorithms and factors affecting data quality in a primary care EMR database. A retrospective chart audit of an age-stratified random sample was used to validate and examine diagnostic algorithms applied to EMR data from the Manitoba Primary Care Research Network (MaPCReN), part of the Canadian Primary Care Sentinel Surveillance Network (CPCSSN). The presence of diabetes, hypertension, depression, osteoarthritis and chronic obstructive pulmonary disease (COPD) was determined by review of the medical record and compared to algorithm-identified cases to identify discrepancies and describe the underlying contributing factors. The algorithm for diabetes had high sensitivity, specificity and positive predictive value (PPV), with all scores over 90%. Specificities of the algorithms were greater than 90% for all conditions except hypertension, at 79.2%. The largest deficits in algorithm performance were the poor PPV for COPD, at 36.7%, and the limited sensitivity for COPD, depression and osteoarthritis, at 72.0%, 73.3% and 63.2% respectively. Main sources of discrepancy included missing coding, alternative coding, inappropriate diagnosis detection based on medications used for alternate indications, inappropriate exclusion due to comorbidity, and loss of data. Comparison to medical chart review shows that the CPCSSN case-finding algorithms are valid at MaPCReN, with a few limitations. This study provides the basis for the validated data to be utilized for research and informs users of its limitations. Analysis of underlying discrepancies provides the ability to improve algorithm performance and facilitate improved data quality.
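The reported metrics follow directly from the confusion matrix of algorithm hits against the chart-review reference standard. A small sketch of the arithmetic, with invented counts rather than the MaPCReN data:

```python
def validation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity and PPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true cases the algorithm finds
        "specificity": tn / (tn + fp),   # true non-cases it rejects
        "ppv":         tp / (tp + fp),   # flagged cases that are real
    }

m = validation_metrics(tp=90, fp=10, fn=10, tn=890)
print({k: round(v, 3) for k, v in m.items()})
# → {'sensitivity': 0.9, 'specificity': 0.989, 'ppv': 0.9}
```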

  5. 10-fs-level synchronization of photocathode laser with RF-oscillator for ultrafast electron and X-ray sources

    PubMed Central

    Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon

    2017-01-01

    Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today’s ultrafast science. The photocathode laser is an indispensable common subsystem in these sources that generates ultrafast electron pulses. To fully exploit the potential of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources. PMID:28067288

  6. 10-fs-level synchronization of photocathode laser with RF-oscillator for ultrafast electron and X-ray sources

    NASA Astrophysics Data System (ADS)

    Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon

    2017-01-01

    Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today’s ultrafast science. The photocathode laser is an indispensable common subsystem in these sources that generates ultrafast electron pulses. To fully exploit the potential of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources.

  7. 10-fs-level synchronization of photocathode laser with RF-oscillator for ultrafast electron and X-ray sources.

    PubMed

    Yang, Heewon; Han, Byungheon; Shin, Junho; Hou, Dong; Chung, Hayun; Baek, In Hyung; Jeong, Young Uk; Kim, Jungwon

    2017-01-09

    Ultrafast electron-based coherent radiation sources, such as free-electron lasers (FELs), ultrafast electron diffraction (UED) and Thomson-scattering sources, are becoming more important sources in today's ultrafast science. The photocathode laser is an indispensable common subsystem in these sources that generates ultrafast electron pulses. To fully exploit the potential of these sources, especially for pump-probe experiments, it is important to achieve high-precision synchronization between the photocathode laser and radio-frequency (RF) sources that manipulate electron pulses. So far, most precision laser-RF synchronization has been achieved by using specially designed low-noise Er-fibre lasers at telecommunication wavelength. Here we show a modular method that achieves long-term (>1 day) stable 10-fs-level synchronization between a commercial 79.33-MHz Ti:sapphire laser oscillator and an S-band (2.856-GHz) RF oscillator. This is an important first step toward a photocathode laser-based femtosecond RF timing and synchronization system that is suitable for various small- to mid-scale ultrafast X-ray and electron sources.

  8. [The virtual library in equity, health, and human development].

    PubMed

    Valdés, América

    2002-01-01

    This article attempts to describe the rationale that has led to the development of information sources dealing with equity, health, and human development in countries of Latin America and the Caribbean within the context of the Virtual Health Library (Biblioteca Virtual en Salud, BVS). Such information sources include the scientific literature, databases in printed and electronic format, institutional directories and lists of specialists, lists of events and courses, distance education programs, specialty journals and bulletins, as well as other means of disseminating health information. The pages that follow deal with the development of a Virtual Library in Equity, Health, and Human Development, an effort rooted in the conviction that decision-making and policy geared toward achieving greater equity in health must, of necessity, be based on coherent, well-organized, and readily accessible first-rate scientific information. Information is useless unless it is converted into knowledge that benefits society. The Virtual Library in Equity, Health, and Human Development is a coordinated effort to develop a decentralized regional network of scientific information sources, with strict quality control, from which public officials can draw data and practical examples that can help them set health and development policies geared toward achieving greater equity for all.

  9. CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources.

    PubMed

    Bleda, Marta; Tarraga, Joaquin; de Maria, Alejandro; Salavert, Francisco; Garcia-Alonso, Luz; Celma, Matilde; Martin, Ainoha; Dopazo, Joaquin; Medina, Ignacio

    2012-07-01

    During the past years, the advances in high-throughput technologies have produced an unprecedented growth in the number and size of repositories and databases storing relevant biological data. Today, there is more biological information than ever but, unfortunately, the current status of many of these repositories is far from optimal. Some of the most common problems are that the information is spread out across many small databases, standards frequently differ among repositories, and some databases are no longer supported or contain overly specific and unconnected information. In addition, data size is increasingly becoming an obstacle when accessing or storing biological data. All these issues make it very difficult to extract and integrate information from different sources, to analyze experiments or to access and query this information in a programmatic way. CellBase provides a solution to the growing necessity of integration by easing access to biological data. CellBase implements a set of RESTful web services that query a centralized database containing the most relevant biological data sources. The database is hosted in our servers and is regularly updated. CellBase documentation can be found at http://docs.bioinfo.cipf.es/projects/cellbase.

  10. OrChem - An open source chemistry search engine for Oracle®

    PubMed Central

    2009-01-01

    Background Registration, indexing and searching of chemical structures in relational databases is one of the core areas of cheminformatics. However, little detail has been published on the inner workings of search engines and their development has been mostly closed-source. We decided to develop an open source chemistry extension for Oracle, the de facto database platform in the commercial world. Results Here we present OrChem, an extension for the Oracle 11G database that adds registration and indexing of chemical structures to support fast substructure and similarity searching. The cheminformatics functionality is provided by the Chemistry Development Kit. OrChem provides similarity searching with response times in the order of seconds for databases with millions of compounds, depending on a given similarity cut-off. For substructure searching, it can make use of multiple processor cores on today's powerful database servers to provide fast response times in equally large data sets. Availability OrChem is free software and can be redistributed and/or modified under the terms of the GNU Lesser General Public License as published by the Free Software Foundation. All software is available via http://orchem.sourceforge.net. PMID:20298521
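The similarity search described rests on comparing fingerprint bit sets with the Tanimoto coefficient against a cut-off. A toy sketch of that core idea, in which small integer sets stand in for real CDK fingerprints and the compound names are invented:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """|A ∩ B| / |A ∪ B| for two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def similarity_search(query, database, cutoff=0.7):
    """Return (id, score) pairs at or above the cut-off, best first."""
    hits = [(cid, tanimoto(query, fp)) for cid, fp in database.items()]
    return sorted(((c, s) for c, s in hits if s >= cutoff),
                  key=lambda t: t[1], reverse=True)

db = {"aspirin": {1, 2, 3, 4}, "caffeine": {3, 4, 5, 6}, "water": {9}}
print(similarity_search({1, 2, 3, 4, 5}, db))
# → [('aspirin', 0.8)]
```

Real engines precompute fingerprints and index them in the database so the cut-off prunes candidates before any full comparison, which is where the reported seconds-scale response times come from.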

  11. Development of a consumer product ingredient database for chemical exposure screening and prioritization.

    PubMed

    Goldsmith, M-R; Grulke, C M; Brooks, R D; Transue, T R; Tan, Y M; Frame, A; Egeghy, P P; Edwards, R; Chang, D T; Tornero-Velez, R; Isaacs, K; Wang, A; Johnson, J; Holm, K; Reich, M; Mitchell, J; Vallero, D A; Phillips, L; Phillips, M; Wambaugh, J F; Judson, R S; Buckley, T J; Dary, C C

    2014-03-01

    Consumer products are a primary source of chemical exposures, yet little structured information is available on the chemical ingredients of these products and the concentrations at which ingredients are present. To address this data gap, we created a database of chemicals in consumer products using product Material Safety Data Sheets (MSDSs) publicly provided by a large retailer. The resulting database represents 1797 unique chemicals mapped to 8921 consumer products and a hierarchy of 353 consumer product "use categories" within a total of 15 top-level categories. We examine the utility of this database and discuss ways in which it will support (i) exposure screening and prioritization, (ii) generic or framework formulations for several indoor/consumer product exposure modeling initiatives, (iii) candidate chemical selection for monitoring near field exposure from proximal sources, and (iv) as activity tracers or ubiquitous exposure sources using "chemical space" map analyses. Chemicals present at high concentrations and across multiple consumer products and use categories that hold high exposure potential are identified. Our database is publicly available to serve regulators, retailers, manufacturers, and the public for predictive screening of chemicals in new and existing consumer products on the basis of exposure and risk. Published by Elsevier Ltd.
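A database of this shape supports exposure screening by inverting the product-to-ingredient mapping and ranking chemicals by how many products contain them. A minimal sketch with invented product and chemical names, not entries from the actual database:

```python
from collections import defaultdict

product_ingredients = {
    "glass cleaner": ["ammonia", "ethanol"],
    "hand soap": ["ethanol", "fragrance X"],
    "air freshener": ["fragrance X", "ethanol"],
}

def rank_chemicals(products: dict) -> list:
    """Chemicals ordered by the number of products they appear in."""
    counts = defaultdict(int)
    for ingredients in products.values():
        for chem in set(ingredients):   # count each chemical once per product
            counts[chem] += 1
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))

print(rank_chemicals(product_ingredients))
# → [('ethanol', 3), ('fragrance X', 2), ('ammonia', 1)]
```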

  12. Optimization of Compton Source Performance through Electron Beam Shaping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyzhenkov, Alexander; Yampolsky, Nikolai

    2016-09-26

    We investigate a novel scheme for significantly increasing the brightness of x-ray light sources based on inverse Compton scattering (ICS) - scattering laser pulses off relativistic electron beams. The brightness of ICS sources is limited by the electron beam quality since electrons traveling at different angles, and/or having different energies, produce photons with different energies. Therefore, the spectral brightness of the source is defined by the 6d electron phase space shape and size, as well as laser beam parameters. The peak brightness of the ICS source can then be maximized if the electron phase space is transformed in such a way that all electrons scatter x-ray photons of the same frequency into the same direction, arriving at the observer at the same time. We describe the x-ray photon beam quality through the Wigner function (6d photon phase space distribution) and derive it for the ICS source when the electron and laser rms matrices are arbitrary.

  13. Valid Statistical Analysis for Logistic Regression with Multiple Sources

    NASA Astrophysics Data System (ADS)

    Fienberg, Stephen E.; Nardi, Yuval; Slavković, Aleksandra B.

    Considerable effort has gone into understanding issues of privacy protection of individual information in single databases, and various solutions have been proposed depending on the nature of the data, the ways in which the database will be used and the precise nature of the privacy protection being offered. Once data are merged across sources, however, the nature of the problem becomes far more complex, and a number of privacy issues arise for the linked individual files that go well beyond those considered with regard to the data within individual sources. In this paper, we propose an approach that supports full statistical analysis of the combined database without actually combining it. We focus mainly on logistic regression, but the method and tools described may be applied to other statistical models as well.
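The flavor of fitting one model across sources without pooling rows can be sketched with a toy horizontally partitioned logistic regression. This is a plain gradient-sharing illustration under invented data, not the authors' privacy-preserving protocol: each source computes the log-likelihood gradient on its own records and only the aggregated gradient leaves the source:

```python
import math

def local_gradient(beta, rows):
    """Gradient contribution of one source's (features, label) records."""
    grad = [0.0] * len(beta)
    for x, y in rows:
        p = 1.0 / (1.0 + math.exp(-sum(b * xi for b, xi in zip(beta, x))))
        for j, xi in enumerate(x):
            grad[j] += (y - p) * xi
    return grad

def fit(sources, dim, lr=0.5, steps=200):
    """Gradient ascent where each party reports only its local gradient."""
    beta = [0.0] * dim
    for _ in range(steps):
        total = [0.0] * dim
        for rows in sources:
            for j, g in enumerate(local_gradient(beta, rows)):
                total[j] += g
        beta = [b + lr * g for b, g in zip(beta, total)]
    return beta

# Two sources, one feature plus an intercept; labels follow the feature's sign.
src_a = [((1.0, -2.0), 0), ((1.0, -1.0), 0)]
src_b = [((1.0, 1.0), 1), ((1.0, 2.0), 1)]
beta = fit([src_a, src_b], dim=2)
p_pos = 1.0 / (1.0 + math.exp(-(beta[0] + beta[1] * 2.0)))
print(round(p_pos, 3))   # high probability for a clearly positive example
```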

  14. 49 CFR 237.155 - Documents and records.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the information required by this part; (3) The track owner monitors its electronic records database...; (4) The track owner shall train its employees who use the system on the proper use of the electronic...

  15. Human health risk assessment database, "the NHSRC toxicity value database": supporting the risk assessment process at US EPA's National Homeland Security Research Center.

    PubMed

    Moudgal, Chandrika J; Garrahan, Kevin; Brady-Roberts, Eletha; Gavrelis, Naida; Arbogast, Michelle; Dun, Sarah

    2008-11-15

    The toxicity value database of the United States Environmental Protection Agency's (EPA) National Homeland Security Research Center has been in development since 2004. The toxicity value database includes a compilation of agent property, toxicity, dose-response, and health effects data for 96 agents: 84 chemical and radiological agents and 12 biotoxins. The database is populated with multiple toxicity benchmark values and agent property information from secondary sources, with web links to the secondary sources, where available. A selected set of primary literature citations and associated dose-response data are also included. The toxicity value database offers a powerful means to quickly and efficiently gather pertinent toxicity and dose-response data for a number of agents that are of concern to the nation's security. This database, in conjunction with other tools, will play an important role in understanding human health risks, and will provide a means for risk assessors and managers to make quick and informed decisions on the potential health risks and determine appropriate responses (e.g., cleanup) to agent release. A final, stand-alone MS Access working version of the toxicity value database was completed in November 2007.

  16. Electronic nicotine delivery system (electronic cigarette) awareness, use, reactions and beliefs: a systematic review

    PubMed Central

    Pepper, Jessica K; Brewer, Noel T

    2015-01-01

    Objective We sought to systematically review the literature on electronic nicotine delivery systems (ENDS, also called electronic cigarettes) awareness, use, reactions and beliefs. Data sources We searched five databases for articles published between 2006 and 1 July 2013 that contained variations of the phrases ‘electronic cigarette’, ‘e-cigarette’ and ‘electronic nicotine delivery’. Study selection Of the 244 abstracts identified, we excluded articles not published in English, articles unrelated to ENDS, dissertation abstracts and articles without original data on prespecified outcomes. Data extraction Two reviewers coded each article for ENDS awareness, use, reactions and beliefs. Data synthesis 49 studies met inclusion criteria. ENDS awareness increased from 16% to 58% from 2009 to 2011, and use increased from 1% to 6%. The majority of users were current or former smokers. Many users found ENDS satisfying, and some engaged in dual use of ENDS and other tobacco. No longitudinal studies examined whether ENDS serve as ‘gateways’ to future tobacco use. Common reasons for using ENDS were quitting smoking and using a product that is healthier than cigarettes. Self-reported survey data and prospective trials suggest that ENDS might help cigarette smokers quit, but no randomised controlled trials with probability samples compared ENDS with other cessation tools. Some individuals used ENDS to avoid smoking restrictions. Conclusions ENDS use is expanding rapidly despite experts’ concerns about safety, dual use and possible ‘gateway’ effects. More research is needed on effective public health messages, perceived health risks, validity of self-reports of smoking cessation and the use of different kinds of ENDS. PMID:24259045

  17. LUNAR DUST GRAIN CHARGING BY ELECTRON IMPACT: COMPLEX ROLE OF SECONDARY ELECTRON EMISSIONS IN SPACE ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbas, M. M.; Craven, P. D.; LeClair, A. C.

    2010-08-01

    Dust grains in various astrophysical environments are generally charged electrostatically by photoelectric emissions with radiation from nearby sources, or by electron/ion collisions by sticking or secondary electron emissions (SEEs). The high vacuum environment on the lunar surface leads to some unusual physical and dynamical phenomena involving dust grains with high adhesive characteristics, and levitation and transportation over long distances. Knowledge of the dust grain charges and equilibrium potentials is important for understanding a variety of physical and dynamical processes in the interstellar medium, and heliospheric, interplanetary/planetary, and lunar environments. It has been well recognized that the charging properties of individual micron-/submicron-size dust grains are expected to be substantially different from the corresponding values for bulk materials. In this paper, we present experimental results on the charging of individual 0.2-13 µm size dust grains selected from Apollo 11 and 17 dust samples, and spherical silica particles by exposing them to mono-energetic electron beams in the 10-200 eV energy range. The dust charging process by electron impact involving the SEEs discussed is found to be a complex charging phenomenon with strong particle size dependence. The measurements indicate substantial differences between the polarity and magnitude of the dust charging rates of individual small-size dust grains, and the measurements and model properties of corresponding bulk materials. A more comprehensive plan of measurements of the charging properties of individual dust grains for developing a database for realistic models of dust charging in astrophysical and lunar environments is in progress.

  18. Lunar Dust Grain Charging by Electron Impact: Complex Role of Secondary Electron Emissions in Space Environments

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Tankosic, D.; Craven, P. D.; LeClair, A.; Spann, J. F.

    2010-01-01

    Dust grains in various astrophysical environments are generally charged electrostatically by photoelectric emissions with radiation from nearby sources, or by electron/ion collisions by sticking or secondary electron emissions (SEEs). The high vacuum environment on the lunar surface leads to some unusual physical and dynamical phenomena involving dust grains with high adhesive characteristics, and levitation and transportation over long distances. Knowledge of the dust grain charges and equilibrium potentials is important for understanding a variety of physical and dynamical processes in the interstellar medium, and heliospheric, interplanetary/planetary, and lunar environments. It has been well recognized that the charging properties of individual micron-/submicron-size dust grains are expected to be substantially different from the corresponding values for bulk materials. In this paper, we present experimental results on the charging of individual 0.2-13 µm size dust grains selected from Apollo 11 and 17 dust samples, and spherical silica particles by exposing them to mono-energetic electron beams in the 10-200 eV energy range. The dust charging process by electron impact involving the SEEs discussed is found to be a complex charging phenomenon with strong particle size dependence. The measurements indicate substantial differences between the polarity and magnitude of the dust charging rates of individual small-size dust grains, and the measurements and model properties of corresponding bulk materials. A more comprehensive plan of measurements of the charging properties of individual dust grains for developing a database for realistic models of dust charging in astrophysical and lunar environments is in progress.

  19. How can the research potential of the clinical quality databases be maximized? The Danish experience.

    PubMed

    Nørgaard, M; Johnsen, S P

    2016-02-01

    In Denmark, the need for monitoring of clinical quality and patient safety with feedback to the clinical, administrative and political systems has resulted in the establishment of a network of more than 60 publicly financed nationwide clinical quality databases. Although primarily devoted to monitoring and improving quality of care, the potential of these databases as data sources in clinical research is increasingly being recognized. In this review, we describe these databases focusing on their use as data sources for clinical research, including their strengths and weaknesses as well as future concerns and opportunities. The research potential of the clinical quality databases is substantial but has so far only been explored to a limited extent. Efforts related to technical, legal and financial challenges are needed in order to take full advantage of this potential. © 2016 The Association for the Publication of the Journal of Internal Medicine.

  20. Direction-division multiplexed holographic free-electron-driven light sources

    NASA Astrophysics Data System (ADS)

    Clarke, Brendan P.; MacDonald, Kevin F.; Zheludev, Nikolay I.

    2018-01-01

    We report on a free-electron-driven light source with a controllable direction of emission. The source comprises a microscopic array of plasmonic surface-relief holographic domains, each tailored to direct electron-induced light emission at a selected wavelength into a collimated beam in a prescribed direction. The direction-division multiplexed source is tested by driving it with the 30 kV electron beam of a scanning electron microscope: light emission, at a wavelength of 800 nm in the present case, is switched among different output angles by micron-scale repositioning of the electron injection point among domains. Such sources, with directional switching/tuning possible at picosecond timescales, may be applied to field-emission and surface-conduction electron-emission display technologies, optical multiplexing, and charged-particle-beam position metrology.

  1. Update on NASA Space Shuttle Earth Observations Photography on the laser videodisc for rapid image access

    NASA Technical Reports Server (NTRS)

    Lulla, Kamlesh

    1994-01-01

    There have been many significant improvements in the public access to the Space Shuttle Earth Observations Photography Database. New information is provided for the user community on the recently released videodisc of this database. Topics covered included the following: earlier attempts; our first laser videodisc in 1992; the new laser videodisc in 1994; and electronic database access.

  2. Detailed Modeling of Physical Processes in Electron Sources for Accelerator Applications

    NASA Astrophysics Data System (ADS)

    Chubenko, Oksana; Afanasev, Andrei

    2017-01-01

    At present, electron sources are essential in a wide range of applications - from common technical use to exploring the nature of matter. Depending on the application requirements, different methods and materials are used to generate electrons. State-of-the-art accelerator applications set a number of often-conflicting requirements for electron sources (e.g., quantum efficiency vs. polarization, current density vs. lifetime, etc). Development of advanced electron sources includes modeling and design of cathodes, material growth, fabrication of cathodes, and cathode testing. The detailed simulation and modeling of physical processes is required in order to shed light on the exact mechanisms of electron emission and to develop new-generation electron sources with optimized efficiency. The purpose of the present work is to study physical processes in advanced electron sources and develop scientific tools, which could be used to predict electron emission from novel nano-structured materials. In particular, the area of interest includes bulk/superlattice gallium arsenide (bulk/SL GaAs) photo-emitters and nitrogen-incorporated ultrananocrystalline diamond ((N)UNCD) photo/field-emitters. Work supported by The George Washington University and Euclid TechLabs LLC.

  3. 10 CFR 2.1011 - Management of electronic information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Management of electronic information. 2.1011 Section 2... High-Level Radioactive Waste at a Geologic Repository § 2.1011 Management of electronic information. (a... Language)-compliant (ANSI X3.135-1992/ISO 9075-1992) database management system (DBMS). Alternatively, the...

  4. Electronic Publishing and Document Delivery; A Case Study of Commercial Information Services on the Internet.

    ERIC Educational Resources Information Center

    Abbott, Anthony

    1992-01-01

    Discusses the electronic publishing activities of Meckler Publishing on the Internet, including a publications catalog, an electronic journal, and tables of contents databases. Broader issues of commercial network publishing are also addressed, including changes in the research process, changes in publishing, bibliographic control,…

  5. Source Data Applicability Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven D.; Ring, Robert W.

    2016-01-01

    Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system where it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for assigning uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide a case study example by translating Ground Benign (GB) and Ground Mobile (GM) to the Airborne Uninhabited Fighter (AUF) environment for three electronic components often found in space launch vehicle control systems. The classification method will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce uncertainty.

  6. Awareness, perception and barriers to seeking information from online academic databases and medical journals as sources of information.

    PubMed

    Wong, Li Ping; Mohamad Shakir, Sharina Mahavera; Tong, Wen Ting; Alias, Haridah; Aghamohammadi, Nasrin; Arumugam, Kulenthran

    2017-10-16

    Medical students' use of online medical journals as a source of information is crucial in the learning pathway to become medical doctors. We conducted a cross-sectional survey study among university medical students between December 2012 and March 2013 to assess their awareness, perceived usefulness, practices, and barriers to seeking information from online academic databases and medical journals. The response rate was 67.53%. The majority of the students knew of the availability of online academic databases and medical journals. The mean scores for awareness (4.25 of a possible 11.0), perceived usefulness (13.95 of a possible 33.0), and practice (10.67 of a possible 33.0) were low. The mean barrier score toward using online academic databases and medical journals was 25.41 (of a possible 45.0). Multivariate findings showed that significant barriers associated with overall usage of online databases and medical journals were 1) not knowing where or how to locate databases and 2) being unsure how to use Boolean operators. Availability of full-text subscriptions was found to be an important factor in using online databases. Study findings highlighted the need to increase awareness of academic databases' availability and increase training on ways to search online academic databases and medical journals.

  7. Literature searching for clinical and cost-effectiveness studies used in health technology assessment reports carried out for the National Institute for Clinical Excellence appraisal system.

    PubMed

    Royle, P; Waugh, N

    2003-01-01

    To contribute to making searching for Technology Assessment Reports (TARs) more cost-effective by suggesting an optimum literature retrieval strategy. A sample of 20 recent TARs. All sources used to search for clinical and cost-effectiveness studies were recorded. In addition, all studies that were included in the clinical and cost-effectiveness sections of the TARs were identified, and their characteristics recorded, including author, journal, year, study design, study size and quality score. Each was also classified by publication type, and then checked to see whether it was indexed in the following databases: MEDLINE, EMBASE, and then either the Cochrane Controlled Trials Register (CCTR) for clinical effectiveness studies or the NHS Economic Evaluation Database (NHS EED) for the cost-effectiveness studies. Any study not found in at least one of these databases was checked to see whether it was indexed in the Science Citation Index (SCI) and BIOSIS, and the American Society of Clinical Oncology (ASCO) Online if a cancer review. Any studies still not found were checked to see whether they were in a number of additional databases. The median number of sources searched per TAR was 20, and the range was from 13 to 33 sources. Six sources (CCTR, DARE, EMBASE, MEDLINE, NHS EED and sponsor/industry submissions to National Institute for Clinical Excellence) were used in all reviews. After searching the MEDLINE, EMBASE and NHS EED databases, 87.3% of the clinical effectiveness studies and 94.8% of the cost-effectiveness studies were found, rising to 98.2% when SCI, BIOSIS and ASCO Online and 97.9% when SCI and ASCO Online, respectively, were added. The median number of sources searched for the 14 TARs that included an economic model was 9.0 per TAR. A sensitive search filter for identifying non-randomised controlled trials (RCT), constructed for MEDLINE and using the search terms from the bibliographic records in the included studies, retrieved only 85% of the known sample. Therefore, it is recommended that when searching for non-RCT studies a search is done for the intervention alone, and records are then scanned manually for those that look relevant. Searching additional databases beyond the Cochrane Library (which includes CCTR, NHS EED and the HTA database), MEDLINE, EMBASE and SCI, plus BIOSIS limited to meeting abstracts only, was seldom found to be effective in retrieving additional studies for inclusion in the clinical and cost-effectiveness sections of TARs (apart from reviews of cancer therapies, where a search of the ASCO database is recommended). A more selective approach to database searching would suffice in most cases and would save resources, thereby making the TAR process more efficient. However, searching non-database sources (including submissions from manufacturers, recent meeting abstracts, contact with experts and checking reference lists) does appear to be a productive way of identifying further studies.
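
    The incremental-yield analysis above, asking what fraction of included studies each additional database contributes, reduces to set arithmetic. The study IDs and database contents below are invented for illustration, not data from the TAR sample.

    ```python
    # Invented example: which databases index which included studies.
    found_by = {
        "MEDLINE": {"s1", "s2", "s3"},
        "EMBASE": {"s2", "s4"},
        "SCI": {"s5"},
    }
    all_studies = {"s1", "s2", "s3", "s4", "s5"}

    def coverage(found_by, databases, all_studies):
        """Fraction of included studies indexed in at least one of the given databases."""
        found = set()
        for db in databases:
            found |= found_by.get(db, set())
        return len(found & all_studies) / len(all_studies)
    ```

    Here `coverage(found_by, {"MEDLINE", "EMBASE"}, all_studies)` gives 0.8, and adding SCI raises it to 1.0, mirroring how adding databases to the core set raised the retrieval percentage in the review.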

  8. Filtering peripheral high temperature electrons in a cylindrical rf-driven plasma by an axisymmetric radial magnetic field

    NASA Astrophysics Data System (ADS)

    Akahoshi, Hikaru; Takahashi, Kazunori; Ando, Akira

    2018-03-01

    High temperature electrons generated near a radial wall of a cylindrical source tube in a radiofrequency (rf) inductively coupled plasma are filtered by an axisymmetric radial magnetic field formed near the source exit by locating annular permanent magnets, where the axial magnetic field strength in the radially central region is fairly uniform inside the source tube and is close to zero near the source exit. The source is operated at 3 mTorr in argon and the rf antenna is powered by a 13.56 MHz and 400 W rf generator. Measurement of electron energy probability functions shows the presence of the peripheral high temperature electrons inside the source, while the temperature of the peripheral electrons downstream of the source is observed to be reduced.

  9. Active fault databases and seismic hazard calculations: a compromise between science and practice. Review of case studies from Spain.

    NASA Astrophysics Data System (ADS)

    Garcia-Mayordomo, Julian; Martin-Banda, Raquel; Insua-Arevalo, Juan Miguel; Alvarez-Gomez, Jose Antonio; Martinez-Diaz, Jose Jesus

    2017-04-01

    Since the Quaternary Active Faults Database of Iberia (QAFI) was released in February 2012, a number of studies aimed at producing seismic hazard assessments have made use of it. We will present a summary of the shortcomings and advantages encountered when QAFI was considered in different seismic hazard studies. These include the production of the new official seismic hazard map of Spain, performed in view of the foreseen adoption of Eurocode-8 during 2017. The QAFI database was considered a complementary source of information for designing the seismogenic source-zone models used in the calculations, and particularly for the estimation of the maximum magnitude distribution in each zone, as well as for assigning the predominant rupture mechanism based on style of faulting. We will also review the different results obtained by other studies that considered QAFI faults as independent seismogenic sources as opposed to source-zones, revealing, on one hand, the crucial importance of data reliability and, on the other, the strong influence that ground-motion attenuation models have on the actual impact of fault-sources on hazard results. Finally, we will briefly present the updated version of the database (QAFI v.3, 2015), which includes an original scheme for evaluating the reliability of fault seismic parameters, specifically devised to facilitate decision-making by seismic hazard practitioners.

  10. Technology and Microcomputers for an Information Centre/Special Library.

    ERIC Educational Resources Information Center

    Daehn, Ralph M.

    1984-01-01

    Discusses use of microcomputer hardware and software, telecommunications methods, and advanced library methods to create a specialized information center's database of literature relating to farm machinery and food processing. Systems and services (electronic messaging, serials control, database creation, cataloging, collections, circulation,…

  11. Sustaining Indigenous Languages in Cyberspace.

    ERIC Educational Resources Information Center

    Cazden, Courtney B.

    This paper describes how certain types of electronic technologies, specifically CD-ROMs, computerized databases, and telecommunications networks, are being incorporated into language and culture revitalization projects in Alaska and around the Pacific. The paper presents two examples of CD-ROMs and computerized databases from Alaska, describing…

  12. Average stopping powers for electron and photon sources for radiobiological modeling and microdosimetric applications

    NASA Astrophysics Data System (ADS)

    Vassiliev, Oleg N.; Kry, Stephen F.; Grosshans, David R.; Mohan, Radhe

    2018-03-01

    This study concerns calculation of the average electronic stopping power for photon and electron sources. It addresses two problems that have not yet been fully resolved. The first is defining the electron spectrum used for averaging in a way that is most suitable for radiobiological modeling. We define it as the spectrum of electrons entering the radiation-sensitive volume (SV) within the cell nucleus, evaluated at the moment they enter the SV. For this spectrum we derive a formula that combines linearly the fluence spectrum and the source spectrum. The latter is the distribution of initial energies of electrons produced by a source. Previous studies used either the fluence or source spectra, but not both, thereby neglecting a part of the complete spectrum. Our derived formula reduces to these two prior methods in the case of high and low energy sources, respectively. The second problem is extending electron spectra to low energies. Previous studies used an energy cut-off on the order of 1 keV. However, as we show, even for high energy sources, such as 60Co, electrons with energies below 1 keV contribute about 30% to the dose. In this study all the spectra were calculated with the Geant4-DNA code and a cut-off energy of only 11 eV. We present formulas for calculating frequency- and dose-average stopping powers, numerical results for several important electron and photon sources, and tables with all the data needed to use our formulas for arbitrary electron and photon sources producing electrons with initial energies up to  ∼1 MeV.
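
    On a discrete energy grid, the frequency- and dose-average stopping powers described above are weighted averages of S(E) over the electron spectrum phi(E). A minimal sketch follows; the spectrum and stopping-power arrays passed in are placeholders, not Geant4-DNA output.

    ```python
    def frequency_average(spectrum, stopping_power):
        """Fluence-weighted (frequency-) average: sum(phi_i * S_i) / sum(phi_i)."""
        num = sum(p * s for p, s in zip(spectrum, stopping_power))
        return num / sum(spectrum)

    def dose_average(spectrum, stopping_power):
        """Dose-weighted average: sum(phi_i * S_i**2) / sum(phi_i * S_i)."""
        num = sum(p * s * s for p, s in zip(spectrum, stopping_power))
        den = sum(p * s for p, s in zip(spectrum, stopping_power))
        return num / den
    ```

    The dose-average weights each energy bin by its dose contribution (phi * S), so it always lies at or above the frequency average for a non-uniform spectrum.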

  13. Stopping-Power and Range Tables for Electrons, Protons, and Helium Ions

    National Institute of Standards and Technology Data Gateway

    SRD 124 NIST Stopping-Power and Range Tables for Electrons, Protons, and Helium Ions (Web, free access).   The databases ESTAR, PSTAR, and ASTAR calculate stopping-power and range tables for electrons, protons, or helium ions. Stopping-power and range tables can be calculated for electrons in any user-specified material and for protons and helium ions in 74 materials.

  14. Searching and exploitation of distributed geospatial data sources via the Naval Research Lab's Geospatial Information Database (GIDB) Portal System

    NASA Astrophysics Data System (ADS)

    McCreedy, Frank P.; Sample, John T.; Ladd, William P.; Thomas, Michael L.; Shaw, Kevin B.

    2005-05-01

    The Naval Research Laboratory's Geospatial Information Database (GIDB) Portal System has been extended to now include an extensive geospatial search functionality. The GIDB Portal System interconnects over 600 distributed geospatial data sources via the Internet with a thick client, thin client and a PDA client. As the GIDB Portal System has rapidly grown over the last two years (adding hundreds of geospatial sources), the obvious requirement has arisen to more effectively mine the interconnected sources in near real-time. How the GIDB Search addresses this issue is the prime focus of this paper.

  15. A Unified Flash Flood Database across the United States

    USGS Publications Warehouse

    Gourley, Jonathan J.; Hong, Yang; Flamig, Zachary L.; Arthur, Ami; Clark, Robert; Calianno, Martin; Ruin, Isabelle; Ortel, Terry W.; Wieczorek, Michael; Kirstetter, Pierre-Emmanuel; Clark, Edward; Krajewski, Witold F.

    2013-01-01

    Despite flash flooding being one of the most deadly and costly weather-related natural hazards worldwide, individual datasets to characterize them in the United States are hampered by limited documentation and can be difficult to access. This study is the first of its kind to assemble, reprocess, describe, and disseminate a georeferenced U.S. database providing a long-term, detailed characterization of flash flooding in terms of spatiotemporal behavior and specificity of impacts. The database is composed of three primary sources: 1) the entire archive of automated discharge observations from the U.S. Geological Survey that has been reprocessed to describe individual flooding events, 2) flash-flooding reports collected by the National Weather Service from 2006 to the present, and 3) witness reports obtained directly from the public in the Severe Hazards Analysis and Verification Experiment during the summers 2008–10. Each observational data source has limitations; a major asset of the unified flash flood database is its collation of relevant information from a variety of sources that is now readily available to the community in common formats. It is anticipated that this database will be used for many diverse purposes, such as evaluating tools to predict flash flooding, characterizing seasonal and regional trends, and improving understanding of dominant flood-producing processes. We envision the initiation of this community database effort will attract and encompass future datasets.
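
    Collating records from heterogeneous sources into a common format, as the unified database above does, amounts to per-source field renaming plus provenance tagging. The field names and records below are invented for illustration, not the database's actual schema.

    ```python
    # Invented per-source field maps: source field name -> common schema field name.
    FIELD_MAPS = {
        "usgs_gauge": {"site_no": "site_id", "peak_dt": "start_time", "peak_va": "peak_discharge"},
        "nws_report": {"event_id": "site_id", "begin_time": "start_time"},
    }

    def to_common(record, source):
        """Rename source-specific fields to the common schema and tag provenance."""
        fmap = FIELD_MAPS[source]
        out = {fmap[k]: v for k, v in record.items() if k in fmap}
        out["source"] = source
        return out
    ```

    Keeping the `source` tag on every unified record preserves the ability to weigh each observation by the known limitations of its originating dataset.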

  16. Implementation of a standardized electronic tool improves compliance, accuracy, and efficiency of trainee-to-trainee patient care handoffs after complex general surgical oncology procedures.

    PubMed

    Clarke, Callisia N; Patel, Sameer H; Day, Ryan W; George, Sobha; Sweeney, Colin; Monetes De Oca, Georgina Avaloa; Aiss, Mohamed Ait; Grubbs, Elizabeth G; Bednarski, Brian K; Lee, Jeffery E; Bodurka, Diane C; Skibber, John M; Aloia, Thomas A

    2017-03-01

    Duty-hour regulations have increased the frequency of trainee-trainee patient handoffs. Each handoff creates a potential source for communication errors that can lead to near-miss and patient-harm events. We investigated the utility, efficacy, and trainee experience associated with implementation of a novel, standardized, electronic handoff system. We conducted a prospective intervention study of trainee-trainee handoffs of inpatients undergoing complex general surgical oncology procedures at a large tertiary institution. Preimplementation data were measured using trainee surveys and direct observation and by tracking delinquencies in charting. A standardized electronic handoff tool was created in a research electronic data capture (REDCap) database using the previously validated I-PASS methodology (illness severity, patient summary, action list, situational awareness and contingency planning, and synthesis). Electronic handoff was augmented by direct communication via phone or face-to-face interaction for inpatients deemed "watcher" or "unstable." Postimplementation handoff compliance, communication errors, and trainee work flow were measured and compared to preimplementation values using standard statistical analysis. A total of 474 handoffs (203 preintervention and 271 postintervention) were observed over the study period; 86 handoffs involved patients admitted to the surgical intensive care unit, 344 patients admitted to the surgical stepdown unit, and 44 patients on the surgery ward. Implementation of the structured electronic tool resulted in an increase in trainee handoff compliance from 73% to 96% (P < .001) and decreased errors in communication by 50% (P = .044) while improving trainee efficiency and workflow. A standardized electronic tool augmented by direct communication for higher acuity patients can improve compliance, accuracy, and efficiency of handoff communication between surgery trainees. Copyright © 2016 Elsevier Inc. All rights reserved.
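
    A sketch of a structured handoff record following the I-PASS mnemonic the study used (illness severity, patient summary, action list, situational awareness and contingency planning, synthesis). The field names are assumptions for the sketch, not the study's actual REDCap schema.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Handoff:
        illness_severity: str      # "stable", "watcher", or "unstable"
        patient_summary: str
        action_list: list
        contingency_plan: str
        synthesis_confirmed: bool  # receiver read back the handoff

        def requires_direct_communication(self):
            # The study augmented the electronic handoff with phone or
            # face-to-face contact for "watcher" and "unstable" patients.
            return self.illness_severity in {"watcher", "unstable"}
    ```

    Making the severity field drive the escalation rule keeps the extra synchronous communication targeted at the higher-acuity patients rather than every handoff.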

  17. An Introduction to Services Accessible on the Internet.

    ERIC Educational Resources Information Center

    Giguere, Marlene

    1992-01-01

    Provides an overview of the INTERNET and INTERNET services of interest to libraries, including electronic mail, bulletin boards, electronic publishing, online public access catalogs and databases, and downloaded texts and software. (16 references) (MES)

  18. Guidance Determining Applicability of New Major Source Regulations in the Granting of Construction Permits to Sources of Air Emissions

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  19. Bibliographical database of radiation biological dosimetry and risk assessment: Part 1, through June 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straume, T.; Ricker, Y.; Thut, M.

    1988-08-29

    This database was constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and key-worded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on publication number, authors, key words, title, year, and journal name. Photocopies of all publications contained in the database are maintained in a file that is numerically arranged by citation number. This report of the database is provided as a useful reference and overview. It should be emphasized that the database will grow as new citations are added to it. With that in mind, we arranged this report in order of ascending citation number so that follow-up reports will simply extend this document. The database cites 1212 publications. Publications are from 119 different scientific journals; 27 of these journals are cited at least 5 times. It also contains references to 42 books and published symposia, and 129 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed among the scientific literature, although a few journals clearly dominate. The four journals publishing the largest number of relevant papers are Health Physics, Mutation Research, Radiation Research, and International Journal of Radiation Biology. Publications in Health Physics make up almost 10% of the current database.
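
    The keyword-indexed retrieval the report describes can be sketched as a small in-memory index over numbered citations. The sample records are invented examples, not citations from the actual database.

    ```python
    class CitationIndex:
        """Toy index: citations keyed by number, searchable by keyword or journal."""
        def __init__(self):
            self.records = {}

        def add(self, number, title, year, journal, keywords):
            self.records[number] = {
                "title": title, "year": year,
                "journal": journal, "keywords": set(keywords),
            }

        def by_keyword(self, keyword):
            return sorted(n for n, r in self.records.items() if keyword in r["keywords"])

        def by_journal(self, journal):
            return sorted(n for n, r in self.records.items() if r["journal"] == journal)

    # Invented sample records (not entries from the actual database):
    idx = CitationIndex()
    idx.add(1, "Chromosome aberrations as biological dosimeters", 1985,
            "Health Physics", ["biological dosimetry"])
    idx.add(2, "Low-dose risk estimates revisited", 1987,
            "Radiation Research", ["risk assessment"])
    ```

    Returning sorted citation numbers matches the report's ascending-number arrangement, so follow-up additions simply extend the result lists.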

  20. Shutterless ion mobility spectrometer with fast pulsed electron source

    NASA Astrophysics Data System (ADS)

    Bunert, E.; Heptner, A.; Reinecke, T.; Kirk, A. T.; Zimmermann, S.

    2017-02-01

    Ion mobility spectrometers (IMS) are devices for fast and very sensitive trace gas analysis. The measuring principle is based on an initial ionization process of the target analyte. Most IMS employ radioactive electron sources, such as 63Ni or 3H. These radioactive materials have the disadvantage of legal restrictions and the electron emission has a predetermined intensity and cannot be controlled or disabled. In this work, we replaced the 3H source of our IMS with 100 mm drift tube length with our nonradioactive electron source, which generates comparable spectra to the 3H source. An advantage of our emission current controlled nonradioactive electron source is that it can operate in a fast pulsed mode with high electron intensities. By optimizing the geometric parameters and developing fast control electronics, we can achieve very short electron emission pulses for ionization with high intensities and an adjustable pulse width of down to a few nanoseconds. This results in small ion packets at simultaneously high ion densities, which are subsequently separated in the drift tube. Normally, the required small ion packet is generated by a complex ion shutter mechanism. By omitting the additional reaction chamber, the ion packet can be generated directly at the beginning of the drift tube by our pulsed nonradioactive electron source with only slight reduction in resolving power. Thus, the complex and costly shutter mechanism and its electronics can also be omitted, which leads to a simple low-cost IMS-system with a pulsed nonradioactive electron source and a resolving power of 90.

  1. Omics databases on kidney disease: where they can be found and how to benefit from them.

    PubMed

    Papadopoulos, Theofilos; Krochmal, Magdalena; Cisek, Katryna; Fernandes, Marco; Husi, Holger; Stevens, Robert; Bascands, Jean-Loup; Schanstra, Joost P; Klein, Julie

    2016-06-01

    In recent decades, the evolution of omics technologies has led to advances in all biological fields, creating a demand for effective storage, management and exchange of rapidly generated data and research discoveries. To address this need, the development of databases of experimental outputs has become a common part of scientific practice, serving as knowledge sources and data-sharing platforms that provide information about genes, transcripts, proteins or metabolites. In this review, we present currently available omics databases, with a special focus on their application in kidney research and possibly in clinical practice. Databases are divided into two categories: general databases with a broad information scope and kidney-specific databases distinctively concentrated on kidney pathologies. In research, databases can be used as a rich source of information about pathophysiological mechanisms and molecular targets. In the future, databases will support clinicians with their decisions, providing better and faster diagnoses and setting the direction towards more preventive, personalized medicine. We also provide a test case demonstrating the potential of biological databases in comparing multi-omics datasets and generating new hypotheses to answer a critical and common diagnostic problem in nephrology practice. In the future, employment of databases combined with data integration and data mining should provide powerful insights into unlocking the mysteries of kidney disease, leading to a potential impact on pharmacological intervention and therapeutic disease management.

  2. Comparison of Various Databases for Estimation of Dietary Polyphenol Intake in the Population of Polish Adults.

    PubMed

    Witkowska, Anna M; Zujko, Małgorzata E; Waśkiewicz, Anna; Terlikowska, Katarzyna M; Piotrowski, Walerian

    2015-11-11

    The primary aim of the study was to estimate the consumption of polyphenols in a population of 6661 subjects aged between 20 and 74 years representing a cross-section of the Polish society; the second objective was to compare the intakes of flavonoids calculated on the basis of the two commonly used databases. Daily food consumption data were collected in 2003-2005 using a single 24-hour dietary recall. Intake of total polyphenols was estimated using the online Phenol-Explorer database, and flavonoid intake was determined using the following data sources: the USDA database, combining the United States Department of Agriculture (USDA) flavonoid and isoflavone databases, and the Phenol-Explorer database. Total polyphenol intake, calculated with the Phenol-Explorer database, was 989 mg/day, with major contributions from phenolic acids (556 mg/day) and flavonoids (403.5 mg/day). The flavonoid intake calculated on the basis of the USDA databases was 525 mg/day. This study found that tea is the primary source of polyphenols and flavonoids for the studied population, including mainly flavanols, while coffee is the most important contributor of phenolic acids, mostly hydroxycinnamic acids. Our study also demonstrated that flavonoid intakes estimated according to various databases may substantially differ. Further work should be undertaken to expand polyphenol databases to better reflect their food contents.
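
    Estimating intake from a 24-hour recall reduces to summing amount × content per 100 g over the foods consumed; differences between composition databases then propagate directly into the totals. The content values below are invented placeholders, not Phenol-Explorer or USDA entries.

    ```python
    # Invented content values (mg of polyphenols per 100 g) -- not database entries.
    CONTENT_MG_PER_100G = {
        "black tea (brewed)": 102.0,
        "coffee (brewed)": 214.0,
        "apple": 136.0,
    }

    def daily_intake_mg(recall):
        """Sum amount (g) x content (mg/100 g) over the foods in a 24-hour recall."""
        return sum(amount_g * CONTENT_MG_PER_100G[food] / 100.0
                   for food, amount_g in recall.items())
    ```

    Running the same recall against two content tables (e.g. one per database) and comparing the totals is exactly the database comparison the study performs at population scale.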

  3. Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.

    ERIC Educational Resources Information Center

    Conlon, Sumali Pin-Ngern; And Others

    1993-01-01

    Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)

  4. Business Faculty Research: Satisfaction with the Web versus Library Databases

    ERIC Educational Resources Information Center

    Dewald, Nancy H.; Silvius, Matthew A.

    2005-01-01

    Business faculty members teaching at undergraduate campuses of the Pennsylvania State University were surveyed in order to assess their satisfaction with free Web sources and with subscription databases for their professional research. Although satisfaction with the Web's ease of use was higher than that for databases, overall satisfaction for…

  5. Environment Online: The Greening of Databases. Part 2. Scientific and Technical Databases.

    ERIC Educational Resources Information Center

    Alston, Patricia Gayle

    1991-01-01

    This second in a series of articles about online sources of environmental information describes scientific and technical databases that are useful for searching environmental data. Topics covered include chemicals and hazardous substances; agriculture; pesticides; water; forestry, oil, and energy resources; air; environmental and occupational…

  6. Characteristics of Resources Represented in the OCLC CORC Database.

    ERIC Educational Resources Information Center

    Connell, Tschera Harkness; Prabha, Chandra

    2002-01-01

    Examines the characteristics of Web resources in Online Computer Library Center's (OCLC) Cooperative Online Resource Catalog (CORC) in terms of subject matter, source of content, publication patterns, and units of information chosen for representation in the database. Suggests that the ability to successfully use a database depends on…

  7. DSSTOX STRUCTURE-SEARCHABLE PUBLIC TOXICITY DATABASE NETWORK: CURRENT PROGRESS AND NEW INITIATIVES TO IMPROVE CHEMO-BIOINFORMATICS CAPABILITIES

    EPA Science Inventory

    The EPA DSSTox website (http://www/epa.gov/nheerl/dsstox) publishes standardized, structure-annotated toxicity databases, covering a broad range of toxicity disciplines. Each DSSTox database features documentation written in collaboration with the source authors and toxicity expe...

  8. Where Full-Text Is Viable.

    ERIC Educational Resources Information Center

    Cotton, P. L.

    1987-01-01

    Defines two types of online databases: source, referring to those intended to be complete in themselves, whether full-text or abstracts; and bibliographic, meaning those that are not complete. Predictions are made about the future growth rate of these two types of databases, as well as full-text versus abstract databases. (EM)

  9. Photo-stimulated low electron temperature high current diamond film field emission cathode

    DOEpatents

    Shurter, Roger Philips [Los Alamos, NM]; Devlin, David James [Santa Fe, NM]; Moody, Nathan Andrew [Los Alamos, NM]; Taccetti, Jose Martin [Santa Fe, NM]; Russell, Steven John [Los Alamos, NM]

    2012-07-24

    An electron source includes a back contact surface having a means for attaching a power source to the back contact surface. The electron source also includes a layer comprising platinum in direct contact with the back contact surface, a composite layer of single-walled carbon nanotubes embedded in platinum in direct contact with the layer comprising platinum. The electron source also includes a nanocrystalline diamond layer in direct contact with the composite layer. The nanocrystalline diamond layer is doped with boron. A portion of the back contact surface is removed to reveal the underlying platinum. The electron source is contained in an evacuable container.

  10. Teleform scannable data entry: an efficient method to update a community-based medical record? Community care coordination network Database Group.

    PubMed Central

    Guerette, P.; Robinson, B.; Moran, W. P.; Messick, C.; Wright, M.; Wofford, J.; Velez, R.

    1995-01-01

    Community-based multi-disciplinary care of chronically ill individuals frequently requires the efforts of several agencies and organizations. The Community Care Coordination Network (CCCN) is an effort to establish a community-based clinical database and electronic communication system to facilitate the exchange of pertinent patient data among primary care, community-based and hospital-based providers. In developing a primary care based electronic record, a method is needed to update records from the field or remote sites and agencies and yet maintain data quality. Scannable data entry with fixed fields, optical character recognition and verification was compared to traditional keyboard data entry to determine the relative efficiency of each method in updating the CCCN database. PMID:8563414

  11. Database integration in a multimedia-modeling environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorow, Kevin E.

    2002-09-02

    Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that has to be transferred is kept to a minimum (only the data that fulfill a specific request are provided, as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.
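
    The metadata idea above, mapping model fields to remote columns so that only the requested data are transferred, can be sketched as follows. All table, column, and field names are invented for the sketch; they are not from the framework described.

    ```python
    # Invented metadata: model field name -> remote column name for one data source.
    FIELD_MAP = {
        "contaminant": "chem_name",
        "concentration": "conc_ug_l",
        "sample_date": "smp_dt",
    }

    def build_extraction_query(table, field_map, needed_fields):
        """Request only the columns the model needs, not the whole table."""
        cols = ", ".join(field_map[f] for f in needed_fields)
        return f"SELECT {cols} FROM {table}"
    ```

    Because the query names only the mapped columns, a model requesting two fields pulls two columns over the network instead of the complete contents of the source, which is the minimization benefit the abstract describes.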

  12. Feasibility and utility of applications of the common data model to multiple, disparate observational health databases.

    PubMed

    Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B

    2015-05-01

    To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations were excluded due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria applied using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
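    The vocabulary-mapping step whose yield is reported above (90% to 99% of records mapped) can be pictured with a toy sketch. This is not the OHDSI tooling; the source codes and concept IDs below are invented for illustration:

```python
# Toy sketch of source-to-standard vocabulary mapping and its yield.
# Records whose codes have no standard concept are flagged for data
# quality review in the source system rather than silently dropped.

def map_to_standard(records, source_to_concept):
    mapped, unmapped = [], []
    for code in records:
        (mapped if code in source_to_concept else unmapped).append(code)
    return mapped, unmapped, len(mapped) / len(records)

# Invented lookup table; a real mapping covers entire vocabularies.
source_to_concept = {"ICD9:250.00": 201826, "NDC:00093-1048": 1503297}
records = ["ICD9:250.00", "NDC:00093-1048", "LOCAL:XYZ", "ICD9:250.00"]

mapped, unmapped, rate = map_to_standard(records, source_to_concept)
# rate is 0.75: three of four records map, and "LOCAL:XYZ" is flagged
```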

  13. 15 CFR 995.4 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... database resulting from the transformation of the ENC by ECDIS for appropriate use, updates to the ENC by... of the 1974 SOLAS Convention. Electronic Navigational Chart (ENC) means a database, standardized as to content, structure, and format, issued for use with ECDIS on the authority of government...

  14. 7 CFR 274.3 - Retailer management.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... retailer, and it must include acceptable privacy and security features. Such systems shall only be... terminals that are capable of relaying electronic transactions to a central database computer for... specifications prior to implementation of the EBT system to enable third party processors to access the database...

  15. Review of the Complications Associated with Treatment of Oropharyngeal Cancer: A Guide to the Dental Practitioner

    PubMed Central

    Turner, Lena; Mupparapu, Muralidhar; Akintoye, Sunday O

    2013-01-01

    Objectives Oropharyngeal cancer (OPC) is the 6th most common cancer worldwide. A focus on risk factors, improved diagnostic methods and effective management strategies has made it possible to successfully treat OPC. However, the 5-year survival rate has not improved for several years due to multiple treatment complications, tissue morbidity, loss of function and diminished quality of life. Survivors are faced with complications like oral mucositis, hyposalivation, osteoradionecrosis, tissue fibrosis, morbidity from jaw resection, disfigurement and loss of function that further diminish quality of life. The aim of this review is to highlight major complications associated with treatment of OPC via a literature search and review of available options for identification and management of these complications. Data Sources Relevant publications on oral complications of OPC therapy were thoroughly reviewed from the literature published between the years 1988 and 2012. Material and Method We evaluated reported incidence, prevalence and risk factors for oral complications of chemotherapy and radiotherapy for OPC. The authors conducted an electronic search using English-language databases, namely PubMed Plus, Medline (Pre-Medline and Medline), the Cochrane Database of Systematic Reviews (evidence-based medicine), Dentistry & Oral Sciences Source, AccessScience, Embase, Evidence-Based Medicine Reviews Multifile, Google Scholar, ISI Journal Citation Reports, and Ovid Multi-Database. Conclusion We identified the most common complications associated with the treatment of oral cancers. Based on the information gathered, there is evidence that survival of OPC extends beyond eradication of the diseased tissue. Understanding the potential treatment complications and utilizing available resources to prevent and minimize them are important.
Caring for OPC survivors should be a multidisciplinary team approach involving the dentist, oncologist, internist and social worker to improve the currently stagnant 5-year survival rate of OPC. More emphasis on improved quality of life after elimination of the cancer will ultimately improve OPC survivorship. PMID:23444208

  16. Ibmdbpy-spatial : An Open-source implementation of in-database geospatial analytics in Python

    NASA Astrophysics Data System (ADS)

    Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor

    2017-04-01

    As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for adoption of in-database analytic technology within the geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends and spotting anomalies. Although a number of open-source spatial analysis libraries like geopandas and shapely are available today, most of them are restricted to the manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform called Bluemix. Working in-database reduces the network overhead, as the complete data set need not be replicated onto the user's local system; only a subset of the entire dataset is fetched into memory in a single instance. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features, such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4.
    The basic architecture of the package consists of three main components: 1) a connection to dashDB represented by the instance IdaDataBase, which uses a middleware API, namely pypyodbc or jaydebeapi, to establish the database connection via ODBC or JDBC respectively; 2) an instance representing the spatial data stored in the database as a dataframe in Python, called the IdaGeoDataFrame, with a specific geometry attribute that recognises a planar geometry column in dashDB; and 3) Python wrappers for spatial functions like within, distance, area, buffer and more, which dashDB currently supports, to make the querying process from Python much simpler for the users. The spatial functions translate well-known geopandas-like syntax into SQL queries, utilising the database connection to perform spatial operations in-database, and can operate on single geometries as well as on two different geometries from different IdaGeoDataFrames. The in-database queries strictly follow the standards of the OpenGIS Implementation Specification for Geographic information - Simple feature access for SQL. The results of the operations can thereby be accessed dynamically via interactive Jupyter notebooks from any system that supports Python, without any additional dependencies, and can also be combined with other open source libraries such as matplotlib and folium within Jupyter notebooks for visualization purposes. We built a use case analysing crime hotspots in New York City to validate our implementation and visualized the results as a choropleth map for each borough.
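    The translation layer described above can be sketched minimally: a geopandas-like method call on a geometry column is rendered as an OGC Simple Features SQL expression for in-database execution. The function table, schema, and column names here are illustrative assumptions, not the actual ibmdbpy-spatial implementation:

```python
# Hypothetical sketch of translating a geopandas-like spatial method call
# into OGC Simple Features SQL for in-database execution.

OGC_FUNCTIONS = {
    "within": "ST_Within",
    "distance": "ST_Distance",
    "area": "ST_Area",
    "buffer": "ST_Buffer",
}

def translate(method, table, geom_col, *args):
    """Render one spatial method call as SQL to be pushed into the database."""
    fn = OGC_FUNCTIONS[method]
    arg_list = ", ".join([geom_col] + [str(a) for a in args])
    return f"SELECT {fn}({arg_list}) FROM {table}"

# Invented table and column names.
sql = translate("buffer", "NYC.CRIME_HOTSPOTS", "GEOMETRY", 500)
# sql == "SELECT ST_Buffer(GEOMETRY, 500) FROM NYC.CRIME_HOTSPOTS"
```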

  17. The Relationship between Searches Performed in Online Databases and the Number of Full-Text Articles Accessed: Measuring the Interaction between Database and E-Journal Collections

    ERIC Educational Resources Information Center

    Lamothe, Alain R.

    2011-01-01

    The purpose of this paper is to report the results of a quantitative analysis exploring the interaction and relationship between the online database and electronic journal collections at the J. N. Desmarais Library of Laurentian University. A very strong relationship exists between the number of searches and the size of the online database…

  18. CEBS: a comprehensive annotated database of toxicological data

    PubMed Central

    Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer

    2017-01-01

    The Chemical Effects in Biological Systems database (CEBS) is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11 000 test articles (exposure agents) and over 8000 studies including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subject to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design that allows the flexibility to capture any type of electronic data (to date). The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660

  19. Development of a standardized Intranet database of formulation records for nonsterile compounding, Part 2.

    PubMed

    Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela

    2012-01-01

    In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.

  20. A mobile trauma database with charge capture.

    PubMed

    Moulton, Steve; Myung, Dan; Chary, Aron; Chen, Joshua; Agarwal, Suresh; Emhoff, Tim; Burke, Peter; Hirsch, Erwin

    2005-11-01

    Charge capture plays an important role in every surgical practice. We have developed and merged a custom mobile database (DB) system with our trauma registry (TRACS) to better understand our billing methods, revenue generators, and areas for improved revenue capture. The mobile database runs on handheld devices using the Windows Compact Edition platform. The front end was written in C# and the back end in SQL. The mobile database operates as a thick client; it includes active and inactive patient lists, billing screens, hot pick lists, and Current Procedural Terminology and International Classification of Diseases, Ninth Revision code sets. Microsoft Internet Information Server provides secure data transaction services between the back ends stored on each device. Traditional, hand-written billing information for three of five adult trauma surgeons was averaged over a 5-month period. Electronic billing information was then collected over a 3-month period using handheld devices and the subject software application. One surgeon used the software for all 3 months, and two surgeons used it for the latter 2 months of the electronic data collection period. This electronic billing information was combined with TRACS data to determine the clinical characteristics of the trauma patients who were and were not captured using the mobile database. Total charges increased by 135%, 148%, and 228% for each of the three trauma surgeons who used the mobile DB application. The majority of additional charges were for evaluation and management services. Patients who were captured and billed at the point of care using the mobile DB had higher Injury Severity Scores, were more likely to undergo an operative procedure, and had longer lengths of stay compared with those who were not captured. Total charges more than doubled using a mobile database to bill at the point of care.
A subsequent comparison of TRACS data with billing information revealed a large amount of uncaptured patient revenue. Greater familiarity and broader use of mobile database technology holds the potential for even greater revenue capture.

  1. Developing seismogenic source models based on geologic fault data

    USGS Publications Warehouse

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. 
This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the Euro-Mediterranean, http://www.share-eu.org/; EMME in the Middle East, http://www.emme-gem.org/) and global scale (e.g., GEM, http://www.globalquakemodel.org/; Anonymous 2008). To some extent, each of these efforts is still trying to resolve the level of optimal detail required for this type of compilation. The comparison we provide defines a common standard for consideration by the international community for future regional and global seismogenic source models by identifying the necessary parameters that capture the essence of geological fault data in order to characterize seismogenic sources. In addition, we inform potential users of differences in our usage of common geological/seismological terms to avoid inappropriate use of the data in our models and provide guidance to convert the data from one model to the other (for detailed instructions, see the electronic supplement to this article). Applying our recommendations will permit probabilistic seismic hazard assessment codes to run seamlessly using either seismogenic source input. The USGS and INGV database schema compare well at a first-level inspection. Both databases contain a set of fields representing generalized fault three-dimensional geometry and additional fields that capture the essence of past earthquake occurrences. Nevertheless, there are important differences. When we further analyze supposedly comparable fields, many are defined differently. These differences would cause anomalous results in hazard prediction if one assumes the values are similarly defined. The data, however, can be made fully compatible using simple transformations.

  2. Individual identification via electrocardiogram analysis.

    PubMed

    Fratini, Antonio; Sansone, Mario; Bifulco, Paolo; Cesarelli, Mario

    2015-08-14

    During the last decade the use of ECG recordings in biometric recognition studies has increased. ECG characteristics make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed at providing a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed here, providing a unifying framework to appreciate previous studies and, hopefully, guide future research. We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. Our selection included published research in peer-reviewed journals, book chapters and conference proceedings. The search was performed for English language documents. 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502, ages range from 16 to 86, and male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does verification rate. Many studies refer to publicly available databases (the Physionet ECG database repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and the equal error rate in authentication scenarios. The identification rate was 94.95%, while the equal error rate was 0.92%. Biometric recognition is a mature field of research.
    Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling, as well as in wearable electronics applications. However, some barriers still limit their growth. Further analysis should address the use of single-lead recordings and the study of features that do not depend on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed that combine fiducial and non-fiducial based features in order to catch the best of both approaches. ECG recognition in pathological subjects also warrants additional investigation.
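    The overall-accuracy figures above are weighted averages across studies. A minimal sketch of such a summary, weighting each study's identification rate by its subject count (the per-study numbers below are invented, not the survey's data):

```python
# Sketch of a study-size-weighted average of per-study identification rates.
# The rates and subject counts below are invented, not the survey's data.

def weighted_rate(rates, n_subjects):
    total = sum(n_subjects)
    return sum(r * n for r, n in zip(rates, n_subjects)) / total

rates = [0.98, 0.92, 0.95]     # per-study identification rates
subjects = [502, 100, 20]      # per-study subject counts

overall = weighted_rate(rates, subjects)  # close to 0.97, dominated by the largest study
```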

  3. A framework for capturing clinical data sets from computerized sources.

    PubMed

    McDonald, C J; Overhage, J M; Dexter, P; Takesue, B Y; Dwyer, D M

    1997-10-15

    The pressure to improve health care and provide better care at a lower cost has generated the need for efficient capture of clinical data. Many data sets are now being defined to analyze health care. Historically, review and research organizations have simply determined what data they wanted to collect, developed forms, and then gathered the information through chart review without regard to what is already available institutionally in computerized databases. Today, much electronic patient information is available in operational data systems (for example, laboratory systems, pharmacy systems, and surgical scheduling systems) and is accessible by agencies and organizations through standards for messages, codes, and encrypted electronic mail. Such agencies and organizations should define the elements of their data sets in terms of standardized operational data, and data producers should fully adopt these code and message standards. The Health Plan Employer Data and Information Set and the Council of State and Territorial Epidemiologists in collaboration with the Centers for Disease Control and Prevention and the Association of State and Territorial Public Health Laboratory Directors provide examples of how this can be done.

  4. ETHERNET BASED EMBEDDED SYSTEM FOR FEL DIAGNOSTICS AND CONTROLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jianxun Yan; Daniel Sexton; Steven Moore

    2006-10-24

    An Ethernet based embedded system has been developed to upgrade the Beam Viewer and Beam Position Monitor (BPM) systems within the free-electron laser (FEL) project at Jefferson Lab. The embedded microcontroller was mounted on the front-end I/O cards with software packages such as the Experimental Physics and Industrial Control System (EPICS) and the Real-Time Executive for Multiprocessor Systems (RTEMS) running as an Input/Output Controller (IOC). By cross-compiling, the RTEMS kernel, EPICS IOC device support, and databases can all be downloaded into the microcontroller. The first version of the BPM electronics based on the embedded controller was built and is currently running in our FEL system. The new version of the BPM, which will use a Single Board IOC (SBIOC) integrating a Field Programmable Gate Array (FPGA) and a ColdFire embedded microcontroller, is presently under development. The new system has the features of a low-cost IOC, an open-source real-time operating system, plug&play-like ease of installation and flexibility, and provides a much more localized solution.

  5. When Questions Are Answers: Using a Survey to Achieve Faculty Awareness of the Library's Electronic Resources.

    ERIC Educational Resources Information Center

    Weingart, Sandra J.; Anderson, Janet A.

    2000-01-01

    Describes a study conducted at the Utah State University library that investigated electronic database awareness and use by 856 administrators and teaching faculty. Responses to a survey revealed the need for greater publicity regarding new electronic acquisitions, training opportunities, and methods of remote access. (Author/LRW)

  6. Advanced X-Ray Sources Ensure Safe Environments

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Ames Research Center awarded inXitu Inc. (formerly Microwave Power Technology), of Mountain View, California, an SBIR contract to develop a new design of electron optics for forming and focusing electron beams that is applicable to a broad class of vacuum electron devices. This technology offers an inherently rugged and more efficient X-ray source for material analysis; a compact and rugged X-ray source for smaller rovers on future Mars missions; electron beam sources to reduce undesirable emissions from small, widely distributed pollution sources; and remediation of polluted sites.

  7. Review of Safety Reports Involving Electronic Flight Bags.

    DOT National Transportation Integrated Search

    2010-04-01

    Safety events in which Electronic Flight Bags (EFBs) were a factor are reviewed. Relevant reports were obtained from the public Aviation Safety Reporting System (ASRS) database and the National Transportation Safety Board (NTSB) accident report datab...

  8. 49 CFR 237.155 - Documents and records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... inspection and reproduction by the Federal Railroad Administration. (a) Electronic recordkeeping; general... the information required by this part; (3) The track owner monitors its electronic records database...

  9. 49 CFR 237.155 - Documents and records.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... inspection and reproduction by the Federal Railroad Administration. (a) Electronic recordkeeping; general... the information required by this part; (3) The track owner monitors its electronic records database...

  10. 49 CFR 237.155 - Documents and records.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... inspection and reproduction by the Federal Railroad Administration. (a) Electronic recordkeeping; general... the information required by this part; (3) The track owner monitors its electronic records database...

  11. National Solar Radiation Database 1991-2010 Update: User's Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilcox, S. M.

    This user's manual provides information on the updated 1991-2010 National Solar Radiation Database. Included are data format descriptions, data sources, production processes, and information about data uncertainty.

  12. [Study of the effect of heat source separation distance on plasma physical properties in laser-pulsed GMAW hybrid welding based on spectral diagnosis technique].

    PubMed

    Liao, Wei; Hua, Xue-Ming; Zhang, Wang; Li, Fang

    2014-05-01

    In the present paper, the authors calculated the plasma's peak electron temperatures under different heat source separation distances in laser-pulsed GMAW hybrid welding based on Boltzmann spectrometry. Plasma peak electron densities under the corresponding conditions were also calculated using the Stark width of the plasma spectrum. Combined with high-speed photography, the effect of heat source separation distance on electron temperature and electron density was studied. The results show that with increasing heat source separation distance, the electron temperatures and electron densities of the laser plasma did not change significantly. However, the electron temperatures of the arc plasma decreased, and the electron densities of the arc plasma first increased and then decreased.
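    The Boltzmann-plot method referenced above admits a short worked example: for lines with intensity I, wavelength lambda, statistical weight g, transition probability A and upper-level energy E_k, the quantity ln(I*lambda/(g*A)) is linear in E_k with slope -1/(k_B*T_e). The line data below are synthetic, constructed so the fit should recover the assumed temperature:

```python
# Boltzmann-plot estimate of electron temperature from relative line
# intensities: fit ln(I * lambda / (g * A)) against upper-level energy;
# the slope equals -1 / (k_B * T_e). Line data are synthetic.
import numpy as np

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_temperature(I, lam, g, A, E_upper_eV):
    y = np.log(I * lam / (g * A))
    slope, _intercept = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)

# Synthetic lines generated for T_e = 12000 K.
T_true = 12000.0
E = np.array([2.0, 3.0, 4.0, 5.0])            # upper-level energies, eV
g = np.array([3.0, 5.0, 7.0, 9.0])            # statistical weights
A = np.array([1e7, 2e7, 3e7, 4e7])            # transition probabilities, 1/s
lam = np.array([500.0, 480.0, 460.0, 440.0])  # wavelengths, nm
I = g * A / lam * np.exp(-E / (K_B_EV * T_true))

T_est = boltzmann_temperature(I, lam, g, A, E)  # recovers ~12000 K
```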

  13. ACToR: Aggregated Computational Toxicology Resource (T) ...

    EPA Pesticide Factsheets

    The EPA Aggregated Computational Toxicology Resource (ACToR) is a set of databases compiling information on chemicals in the environment from a large number of public and in-house EPA sources. ACToR has 3 main goals: (1) To serve as a repository of public toxicology information on chemicals of interest to the EPA, and in particular to be a central source for the testing data on all chemicals regulated by all EPA programs; (2) To be a source of in vivo training data sets for building in vitro to in vivo computational models; (3) To serve as a central source of chemical structure and identity information for the ToxCastTM and Tox21 programs. There are 4 main databases, all linked through a common set of chemical information and a common structure linking chemicals to assay data: the public ACToR system (available at http://actor.epa.gov); the ToxMiner database holding ToxCast and Tox21 data, along with results from statistical analyses on these data; the Tox21 chemical repository, which manages the ordering and sample tracking process for the larger Tox21 project; and the public version of ToxRefDB. The public ACToR system contains information on ~500K compounds with toxicology, exposure and chemical property information from >400 public sources. The web site is visited by ~1,000 unique users per month and generates ~1,000 page requests per day on average. The databases are built on open source technology, which has allowed us to export them to a number of col

  14. Database Search Engines: Paradigms, Challenges and Solutions.

    PubMed

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
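    As a minimal illustration of the first step such engines perform, the sketch below filters a peptide database by precursor mass before any fragment scoring; the residue masses are standard monoisotopic values, and the peptide list is invented:

```python
# Candidate selection in a toy database search: keep only peptides whose
# computed mass lies within a tolerance of the spectrum's precursor mass.

# Monoisotopic residue masses (Da) for a few amino acids.
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
                "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
WATER = 18.01056  # mass of H2O added to the residue-mass sum

def peptide_mass(seq):
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

def candidates(database, precursor_mass, tol_da=0.5):
    return [p for p in database if abs(peptide_mass(p) - precursor_mass) <= tol_da]

db = ["GASP", "VLKR", "AAAA", "SSPK"]        # invented sequence database
hits = candidates(db, peptide_mass("VLKR"))  # only "VLKR" falls within tolerance
```

    Real engines then score fragment ions of each candidate against the observed spectrum; this sketch covers only the mass filter.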

  15. Quantification of the Uncertainties for the Space Launch System Liftoff/Transition and Ascent Databases

    NASA Technical Reports Server (NTRS)

    Favaregh, Amber L.; Houlden, Heather P.; Pinier, Jeremy T.

    2016-01-01

    A detailed description of the uncertainty quantification process for the Space Launch System Block 1 vehicle configuration liftoff/transition and ascent 6-Degree-of-Freedom (DOF) aerodynamic databases is presented. These databases were constructed from wind tunnel test data acquired in the NASA Langley Research Center 14- by 22-Foot Subsonic Wind Tunnel and the Boeing Polysonic Wind Tunnel in St. Louis, MO, respectively. The major sources of error for these databases were experimental error and database modeling errors.

  16. DGIdb 3.0: a redesign and expansion of the drug-gene interaction database.

    PubMed

    Cotto, Kelsy C; Wagner, Alex H; Feng, Yang-Yang; Kiwala, Susanna; Coffman, Adam C; Spies, Gregory; Wollam, Alex; Spies, Nicholas C; Griffith, Obi L; Griffith, Malachi

    2018-01-04

    The drug-gene interaction database (DGIdb, www.dgidb.org) consolidates, organizes and presents drug-gene interactions and gene druggability information from papers, databases and web resources. DGIdb normalizes content from 30 disparate sources and allows for user-friendly advanced browsing, searching and filtering for ease of access through an intuitive web user interface, application programming interface (API) and public cloud-based server image. DGIdb v3.0 represents a major update of the database. Nine of the previously included 24 sources were updated, and six new resources were added, bringing the total number of sources to 30. These updates and additions have cumulatively resulted in 56 309 interaction claims and have substantially expanded the comprehensive catalogue of druggable genes and anti-neoplastic drug-gene interactions included in DGIdb. Along with these content updates, v3.0 has received a major overhaul of its codebase, including an updated user interface, preset interaction search filters, consolidation of interaction information into interaction groups, greatly improved search response times, and an upgrade of the underlying web application framework. In addition, the expanded API features new endpoints that allow users to extract more detailed information about queried drugs, genes and drug-gene interactions, including listings of PubMed IDs, interaction types and other interaction metadata.
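
    A typical programmatic use of such an API is a gene-list interaction query. The sketch below only builds the request URL; the endpoint path and `genes` parameter follow the v2-era DGIdb REST API (`dgidb.org/api/v2/interactions.json`) and are assumptions here, so check the current API documentation before relying on them.

```python
from urllib.parse import urlencode

def dgidb_interactions_url(genes):
    """Build an interactions query URL for a list of gene symbols.

    Endpoint path and parameter name are assumed from the v2-era API.
    """
    base = "https://dgidb.org/api/v2/interactions.json"
    return base + "?" + urlencode({"genes": ",".join(genes)})

url = dgidb_interactions_url(["BRAF", "KRAS"])
print(url)  # https://dgidb.org/api/v2/interactions.json?genes=BRAF%2CKRAS
```

    The JSON response can then be fetched with any HTTP client and filtered by interaction type or source.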

  17. NOAA Propagation Database Value in Tsunami Forecast Guidance

    NASA Astrophysics Data System (ADS)

    Eble, M. C.; Wright, L. M.

    2016-02-01

    The National Oceanic and Atmospheric Administration (NOAA) Center for Tsunami Research (NCTR) has developed a tsunami forecasting capability that combines a graphical user interface with data ingestion and numerical models to produce estimates of tsunami wave arrival times, amplitudes, current or water flow rates, and flooding at specific coastal communities. The capability integrates several key components: deep-ocean observations of tsunamis in real-time, a basin-wide pre-computed propagation database of water level and flow velocities based on potential pre-defined seismic unit sources, an inversion or fitting algorithm to refine the tsunami source based on the observations during an event, and tsunami forecast models. As tsunami waves propagate across the ocean, observations from the deep ocean are automatically ingested into the application in real-time to better define the source of the tsunami itself. Since passage of tsunami waves over a deep ocean reporting site is not immediate, we explore the value of the NOAA propagation database in providing placeholder forecasts in advance of deep ocean observations. The propagation database consists of water elevations and flow velocities pre-computed for 50 × 100 km unit sources in a continuous series along all known ocean subduction zones. The 2011 Japan Tohoku tsunami is presented as the case study.
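
    The unit-source design rests on linearity of deep-ocean tsunami propagation: a forecast time series is a weighted sum of pre-computed unit-source waveforms, with the weights (source scaling) refined once deep-ocean observations are fit. The waveforms and weights below are invented for illustration; this is a minimal sketch of the superposition idea, not NCTR's implementation.

```python
# Linear superposition of pre-computed unit-source time series.
# Each waveform is water elevation (cm) at one forecast point over time.

def superpose(unit_waveforms, weights):
    """Weighted sum of unit-source waveforms, one weight per source."""
    n = len(unit_waveforms[0])
    return [
        sum(w * wave[t] for w, wave in zip(weights, unit_waveforms))
        for t in range(n)
    ]

# Two adjacent unit sources, five time steps each (illustrative values).
unit_a = [0.0, 1.0, 2.0, 1.0, 0.0]
unit_b = [0.0, 0.5, 1.5, 2.0, 0.5]

# Weights of 1.0 would give a placeholder forecast straight from the
# database; here an inversion has (hypothetically) scaled source A up.
forecast = superpose([unit_a, unit_b], [2.0, 1.0])
print(forecast)  # [0.0, 2.5, 5.5, 4.0, 0.5]
```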

  18. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background: This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results: We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion: BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315

  19. BioWarehouse: a bioinformatics database warehouse toolkit.

    PubMed

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
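
    The kind of multi-database query the warehouse approach enables can be sketched with an in-memory SQLite database: once enzyme and sequence data from different sources share one schema, a single SQL join finds enzyme activities with no known sequence, analogous to the 36% gap reported above. The table and column names are invented for illustration, not BioWarehouse's actual schema.

```python
import sqlite3

# Toy warehouse: two "sources" loaded into one relational schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE enzyme (ec TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE sequence (ec TEXT, accession TEXT);
    INSERT INTO enzyme VALUES ('1.1.1.1', 'alcohol dehydrogenase');
    INSERT INTO enzyme VALUES ('9.9.9.9', 'hypothetical enzyme');
    INSERT INTO sequence VALUES ('1.1.1.1', 'P00330');
""")

# EC numbers with no sequence in any loaded source: a cross-source
# question answered by one SQL query against the shared schema.
cur.execute("""
    SELECT e.ec FROM enzyme e
    LEFT JOIN sequence s ON s.ec = e.ec
    WHERE s.accession IS NULL
""")
missing = [row[0] for row in cur.fetchall()]
print(missing)  # ['9.9.9.9']
conn.close()
```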

  20. Searching fee and non-fee toxicology information resources: an overview of selected databases.

    PubMed

    Wright, L L

    2001-01-12

    Toxicology profiles organize information by broad subjects, the first of which affirms identity of the agent studied. Studies here show two non-fee databases (ChemFinder and ChemIDplus) verify the identity of compounds with high efficiency (63% and 73% respectively) with the fee-based Chemical Abstracts Registry file serving well to fill data gaps (100%). Continued searching proceeds using knowledge of structure, scope and content to select databases. Valuable sources for information are factual databases that collect data and facts in special subject areas organized in formats available for analysis or use. Some sources representative of factual files are RTECS, CCRIS, HSDB, GENE-TOX and IRIS. Numerous factual databases offer a wealth of reliable information; however, exhaustive searches probe information published in journal articles and/or technical reports with records residing in bibliographic databases such as BIOSIS, EMBASE, MEDLINE, TOXLINE and Web of Science. Listed with descriptions are numerous factual and bibliographic databases supplied by 11 producers. Given the multitude of options and resources, it is often necessary to seek service desk assistance. Questions were posed by telephone and e-mail to service desks at DIALOG, ISI, MEDLARS, Micromedex and STN International. Results of the survey are reported.
