Science.gov

Sample records for administrative databases methods

  1. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  2. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  3. Redis database administration tool

    SciTech Connect

    Martinez, J. J.

    2013-02-13

    MyRedis is a product of the Lorenz subproject under the ASC Scientific Data Management effort. MyRedis is a web-based utility designed to allow easy administration of instances of Redis databases. It can be used to view and manipulate data as well as run commands directly against a variety of different Redis hosts.

  4. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  5. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  6. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  7. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  8. The Dutch Hospital Standardised Mortality Ratio (HSMR) method and cardiac surgery: benchmarking in a national cohort using hospital administration data versus a clinical database

    PubMed Central

    Siregar, S; Pouw, M E; Moons, K G M; Versteegh, M I M; Bots, M L; van der Graaf, Y; Kalkman, C J; van Herwerden, L A; Groenwold, R H H

    2014-01-01

    Objective To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted from The Netherlands Association for Cardio-Thoracic Surgery database and the Hospital Discharge Registry. The number of cardiac surgery interventions was compared between both databases. The European System for Cardiac Operative Risk Evaluation and hospital standardised mortality ratio models were updated in the study population and compared using the C-statistic, calibration plots and the Brier score. Results The number of cardiac surgery interventions performed could not be assessed using the administrative database, as the intervention code was incorrect in 1.4–26.3% of cases, depending on the type of intervention. In 7.3% no intervention code was registered. The updated administrative model was inferior to the updated clinical model with respect to discrimination (C-statistic of 0.77 vs 0.85, p<0.001) and calibration (Brier score of 2.8% vs 2.6%, p<0.001, maximum score 3.0%). Two average-performing hospitals according to the clinical model became outliers when benchmarking was performed using the administrative model. Conclusions In cardiac surgery, administrative data are less suitable than clinical data for the purpose of benchmarking. The use of either administrative or clinical risk-adjustment models can affect the outlier status of hospitals. Risk-adjustment models including procedure-specific clinical risk factors are recommended. PMID:24334377
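
    The Brier score used above is the mean squared difference between predicted risk and the observed binary outcome. A minimal sketch; the quoted 3.0% "maximum score" is consistent with the score of an uninformative model at a ~3% mortality rate, since p(1-p) ≈ 0.029:

    ```python
    import numpy as np

    def brier_score(predicted_risk, died):
        """Mean squared difference between predicted mortality risk (0-1)
        and the observed outcome (0 or 1); lower is better."""
        predicted_risk = np.asarray(predicted_risk, dtype=float)
        died = np.asarray(died, dtype=float)
        return float(np.mean((predicted_risk - died) ** 2))

    # Hypothetical check: a model that predicts the average mortality rate
    # (~3%) for every patient scores about 0.03 * 0.97 = 0.029.
    died = np.random.default_rng(1).binomial(1, 0.03, 100_000)
    print(brier_score(np.full(died.shape, 0.03), died))
    ```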

  9. TWRS information locator database system administrator's manual

    SciTech Connect

    Knutson, B.J., Westinghouse Hanford

    1996-09-13

    This document is a guide for use by the Tank Waste Remediation System (TWRS) Information Locator Database (ILD) System Administrator. The TWRS ILD System is an inventory of information used in the TWRS Systems Engineering process to represent the TWRS Technical Baseline. The inventory is maintained in the form of a relational database developed in Paradox 4.5.

  10. A Database System for Course Administration.

    ERIC Educational Resources Information Center

    Benbasat, Izak; And Others

    1982-01-01

    Describes a computer-assisted testing system which produces multiple-choice examinations for a college course in business administration. The system uses SPIRES (Stanford Public Information REtrieval System) to manage a database of questions and related data, mark-sense cards for machine grading tests, and ACL (6) (Audit Command Language) to…

  11. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer a TV bands database. Each database administrator shall: (a) Maintain a database...

  12. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  13. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  14. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  15. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  16. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  17. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  18. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  19. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  20. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  1. VIEWCACHE: An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nick; Sellis, Timoleon

    1991-01-01

    The objective is to illustrate the concept of incremental access to distributed databases. An experimental database management system, ADMS, which has been developed at the University of Maryland, College Park, uses VIEWCACHE, a database access method based on incremental search. VIEWCACHE is a pointer-based access method that provides a uniform interface for accessing distributed databases and catalogues. The compactness of the pointer structures formed during database browsing and the incremental access method allow the user to search and do inter-database cross-referencing with no actual data movement between database sites. Once the search is complete, the set of collected pointers pointing to the desired data is dereferenced.

  2. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database architecture and administration. 52.25 Section 52.25 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The...

  3. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database architecture and administration. 52.25 Section 52.25 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The...

  4. Database Administration: Concepts, Tools, Experiences, and Problems.

    ERIC Educational Resources Information Center

    Leong-Hong, Belkis; Marron, Beatrice

    The concepts of data base administration, the role of the data base administrator (DBA), and computer software tools useful in data base administration are described in order to assist data base technologists and managers. A study of DBA's in the Federal Government is detailed in terms of the functions they perform, the software tools they use,…

  5. Creating a resource database for nursing service administration.

    PubMed

    Clougherty, J; McCloskey, J C; Johnson, M; Casula, M; Gardner, D; Kelly, K; Maas, M; Delaney, C; Blegen, M

    1991-01-01

    In response to the current information explosion in nursing service administration (NSA), the authors felt a need to collect and organize available resources for use by their faculty and graduate students. An electronic database was developed to facilitate the use of the collected print and software resources. This article describes the creation of the NSA Resource Database from the time the need for it was realized to its completion. There is discussion of the criteria used in building the database, what the database screens look like and why, and what the database contains. The article also discusses the use and users of the NSA Resource Database to date. PMID:2036589

  6. Identifying Primary Spontaneous Pneumothorax from Administrative Databases: A Validation Study

    PubMed Central

    Frechette, Eric; Guidolin, Keegan; Seyam, Ayman; Choi, Yun-Hee; Jones, Sarah; McClure, J. Andrew; Winick-Ng, Jennifer; Welk, Blayne; Malthaner, Richard A.

    2016-01-01

    Introduction. Primary spontaneous pneumothorax (PSP) is a disorder commonly encountered in healthy young individuals. There is no differentiation between PSP and secondary pneumothorax (SP) in the current version of the International Classification of Diseases (ICD-10). This complicates the conduct of epidemiological studies on the subject. Objective. To validate the accuracy of an algorithm that identifies cases of PSP from administrative databases. Methods. The charts of 150 patients who consulted the emergency room (ER) with a recorded main diagnosis of pneumothorax were reviewed to define the type of pneumothorax that occurred. The corresponding hospital administrative data collected during previous hospitalizations and ER visits were processed through the proposed algorithm. The results were compared over two different age groups. Results. There were 144 cases of pneumothorax correctly coded (96%). The results obtained from the PSP algorithm demonstrated a significantly higher sensitivity (97% versus 81%, p = 0.038) and positive predictive value (87% versus 46%, p < 0.001) in patients under 40 years of age than in older patients. Conclusions. The proposed algorithm is adequate to identify cases of PSP from administrative databases in the age group classically associated with the disease. This makes possible its utilization in large population-based studies. PMID:27445518
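
    The reported validation statistics follow from standard 2×2 confusion counts. A minimal sketch with hypothetical counts (not the study's raw data):

    ```python
    def sensitivity(tp, fn):
        """Proportion of true PSP cases the algorithm flags: TP / (TP + FN)."""
        return tp / (tp + fn)

    def positive_predictive_value(tp, fp):
        """Proportion of flagged cases that are truly PSP: TP / (TP + FP)."""
        return tp / (tp + fp)

    # Hypothetical counts for patients under 40 years of age
    tp, fp, fn = 33, 5, 1
    print(f"sensitivity = {sensitivity(tp, fn):.0%}, "
          f"PPV = {positive_predictive_value(tp, fp):.0%}")   # 97%, 87%
    ```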

  7. [Bias and confounding: pharmacoepidemiological study using administrative database].

    PubMed

    Nojiri, Shuko

    2015-01-01

    The provision of health care frequently creates digitized data, such as hospital-based electronic records, medication prescription records, and claims data; research using these sources is collectively termed "administrative database research". The data sources and the analytical opportunities they create carry risks that can lead to misinterpretation or biased results. This review serves as an introduction to the concepts of bias and confounding to help researchers conduct methodologically sound pharmacoepidemiologic research projects using administrative databases. Beyond the general considerations for observational studies, there are several issues unique to database research that should be addressed. The risks of uninterpretable or biased results can be minimized by providing a robust description of the data tables used, focusing on why and how they were created; measuring and reporting the accuracy of the diagnostic and procedural codes used; and properly accounting for any time-dependent nature of variables. The hallmark of good research is rigorously careful analysis and interpretation. The promised value of real-world evidence from databases in medical decision-making must be balanced against the inherent limitations of observational data with respect to bias and confounding. Researchers should aim to avoid bias in the design of a study, adjust for confounding, and discuss the effects of residual bias on the results. PMID:26028416

  8. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    PubMed Central

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but

  9. Regulatory and ethical considerations for linking clinical and administrative databases.

    PubMed

    Dokholyan, Rachel S; Muhlbaier, Lawrence H; Falletta, John M; Jacobs, Jeffrey P; Shahian, David; Haan, Constance K; Peterson, Eric D

    2009-06-01

    Clinical data registries are valuable tools that support evidence development, performance assessment, comparative effectiveness studies, and the adoption of new treatments into routine clinical practice. Although these registries do not have important information on long-term therapies or clinical events, administrative claims databases offer a potentially valuable complement. This article focuses on the regulatory and ethical considerations that arise from the use of registry data for research, including linkage of clinical and administrative data sets. (1) Are such activities primarily designed for quality assessment and improvement, research, or both, as this determines the appropriate ethical and regulatory standards? (2) Does the submission of data to a central registry, which may subsequently be linked to other data sources, require review by the institutional review board (IRB) of each participating organization? (3) What levels and mechanisms of IRB oversight are appropriate for the existence of a linked central data repository and the specific studies that may subsequently be developed using it? (4) Under what circumstances are waivers of informed consent and Health Insurance Portability and Accountability Act authorization required? (5) What are the requirements for a limited data set that would qualify a research activity as not involving human subjects and thus not subject to further IRB review? The approaches outlined in this article represent a local interpretation of the regulations in the context of several clinical data registry projects and focus on a specific case study of the Society of Thoracic Surgeons National Database. PMID:19464406

  10. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
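
    As one concrete reading of the query-feedback idea, observed result sizes from executed queries can be regressed, for example by least squares, to refine selectivity estimates. A toy sketch with hypothetical names, not the actual ADMS implementation:

    ```python
    import numpy as np

    observed = []  # (predicate value, observed selectivity) pairs from query feedback

    def record_feedback(value, rows_returned, table_rows):
        observed.append((value, rows_returned / table_rows))

    def estimate_selectivity(value, degree=2):
        """Least-squares polynomial fit over the feedback collected so far."""
        if not observed:
            return 0.5  # no feedback yet: fall back to a fixed default
        xs, ys = zip(*observed)
        coeffs = np.polyfit(xs, ys, min(degree, len(observed) - 1))
        return float(np.clip(np.polyval(coeffs, value), 0.0, 1.0))

    record_feedback(10.0, 120, 1000)   # query "x < 10" returned 120 of 1000 rows
    record_feedback(50.0, 480, 1000)
    print(estimate_selectivity(30.0))  # refined estimate between the two
    ```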

  11. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and used for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is still needed. Results The GMO Detection Method Database (GMDD) collects almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and provides a user-friendly search service over the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which facilitates the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences, and newly submitted information is released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier. PMID:18522755

  12. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false TRS User Registration Database and... Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall validate... Database on a per-call basis. Emergency 911 calls are excepted from this requirement. (i) Validation...

  13. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false TRS User Registration Database and... Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall validate... Database on a per-call basis. Emergency 911 calls are excepted from this requirement. (i) Validation...

  14. Classified Computer Configuration Control System (C⁴S), Revision 3, Database Administrator's Guide

    SciTech Connect

    O'Callaghan, P.B.; Nelson, R.A.; Grambihler, A.J.

    1994-04-01

    This document provides a guide for database administration and specific information for the Classified Computer Configuration Control System (C⁴S). As a guide, this document discusses required database administration functions for the set-up of database tables and for users of the system. It is assumed that general and user information has been obtained from the Classified Computer Configuration Control System (C⁴S), Revision 3, User's Information (WHC 1994).

  15. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis...

  16. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis...

  17. Planning the future of JPL's management and administrative support systems around an integrated database

    NASA Technical Reports Server (NTRS)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal, without consistency in design approach, over the past twenty years. These systems are now proving inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, centered on the development of an integrated management and administrative database, are discussed.

  18. Geospatial Database for Strata Objects Based on Land Administration Domain Model (ladm)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Building construction in Malaysia has recently become more complex, and a strata-objects database is increasingly important for registering the real world now that people own and use multiple levels of space. Strata titles are likewise increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata-objects database using LADM. The paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, develops a strata-objects database using the LADM standard data model, and analyses the result. The current cadastre system in Malaysia, including strata titles, is also discussed, the problems of the 2D geospatial database are listed, and the future need for a 3D geospatial database is considered. The database is designed through the conceptual, logical and physical stages of database design. The resulting strata-objects database allows both spatial and non-spatial strata-title information to be retrieved, showing the location of each strata unit, and may help in handling strata titles and their associated information.

  19. Validity of peptic ulcer disease and upper gastrointestinal bleeding diagnoses in administrative databases: a systematic review protocol

    PubMed Central

    Montedori, Alessandro; Abraha, Iosief; Chiatti, Carlos; Cozzolino, Francesco; Orso, Massimiliano; Luchetta, Maria Laura; Rimland, Joseph M; Ambrosio, Giuseppe

    2016-01-01

    Introduction Administrative healthcare databases are useful to investigate the epidemiology, health outcomes, quality indicators and healthcare utilisation concerning peptic ulcers and gastrointestinal bleeding, but the databases need to be validated in order to be reliable sources for research. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases, 9th and 10th Revisions (ICD-9 and ICD-10) codes for peptic ulcer and upper gastrointestinal bleeding diagnoses. Methods and analysis MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched, using appropriate search strategies. We will include validation studies that used administrative data to identify peptic ulcer disease and upper gastrointestinal bleeding diagnoses or studies that evaluated the validity of peptic ulcer and upper gastrointestinal bleeding codes in administrative data. The following inclusion criteria will be used: (a) the presence of a reference standard case definition for the diseases of interest; (b) the presence of at least one test measure (eg, sensitivity, etc) and (c) the use of an administrative database as a source of data. Pairs of reviewers will independently abstract data using standardised forms and will evaluate quality using the checklist of the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocol (PRISMA-P) 2015 statement. Ethics and dissemination Ethics approval is not required given that this is a protocol for a systematic review. We will submit results of this study to a peer-reviewed journal for publication. The results will serve as a guide for researchers validating administrative healthcare databases to determine appropriate case definitions for peptic ulcer disease and upper gastrointestinal

  20. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…

  1. Design of an algorithm to identify persons with mental illness in a police administrative database.

    PubMed

    Hartford, Kathleen; Heslop, Lisa; Stitt, Larry; Hoch, Jeffrey S

    2005-01-01

    North American police maintain databases to track events and information related to their involvement with the public; these records contain a series of electronic caution/dependency flags attached to an individual's name for internal communication. To identify persons with mental illness in a police administrative database, an algorithm was developed composed of (a) caution/dependency flags, (b) addresses, and (c) key search words indicative of mental illness. Based on the algorithm's level of confidence, persons with mental illness (PMI) were then assigned to one of three categories: Definite, Probable and Possible PMI. Results for 2000 include the sociodemographic characteristics of PMI and non-PMI in the database. The mean number of contacts, types of interactions, re-involvement within a year, charges and dispositions are described. The algorithm provides a cheap, quick method for North American police to identify PMI. It enables police to monitor the effectiveness of pre-arrest diversion programs and allows researchers to analyze questions of criminalization and mental illness. PMID:15710445
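
    A minimal sketch of how the three evidence sources might be combined into the confidence categories; the flag codes, addresses, and keywords are hypothetical, and the published algorithm's actual rules are more detailed:

    ```python
    MENTAL_HEALTH_FLAGS = {"MHA", "SUICIDE_RISK"}        # hypothetical caution/dependency flags
    FACILITY_ADDRESSES = {"850 Highbury Ave"}            # hypothetical psychiatric facility addresses
    KEYWORDS = ("schizophren", "bipolar", "psychiatr")   # hypothetical narrative search terms

    def classify_pmi(record):
        """Count independent evidence sources and map the count to a category."""
        hits = 0
        if MENTAL_HEALTH_FLAGS & set(record.get("flags", ())):
            hits += 1
        if record.get("address") in FACILITY_ADDRESSES:
            hits += 1
        narrative = record.get("narrative", "").lower()
        if any(k in narrative for k in KEYWORDS):
            hits += 1
        return {3: "Definite PMI", 2: "Probable PMI", 1: "Possible PMI"}.get(hits, "Non-PMI")

    print(classify_pmi({"flags": ["MHA"], "address": "12 Elm St",
                        "narrative": "subject under psychiatric care"}))  # Probable PMI
    ```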

  2. Case Method: Its Potential for Training Administrators.

    ERIC Educational Resources Information Center

    Nagel, Greta K.

    1991-01-01

    The case method should be used in both preservice and inservice training for administrators to strengthen training programs and help administrators develop practical human relations skills, learn stress reduction and burnout prevention strategies, learn team-building, and develop critical and reflective thinking skills. (14 references) (MLH)

  3. Development of an Ada programming support environment database SEAD (Software Engineering and Ada Database) administration manual

    NASA Technical Reports Server (NTRS)

    Liaw, Morris; Evesson, Donna

    1988-01-01

    Software Engineering and Ada Database (SEAD) was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities which are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. SEAD data is organized into five major areas: information regarding education and training resources which are relevant to the life cycle of Ada-based software engineering projects such as those in the Space Station program; research publications relevant to NASA projects such as the Space Station Program and conferences relating to Ada technology; the latest progress reports on Ada projects completed or in progress both within NASA and throughout the free world; Ada compilers and other commercial products that support Ada software development; and reusable Ada components generated both within NASA and from elsewhere in the free world. This classified listing of reusable components shall include descriptions of tools, libraries, and other components of interest to NASA. Sources for the data include technical newsletters and periodicals, conference proceedings, the Ada Information Clearinghouse, product vendors, and project sponsors and contractors.

  4. Connecting the Library's Patron Database to Campus Administrative Software: Simplifying the Library's Accounts Receivable Process

    ERIC Educational Resources Information Center

    Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth

    2010-01-01

    This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…

  5. Validity of Heart Failure Diagnoses in Administrative Databases: A Systematic Review and Meta-Analysis

    PubMed Central

    McCormick, Natalie; Lacaille, Diane; Bhole, Vidula; Avina-Zubieta, J. Antonio

    2014-01-01

    Objective Heart failure (HF) is an important covariate and outcome in studies of elderly populations and cardiovascular disease cohorts, among others. Administrative data is increasingly being used for long-term clinical research in these populations. We aimed to conduct the first systematic review and meta-analysis of studies reporting on the validity of diagnostic codes for identifying HF in administrative data. Methods MEDLINE and EMBASE were searched (inception to November 2010) for studies: (a) Using administrative data to identify HF; or (b) Evaluating the validity of HF codes in administrative data; and (c) Reporting validation statistics (sensitivity, specificity, positive predictive value [PPV], negative predictive value, or Kappa scores) for HF, or data sufficient for their calculation. Additional articles were located by hand search (up to February 2011) of original papers. Data were extracted by two independent reviewers; article quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. Using a random-effects model, pooled sensitivity and specificity values were produced, along with estimates of the positive (LR+) and negative (LR−) likelihood ratios, and diagnostic odds ratios (DOR = LR+/LR−) of HF codes. Results Nineteen studies published from 1999–2009 were included in the qualitative review. Specificity was ≥95% in all studies and PPV was ≥87% in the majority, but sensitivity was lower (≥69% in ≥50% of studies). In a meta-analysis of the 11 studies reporting sensitivity and specificity values, the pooled sensitivity was 75.3% (95% CI: 74.7–75.9) and specificity was 96.8% (95% CI: 96.8–96.9). The pooled LR+ was 51.9 (20.5–131.6), the LR− was 0.27 (0.20–0.37), and the DOR was 186.5 (96.8–359.2). Conclusions While most HF diagnoses in administrative databases do correspond to true HF cases, about one-quarter of HF cases are not captured. The use of broader search parameters, along with
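
    The likelihood ratios and diagnostic odds ratio relate to sensitivity and specificity by simple formulas. A minimal sketch; note that the pooled LR and DOR estimates above come from a random-effects model, so plugging in the pooled sensitivity and specificity does not reproduce them exactly:

    ```python
    def likelihood_ratios(sens, spec):
        """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec; DOR = LR+ / LR-."""
        lr_pos = sens / (1.0 - spec)
        lr_neg = (1.0 - sens) / spec
        return lr_pos, lr_neg, lr_pos / lr_neg

    # Pooled estimates from the abstract
    lr_pos, lr_neg, dor = likelihood_ratios(0.753, 0.968)
    print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, DOR = {dor:.1f}")
    ```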

  6. Inaccurate Ascertainment of Morbidity and Mortality due to Influenza in Administrative Databases: A Population-Based Record Linkage Study

    PubMed Central

    Muscatello, David J.; Amin, Janaki; MacIntyre, C. Raina; Newall, Anthony T.; Rawlinson, William D.; Sintchenko, Vitali; Gilmour, Robin; Thackway, Sarah

    2014-01-01

    Background Historically, counting influenza recorded in administrative health outcome databases has been considered insufficient to estimate influenza attributable morbidity and mortality in populations. We used database record linkage to evaluate whether modern databases have similar limitations. Methods Person-level records were linked across databases of laboratory notified influenza, emergency department (ED) presentations, hospital admissions and death registrations, from the population (∼6.9 million) of New South Wales (NSW), Australia, 2005 to 2008. Results There were 2568 virologically diagnosed influenza infections notified. Among those, 25% of 40 who died, 49% of 1451 with a hospital admission and 7% of 1742 with an ED presentation had influenza recorded on the respective database record. Compared with persons aged ≥65 years and residents of regional and remote areas, respectively, children and residents of major cities were more likely to have influenza coded on their admission record. Compared with older persons and admitted patients, respectively, working age persons and non-admitted persons were more likely to have influenza coded on their ED record. On both ED and admission records, persons with influenza type A infection were more likely than those with type B infection to have influenza coded. Among death registrations, hospital admissions and ED presentations with influenza recorded as a cause of illness, 15%, 28% and 1.4%, respectively, also had laboratory notified influenza. Time trends in counts of influenza recorded on the ED, admission and death databases reflected the trend in counts of virologically diagnosed influenza. Conclusions A minority of the death, hospital admission and ED records for persons with a virologically diagnosed influenza infection identified influenza as a cause of illness. Few database records with influenza recorded as a cause had laboratory confirmation. The databases have limited value for estimating incidence

  7. System, method and apparatus for generating phrases from a database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term or a sequence of terms or multiple individual terms or multiple sequences of terms or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
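
    As a rough illustration of the idea, and not the patented relational model, contextually related term sequences can be grown from co-occurrence statistics. A toy bigram sketch:

    ```python
    from collections import Counter, defaultdict

    def build_bigrams(docs):
        """Tabulate, for each term, the frequency of terms that follow it."""
        nxt = defaultdict(Counter)
        for doc in docs:
            toks = doc.lower().split()
            for a, b in zip(toks, toks[1:]):
                nxt[a][b] += 1
        return nxt

    def generate_phrases(nxt, query, length=3, beam=3):
        """Grow candidate phrases from the query using frequent continuations."""
        phrases = [[query]]
        for _ in range(length - 1):
            grown = [p + [t] for p in phrases for t, _ in nxt[p[-1]].most_common(beam)]
            phrases = grown or phrases
        return [" ".join(p) for p in phrases]

    docs = ["incremental database access method", "database access control method"]
    print(generate_phrases(build_bigrams(docs), "database"))
    ```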

  8. Using administrative databases in the surveillance of depressive disorders--case definitions.

    PubMed

    Alaghehbandan, Reza; Macdonald, Don; Barrett, Brendan; Collins, Kayla; Chen, Yue

    2012-12-01

    The objective of this study was to assess the usefulness of provincial administrative databases in carrying out surveillance on depressive disorders. Electronic medical records (EMRs) at 3 family practice clinics in St. John's, NL, Canada, were audited; 253 depressive disorder cases and 257 patients not diagnosed with a depressive disorder were selected. The EMRs served as the "gold standard" against which various case definitions, applied to the provincial hospital and physician administrative databases, were compared for the same patients. Variables used in the development of the case definitions were depressive disorder diagnoses (either in hospital or physician claims data), date of diagnosis, and service provider type [general practitioner (GP) vs. psychiatrist]. Of the 120 case definitions investigated, 26 were found to have a kappa statistic greater than 0.6, of which 5 case definitions were considered the most appropriate for surveillance of depressive disorders. Of the 5 definitions, the following case definition, with a 77.5% sensitivity and 93% specificity, was found to be the most valid ([≥1 hospitalization OR ≥1 psychiatrist visit related to depressive disorders at any time] OR ≥2 GP visits related to depressive disorders within the first 2 years of diagnosis). This study found that provincial administrative databases may be useful for carrying out surveillance on depressive disorders among the adult population. The approach used in this study was simple and resulted in rather reasonable sensitivity and specificity. PMID:22788998
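
    The best-performing case definition translates directly into a predicate over a patient's claims history. A minimal sketch with hypothetical record structures:

    ```python
    from datetime import date, timedelta

    def is_depression_case(hospitalizations, psychiatrist_visits, gp_visits, diagnosis_date):
        """[>=1 hospitalization OR >=1 psychiatrist visit related to depressive
        disorders at any time] OR >=2 GP visits related to depressive disorders
        within the first 2 years of diagnosis. Each argument is a list of visit
        dates already filtered to depression-related diagnosis codes."""
        if hospitalizations or psychiatrist_visits:
            return True
        window_end = diagnosis_date + timedelta(days=2 * 365)
        return sum(diagnosis_date <= d <= window_end for d in gp_visits) >= 2

    # Hypothetical patient: no hospitalizations or psychiatrist visits,
    # but two GP visits within two years of diagnosis -> case.
    print(is_depression_case([], [], [date(2010, 3, 1), date(2011, 6, 2)], date(2010, 1, 15)))
    ```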

  9. A UMLS-based method for integrating information databases into an Intranet.

    PubMed

    Volot, F; Joubert, M; Fieschi, M; Fieschi, D

    1997-01-01

    The Internet and the World Wide Web today provide end-users with universal access to information in various heterogeneous databases. The biomedical domain benefits from this technology, especially for information retrieval by searching and browsing various sites. Nevertheless, end-users may be disoriented by the server-specific ways of accessing information. In the framework of an Intranet design and development, we present a method for integrating information databases based on the knowledge sources of the UMLS. The method provides the designers of a Web site with facilities for implementing easy and homogeneous access to information. Pages are built dynamically and displayed according to a style sheet, with their content stored in a database during the design phase. The database also describes the links between pages. Moreover, this organization provides administrators with powerful capabilities for managing Web sites.

  10. Nursing leadership succession planning in Veterans Health Administration: creating a useful database.

    PubMed

    Weiss, Lizabeth M; Drake, Audrey

    2007-01-01

    An electronic database was developed for succession planning and placement of nursing leaders who are interested in and ready, willing, and able to accept an assignment in a nursing leadership position. The tool is a 1-page form used to identify candidates for nursing leadership assignments. It has been deployed nationally, with access to the database restricted to nurse executives at every Veterans Health Administration facility for the purpose of entering the names of developed nurse leaders ready for a leadership assignment. The tool is easily accessed through the Veterans Health Administration Office of Nursing Service, and limiting access to the nurse executive group ensures that the candidates identified are qualified. The form captures the candidate's demographic information and certifications/credentials. The completed form is entered into a database from which a report can be generated, resulting in a listing of potential candidates to contact to supplement a local or Veterans Integrated Service Network-wide position announcement. The data forms can be sorted by position, area of clinical or functional experience, training programs completed, and geographic preference. Forms can be edited, updated, added or deleted in the system as the need is identified. This tool gives facilities with limited internal candidates a pool of Department of Veterans Affairs-prepared staff from which to seek additional candidates. It also provides a way for interested candidates to be considered for positions outside their local geographic area.

  11. Quantifying limitations in chemotherapy data in administrative health databases: implications for measuring the quality of colorectal cancer care.

    PubMed

    Urquhart, Robin; Rayson, Daniel; Porter, Geoffrey A; Grunfeld, Eva

    2011-08-01

    Reliable chemotherapy data are critical to evaluate the quality of care for patients with colorectal cancer who are treated with curative intent. In Canada, limitations in the availability and completeness of chemotherapy data exist in many administrative health databases. In this paper, we discuss these limitations and present findings from a chart review in Nova Scotia that quantifies the completeness of chemotherapy capture in existing databases. The results demonstrate that even basic information on cancer treatment in administrative databases can be insufficient to perform the types of analyses that most decision-makers require for quality-of-care measurement. PMID:22851984

  12. [Administrative databases as a basic tool for the epidemiology of cardiovascular diseases].

    PubMed

    Monte, Simona; Fanizza, Caterina; Romero, Marilena; Rossi, Elisa; De Rosa, Marisa; Tognoni, Gianni

    2006-03-01

    The broader availability of administrative databases related to the various steps of the healthcare process, characterized by increasing data reliability, has become an important resource for epidemiological studies in Italy as well. Specifically, the methodological developments in the handling and analysis of drug prescription files can be seen as the original and highly informative backbone of comprehensive monitoring of healthcare delivery processes. The area of chronic cardiovascular treatments occupies a privileged place in these developments, which are illustrated in the paper with a concise presentation of the methodology, supported by a model analysis of the epidemiology of heart failure in a healthcare district and by a reference list conceived to provide the reader with a comprehensive perspective on an area so far largely unexplored. PMID:16572986

  13. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 431.15 Methods of administration. A State plan must provide for methods of administration that...

  14. A Proposed Framework of Test Administration Methods

    ERIC Educational Resources Information Center

    Thompson, Nathan A.

    2008-01-01

    The widespread application of personal computers to educational and psychological testing has substantially increased the number of test administration methodologies available to testing programs. Many of these methods are referred to by their acronyms, such as CAT, CBT, CCT, and LOFT. The similarities between the acronyms and the methods…

  15. Phase4: automatic evaluation of database search methods.

    PubMed

    Rehmsmeier, Marc

    2002-12-01

    It has become standard to evaluate newly devised database search methods in terms of sensitivity and selectivity and to compare them with existing methods. This involves the construction of a suitable evaluation scenario, the execution of the methods, the assessment of their performances, and the presentation of the results. Each of these four phases, and their smooth connection, usually requires considerable work. To relieve the evaluator of this burden, a system has been designed with which evaluations can be carried out rapidly. It is implemented in the programming language Python, whose object-oriented features offer great flexibility in changing the evaluation design. A graphical user interface is provided which offers the usual amenities such as radio- and checkbuttons or file browsing facilities.

  16. Distance correlation methods for discovering associations in large astrophysical databases

    SciTech Connect

    Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P.

    2014-01-20

    High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
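
    For intuition, the sample distance correlation is computed from double-centred pairwise distance matrices. A minimal NumPy sketch for one-dimensional samples (the study applies the statistic to variables of arbitrary dimension), illustrating a nonlinear association that the Pearson coefficient misses:

    ```python
    import numpy as np

    def distance_correlation(x, y):
        """Sample distance correlation (Székely-Rizzo-Bakirov) of two 1-D samples."""
        x = np.asarray(x, dtype=float)[:, None]
        y = np.asarray(y, dtype=float)[:, None]
        a = np.abs(x - x.T)                                  # pairwise distances
        b = np.abs(y - y.T)
        A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
        B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
        dcov2 = (A * B).mean()                               # squared distance covariance
        denom = np.sqrt((A * A).mean() * (B * B).mean())
        return float(np.sqrt(max(dcov2, 0.0) / denom)) if denom > 0 else 0.0

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 500)
    y = x ** 2                          # nonlinear dependence
    print(distance_correlation(x, y))   # clearly positive
    print(np.corrcoef(x, y)[0, 1])      # near zero: Pearson misses it
    ```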

  17. Combining state administrative databases and provider records to assess the quality of care for children enrolled in Medicaid.

    PubMed

    Cotter, J J; Smith, W R; Rossiter, L F; Pugh, C B; Bramble, J D

    1999-01-01

    Our objective was to assess the capability of state administrative health care databases to evaluate the quality of immunization rates for a Medicaid managed care population. Data on 5599 2 year olds were obtained from a Medicaid claims database, a health department database, and the records of the children's assigned providers. The study was conducted on 1 managed care program in 1 state. Test performance ratio analyses were used to assess the relative accuracy and contribution of each source of administrative data. We found that of the 67,188 doses needed, 45,511 (68%) were documented as administered per at least 1 of the data sources. Medicaid claims data alone accounted for 18% of immunized children, while health department data used by itself accounted for 12%. Together, these 2 sources identified 34% of immunized children. Large administrative databases, such as Medicaid claims and data from a health department, while valuable sources of information on quality, may underestimate outcomes such as immunization rates. Assessments of the quality of health care should rely on a combination of administrative data and providers' records as sources of information. PMID:10446671
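
    The study's central point, that no single source captures all doses, amounts to taking a union of dose records across databases. A toy sketch with hypothetical records:

    ```python
    # A dose counts as documented if any source records it.
    # Children and dose labels are hypothetical.
    claims      = {("child01", "DTaP-1"), ("child01", "DTaP-2")}
    health_dept = {("child01", "DTaP-2"), ("child01", "MMR-1")}
    provider    = {("child01", "DTaP-1"), ("child01", "MMR-1"), ("child01", "Hib-1")}

    documented = claims | health_dept | provider    # union across all sources
    doses_needed = 6
    print(f"documented {len(documented)} of {doses_needed} doses "
          f"({len(documented) / doses_needed:.0%})")
    # Any single source alone would miss doses that another source captured.
    ```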

  20. Workshop on laboratory protocol standards for the Molecular Methods Database.

    PubMed

    Klingström, Tomas; Soldatova, Larissa; Stevens, Robert; Roos, T Erik; Swertz, Morris A; Müller, Kristian M; Kalaš, Matúš; Lambrix, Patrick; Taussig, Michael J; Litton, Jan-Eric; Landegren, Ulf; Bongcam-Rudloff, Erik

    2013-01-25

    Management of data to produce scientific knowledge is a key challenge for biological research in the 21st century. Emerging high-throughput technologies allow life science researchers to produce big data at speeds and in amounts that were unthinkable just a few years ago. This places high demands on all aspects of the workflow: from data capture (including the constraints of the experiment itself), through analysis and preservation, to peer-reviewed publication of results. Failure to recognise the issues at each level can lead to serious conflicts and mistakes; research may then be compromised as a result of the publication of non-coherent protocols, or the misinterpretation of published data. In this report, we present the results from a workshop that was organised to create an ontological data-modelling framework for Laboratory Protocol Standards for the Molecular Methods Database (MolMeth). The workshop provided a set of short- and long-term goals for the MolMeth database, the most important being the decision to use the established EXACT ontology of experimental actions as a starting point. PMID:22687389

  1. In-hospital mortality following lung cancer resection: nationwide administrative database.

    PubMed

    Pagès, Pierre-Benoit; Cottenet, Jonathan; Mariet, Anne-Sophie; Bernard, Alain; Quantin, Catherine

    2016-06-01

    Our aim was to determine the effect of a national strategy for quality improvement in cancer management (the "Plan Cancer") according to time period and to assess the influence of type and volume of hospital activity on in-hospital mortality (IHM) within a large national cohort of patients operated on for lung cancer. From January 2005 to December 2013, 76 235 patients were included in the French Administrative Database. Patient characteristics, hospital volume of activity and hospital type were analysed over three periods: 2005-2007, 2008-2010 and 2011-2013. Global crude IHM was 3.9%: 4.3% during 2005-2007, 4.0% during 2008-2010 and 3.5% during 2011-2013 (p<0.01). 296, 259 and 209 centres performed pulmonary resections in 2005-2007, 2008-2010 and 2011-2013, respectively (p<0.01). The risk of death was higher in centres performing <13 resections per year than in centres performing >43 resections per year (adjusted (a)OR 1.48, 95% CI 1.197-1.834). The risk of death was lower in the period 2011-2013 than in the period 2008-2010 (aOR 0.841, 95% CI 0.764-0.926). Adjustment variables (age, sex, Charlson score and type of resection) were significantly linked to IHM, whereas the type of hospital was not. The French national strategy for quality improvement seems to have induced a significant decrease in IHM.

  2. Lessons from an enterprise-wide technical and administrative database using CASE and GUI front-ends

    SciTech Connect

    Chan, A.; Crane, G.; MacGregor, I.; Meyer, S.

    1995-07-01

    An enterprise-wide database built via Oracle*CASE is a hot topic. The authors describe the PEP-II/BABAR Project-Wide Database, and the lessons learned in delivering and developing this system with a small team averaging two and one half people. They also give some details of providing World Wide Web (WWW) access to the information, and of using Oracle*CASE and Oracle Forms 4. The B Factory at the Stanford Linear Accelerator Center (SLAC) is a project built to study the physics of matter and anti-matter. It consists of two accelerator storage rings (PEP-II) and a detector (BABAR), a project of approximately $250 million with collaboration by many labs worldwide. Foremost among these lessons is that the support and vision of management are key to the successful design and implementation of an enterprise-wide database. The authors faced the challenge of integrating both administrative and technical data into one CASE enterprise design. The goal, defined at the project's inception in late 1992, was to use a central database as a tool for the collaborating labs to: (1) track quality assurance during construction of the accelerator storage rings and detectors; (2) track down problems faster when they develop; and (3) facilitate the construction process. The focus of the project database, therefore, is on technical data, which is less well-defined than administrative data.

  3. Enhancing Clinical Content and Race/Ethnicity Data in Statewide Hospital Administrative Databases: Obstacles Encountered, Strategies Adopted, and Lessons Learned

    PubMed Central

    Pine, Michael; Kowlessar, Niranjana M; Salemi, Jason L; Miyamura, Jill; Zingmond, David S; Katz, Nicole E; Schindler, Joe

    2015-01-01

    Objectives Eight grant teams used Agency for Healthcare Research and Quality infrastructure development research grants to enhance the clinical content of and improve race/ethnicity identifiers in statewide all-payer hospital administrative databases. Principal Findings Grantees faced common challenges, including recruiting data partners and ensuring their continued effective participation, acquiring and validating the accuracy and utility of new data elements, and linking data from multiple sources to create internally consistent enhanced administrative databases. Successful strategies to overcome these challenges included aggressively engaging with providers of critical sources of data, emphasizing potential benefits to participants, revising requirements to lessen burdens associated with participation, maintaining continuous communication with participants, being flexible when responding to participants’ difficulties in meeting program requirements, and paying scrupulous attention to preparing data specifications and creating and implementing protocols for data auditing, validation, cleaning, editing, and linking. In addition to common challenges, grantees also had to contend with unique challenges from local environmental factors that shaped the strategies they adopted. Conclusions The creation of enhanced administrative databases to support comparative effectiveness research is difficult, particularly in the face of numerous challenges with recruiting data partners such as competing demands on information technology resources. Excellent communication, flexibility, and attention to detail are essential ingredients in accomplishing this task. Additional research is needed to develop strategies for maintaining these databases when initial funding is exhausted. PMID:26119470

  4. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Bertino, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
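
    As a hedged illustration of the user-profiling idea surveyed here (the surveyed detectors use richer query representations and classifiers; this sketch and its data are invented), even a simple frequency profile of (role, command, table) triples can flag accesses that a role has rarely or never issued:

    from collections import Counter

    # Training log of (role, SQL command, table) triples observed during normal use
    train = [("clerk", "SELECT", "orders")] * 50 + \
            [("clerk", "SELECT", "customers")] * 30
    profile = Counter(train)
    total = sum(profile.values())

    def is_anomalous(event, min_support=0.01):
        """Flag events whose relative frequency in the profile is below min_support."""
        return profile[event] / total < min_support

    print(is_anomalous(("clerk", "SELECT", "orders")))    # False: routine access
    print(is_anomalous(("clerk", "DELETE", "payroll")))   # True: never seen in profile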

  5. A Dynamic Integration Method for Borderland Database using OSM data

    NASA Astrophysics Data System (ADS)

    Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.

    2013-11-01

    Spatial data are fundamental to borderland analyses of geography, natural resources, demography, politics, economy, and culture. Because the region studied in borderland research usually covers the border areas of several neighbouring countries, the data are difficult for any single research institution or government to acquire. VGI has proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, so VGI is a reasonable source of borderland spatial data. OpenStreetMap (OSM) is the best-known and most successful VGI resource, but the OSM data model is far different from traditional authoritative geographic information, so the OSM data need to be converted to the researcher's customised data model. Because the real world changes quickly, the converted data also need to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model, and a method is presented for selecting, from the OSM whole-world daily diff file, the objects that changed in the research area over a given period; a change-only information file in a designed format is produced automatically. Based on the rules and algorithms mentioned above, we implemented automatic (or semi-automatic) integration and updating of the borderland database. The developed system was intensively tested.
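
    A minimal sketch of the change-selection step, assuming the daily diff is in the standard osmChange XML format (the bounding box and file name are hypothetical, and deleted elements that carry no coordinates are skipped):

    import xml.etree.ElementTree as ET

    WEST, SOUTH, EAST, NORTH = 19.0, 49.0, 23.0, 51.0   # hypothetical study region

    def changed_nodes_in_region(osc_path):
        """Yield (action, id, lat, lon) for changed nodes inside the region."""
        for action in ET.parse(osc_path).getroot():     # <create>, <modify>, <delete>
            for node in action.findall("node"):
                lat, lon = node.get("lat"), node.get("lon")
                if lat is None or lon is None:          # deletes may omit coordinates
                    continue
                if SOUTH <= float(lat) <= NORTH and WEST <= float(lon) <= EAST:
                    yield action.tag, node.get("id"), float(lat), float(lon)

    for change in changed_nodes_in_region("daily_diff.osc"):
        print(change)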

  6. The Québec BCG Vaccination Registry (1956–1992): assessing data quality and linkage with administrative health databases

    PubMed Central

    2014-01-01

    Background Vaccination registries have undoubtedly proven useful for estimating vaccination coverage as well as examining vaccine safety and effectiveness. However, their use for population health research is often limited. The Bacillus Calmette-Guérin (BCG) Vaccination Registry for the Canadian province of Québec comprises some 4 million vaccination records (1926-1992). This registry represents a unique opportunity to study potential associations between BCG vaccination and various health outcomes. So far, such studies have been hampered by the absence of a computerized version of the registry. We determined the completeness and accuracy of the recently computerized BCG Vaccination Registry, and examined its linkability with demographic and administrative medical databases. Methods Two systematically selected verification samples, each representing ~0.1% of the registry, were used to ascertain the accuracy and completeness of the electronic BCG Vaccination Registry. Agreement between the paper formats [listings (n = 4,987 records) and vaccination certificates (n = 4,709 records)] and the electronic format was determined along several nominal and BCG-related variables. Linkage feasibility with the Birth Registry (probabilistic approach) and the provincial Healthcare Registration File (deterministic approach) was examined using nominal identifiers for a random sample of 3,500 individuals born from 1961 to 1974 and BCG vaccinated between 1970 and 1974. Results Exact agreement was observed for 99.6% and 81.5% of records upon comparing, respectively, the paper listings and the vaccination certificates to their corresponding computerized records. The proportion of successful linkage was 77% with the Birth Registry, 70% with the Healthcare Registration File, and 57% with both, and varied by birth year. Conclusions Computerization of this registry yielded excellent results. The registry was complete and accurate, and linkage with administrative databases was highly feasible.
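
    A minimal sketch of the deterministic side of such a linkage, with pandas and invented identifiers (the study linked on nominal identifiers such as name, sex and date of birth; all field values and file layouts here are hypothetical):

    import pandas as pd

    bcg = pd.DataFrame({"surname": ["Tremblay "], "given": ["Marie"],
                        "sex": ["F"], "dob": ["1962-03-14"]})
    registration = pd.DataFrame({"surname": ["tremblay"], "given": ["marie"],
                                 "sex": ["F"], "dob": ["1962-03-14"],
                                 "health_number": ["QC-0001"]})

    def normalise(df):
        # Harmonise case and stray whitespace before exact matching
        out = df.copy()
        for col in ("surname", "given"):
            out[col] = out[col].str.strip().str.lower()
        return out

    linked = normalise(bcg).merge(normalise(registration),
                                  on=["surname", "given", "sex", "dob"], how="inner")
    print(f"linked {len(linked)} of {len(bcg)} BCG records")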

  7. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 2 2011-10-01 2011-10-01 false Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL...

  8. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS STATE ORGANIZATION AND GENERAL ADMINISTRATION Single State...

  9. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen, L.J., Kuhn, M., Stark, M., Chaffron, S., Creevey, C., Muller, J., Doerks, T., Julien, P., Roth, A., Simonovic, M. et al. (2009) STRING 8 - a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6% and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for training and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high, 91.5% and 93.0%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/. PMID:20385580
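
    The two-feature idea is easy to prototype; the toy sketch below uses scikit-learn and invented numbers (the authors' actual network, training sets and tuning are not reproduced). Each gene pair is summarised by its intergenic distance and the STRING score of its protein products:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    X = np.array([[-20, 0.95], [5, 0.90], [40, 0.70],      # same-operon pairs
                  [250, 0.10], [400, 0.05], [150, 0.20]])  # operon-boundary pairs
    y = np.array([1, 1, 1, 0, 0, 0])

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000,
                        random_state=0).fit(X, y)
    print(clf.predict([[10, 0.85], [300, 0.15]]))  # operon membership for new pairs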

  10. Validity of ICD-9-CM codes for breast, lung and colorectal cancers in three Italian administrative healthcare databases: a diagnostic accuracy study protocol

    PubMed Central

    Abraha, Iosief; Serraino, Diego; Giovannini, Gianni; Stracci, Fabrizio; Casucci, Paola; Alessandrini, Giuliana; Bidoli, Ettore; Chiari, Rita; Cirocchi, Roberto; De Giorgi, Marcello; Franchini, David; Vitale, Maria Francesca; Fusco, Mario; Montedori, Alessandro

    2016-01-01

    Introduction Administrative healthcare databases are useful tools to study healthcare outcomes and to monitor the health status of a population. Patients with cancer can be identified through disease-specific codes, prescriptions and physician claims, but prior validation is required to achieve an accurate case definition. The objective of this protocol is to assess the accuracy of International Classification of Diseases Ninth Revision—Clinical Modification (ICD-9-CM) codes for breast, lung and colorectal cancers in identifying patients diagnosed with the respective disease in three Italian administrative databases. Methods and analysis Data from the administrative databases of Umbria Region (910 000 residents), Local Health Unit 3 of Napoli (1 170 000 residents) and Friuli-Venezia Giulia Region (1 227 000 residents) will be considered. In each administrative database, patients with the first occurrence of diagnosis of breast, lung or colorectal cancer between 2012 and 2014 will be identified using the following groups of ICD-9-CM codes in primary position: (1) 233.0 and (2) 174.x for breast cancer; (3) 162.x for lung cancer; (4) 153.x for colon cancer and (5) 154.0–154.1 and 154.8 for rectal cancer. Only incident cases will be considered, that is, excluding cases that have the same diagnosis in the 5 years (2007–2011) before the period of interest. A random sample of cases and non-cases will be selected from each administrative database and the corresponding medical charts will be assessed for validation by pairs of trained, independent reviewers. Case ascertainment within the medical charts will be based on (1) the presence of a primary nodular lesion in the breast, lung or colon–rectum, documented with imaging or endoscopy and (2) a cytological or histological documentation of cancer from a primary or metastatic site. Sensitivity and specificity with 95% CIs will be calculated. Dissemination Study results will be disseminated widely through peer-reviewed publications and presentations at national and international conferences.
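
    The planned accuracy analysis reduces to standard diagnostic-test arithmetic; a short sketch with invented chart-review counts computes sensitivity and specificity with 95% CIs (the protocol does not name a CI method, so the Wilson score interval here is an assumption):

    import math

    def wilson_ci(k, n, z=1.96):
        """95% Wilson score interval for a proportion k/n."""
        p = k / n
        centre = (p + z * z / (2 * n)) / (1 + z * z / n)
        half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
        return centre - half, centre + half

    # Hypothetical counts: tp, fn among confirmed cases; tn, fp among non-cases
    tp, fn, tn, fp = 92, 8, 95, 5
    print("sensitivity", tp / (tp + fn), wilson_ci(tp, tp + fn))
    print("specificity", tn / (tn + fp), wilson_ci(tn, tn + fp))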

  11. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.

  12. New OPC verification method using die-to-database inspection

    NASA Astrophysics Data System (ADS)

    Yang, Hyunjo; Choi, Jaeseung; Cho, Byungug; Hong, Jongkyun; Song, Jookyoung; Yim, Donggyu; Kim, Jinwoong; Yamamoto, Masahiro

    2006-03-01

    The minimum feature size of new-generation memory devices is approaching the 50 nm era, and very precise CD control is demanded not only for cell layouts but also for the core and peripheral layouts of DRAM devices. However, as the NA of lens systems grows higher and Resolution Enhancement Techniques (RETs) become more and more aggressive, the isolated-dense bias increases and the process window for the core and peripheral layouts decreases dramatically. The burden of OPC increases in proportion, and it is requisite to verify as many features as possible on the wafer; if possible, it would be desirable to verify all the features in a die. Recently, a novel inspection tool has been developed which can verify all kinds of patterns on a wafer based on a die-to-database comparison method. It can identify all serious systematic defects with size errors of nanometre order relative to the original layout target and feed the systematic error points back to OPC for more accurate model tuning. In addition, we can obtain the full-field CD distribution diagram of specific transistors from hundreds of thousands of measurement data points, so we can analyze the root cause of the CD distribution in a field, such as mask CDU or lens aberrations. We can also perform Process Window Qualification of all the features in a die. In this paper, the OPC verification methodology using the new inspection tool is introduced, and its application to the analysis of full-field CD distribution and Process Window Qualification is presented in detail.

  13. National short line railroad database project, 1995-1996: A report to the Federal Railroad Administration

    SciTech Connect

    Benson, D.; Byberg, T.

    1996-06-30

    The objective of the project was to create a central database containing information representing the American short line railroad industry. In the report, processes involved with obtaining, developing, and maintaining the information in the database are discussed. Several data analysis procedures used to help ensure the integrity of the data are addressed. The second annual American Short Line Railroad Association Data Profile for the 1994 Calendar year is also presented in the paper. Further information extracted and comparisons made during the analysis process are described in detail. Discussions on the development of the paper survey and an electronic survey for the third annual data profile for the 1995 calendar year are also presented. The design and implementation of the electronic survey software package are reviewed in detail. The final process presented is the distribution and collection of the 1995 electronic and paper surveys.

  14. A European Flood Database: facilitating comprehensive flood research beyond administrative boundaries

    NASA Astrophysics Data System (ADS)

    Hall, J.; Arheimer, B.; Aronica, G. T.; Bilibashi, A.; Boháč, M.; Bonacci, O.; Borga, M.; Burlando, P.; Castellarin, A.; Chirico, G. B.; Claps, P.; Fiala, K.; Gaál, L.; Gorbachova, L.; Gül, A.; Hannaford, J.; Kiss, A.; Kjeldsen, T.; Kohnová, S.; Koskela, J. J.; Macdonald, N.; Mavrova-Guirguinova, M.; Ledvinka, O.; Mediero, L.; Merz, B.; Merz, R.; Molnar, P.; Montanari, A.; Osuch, M.; Parajka, J.; Perdigão, R. A. P.; Radevski, I.; Renard, B.; Rogger, M.; Salinas, J. L.; Sauquet, E.; Šraj, M.; Szolgay, J.; Viglione, A.; Volpi, E.; Wilson, D.; Zaimi, K.; Blöschl, G.

    2015-06-01

    The current work addresses one of the key building blocks towards an improved understanding of flood processes and associated changes in flood characteristics and regimes in Europe: the development of a comprehensive, extensive European flood database. The presented work results from ongoing cross-border research collaborations initiated with data collection and joint interpretation in mind. A detailed account of the current state, characteristics, and spatial and temporal coverage of the European Flood Database is presented. The hydrological data collection is still growing and at this stage consists of annual maximum and daily mean discharge series, of various record lengths, from over 7000 hydrometric stations. Moreover, the database currently comprises data from over 50 different data sources. The time series have been obtained from different national and regional data sources in a collaborative effort of a joint European flood research agreement based on the exchange of data, models and expertise, and from existing international data collections and open source websites. These ongoing efforts are contributing to advancing the understanding of regional flood processes beyond individual country boundaries and to more coherent flood research in Europe.

  15. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  16. Can Italian Healthcare Administrative Databases Be Used to Compare Regions with Respect to Compliance with Standards of Care for Chronic Diseases?

    PubMed Central

    Gini, Rosa; Schuemie, Martijn J.; Francesconi, Paolo; Lapi, Francesco; Cricelli, Iacopo; Pasqua, Alessandro; Gallina, Pietro; Donato, Daniele; Brugaletta, Salvatore; Donatini, Andrea; Marini, Alessandro; Cricelli, Claudio; Damiani, Gianfranco; Bellentani, Mariadonata; van der Lei, Johan; Sturkenboom, Miriam C. J. M.; Klazinga, Niek S.

    2014-01-01

    Background Italy has a population of 60 million and a universal coverage single-payer healthcare system, which mandates collection of healthcare administrative data in a uniform fashion throughout the country. On the other hand, organization of the health system takes place at the regional level, and local initiatives generate natural experiments. This is happening in particular in primary care, due to the need to face the growing burden of chronic diseases. Health services research can compare and evaluate local initiatives on the basis of the common healthcare administrative data. However, the reliability of such data in this context needs to be assessed, especially when comparing different regions of the country. In this paper we investigated the validity of healthcare administrative databases to compute indicators of compliance with standards of care for diabetes, ischaemic heart disease (IHD) and heart failure (HF). Methods We compared indicators estimated from healthcare administrative data collected by Local Health Authorities in five Italian regions with corresponding estimates from clinical data collected by General Practitioners (GPs). Four indicators of diagnostic follow-up (two for diabetes, one for IHD and one for HF) and four indicators of appropriate therapy (two each for IHD and HF) were considered. Results Agreement between the two data sources was very good, except for indicators of laboratory diagnostic follow-up in one region and for the indicator of bioimaging diagnostic follow-up in all regions, where measurement with administrative data underestimated quality. Conclusion According to evidence presented in this study, estimating compliance with standards of care for diabetes, ischaemic heart disease and heart failure from healthcare databases is likely to produce reliable results, even though completeness of data on diagnostic procedures should be assessed first. Performing studies comparing regions using such indicators as outcomes is a promising avenue for future research.

  17. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Methods of administration. 441.105 Section 441.105 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods...

  18. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 441.105 Section 441.105 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods...

  19. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Methods of administration. 441.105 Section 441.105 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods...

  20. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS... ASSISTANCE PROGRAMS § 205.30 Methods of administration. State plan requirements: A State plan for financial assistance under title I, IV-A, X, XIV or XVI (AABD) of the Social Security Act must provide for such...

  1. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS... ASSISTANCE PROGRAMS § 205.30 Methods of administration. State plan requirements: A State plan for financial assistance under title I, IV-A, X, XIV or XVI (AABD) of the Social Security Act must provide for such...

  2. Self-controlled case series and misclassification bias induced by case selection from administrative hospital databases: application to febrile convulsions in pediatric vaccine pharmacoepidemiology.

    PubMed

    Quantin, Catherine; Benzenine, Eric; Velten, Michel; Huet, Frédéric; Farrington, C Paddy; Tubert-Bitter, Pascale

    2013-12-15

    Vaccine safety studies are increasingly conducted by using administrative health databases and self-controlled case series designs that are based on cases only. Often, several criteria are available to define the cases, which may yield different positive predictive values, as well as different sensitivities, and therefore different numbers of selected cases. The question then arises as to which is the best case definition. This article proposes new methodology to guide this choice based on the bias of the relative incidence and the power of the test. We apply this methodology in a validation study of 4 nested algorithms for identifying febrile convulsions from the administrative databases of 10 French hospitals. We used a sample of 695 children aged 1 month to 3 years who were hospitalized in 2008-2009 with at least 1 diagnosis code of febrile convulsions. The positive predictive values of the algorithms ranged from 81% to 98%, and their sensitivities were estimated to be 47%-99% in data from 1 large hospital. When applying our proposed methods, the algorithm we selected used a restricted diagnosis code and position on the discharge abstract. These criteria, which resulted in the selection of 502 cases with a positive predictive value of 95%, provided the best compromise between high power and low relative bias.
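
    A toy numerical illustration of the trade-off the article formalises (this is not the article's estimator): under a crude mixture assumption in which false-positive "cases" experience no vaccine effect, a lower PPV pulls the observed relative incidence towards 1, while a stricter algorithm keeps fewer cases and so loses power.

    true_ri = 2.0   # assumed true relative incidence
    # (PPV, cases) pairs loosely patterned on the abstract's figures; the last is invented
    for ppv, n_cases in [(0.81, 695), (0.95, 502), (0.98, 350)]:
        observed_ri = ppv * true_ri + (1 - ppv) * 1.0   # crude attenuation towards 1
        print(f"PPV={ppv:.2f}  cases={n_cases}  observed RI ~ {observed_ri:.2f}")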

  3. Haemoptysis in adults: a 5-year study using the French nationwide hospital administrative database.

    PubMed

    Abdulmalak, Caroline; Cottenet, Jonathan; Beltramo, Guillaume; Georges, Marjolaine; Camus, Philippe; Bonniaud, Philippe; Quantin, Catherine

    2015-08-01

    Haemoptysis is a serious symptom with various aetiologies. Our aim was to define the aetiologies, outcomes and associations with lung cancer in the entire population of a high-income country. This retrospective multicentre study was based on the French nationwide hospital medical information database collected over 5 years (2008-2012). We analysed haemoptysis incidence, aetiologies, geographical and seasonal distribution and mortality. We studied recurrence, association with lung cancer and mortality in a 3-year follow-up analysis. Each year, ~15 000 adult patients (mean age 62 years, male/female ratio 2/1) were admitted for haemoptysis or had haemoptysis as a complication of their hospital stay, representing 0.2% of all hospitalised patients. Haemoptysis was cryptogenic in 50% of cases. The main aetiologies were respiratory infections (22%), lung cancer (17.4%), bronchiectasis (6.8%), pulmonary oedema (4.2%), anticoagulants (3.5%), tuberculosis (2.7%), pulmonary embolism (2.6%) and aspergillosis (1.1%). Among incident cases, the 3-year recurrence rate was 16.3%. Of the initial cryptogenic haemoptysis patients, 4% were diagnosed with lung cancer within 3 years. Mortality rates during the first stay and at 1 and 3 years were 9.2%, 21.6% and 27%, respectively. This is the first epidemiological study analysing haemoptysis and its outcomes in an entire population. Haemoptysis is a life-threatening symptom unveiling potentially life-threatening underlying conditions.

  4. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2006-08-08

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.

  5. Computer systems and methods for the query and visualization of multidimensional database

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2010-05-11

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
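
    The core mechanism both patents describe, routing each retrieved tuple to the pane whose row and column field values it matches, can be sketched with a simple groupby (field names and data below are invented):

    import pandas as pd

    tuples = pd.DataFrame({"region": ["East", "East", "West", "West"],
                           "year":   [2020, 2021, 2020, 2021],
                           "sales":  [10, 12, 7, 9]})

    # One pane per (row field, column field) combination of the visual table
    for (region, year), pane in tuples.groupby(["region", "year"]):
        print(f"pane[{region}, {year}]:", pane["sales"].tolist())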

  6. Two administration methods for inhaled salbutamol in intubated patients

    PubMed Central

    Garner, S; Wiest, D; Bradley, J; Habib, D

    2002-01-01

    Aims: To compare serum concentrations and effects on respiratory mechanics and haemodynamics of salbutamol administered by small volume nebuliser (SVN) and metered dose inhaler (MDI) plus spacer. Methods: Blinded, randomised, crossover study in 12 intubated infants and children (mean age 17.8 months) receiving inhaled salbutamol therapy. Subjects received salbutamol as 0.15 mg/kg by SVN and four puffs (400 µg) by MDI plus spacer at a four hour interval in random order. Passive respiratory mechanics were measured by a single breath/single occlusion technique, and serum salbutamol concentrations by liquid chromatography–mass spectrometry at 30 minutes, 1, 2, and 4 hours after each dose. Haemodynamics (heart rate and blood pressure) were recorded at each measurement time. Results: There was no difference in percentage change in respiratory mechanics or haemodynamics between the two methods of administration. Mean area under the curve (AUC0–4) was 5.86 ng/ml x h for MDI plus spacer versus 4.93 ng/ml x h for SVN. Conclusions: Serum concentrations and effects on respiratory mechanics and haemodynamics of salbutamol were comparable with the two administration methods under the conditions studied. Future studies are needed to determine the most effective and safe combination of dose and administration method of inhaled salbutamol in mechanically ventilated infants and children. PMID:12089124

  7. Recent Change in Treatment of Disseminated Intravascular Coagulation in Japan: An Epidemiological Study Based on a National Administrative Database.

    PubMed

    Murata, Atsuhiko; Okamoto, Kohji; Mayumi, Toshihiko; Muramatsu, Keiji; Matsuda, Shinya

    2016-01-01

    This study investigated the time trends and hospital factors affecting the use of drugs for infectious disease-associated disseminated intravascular coagulation (DIC) based on a national administrative database. A total of 14 324 patients with infectious disease-associated DIC were referred to 1041 hospitals from 2010 to 2012 in Japan. Patients' data were collected from the administrative database to determine time trends and hospital factors affecting the use of drugs for DIC. Three study periods were established, namely, the fiscal years 2010 (n = 3308), 2011 (n = 5403), and 2012 (n = 5613). The use of antithrombin, heparin, protease inhibitors, and recombinant human soluble thrombomodulin (rhs-TM) for DIC was evaluated. The frequency of use of antithrombin, heparin, and protease inhibitors decreased while that of rhs-TM significantly increased from 2010 to 2012 in Japan (25.1% in 2010, 43.1% in 2011, and 56.8% in 2012; P < .001, respectively). Logistic regression showed that the study period was associated with the use of rhs-TM in patients with DIC. The odds ratio (OR) for 2011 was 2.34 (95% confidence interval [CI], 2.12-2.58; P < .001) whereas that for 2012 was 4.34 (95% CI, 3.94-4.79; P < .001). A large hospital size was the most significant factor associated with the use of rhs-TM in patients with DIC (OR, 3.14; 95% CI, 2.68-3.66; P < .001). The use of rhs-TM has dramatically increased. A large hospital size was significantly associated with the increased use of rhs-TM in patients with DIC from 2010 to 2012 in Japan.

  8. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor); Livny, Miron (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
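
    A minimal sketch of the clustering-feature bookkeeping described above (tree construction, thresholds and outlier handling are omitted): because the triple (N, linear sum, square sum) is additive, clusters can be merged and summarised without revisiting the raw points.

    import numpy as np

    class ClusteringFeature:
        """BIRCH-style clustering feature: (N, linear sum LS, square sum SS)."""
        def __init__(self, point):
            p = np.asarray(point, float)
            self.n, self.ls, self.ss = 1, p, p * p

        def absorb(self, other):
            # CFs are additive, so merging is O(d) and needs no raw data
            self.n += other.n
            self.ls = self.ls + other.ls
            self.ss = self.ss + other.ss

        def centroid(self):
            return self.ls / self.n

        def radius(self):
            # RMS distance of the cluster's points from its centroid
            r2 = self.ss.sum() / self.n - (self.centroid() ** 2).sum()
            return np.sqrt(max(r2, 0.0))

    cf = ClusteringFeature([1.0, 2.0])
    cf.absorb(ClusteringFeature([3.0, 2.0]))
    print(cf.n, cf.centroid(), cf.radius())   # 2 [2. 2.] 1.0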

  9. An Efficient Method for Rare Spectra Retrieval in Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Du, Changde; Luo, Ali; Yang, Haifeng; Hou, Wen; Guo, Yanxin

    2016-03-01

    One of the most important aims of astronomical data mining is to systematically search for specific rare objects in a massive spectral data set, given a small fraction of identified samples of the same type. Most existing methods are mainly based on binary classification, which usually suffers from incompleteness when there are too few known samples. Rank-based methods could provide good solutions for such cases. After investigating several algorithms, we developed a method combining a bipartite ranking model with bootstrap aggregating techniques. The method was applied while searching for carbon stars in the spectral data of Sloan Digital Sky Survey Data Release 10 and compared with several other popular methods used for data mining. Experimental results validate that the proposed method is not only the most effective but also the least time-consuming technique among its competitors when searching for rare spectra in a large but unlabeled data set.
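
    A hedged sketch of the rank-and-aggregate idea (the paper's bipartite ranking model is replaced here by a plain logistic scorer, and the data are synthetic): the few known objects are ranked against the unlabelled pool, and bootstrap aggregation stabilises the ordering.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    known = rng.normal(loc=2.0, size=(20, 5))       # few identified rare spectra
    unlabelled = rng.normal(size=(5000, 5))         # large unlabelled data set

    X = np.vstack([known, unlabelled])
    y = np.r_[np.ones(len(known)), np.zeros(len(unlabelled))]

    scores = np.zeros(len(X))
    for b in range(25):                             # bootstrap aggregating
        idx = rng.integers(0, len(X), len(X))
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        scores += model.decision_function(X)

    print(np.argsort(-scores)[:10])                 # candidates to inspect first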

  10. Methods for Data-based Delineation of Spatial Regions

    SciTech Connect

    Wilson, John E.

    2012-10-01

    In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are “clumped” together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
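
    One common way to realise something like Method B, delineating congregations of points, is to buffer each point by the congregation distance and dissolve the overlaps; the sketch below uses the shapely library and invented coordinates, and is not VSP's actual algorithm.

    from shapely.geometry import Point
    from shapely.ops import unary_union

    points = [(0, 0), (1, 0.5), (0.8, 1.2), (10, 10)]   # hypothetical data
    distance = 1.5                                       # congregation distance

    # Buffer and dissolve: clumped points merge into one polygon, while
    # the isolated point at (10, 10) remains its own small polygon
    polygons = unary_union([Point(x, y).buffer(distance) for x, y in points])
    for geom in getattr(polygons, "geoms", [polygons]):
        print(geom.bounds)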

  11. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research, an evaluation has been made of different methods for content-based image retrieval of logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we have compared different retrieval methods. Two of these methods were available from commercial packages, QBIC and Imatch, in which the exact implementation of the content-based image retrieval methods is not known. We compared the results for this database with the MPEG-7 shape comparison methods: the contour-shape, bounding-box and region-based shape methods. In addition, we have tested the log-polar method that is available from our own research.

  12. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  13. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  14. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  15. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  16. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  17. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  18. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  19. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  20. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  1. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  2. An experimental database for evaluating PIV uncertainty quantification methods

    NASA Astrophysics Data System (ADS)

    Warner, Scott; Neal, Douglas; Sciacchitano, Andrea

    2014-11-01

    Uncertainty quantification for particle image velocimetry (PIV) data has recently become a topic of great interest, as shown by the publication of several different methods within the past few years. A unique experiment has been designed to test the efficacy of PIV uncertainty methods, using a rectangular jet as the flow field. The novel aspect of the experimental setup consists of simultaneous measurements by means of two different time-resolved PIV systems and a hot-wire anemometer (HWA). The first PIV system, called the "PIV-Measurement" system, collects the data for which uncertainty is to be evaluated. It is based on a single camera and features a dynamic velocity range (DVR) representative of many PIV experiments. The second PIV system, called the "PIV-HDR" (high dynamic range) system, has a significantly higher DVR obtained with a higher digital imaging resolution. The hot-wire was placed in close proximity to the PIV measurement domain. All three measurement systems were carefully set up to simultaneously collect time-resolved data on a point-by-point basis. The HWA validates the PIV-HDR system as the reference velocity so that it can be used to evaluate the instantaneous error in the PIV-Measurement system.

  3. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations.

    PubMed

    Kim, Seung Won; Kim, Bae-Hwan

    2016-07-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials.

  4. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations

    PubMed Central

    Kim, Seung Won; Kim, Bae-Hwan

    2016-01-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094

  5. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
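
    The storage decision can be sketched as a similarity test against the most similar stored feature vector (cosine similarity and the 0.98 threshold below are illustrative assumptions, not the patent's specification):

    import numpy as np

    def should_store(incoming, stored, threshold=0.98):
        """Store the incoming feature vector only if it is not nearly
        redundant with its most similar counterpart already in the database."""
        if len(stored) == 0:
            return True
        stored = np.asarray(stored, float)
        v = np.asarray(incoming, float)
        sims = stored @ v / (np.linalg.norm(stored, axis=1) * np.linalg.norm(v))
        return sims.max() < threshold

    db = [[0.2, 0.9, 0.1], [0.7, 0.1, 0.6]]
    print(should_store([0.21, 0.89, 0.11], db))   # near-duplicate -> False
    print(should_store([0.9, 0.2, 0.3], db))      # novel content  -> True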

  6. A practical method of chronic ethanol administration in mice.

    PubMed

    Coleman, Ruth A; Young, Betty M; Turner, Lucas E; Cook, Robert T

    2008-01-01

    Mice provide a useful model for the study of immune deficiency caused by chronic alcohol abuse. Their suitability is related to several factors, in particular the extensive knowledge base on the immunology of mice already existing in the literature. Specific modeling of the immunodeficiency of the chronic human alcoholic requires that ethanol be administered to the model for a significant portion of its life span. In mice, it has proven necessary to administer ethanol daily for up to 32 wk or longer to observe all the immune abnormalities that occur in middle-aged alcoholic humans. Such time spans are problematic with many of the common protocols for ethanol administration. It has been shown by others, and confirmed by our group, that the most practical way of accomplishing such long protocols is to administer ethanol in the drinking water as the animals' only source of water. Details of the management of a chronic ethanol mouse colony that are necessary for the success of such studies are described here, including methods for initiating ethanol administration, maintenance of barrier protection, monitoring weight gain, strain differences and fetal alcohol exposure.

  7. Retrosigmoid Versus Translabyrinthine Approach for Acoustic Neuroma Resection: An Assessment of Complications and Payments in a Longitudinal Administrative Database.

    PubMed

    Cole, Tyler; Veeravagu, Anand; Zhang, Michael; Azad, Tej; Swinney, Christian; Li, Gordon H; Ratliff, John K; Giannotta, Steven L

    2015-10-30

    Object Retrosigmoid (RS) and translabyrinthine (TL) surgery remain essential treatment approaches for symptomatic or enlarging acoustic neuromas (ANs). We compared nationwide complication rates and payments, independent of tumor characteristics, for these two strategies. Methods We identified 346 and 130 patients who underwent RS and TL approaches, respectively, for AN resection in the 2010-2012 MarketScan database, which characterizes primarily privately-insured patients from multiple institutions nationwide. Results Although we found no difference in 30-day general neurological or neurosurgical complication rates, in TL procedures there was a decreased risk for postoperative cranial nerve (CN) VII injury (20.2% vs 10.0%, CI 0.23-0.82), dysphagia (10.4% vs 3.1%, CI 0.10-0.78), and dysrhythmia (8.4% vs 2.3%, CI 0.08-0.86). Overall, there was no difference in surgical repair rates of CSF leak; however, intraoperative fat grafting was significantly higher in TL approaches (19.8% vs 60.2%, CI 3.95-9.43). In patients receiving grafts, there was a trend towards a higher repair rate after RS approach, while in those without grafts, there was a trend towards a higher repair rate after TL approach. Median total payments were $16,856 higher after RS approaches ($67,774 vs $50,918, p < 0.0001), without differences in physician or 90-day postoperative payments. Conclusions  Using a nationwide longitudinal database, we observed that the TL, compared to RS, approach for AN resection experienced lower risks of CN VII injury, dysphagia, and dysrhythmia. There was no significant difference in CSF leak repair rates. The payments for RS procedures exceed payments for TL procedures by approximately $17,000. Data from additional years and non-private sources will further clarify these trends.

  8. Independent Identification Method applied to EDMOND and SonotaCo databases

    NASA Astrophysics Data System (ADS)

    Rudawska, R.; Matlovic, P.; Toth, J.; Kornos, L.; Hajdukova, M.

    2015-10-01

    In recent years, networks of low-light-level video cameras have contributed many new meteoroid orbits. As a result of cooperation and data sharing among national networks and the International Meteor Organization Video Meteor Database (IMO VMDB), the European Video Meteor Network Database (EDMOND; [2, 3]) has been created. Its current version contains 145 830 orbits collected from 2001 to 2014. Another productive camera network has been that of the Japanese SonotaCo consortium [5], which has at present made available 168 030 meteoroid orbits collected from 2007 to 2013. In our survey we used the EDMOND and SonotaCo databases together in order to identify the meteor showers present in both (Figures 1 and 2). For this purpose we applied the recently introduced independent identification method [4]. In the first step of the survey we used a criterion based on the orbital parameters (e, q, i, ω and Ω) to find groups around each meteor within the similarity threshold. Mean parameters of the groups were calculated using the Welch method [6] and compared using a new function based on the geocentric parameters (αg, δg, λ⊙ and Vg). Similar groups were merged into final clusters (representing meteor showers) and compared with the IAU Meteor Data Center list of meteor showers [1]. This poster presents the results obtained by the proposed methodology.

  10. Methods for 17β-oestradiol administration to rats.

    PubMed

    Isaksson, Ida-Maria; Theodorsson, Annette; Theodorsson, Elvar; Strom, Jakob O

    2011-11-01

    Several studies indicate that the beneficial or harmful effects of oestrogens in stroke are dose-dependent. Rats are amongst the most frequently used animals in these studies, which calls for thoroughly validated methods for administering 17β-oestradiol to rats. In an earlier study we characterised three different administration methods for 17β-oestradiol over 42 days. The present study assesses the concentrations in a short time perspective, with the addition of a novel peroral method. Female Sprague-Dawley rats were ovariectomised and administered 17β-oestradiol by subcutaneous injections, silastic capsules, pellets and orally (in the nut-cream Nutella(®)), respectively. One group received 17β-oestradiol by silastic capsules without previous washout time. Blood samples were obtained after 30 minutes, 1, 2, 4, 8, 12, 24, 48 and 168 hours and serum 17β-oestradiol (and oestrone sulphate in some samples) was subsequently analysed. For long-term characterisation, one group treated perorally was blood sampled after 2, 7, 14, 21, 28, 35 and 42 days. At sacrifice, uterine horns were weighed and subcutaneous tissue samples were taken for histological assessment. The pellets, silastic capsule and injection groups produced serum 17β-oestradiol concentrations that were initially several orders of magnitude higher than physiological levels, while the peroral groups had 17β-oestradiol levels that were within the physiological range during the entire experiment. The peroral method is a promising option for administering 17β-oestradiol if physiological levels or similarity to women's oral hormone therapy are desired. Uterine weights were found to be a very crude measure of oestrogen exposure. PMID:21834617

  11. GMOMETHODS: the European Union database of reference methods for GMO analysis.

    PubMed

    Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre

    2012-01-01

    In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with a European Union legislative act. The web application provides search capabilities to retrieve primer and probe sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis. PMID:23451388

  13. Development of a Publicly Available, Comprehensive Database of Fiber and Health Outcomes: Rationale and Methods

    PubMed Central

    Livingston, Kara A.; Chung, Mei; Sawicki, Caleigh M.; Lyle, Barbara J.; Wang, Ding Ding; Roberts, Susan B.; McKeown, Nicola M.

    2016-01-01

    Background: Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criteria that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Objectives: (1) Develop a database listing studies testing fiber and physiological health outcomes identified by experts at the Ninth Vahouny Conference; (2) use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers to evaluate evidence linking specific fibers with physiological health outcomes, and identify missing information. Methods: To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications then were reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. Conclusions: A fiber database was created. This resource will reduce the unnecessary replication of effort in conducting systematic reviews by serving as both a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which these data can be extracted and updated. PMID:27348733

  14. Quantitative Methods for Administrative Decision Making in Junior Colleges.

    ERIC Educational Resources Information Center

    Gold, Benjamin Knox

    With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…

  15. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    SciTech Connect

    Bennett, Joseph W.; Rabe, Karin M.

    2012-11-15

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. Graphical abstract: integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: integration of first-principles methods and database mining; minor structural families with desirable functional properties; survey of polar entries in the Inorganic Crystal Structure Database.
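
    A toy illustration of the database-mining side: filtering structure records by whether their point group is polar, a necessary condition for a spontaneous polarization and hence for ferroelectricity. The record format and entries below are invented for illustration, not drawn from the ICSD.

    ```python
    # The ten polar crystallographic point groups (those compatible with
    # a spontaneous polarization, a necessary condition for ferroelectricity).
    POLAR_POINT_GROUPS = {"1", "2", "m", "mm2", "3", "3m", "4", "4mm", "6", "6mm"}

    # Hypothetical database records: (formula, point group) pairs.
    entries = [
        ("Sr(Sb0.5Mn0.5)O3", "m-3m"),  # ideal cubic perovskite -> centrosymmetric
        ("FeSb2O4", "4/mmm"),          # schafarzikite aristotype -> not polar
        ("Sn2P2S6", "m"),              # known polar (ferroelectric) phase
    ]

    polar_candidates = [(f, pg) for f, pg in entries if pg in POLAR_POINT_GROUPS]
    print(polar_candidates)  # [('Sn2P2S6', 'm')]
    ```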

  16. 29 CFR 1630.7 - Standards, criteria, or methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 4 2011-07-01 2011-07-01 false Standards, criteria, or methods of administration. 1630.7... Standards, criteria, or methods of administration. It is unlawful for a covered entity to use standards, criteria, or methods of administration, which are not job-related and consistent with business...

  17. Development of a Cast Iron Fatigue Properties Database for use with Modern Design Methods

    SciTech Connect

    DeLa'O, James, D.; Gundlach, Richard, B.; Tartaglia, John, M.

    2003-09-18

    A reliable and comprehensive database of design properties for cast iron is key to full and efficient utilization of this versatile family of high production-volume engineering materials. A database of strain-life fatigue properties and supporting data for a wide range of structural cast irons representing industry standard quality was developed in this program. The database primarily covers ASTM/SAE standard structural grades of ADI, CGI, ductile iron and gray iron as well as an austempered gray iron. Twenty-two carefully chosen materials provided by commercial foundries were tested and fifteen additional datasets were contributed by private industry. The test materials are principally distinguished on the basis of grade designation; most grades were tested in a 25 mm section size and in a single material condition common for the particular grade. Selected grades were tested in multiple section sizes and/or material conditions to delineate the properties associated with a range of materials for the given grade. The cyclic properties are presented in terms of the conventional strain-life formalism (e.g., SAE J1099). Additionally, cyclic properties for gray iron and CGI are presented in terms of the Downing Model, which was specifically developed to treat the unique stress-strain response associated with gray iron (and to a lesser extent with CGI). The test materials were fully characterized in terms of alloy composition, microstructure and monotonic properties. The CD-ROM database presents the data in various levels of detail including property summaries for each material, detailed data analyses for each specimen and raw monotonic and cyclic stress-strain data. The CD-ROM database has been published by the American Foundry Society (AFS) as an AFS Research Publication entitled ''Development of a Cast Iron Fatigue Properties Database for Use in Modern Design Methods'' (ISBN 0-87433-267-2).
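
    The strain-life formalism cited (SAE J1099) expresses total strain amplitude as an elastic plus a plastic term in the reversals to failure. A minimal sketch with illustrative constants, not values taken from the AFS database:

    ```python
    def strain_amplitude(reversals_2Nf, E, sigma_f, b, eps_f, c):
        """Total strain amplitude from the strain-life (Basquin plus
        Coffin-Manson) equation used in the SAE J1099 formalism:
            eps_a = (sigma_f'/E) * (2Nf)^b + eps_f' * (2Nf)^c
        The elastic term dominates at long life, the plastic term at short life.
        """
        elastic = (sigma_f / E) * reversals_2Nf ** b
        plastic = eps_f * reversals_2Nf ** c
        return elastic + plastic

    # Illustrative constants only (E and sigma_f' in MPa):
    eps_a = strain_amplitude(2e6, E=170e3, sigma_f=900.0, b=-0.09,
                             eps_f=0.15, c=-0.55)
    print(f"strain amplitude at 2Nf = 2e6: {eps_a:.5f}")
    ```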

  18. a Vomr-Tree Based Parallel Range Query Method on Distributed Spatial Database

    NASA Astrophysics Data System (ADS)

    Fu, Z.; Liu, S.

    2012-07-01

    Spatial indexing strongly affects the efficiency of spatial queries in a distributed spatial database. In this paper, we introduce a parallel spatial range query algorithm based on the VoMR-tree index, which incorporates Voronoi diagrams into the MR-tree and thereby benefits from nearest-neighbor information. We first augment the MR-tree to store nearest neighbors and construct the VoMR-tree index from a Voronoi diagram. We then propose a novel range query algorithm based on the VoMR-tree index. In processing a range query, we discuss a data partition method that improves efficiency through parallelization in a distributed database, and we present a verification strategy. We show the superiority of the proposed method by extensive experiments using data sets of various sizes. The experimental results reveal that the proposed method improves range query performance by up to a factor of three in comparison with widely used R-tree variants.
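
    The VoMR-tree itself is not reproduced in the abstract; the sketch below illustrates only the surrounding pattern, partitioning data and evaluating a rectangular range predicate over the partitions in parallel, with Python's multiprocessing standing in for distributed database nodes.

    ```python
    from multiprocessing import Pool

    def range_query(args):
        """Return the points of one partition inside the query rectangle."""
        points, (xmin, ymin, xmax, ymax) = args
        return [(x, y) for x, y in points
                if xmin <= x <= xmax and ymin <= y <= ymax]

    if __name__ == "__main__":
        # Toy data, striped across four "nodes" by striding the point list.
        points = [(i * 0.1, (i * 37) % 100 / 10.0) for i in range(1000)]
        parts = [points[i::4] for i in range(4)]
        rect = (10.0, 2.0, 40.0, 8.0)  # (xmin, ymin, xmax, ymax)

        with Pool(4) as pool:
            results = pool.map(range_query, [(p, rect) for p in parts])
        hits = [pt for part in results for pt in part]
        print(len(hits), "points in range")
    ```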

  19. Association of Opioids and Sedatives with Increased Risk of In-Hospital Cardiopulmonary Arrest from an Administrative Database

    PubMed Central

    Overdyk, Frank J.; Dowling, Oonagh; Marino, Joseph; Qiu, Jiejing; Chien, Hung-Lun; Erslon, Mary; Morrison, Neil; Harrison, Brooke; Dahan, Albert; Gan, Tong J.

    2016-01-01

    Background: While opioid use confers a known risk for respiratory depression, the incremental risk of in-hospital cardiopulmonary arrest, respiratory arrest, or cardiopulmonary resuscitation (CPRA) has not been studied. Our aim was to investigate the prevalence, outcomes, and risk profile of in-hospital CPRA for patients receiving opioids and medications with central nervous system sedating side effects (sedatives). Methods: A retrospective analysis of adult inpatient discharges from 2008–2012 reported in the Premier Database. Patients were grouped into four mutually exclusive categories: (1) opioids and sedatives, (2) opioids only, (3) sedatives only, and (4) neither opioids nor sedatives. Results: Among 21,276,691 inpatient discharges, 53% received opioids with or without sedatives. A total of 96,554 patients suffered CPRA (0.92 per 1000 hospital bed-days). Patients who received opioids and sedatives had an adjusted odds ratio for CPRA of 3.47 (95% CI: 3.40–3.54; p<0.0001) compared with patients not receiving opioids or sedatives. Opioids alone and sedatives alone were associated with a 1.81-fold and a 1.82-fold (p<0.0001 for both) increase in the odds of CPRA, respectively. In opioid patients, locations of CPRA were intensive care (54%), general care floor (25%), and stepdown units (15%). Only 42% of patients survived CPRA and only 22% were discharged home. Opioid patients with CPRA had mean increased hospital lengths of stay of 7.57 days and mean increased total hospital costs of $27,569. Conclusions: Opioids and sedatives are independent and additive risk factors for in-hospital CPRA. The impact of opioid sparing analgesia, reduced sedative use, and better monitoring on CPRA incidence deserves further study. PMID:26913753
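
    The adjusted odds ratios in the abstract come from regression over 21 million discharges; the unadjusted analogue of the same quantity reduces to a 2x2 table. A minimal sketch with made-up counts, not the study's data:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI for a 2x2 table:
        a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # Made-up counts for illustration only:
    print(odds_ratio_ci(a=600, b=99400, c=180, d=99820))
    ```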

  20. Prevalence and Costs of Multimorbidity by Deprivation Levels in the Basque Country: A Population Based Study Using Health Administrative Databases

    PubMed Central

    Orueta, Juan F.; García-Álvarez, Arturo; García-Goñi, Manuel; Paolucci, Francesco; Nuño-Solinís, Roberto

    2014-01-01

    Background: Multimorbidity is a major challenge for healthcare systems. However, currently, its magnitude and impact on healthcare expenditures are still mostly unknown. Objective: To present an overview of the prevalence and costs of multimorbidity by socioeconomic level in the whole Basque population. Methods: We developed a cross-sectional analysis that includes all the inhabitants of the Basque Country (N = 2,262,698). We utilized data from primary health care electronic medical records, hospital admissions, and outpatient care databases, corresponding to a 4-year period. Multimorbidity was defined as the presence of two or more chronic diseases out of a list of 52 of the most important and common chronic conditions given in the literature. We also used socioeconomic and demographic variables such as age, sex, individual healthcare cost, and deprivation level. Predicted adjusted costs were obtained by log-gamma regression models. Results: Multimorbidity of chronic diseases was found in 23.61% of the total Basque population and in 66.13% of those older than 65 years. Multimorbid patients account for 63.55% of total healthcare expenditures. Prevalence of multimorbidity is higher in the most deprived areas for all age and sex groups. The annual healthcare cost per patient generated by any chronic disease depends on the number of coexisting comorbidities, varying from 637 € on average for the first pathology to 1,657 € for the ninth. Conclusion: Multimorbidity is very common in the Basque population, and its prevalence rises with age and with unfavourable socioeconomic environment. The costs of care for chronic patients with several conditions cannot be described as the sum of their individual pathologies on average; they usually increase dramatically with the number of comorbidities. Given the ageing population, multimorbidity and its consequences should be taken into account in healthcare policy, the organization of care, and medical research.
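
    The "log-gamma regression" used for predicted adjusted costs is a generalized linear model with Gamma errors and a log link. A minimal statsmodels sketch on simulated data; the library choice and the covariates are stand-ins, not the authors' code:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    age = rng.integers(18, 95, n)
    n_chronic = rng.poisson(1.5, n)  # number of chronic conditions

    # Simulated annual cost: multiplicative in age and comorbidity count.
    mu = np.exp(4.0 + 0.01 * age + 0.35 * n_chronic)
    cost = rng.gamma(shape=2.0, scale=mu / 2.0)

    X = sm.add_constant(np.column_stack([age, n_chronic]))
    model = sm.GLM(cost, X,
                   family=sm.families.Gamma(link=sm.families.links.Log()))
    res = model.fit()
    # On a log link, exponentiated coefficients are multiplicative cost ratios.
    print(np.exp(res.params))
    ```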

  1. The Institute of Public Administration's Document Center: From Paper to Electronic Records--A Full Image Government Documents Database.

    ERIC Educational Resources Information Center

    Al-Zahrani, Rashed S.

    Since its establishment in 1960, the Institute of Public Administration (IPA) in Riyadh, Saudi Arabia has had responsibility for documenting Saudi administrative literature, the official publications of Saudi Arabia, and the literature of regional and international organizations through establishment of the Document Center in 1961. This paper…

  2. Ecological Methods in the Study of Administrative Behavior.

    ERIC Educational Resources Information Center

    Scott, Myrtle; Eklund, Susan J.

    Qualitative/naturalistic inquiry intends to discover whatever naturally occurring order exists rather than to test various theories or conceptual frameworks held by the investigator. Naturalistic, ecological data are urgently needed concerning the behavior of educational administrators. Such data can considerably change the knowledge base of the…

  3. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

    Several geochemical databases are now available on the Internet. One of the main peculiarities of the stored geochemical information is that each sample carries geographical coordinates. As a rule, however, the database software uses this spatial information only in user-interface search procedures. GIS software (Geographical Information System software), on the other hand, such as the ARC/INFO package used to create and analyse special geological, geochemical, and geophysical e-maps, is deeply concerned with the geographical coordinates of samples. We join the strengths of GIS systems and a relational geochemical database through special software. Our geochemical information system was created at the Vernadsky State Geological Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have tested the system with geochemical data on oceanic rocks from the Atlantic and Pacific oceans, about 10,000 chemical analyses. The GIS content consists of e-map covers of the globe. The parts covering the Atlantic ocean include a gravimetric map (with a 2'' grid), oceanic-bottom heat flow, altimetric maps, seismic activity, a tectonic map, and a geological map. Combining this content makes it possible to create new geochemical maps and to couple spatial analysis with numerical geochemical modelling of volcanic processes in an ocean segment. The information system has been tested using thick-client technology. The interface between the Arc/View GIS and the database resides in a special sequence of multiple SQL queries. The result of these queries is a simple DBF file with geographical coordinates, which is used at the moment of creating geochemical and other special e-maps of an oceanic region. For geophysical data we used a more complex method: from Arc/View we created a grid cover for polygon spatial geophysical information.

  4. DOE/MSU composite material fatigue database: Test methods, materials, and analysis

    SciTech Connect

    Mandell, J.F.; Samborsky, D.D.

    1997-12-01

    This report presents a detailed analysis of the results from fatigue studies of wind turbine blade composite materials carried out at Montana State University (MSU) over the last seven years. It is intended to be used in conjunction with the DOE/MSU Composite Materials Fatigue Database. The fatigue testing of composite materials requires the adaptation of standard test methods to the particular composite structure of concern. The stranded fabric E-glass reinforcement used by many blade manufacturers has required the development of several test modifications to obtain valid test data for materials with particular reinforcement details, over the required range of tensile and compressive loadings. Additionally, a novel testing approach to high frequency (100 Hz) testing for high cycle fatigue using minicoupons has been developed and validated. The database for standard coupon tests now includes over 4,100 data points for over 110 materials systems. The report analyzes the database for trends and transitions in static and fatigue behavior with various materials parameters. Parameters explored are reinforcement fabric architecture, fiber content, content of fibers oriented in the load direction, matrix material, and loading parameters (tension, compression, and reversed loading). Significant transitions from good fatigue resistance to poor fatigue resistance are evident in the range of materials currently used in many blades. A preliminary evaluation of knockdowns for selected structural details is also presented. The high frequency database provides a significant set of data for various loading conditions in the longitudinal and transverse directions of unidirectional composites out to 10^8 cycles. The results are expressed in stress and strain based Goodman Diagrams suitable for design. A discussion is provided to guide the user of the database in its application to blade design.
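
    The Goodman-diagram representation mentioned at the end maps mean stress and allowable stress amplitude onto one design surface. A minimal sketch of the classic Goodman correction, with illustrative constants rather than values from the DOE/MSU database:

    ```python
    def goodman_allowable_amplitude(mean_stress, fatigue_strength, ultimate):
        """Allowable alternating stress at a given mean stress under the
        (modified) Goodman relation:
            sigma_a = sigma_e * (1 - sigma_m / sigma_u)
        fatigue_strength: fully reversed fatigue strength sigma_e,
        ultimate: ultimate tensile strength sigma_u (same units)."""
        return fatigue_strength * (1.0 - mean_stress / ultimate)

    # Illustrative values in MPa (not taken from the DOE/MSU database):
    for mean_stress in (0.0, 100.0, 200.0):
        print(mean_stress,
              goodman_allowable_amplitude(mean_stress, 140.0, 450.0))
    ```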

  5. The Development and Implementation of an Administrative Database, Telecommunications System, and Training Program to Improve K-12 Magnet/Choice Program Administrative Processes.

    ERIC Educational Resources Information Center

    Black, Mary C.

    In the past, the communication and paperwork structure between K-12 magnet/choice programs and the district-wide program administration was not efficient. In particular, the student application, selection, and notification processes were time-consuming, and did not enable school-based personnel to communicate effectively with district…

  6. From in silico target prediction to multi-target drug design: current databases, methods and applications.

    PubMed

    Koutsoukas, Alexios; Simms, Benjamin; Kirchmair, Johannes; Bond, Peter J; Whitmore, Alan V; Zimmer, Steven; Young, Malcolm P; Jenkins, Jeremy L; Glick, Meir; Glen, Robert C; Bender, Andreas

    2011-11-18

    Given the tremendous growth of bioactivity databases, the use of computational tools to predict protein targets of small molecules has been gaining importance in recent years. Applications span a wide range, from the 'designed polypharmacology' of compounds to mode-of-action analysis. In this review, we first survey databases that can be used for ligand-based target prediction and which have grown tremendously in size in recent years. We furthermore outline existing methods for target prediction, both those based on knowledge of bioactivities from the ligand side and those that can be applied when a protein structure is known. Applications of successful in silico target identification attempts, based partly or wholly on computational target predictions in the first instance, are discussed in detail. This includes the authors' own experience using target prediction tools, in this case considering phenotypic antibacterial screens and the analysis of high-throughput screening data. Finally, we conclude with the application of databases not only to predict, retrospectively, the protein targets of a small molecule, but also to design ligands with desired polypharmacology in a prospective manner.
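
    Ligand-based target prediction of the kind surveyed here often reduces to ranking the targets of known ligands by fingerprint similarity to the query molecule. A library-free sketch using Tanimoto similarity on bit-set fingerprints; the fingerprints and target annotations are invented for illustration:

    ```python
    def tanimoto(fp1: set, fp2: set) -> float:
        """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
        if not fp1 and not fp2:
            return 0.0
        return len(fp1 & fp2) / len(fp1 | fp2)

    # Hypothetical fingerprints: sets of "on" bit positions, each annotated
    # with the protein target of the corresponding known ligand.
    known_ligands = {
        "ligand_A": ({1, 4, 9, 23, 77}, "DHFR"),
        "ligand_B": ({2, 4, 9, 50, 77}, "COX-2"),
        "ligand_C": ({5, 6, 7, 8, 91}, "hERG"),
    }
    query = {1, 4, 9, 23, 50}

    scores = sorted(
        ((tanimoto(query, fp), target) for fp, target in known_ligands.values()),
        reverse=True)
    print(scores[0])  # the highest-scoring target is the prediction
    ```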

  7. Discovery of novel mesangial cell proliferation inhibitors using a three-dimensional database searching method.

    PubMed

    Kurogi, Y; Miyata, K; Okamura, T; Hashimoto, K; Tsutsumi, K; Nasu, M; Moriyasu, M

    2001-07-01

    A three-dimensional pharmacophore model of mesangial cell (MC) proliferation inhibitors was generated from a training set of 4-(diethoxyphosphoryl)methyl-N-(3-phenyl-[1,2,4]thiadiazol-5-yl)benzamide, 2, and its derivatives using the Catalyst/HIPHOP software program. On the basis of the in vitro MC proliferation inhibitory activity, a pharmacophore model was generated with seven features consisting of two hydrophobic regions, two hydrophobic aromatic regions, and three hydrogen-bond acceptors. Using this model as a three-dimensional query to search the Maybridge database, 41 structurally novel compounds were identified. Evaluation of the available samples of the 41 identified compounds showed over 50% MC proliferation inhibitory activity in the 100 nM range. Interestingly, the compounds newly identified by the 3D database searching method exhibited reduced inhibition of normal proximal tubular epithelial cell proliferation compared to the training set compounds. PMID:11428924

  9. Use of Fibrates Monotherapy in People with Diabetes and High Cardiovascular Risk in Primary Care: A French Nationwide Cohort Study Based on National Administrative Databases

    PubMed Central

    Roussel, Ronan; Chaignot, Christophe; Weill, Alain; Travert, Florence; Hansel, Boris; Marre, Michel; Ricordeau, Philippe; Alla, François; Allemand, Hubert

    2015-01-01

    Background and Aim: According to guidelines, diabetic patients with high cardiovascular risk should receive a statin. Despite this consensus, fibrate monotherapy is commonly used in this population. We assessed the frequency and clinical consequences of the use of fibrates for primary prevention in patients with diabetes and high cardiovascular risk. Design: Retrospective cohort study based on nationwide data from the medical and administrative databases of French national health insurance systems (07/01/08-12/31/09) with a follow-up of up to 30 months. Methods: Lipid-lowering drug-naive diabetic patients initiating fibrate or statin monotherapy were identified. Patients at high cardiovascular risk were then selected: patients with a diagnosis of diabetes and hypertension, aged >50 years (men) or >60 years (women), but with no history of cardiovascular events. The composite endpoint comprised myocardial infarction, stroke, amputation, or death. Results: Of the 31,652 patients enrolled, 4,058 (12.8%) received a fibrate. Age- and gender-adjusted annual event rates were 2.42% (fibrates) and 2.21% (statins). The proportionality assumption required for the Cox model was not met for the fibrate/statin variable. A multivariate model including all predictors was therefore calculated by dividing the data into two time periods, allowing hazard ratios (HRs) to be calculated before (HR<540) and after (HR>540) 540 days of follow-up. Multivariate analyses showed that fibrates were associated with an increased risk for the endpoint after 540 days: HR<540 = 0.95 (95% CI: 0.78–1.16) and HR>540 = 1.73 (1.28–2.32). Conclusion: Fibrate monotherapy is commonly prescribed in diabetic patients with high cardiovascular risk and is associated with poorer outcomes compared to statin therapy. PMID:26398765
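
    The period-split approach used when proportional hazards fail (here, at 540 days) can be sketched with the lifelines library on simulated data. The covariates, follow-up distribution, and the authors' high-dimensional adjustment are all stand-ins; the split-then-refit recipe is a common simplification, not their exact code:

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 2000
    df = pd.DataFrame({
        "fibrate": rng.integers(0, 2, n),
        "age": rng.integers(50, 85, n),
    })
    # Simulated follow-up time (days, strictly positive) and event indicator.
    df["T"] = rng.exponential(900, n).round() + 1
    df["E"] = rng.integers(0, 2, n)

    # Period 1: censor everyone at 540 days.
    early = df.copy()
    early["E"] = np.where(early["T"] <= 540, early["E"], 0)
    early["T"] = early["T"].clip(upper=540)

    # Period 2: those still at risk after 540 days, clock restarted at day 540.
    late = df[df["T"] > 540].copy()
    late["T"] = late["T"] - 540

    for name, d in [("HR<540", early), ("HR>540", late)]:
        cph = CoxPHFitter().fit(d, duration_col="T", event_col="E")
        print(name, float(np.exp(cph.params_["fibrate"])))
    ```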

  10. 45 CFR 235.50 - State plan requirements for methods of personnel administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATION OF FINANCIAL ASSISTANCE PROGRAMS § 235.50 State plan requirements for methods of personnel administration. (a) A State plan for financial assistance programs under title I, IV-A, X, XIV,...

  11. Development of Database Assisted Structure Identification (DASI) Methods for Nontargeted Metabolomics

    PubMed Central

    Menikarachchi, Lochana C.; Dubey, Ritvik; Hill, Dennis W.; Brush, Daniel N.; Grant, David F.

    2016-01-01

    Metabolite structure identification remains a significant challenge in nontargeted metabolomics research. One commonly used strategy relies on searching biochemical databases using exact mass. However, this approach fails when the database does not contain the unknown metabolite (i.e., for unknown-unknowns). For these cases, constrained structure generation with combinatorial structure generators provides a potential option. Here we evaluated structure generation constraints based on the specification of: (1) substructures required (i.e., seed structures); (2) substructures not allowed; and (3) filters to remove incorrect structures. Our approach (database assisted structure identification, DASI) used predictive models in MolFind to find candidate structures with chemical and physical properties similar to the unknown. These candidates were then used for seed structure generation using eight different structure generation algorithms. One algorithm was able to generate correct seed structures for 21/39 test compounds. Eleven of these seed structures were large enough to constrain the combinatorial structure generator to fewer than 100,000 structures. In 35/39 cases, at least one algorithm was able to generate a correct seed structure. The DASI method has several limitations and will require further experimental validation and optimization. At present, it seems most useful for identifying the structure of unknown-unknowns with molecular weights <200 Da. PMID:27258318

  13. Toxicity of ionic liquids: database and prediction via quantitative structure-activity relationship method.

    PubMed

    Zhao, Yongsheng; Zhao, Jihong; Huang, Ying; Zhou, Qing; Zhang, Xiangping; Zhang, Suojiang

    2014-08-15

    A comprehensive database on the toxicity of ionic liquids (ILs) is established. The database includes over 4000 pieces of data. Based on the database, the relationship between IL structure and toxicity has been analyzed qualitatively. Furthermore, quantitative structure-activity relationship (QSAR) models are constructed to predict the toxicities (EC50 values) of various ILs toward the leukemia rat cell line IPC-81. Four parameters selected by the heuristic method (HM) are used to perform multiple linear regression (MLR) and support vector machine (SVM) studies. The squared correlation coefficient (R2) and root mean square error (RMSE) on the training sets are 0.918 and 0.258 for the MLR model, and 0.959 and 0.179 for the SVM model, respectively. On the test sets, the prediction R2 and RMSE are 0.892 and 0.329 for the MLR model, and 0.958 and 0.234 for the SVM model, respectively. The nonlinear model developed with the SVM algorithm clearly outperformed MLR, which indicates that the SVM model is more reliable for predicting the toxicity of ILs. This study shows that increasing the relative number of O atoms in a molecule decreases the toxicity of the IL.
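
    A rough sketch of the MLR-versus-SVM comparison using scikit-learn on simulated descriptors (four inputs, echoing the four HM-selected parameters). The data, hyperparameters, and the library itself are assumptions for illustration, not the authors' setup:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))  # four hypothetical molecular descriptors
    # A mildly nonlinear "toxicity" response plus noise.
    y = 1.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.3 * X[:, 2] \
        + rng.normal(0, 0.2, 300)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)

    for name, model in [("MLR", LinearRegression()),
                        ("SVM", SVR(kernel="rbf", C=10.0, epsilon=0.05))]:
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        rmse = mean_squared_error(y_te, pred) ** 0.5
        print(name, round(r2_score(y_te, pred), 3), round(rmse, 3))
    ```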

  14. Evaluation of a CFD Method for Aerodynamic Database Development using the Hyper-X Stack Configuration

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Engelund, Walter; Armand, Sasan; Bittner, Robert

    2004-01-01

    A computational fluid dynamic (CFD) study is performed on the Hyper-X (X-43A) Launch Vehicle stack configuration in support of the aerodynamic database generation in the transonic to hypersonic flow regime. The main aim of the study is the evaluation of a CFD method that can be used to support aerodynamic database development for similar future configurations. The CFD method uses the NASA Langley Research Center developed TetrUSS software, which is based on tetrahedral, unstructured grids. The Navier-Stokes computational method is first evaluated against a set of wind tunnel test data to gain confidence in the code's application to hypersonic Mach number flows. The evaluation includes comparison of the longitudinal stability derivatives on the complete stack configuration (which includes the X-43A/Hyper-X Research Vehicle, the launch vehicle and an adapter connecting the two), detailed surface pressure distributions at selected locations on the stack body and component (rudder, elevons) forces and moments. The CFD method is further used to predict the stack aerodynamic performance at flow conditions where no experimental data is available as well as for component loads for mechanical design and aero-elastic analyses. An excellent match between the computed and the test data over a range of flow conditions provides a computational tool that may be used for future similar hypersonic configurations with confidence.

  15. Hospitalizations of Infants and Young Children with Down Syndrome: Evidence from Inpatient Person-Records from a Statewide Administrative Database

    ERIC Educational Resources Information Center

    So, S. A.; Urbano, R. C.; Hodapp, R. M.

    2007-01-01

    Background: Although individuals with Down syndrome are increasingly living into the adult years, infants and young children with the syndrome continue to be at increased risk for health problems. Using linked, statewide administrative hospital discharge records of all infants with Down syndrome born over a 3-year period, this study "follows…

  16. A Method of Extracting Sentences Related to Protein Interaction from Literature using a Structure Database

    NASA Astrophysics Data System (ADS)

    Kaneta, Yoshikazu; Munna, Md. Ahaduzzaman; Ohkawa, Takenao

    Because a protein expresses its function through interaction with other substrates, it is vital to create a database of protein interactions. Since the total volume of information on protein interactions is spread across thousands of papers, it is nearly impossible to extract all of it manually. Although extraction systems for interaction information based on the template matching method have already been developed, they cannot match every sentence containing interaction information because of the complexity of sentences. We propose a method of extracting sentences with interaction information that is independent of sentence structure. In a protein-compound complex structure, an interacting residue is near its partner. The distance between them can be calculated using the structure data in the PDB database, with a short distance indicating that the sentences associated with them might describe the interaction information. For a free-protein structure, the distance cannot be calculated because the coordinates of the protein's partner are not registered in the structure data. Hence, we use homologous protein structure data in which the protein is complexed with its partner. The proposed method was applied to seven papers on protein-compound complexes and four papers on free proteins, obtaining F-measures of 71% and 72%, respectively.
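
    The geometric test at the heart of the method, flagging a residue as interacting when it lies close to the bound compound, reduces to a minimum atom-atom distance. A self-contained sketch on made-up coordinates; a real pipeline would parse PDB files (e.g., with Biopython) and choose the cutoff empirically:

    ```python
    import math

    def min_distance(residue_atoms, ligand_atoms):
        """Minimum Euclidean distance between two sets of 3D atom coordinates."""
        return min(math.dist(a, b)
                   for a in residue_atoms for b in ligand_atoms)

    # Made-up coordinates (angstroms) standing in for PDB atom records.
    residue = [(10.1, 4.2, 7.7), (11.0, 4.9, 8.3)]
    ligand = [(12.5, 5.5, 9.0), (20.0, 20.0, 20.0)]

    CONTACT_CUTOFF = 4.0  # a common heavy-atom contact threshold, in angstroms
    d = min_distance(residue, ligand)
    print(round(d, 2),
          "-> interacting" if d <= CONTACT_CUTOFF else "-> not interacting")
    ```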

  17. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: REQUIREMENTS AND LIMITS APPLICABLE TO SPECIFIC SERVICES Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods...

  18. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: REQUIREMENTS AND LIMITS APPLICABLE TO SPECIFIC SERVICES Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods...

  19. A simple method for serving Web hypermaps with dynamic database drill-down

    PubMed Central

    Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R

    2002-01-01

    Background: HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results: The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion: The authors believe their map serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should be also possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real world health problems. PMID:12437788

  20. Network II Database

    1994-11-07

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network II Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Railroad Administration (FRA) rail database.

  1. Conceptual database modeling: a method for enabling end users (radiologists) to understand and develop their information management applications.

    PubMed

    Hawkins, H; Young, S K; Hubert, K C; Hallock, P

    2001-06-01

    As medical technology advances at a rapid pace, clinicians become further and further removed from the design of their own technological tools. This is particularly evident with information management. For radiologists, clinical histories, patient reports, and other pertinent information require sophisticated tools for data handling. However, as databases grow more powerful and sophisticated, systems require the expertise of programmers and information technology personnel. The radiologist, the clinician end-user, must maintain involvement in the development of system tools to ensure effective information management. Conceptual database modeling is a design method that serves to bridge the gap between the technological aspects of information management and its clinical applications. Conceptual database modeling involves developing information systems in simple language so that anyone can have input into the overall design. This presentation describes conceptual database modeling, using object role modeling, as a means by which end-users (clinicians) may participate in database development.

  2. Database Manager

    ERIC Educational Resources Information Center

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  3. Serotonin-Norepinephrine Reuptake Inhibitors and the Risk of AKI: A Cohort Study of Eight Administrative Databases and Meta-Analysis

    PubMed Central

    Renoux, Christel; Lix, Lisa M.; Patenaude, Valérie; Bresee, Lauren C.; Paterson, J. Michael; Lafrance, Jean-Philippe; Tamim, Hala; Mahmud, Salaheddin M.; Alsabbagh, Mhd. Wasem; Hemmelgarn, Brenda; Dormuth, Colin R.

    2015-01-01

    Background and objectives: A safety signal regarding cases of AKI after exposure to serotonin-norepinephrine reuptake inhibitors (SNRIs) was identified by Health Canada. Therefore, this study assessed whether the use of SNRIs increases the risk of AKI compared with selective serotonin reuptake inhibitors (SSRIs) and examined the risk associated with each individual SNRI. Design, setting, participants, & measurements: Multiple retrospective population-based cohort studies were conducted within eight administrative databases from Canada, the United States, and the United Kingdom between January 1997 and March 2010. Within each cohort, a nested case-control analysis was performed to estimate incidence rate ratios (RRs) of AKI associated with SNRIs compared with SSRIs using conditional logistic regression, with adjustment for high-dimensional propensity scores. The overall effect across sites was estimated using meta-analytic methods. Results: There were 38,974 cases of AKI matched to 384,034 controls. Current use of SNRIs was not associated with a higher risk of AKI compared with SSRIs (fixed-effect RR, 0.97; 95% confidence interval [95% CI], 0.94 to 1.01). Current use of venlafaxine and desvenlafaxine considered together was not associated with a higher risk of AKI (RR, 0.96; 95% CI, 0.92 to 1.00). For current use of duloxetine, there was significant heterogeneity among site-specific estimates such that a random-effects meta-analysis was performed showing a 16% higher risk, although this risk was not statistically significant (RR, 1.16; 95% CI, 0.96 to 1.40). This result is compatible with residual confounding, because there was a substantial imbalance in the prevalence of diabetes between users of duloxetine and users of others SNRIs or SSRIs. After further adjustment by including diabetes as a covariate in the model along with propensity scores, the fixed-effect RR was 1.02 (95% CI, 0.95 to 1.10). Conclusions: There is no evidence that use of SNRIs is associated with a
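
    The fixed-effect estimates pooled across the eight databases follow the standard inverse-variance recipe on log rate ratios. A minimal sketch with invented site-level estimates, not the study's numbers:

    ```python
    import math

    def fixed_effect_pool(rrs, cis, z=1.96):
        """Inverse-variance fixed-effect pooling of rate ratios.
        rrs: site rate ratios; cis: (lower, upper) 95% CIs per site."""
        logs = [math.log(rr) for rr in rrs]
        # Back out each site's standard error from its CI width.
        ses = [(math.log(u) - math.log(l)) / (2 * z) for l, u in cis]
        ws = [1 / se ** 2 for se in ses]
        pooled = sum(w * x for w, x in zip(ws, logs)) / sum(ws)
        se_pooled = math.sqrt(1 / sum(ws))
        return (math.exp(pooled),
                math.exp(pooled - z * se_pooled),
                math.exp(pooled + z * se_pooled))

    # Invented site-level estimates for illustration:
    print(fixed_effect_pool([0.95, 1.02, 0.91],
                            [(0.88, 1.03), (0.95, 1.10), (0.80, 1.04)]))
    ```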

  4. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    NASA Astrophysics Data System (ADS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-11-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties.

  5. A New Method for Identification of Moving Groups in the HIPPARCOS Database

    NASA Astrophysics Data System (ADS)

    Aguilar, L. A.; Hoogerwerf, R.

    1998-09-01

    A new method to identify moving groups in proper motion catalogues is introduced. It requires parallax and proper motion information; no radial velocity data are needed, so it is well suited to the Hipparcos database. The method uses all the available information to constrain the locations of stars in velocity space and then searches for statistically significant groupings of the constrained star locations. As the search is done in velocity space, groups need not be localized on the sky to be identified. Monte Carlo experiments are used to gauge the success of the method for synthetic groups at various distances and kinematics. The method is then applied to the Hyades; good agreement in the member list, and a difference of only 0.3 km/s in centre-of-mass velocity, is found with the work of Perryman et al. (1998), which included radial velocity information besides Hipparcos proper motions. Bibliography: Perryman, M.A., Brown, A.G., Lebreton, Y., Gomez, A., Turon, C., Cayrel de Strobel, G., Mermilliod, J., Robichon, N., Kovalevsky, J., Crifo, F. (1998), A&A, submitted.
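
    Without radial velocities, each star's parallax and proper motion constrain only the tangential component of its space velocity, which is what such a velocity-space search works with. A minimal sketch of that standard conversion (the example numbers are merely Hyades-like):

    ```python
    def tangential_velocity(mu_mas_per_yr, parallax_mas):
        """Tangential velocity in km/s from total proper motion (mas/yr)
        and parallax (mas); the factor 4.74 converts AU/yr to km/s."""
        return 4.74 * mu_mas_per_yr / parallax_mas

    # Hyades-like example: ~100 mas/yr proper motion at ~21.5 mas parallax.
    print(round(tangential_velocity(100.0, 21.5), 1))  # ~22 km/s
    ```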

  6. Medical procedures and outcomes of Japanese patients with trisomy 18 or trisomy 13: analysis of a nationwide administrative database of hospitalized patients.

    PubMed

    Ishitsuka, Kazue; Matsui, Hiroki; Michihata, Nobuaki; Fushimi, Kiyohide; Nakamura, Tomoo; Yasunaga, Hideo

    2015-08-01

    The choices of aggressive treatment for trisomy 18 (T18) and trisomy 13 (T13) remain controversial. Here, we describe the current medical procedures and outcomes of patients with T18 and T13 from a nationwide administrative database of hospitalized patients in Japan. We used the database to identify eligible patients with T18 (n = 438) and T13 (n = 133) who were first admitted to one of 200 hospitals between July 2010 and March 2013. Patients were divided into admission at day <7 (early neonatal) and admission at day ≥7 (late neonatal and post neonatal) groups, and we described the medical intervention and status at discharge for each group. In the day <7 groups, surgical interventions were performed for 56 (19.9%) T18 patients and 22 (34.4%) T13 patients, including pulmonary artery banding, and procedures for esophageal atresia and omphalocele. None received intracardiac surgery. The rate of patients discharged to home was higher in the day ≥7 groups than the day <7 groups (T18: 72.6 vs. 38.8%; T13: 73.9 vs. 21.9%, respectively). Our data show that a substantial number of patients with trisomy received surgery and were then discharged home, but, of these, a considerable number required home medical care. This included home oxygen therapy, home mechanical ventilation, and tube feeding. These findings will be useful to clinicians or families who care for patients with T18 and T13. PMID:25847518

  7. Assessment of methods for creating a national building statistics database for atmospheric dispersion modeling

    SciTech Connect

    Velugubantla, S. P.; Burian, S. J.; Brown, M. J.; McKinnon, A. T.; McPherson, T. N.; Han, W. S.

    2004-01-01

    Mesoscale meteorological codes and transport and dispersion models are increasingly being applied in urban areas. Representing urban terrain characteristics in these models is critical for accurate predictions of air flow, heating and cooling, and airborne contaminant concentrations in cities. A key component of urban terrain characterization is the description of building morphology (e.g., height, plan area, frontal area) and derived properties (e.g., roughness length). Methods to determine building morphological statistics range from manual field surveys to automated processing of digital building databases. In order to improve the quality and consistency of mesoscale meteorological and atmospheric dispersion modeling, a national dataset of building morphological statistics is needed. Currently, due to the expense and logistics of conducting detailed field surveys, building statistics have been derived for only small sections of a few cities. In most other cities, modeling projects rely on building statistics estimated using intuition and best guesses. There has been increasing emphasis in recent years to derive building statistics using digital building data or other data sources as a proxy for those data. Although there is a current expansion in public and private sector development of digital building data, at present there is insufficient data to derive a national building statistics database using automated analysis tools. Too many cities lack digital data on building footprints and heights and many of the cities having such data do so for only small areas. Due to the lack of sufficient digital building data, other datasets are used to estimate building statistics. Land use often serves as means to provide building statistics for a model domain, but the strength and consistency of the relationship between land use and building morphology is largely uncertain. In this paper, we investigate whether building statistics can be correlated to the underlying land

  8. Methods and apparatus for constructing and implementing a universal extension module for processing objects in a database

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)

    2004-01-01

    Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.

  9. Database Marketplace 2002: The Database Universe.

    ERIC Educational Resources Information Center

    Tenopir, Carol; Baker, Gayle; Robinson, William

    2002-01-01

    Reviews the database industry over the past year, including new companies and services, company closures, popular database formats, popular access methods, and changes in existing products and services. Lists 33 firms and their database services; 33 firms and their database products; and 61 company profiles. (LRW)

  10. A Multi-Index Integrated Change detection method for updating the National Land Cover Database

    USGS Publications Warehouse

    Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin G.

    2010-01-01

    Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.
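
    The spectral indices that MIIC integrates are normalized band differences, and change is flagged by differencing an index between two dates. A small numpy sketch of NDVI and NBR on synthetic band arrays; the band values and the -0.1 flagging threshold are invented for illustration:

    ```python
    import numpy as np

    def normalized_difference(b1, b2):
        """Generic normalized difference index: (b1 - b2) / (b1 + b2)."""
        return (b1 - b2) / np.clip(b1 + b2, 1e-6, None)

    rng = np.random.default_rng(0)
    nir, red, swir = (rng.random((100, 100)) for _ in range(3))

    ndvi = normalized_difference(nir, red)   # vegetation greenness
    nbr = normalized_difference(nir, swir)   # burn severity

    # Differencing the same index between two dates flags candidate change:
    ndvi_t2 = ndvi.copy()
    ndvi_t2[20:40, 20:40] -= 0.3             # pretend a harvested patch
    d_ndvi = ndvi_t2 - ndvi
    print("pixels flagged:", int((d_ndvi < -0.1).sum()))  # 400
    ```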

  11. A Multi-Index Integrated Change Detection Method for Updating the National Land Cover Database

    NASA Astrophysics Data System (ADS)

    Jin, S.; Yang, L.; Xian, G. Z.; Danielson, P.; Homer, C.

    2010-12-01

    Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.

  12. A method to add richness to the National Landslide Database of Great Britain

    NASA Astrophysics Data System (ADS)

    Taylor, Faith; Freeborough, Katy; Malamud, Bruce; Demeritt, David

    2014-05-01

    Landslides in Great Britain (GB) pose a risk to infrastructure, property and livelihoods. Our understanding of where landslide hazard and impact will be greatest is based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in GB by searching electronic archives of local and regional newspapers. In Great Britain, the British Geological Survey (BGS) are responsible for updating and maintaining records of GB landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of approximately 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. Here we aim to supplement the richness of the NLD by (i) identifying additional landslide events and (ii) adding more detail to existing database entries. This is done by systematically searching the LexisNexis digital archive of 568 local and regional newspapers published in the UK. The first step in the methodology was to construct Boolean search criteria that optimised the balance between minimising the number of irrelevant articles (e.g. "a landslide victory") and maximising those referring to landslide events. This keyword search was then applied to the LexisNexis archive of newspapers for all articles published between 1 January and 31 December 2012, resulting in 1,668 articles. These articles were assessed to determine whether they related to a landslide event. Of the 1,668 articles, approximately 30% (~700) referred to landslide events, with others referring to landslides more generally or themes unrelated to landslides. Examples of information obtained from newspaper articles included: date/time of landslide occurrence, spatial location, size, impact, landslide type and triggering mechanism, although the amount of detail and precision attainable from individual articles was variable. Of the 700 articles found for
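
    A sketch of the Boolean filtering step under stated assumptions: the include/exclude keyword lists below are hypothetical stand-ins for the study's actual search criteria, which the record does not spell out.

      import re

      INCLUDE = re.compile(r"\b(landslide|landslip|mudslide|rockfall)\b", re.IGNORECASE)
      EXCLUDE = re.compile(r"\blandslide\s+(victory|win|defeat|majority)\b", re.IGNORECASE)

      def is_candidate(article_text):
          # keep articles mentioning a landslide term, drop common figurative uses
          return bool(INCLUDE.search(article_text)) and not EXCLUDE.search(article_text)

      articles = [
          "A landslide closed the A30 near Bodmin after heavy rain.",
          "The MP won by a landslide victory on Thursday.",
      ]
      candidates = [a for a in articles if is_candidate(a)]  # keeps only the first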

  13. Similarity landscapes: An improved method for scientific visualization of information from protein and DNA database searches

    SciTech Connect

    Dogget, N.; Myers, G.; Wills, C.J.

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The authors have used computer simulations and examination of a variety of databases to answer a wide range of evolutionary questions. The authors have found that there is a clear distinction in the evolution of HIV-1 and HIV-2, with the former and more virulent virus evolving more rapidly at a functional level. The authors have discovered highly non-random patterns in the evolution of HIV-1 that can be attributed to a variety of selective pressures. In the course of examination of microsatellite DNA (short repeat regions) in microorganisms, the authors have found clear differences between prokaryotes and eukaryotes in their distribution, differences that can be tied to different selective pressures. They have developed a new method (topiary pruning) for enhancing the phylogenetic information contained in DNA sequences. Most recently, the authors have discovered effects in complex rainforest ecosystems that indicate strong frequency-dependent interactions between host species and their parasites, leading to the maintenance of ecosystem variability.

  14. A hybrid spatio-temporal data indexing method for trajectory databases.

    PubMed

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-07-21

    In recent years, there has been tremendous growth in indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services and location intelligence. Trajectory data are growing massively and are semantically complex, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. A hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results show that HBSTR-tree outperforms TB*-tree in generation efficiency, query performance and the range of supported query types.
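
    A sketch of the pre-insertion grouping idea, assuming a simple stand-in rule (fixed segment size plus a time-gap split) for the paper's spatio-temporal semantics, and a dict playing the role of the hash table that holds each object's latest open node.

      from collections import defaultdict

      def close_segment(segments, seg):
          if seg:
              segments.append(seg)  # one closed segment becomes one R-tree leaf node

      def group_points(points, max_size=32, max_gap=60.0):
          # points: list of (x, y, t) tuples sorted by time
          segments, current = [], []
          for p in points:
              if current and (len(current) >= max_size or p[2] - current[-1][2] > max_gap):
                  close_segment(segments, current)
                  current = []
              current.append(p)
          close_segment(segments, current)
          return segments

      # hash table: object id -> latest open segment, so hot inserts stay in
      # memory and the R-tree is only touched when a segment closes
      latest_nodes = defaultdict(list)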

  15. A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases

    PubMed Central

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-01-01

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors continuously producing huge volumes of trajectory data that has been used in many fields such as location-based services or location intelligence. Trajectory data is massively increased and semantically complicated, which poses a great challenge on spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising spatio-temporal R-tree, B*-tree and Hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into spatio-temporal R-tree as leaf nodes. Hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in some aspects such as generation efficiency, query performance and query type. PMID:25051028

  16. A hybrid spatio-temporal data indexing method for trajectory databases.

    PubMed

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-01-01

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors continuously producing huge volumes of trajectory data that has been used in many fields such as location-based services or location intelligence. Trajectory data is massively increased and semantically complicated, which poses a great challenge on spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising spatio-temporal R-tree, B*-tree and Hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into spatio-temporal R-tree as leaf nodes. Hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in some aspects such as generation efficiency, query performance and query type. PMID:25051028

  17. Method of content-based image retrieval for a spinal x-ray image database

    NASA Astrophysics Data System (ADS)

    Krainak, Daniel M.; Long, L. Rodney; Thoma, George R.

    2002-05-01

    The Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM), maintains a digital archive of 17,000 cervical and lumbar spine images collected in the second National Health and Nutrition Examination Survey (NHANES II) conducted by the National Center for Health Statistics (NCHS). Classification of the images for the osteoarthritis research community has been a long-standing goal of researchers at the NLM, collaborators at NCHS, and the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), and the capability to retrieve images based on geometric characteristics of the vertebral bodies is of interest to the vertebral morphometry community. Automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. We implemented a prototype system for a database of 118 spine x-rays and health survey text data related to these x-rays. The system supports conventional text retrieval, as well as retrieval based on shape similarity to a user-supplied vertebral image or sketch.
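
    The record does not name the shape representation used; a common choice for this kind of contour-similarity ranking is Fourier descriptors, sketched below with numpy purely as an assumption-level illustration, not as the NLM system's actual method.

      import numpy as np

      def fourier_descriptors(contour, k=16):
          # contour: (N, 2) array of boundary points of a segmented vertebra
          z = contour[:, 0] + 1j * contour[:, 1]
          coeffs = np.fft.fft(z)
          coeffs[0] = 0                          # drop position (translation invariance)
          mags = np.abs(coeffs)
          mags = mags / (mags[1] + 1e-12)        # scale invariance
          return mags[1:k + 1]                   # low-order coefficients only

      def rank_by_shape(query_contour, db_contours):
          q = fourier_descriptors(query_contour)
          dists = [np.linalg.norm(q - fourier_descriptors(c)) for c in db_contours]
          return np.argsort(dists)               # indices, most similar first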

  18. A METHOD FOR ESTIMATING GAS PRESSURE IN 3013 CONTAINERS USING AN ISP DATABASE QUERY

    SciTech Connect

    Friday, G.; Peppers, L. G.; Veirs, D. K.

    2008-07-31

    The U.S. Department of Energy's Integrated Surveillance Program (ISP) is responsible for the storage and surveillance of plutonium-bearing material. During storage, plutonium-bearing material has the potential to generate hydrogen gas from the radiolysis of adsorbed water. The generation of hydrogen gas is a safety concern, especially when a container is breached within a glove box during destructive evaluation. To address this issue, the DOE established a standard (DOE, 2004) that sets the criteria for the stabilization and packaging of material for up to 50 years. The DOE has now packaged most of its excess plutonium for long-term storage in compliance with this standard. As part of this process, it is desirable to know with reasonable certainty the total maximum pressure of hydrogen and other gases within the 3013 container if safety and compliance with the DOE standards are to be attained. The principal goal of this investigation is to document the method and query used to estimate total (i.e. hydrogen and other gases) gas pressure within a 3013 container based on the material properties and estimated moisture content contained in the ISP database. Initial attempts to estimate hydrogen gas pressure in 3013 containers were based on G-values (hydrogen gas generation per energy input) derived from small-scale samples. These maximum G-values were used to calculate worst-case pressures based on container material weight, assay, wattage, moisture content, container age, and container volume. This paper documents a revised hydrogen pressure calculation that incorporates new surveillance results and includes a component for gases other than hydrogen. The calculation is produced by executing a query of the ISP database. An example of manual mathematical computations from the pressure equation is compared and evaluated with results from the query. Based on the destructive evaluation of 17 containers, the estimated mean absolute pressure was significantly higher
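
    A hedged sketch of the ideal-gas arithmetic behind a worst-case bound of this kind. The G-value handling and every number below are illustrative placeholders, not the ISP query's actual parameters or equation.

      R = 8.314        # J/(mol*K)
      EV_PER_J = 6.242e18
      AVOGADRO = 6.022e23
      SEC_PER_YEAR = 3.156e7

      def h2_moles(g_value, wattage_w, years):
          # G-value: molecules generated per 100 eV of absorbed energy
          energy_ev = wattage_w * years * SEC_PER_YEAR * EV_PER_J
          return (g_value / 100.0) * energy_ev / AVOGADRO

      def worst_case_pressure_pa(n_h2, n_other, free_volume_m3, temp_k=298.0):
          # ideal gas law: P = nRT / V, hydrogen plus an allowance for other gases
          return (n_h2 + n_other) * R * temp_k / free_volume_m3

      n_h2 = h2_moles(g_value=0.2, wattage_w=5.0, years=10)   # deliberately bounding inputs
      print(worst_case_pressure_pa(n_h2, n_other=0.05, free_volume_m3=2e-3))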

  19. Clinical experimentation with aerosol antibiotics: current and future methods of administration

    PubMed Central

    Zarogoulidis, Paul; Kioumis, Ioannis; Porpodis, Konstantinos; Spyratos, Dionysios; Tsakiridis, Kosmas; Huang, Haidong; Li, Qiang; Turner, J Francis; Browning, Robert; Hohenforst-Schmidt, Wolfgang; Zarogoulidis, Konstantinos

    2013-01-01

    Currently almost all antibiotics are administered by the intravenous route. Since several systems and situations require more efficient methods of administration, investigation and experimentation in drug design has produced local treatment modalities. Administration of antibiotics in aerosol form is one of the treatment methods of increasing interest. As the field of drug nanotechnology grows, new molecules have been produced and combined with aerosol production systems. In the current review, we discuss the efficiency of aerosol antibiotic studies along with aerosol production systems. The different parts of the aerosol antibiotic methodology are presented. Additionally, information regarding the drug molecules used is presented and future applications of this method are discussed. PMID:24115836

  20. Comparative Research: An Approach to Teaching Research Methods in Political Science and Public Administration

    ERIC Educational Resources Information Center

    Engbers, Trent A

    2016-01-01

    The teaching of research methods has been at the core of public administration education for almost 30 years. But since 1990, this journal has published only two articles on the teaching of research methods. Given the increasing emphasis on data driven decision-making, greater insight is needed into the best practices for teaching public…

  1. Leveraging Administrative Data for Program Evaluations: A Method for Linking Data Sets Without Unique Identifiers.

    PubMed

    Lorden, Andrea L; Radcliff, Tiffany A; Jiang, Luohua; Horel, Scott A; Smith, Matthew L; Lorig, Kate; Howell, Benjamin L; Whitelaw, Nancy; Ory, Marcia

    2016-06-01

    In community-based wellness programs, Social Security Numbers (SSNs) are rarely collected to encourage participation and protect participant privacy. One measure of program effectiveness includes changes in health care utilization. For the 65 and over population, health care utilization is captured in Medicare administrative claims data. Therefore, methods as described in this article for linking participant information to administrative data are useful for program evaluations where unique identifiers such as SSN are not available. Following fuzzy matching methodologies, participant information from the National Study of the Chronic Disease Self-Management Program was linked to Medicare administrative data. Linking variables included participant name, date of birth, gender, address, and ZIP code. Seventy-eight percent of participants were linked to their Medicare claims data. Linking program participant information to Medicare administrative data where unique identifiers are not available provides researchers with the ability to leverage claims data to better understand program effects.
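
    A minimal fuzzy-matching sketch in the spirit of the approach described; the weights, the threshold, and the field names are assumptions, not those of the published algorithm.

      from difflib import SequenceMatcher

      def similarity(a, b):
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def match_score(participant, claim):
          # indirect identifiers only; weights are illustrative
          return (0.4 * similarity(participant["name"], claim["name"])
                  + 0.2 * (participant["dob"] == claim["dob"])
                  + 0.1 * (participant["gender"] == claim["gender"])
                  + 0.2 * (participant["zip"] == claim["zip"])
                  + 0.1 * similarity(participant["address"], claim["address"]))

      def link(participant, claims, threshold=0.85):
          scored = [(match_score(participant, c), c) for c in claims]
          best_score, best = max(scored, key=lambda t: t[0])
          return best if best_score >= threshold else None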

  2. Development of case statements in academic administration: a proactive method for achieving outcomes.

    PubMed

    Mundt, Mary H

    2005-01-01

    The complex nature of higher education presents academic administrators with unique challenges to communicate vision and strategic direction to a variety of internal and external audiences. The administrator must be prepared to engage in persuasive communication to describe the needs and desired outcomes of the academic unit. This article focuses on the use of the case statement as a communication tool for the nursing academic administrator. The case statement is a form of persuasive communication where a situation or need is presented in the context of the mission, vision, and strategic direction of a group or organization. The aim of the case statement is to enlist support in meeting the identified need. Fundamental assumptions about communicating case statements are described, as well as guidelines for how academic administrators can prepare to use the case statement method.

  3. A comprehensive change detection method for updating the National Land Cover Database to circa 2011

    USGS Publications Warehouse

    Jin, Suming; Yang, Limin; Danielson, Patrick; Homer, Collin G.; Fry, Joyce; Xian, George

    2013-01-01

    The importance of characterizing, quantifying, and monitoring land cover, land use, and their changes has been widely recognized by global and environmental change studies. Since the early 1990s, three U.S. National Land Cover Database (NLCD) products (circa 1992, 2001, and 2006) have been released as free downloads for users. The NLCD 2006 also provides land cover change products between 2001 and 2006. To continue providing updated national land cover and change datasets, a new initiative in developing NLCD 2011 is currently underway. We present a new Comprehensive Change Detection Method (CCDM) designed as a key component for the development of NLCD 2011, together with research results from two exemplar studies. The CCDM integrates spectral-based change detection algorithms, including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extracts change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (CV, RCVMAX, dNBR, and dNDVI) to obtain the changes that occurred between two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change to combine the changes from MIICA and Zone. For NLCD 2011, the improved and enhanced change products obtained from the CCDM provide critical information on the location, magnitude, and direction of potential change areas and serve as a basis for further characterizing land cover changes for the nation. An accuracy assessment from the two study areas shows 100% agreement between the CCDM-mapped no-change class and the reference dataset, and 18% and 82% disagreement for the change class for WRS path/row p22r39 and p33r33, respectively. The strength of the CCDM is that the method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and
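
    A compact sketch of the knowledge-based combination step as described: two spectral change maps merged under a prior likelihood-of-change mask. The specific combination rule here is an assumption, not the published one.

      import numpy as np

      def combine_change(miica, zone, change_likely):
          # miica, zone, change_likely: boolean arrays of equal shape
          agree = miica & zone                             # both detectors fire
          single_with_prior = (miica | zone) & change_likely
          return agree | single_with_prior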

  4. Empirically Derived Consequences: A Data-Based Method for Prescribing Treatments for Destructive Behavior.

    ERIC Educational Resources Information Center

    Fisher, Wayne; And Others

    1994-01-01

    This study used a data-based assessment to identify reinforcers and punishers for successful treatment of two children with severe destructive behaviors. Results suggested that empirically derived consequences may be useful in decreasing destructive behavior when a functional assessment is inconclusive or suggests that internal stimuli are…

  5. An Interactive Iterative Method for Electronic Searching of Large Literature Databases

    ERIC Educational Resources Information Center

    Hernandez, Marco A.

    2013-01-01

    PubMed® is an on-line literature database hosted by the U.S. National Library of Medicine. Containing over 21 million citations for biomedical literature--both abstracts and full text--in the areas of the life sciences, behavioral studies, chemistry, and bioengineering, PubMed® represents an important tool for researchers. PubMed® searches return…

  6. Analysis of Institutionally Specific Retention Research: A Comparison between Survey and Institutional Database Methods

    ERIC Educational Resources Information Center

    Caison, Amy L.

    2007-01-01

    This study empirically explores the comparability of traditional survey-based retention research methodology with an alternative approach that relies on data commonly available in institutional student databases. Drawing on Tinto's [Tinto, V. (1993). "Leaving College: Rethinking the Causes and Cures of Student Attrition" (2nd Ed.), The University…

  7. Chaos Theory: A Scientific Basis for Alternative Research Methods in Educational Administration.

    ERIC Educational Resources Information Center

    Peca, Kathy

    This paper has three purposes. First, it places in scientific perspective the growing acceptance in educational administration research of alternative methods to empiricism by an explication of chaos theory and its assumptions. Second, it demonstrates that chaos theory provides a scientific basis for investigation of complex qualitative variables…

  8. MANUAL OF ADMINISTRATION AND RECORDING METHODS FOR THE STAATS "MOTIVATED LEARNING" READING PROCEDURE.

    ERIC Educational Resources Information Center

    STAATS, ARTHUR W.; AND OTHERS

    THE STAATS MOTIVATED LEARNING READING PROCEDURE IS AN APPLICATION OF AN INTEGRATED-FUNCTIONAL APPROACH TO LEARNING IN THE AREA OF READING. THE METHOD INVOLVES A SYSTEM OF EXTRINSIC REINFORCEMENT WHICH EMPLOYS TOKENS BACKED UP BY A MONETARY REWARD. THE STUDENT REPORTS TO THE PROGRAM ADMINISTRATOR SOME ITEM FOR WHICH HE WOULD LIKE TO WORK, SUCH AS A…

  9. 78 FR 2280 - Federal Housing Administration (FHA) First Look Sales Method Under the Neighborhood Stabilization...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    .... (See, Section II.F. of the July 15, 2010, notice, at 75 FR 41227). The ten percent discount has proved... INFORMATION: I. Background On July 15, 2010, at 75 FR 41225, HUD published a Federal Register notice... URBAN DEVELOPMENT Federal Housing Administration (FHA) First Look Sales Method Under the...

  10. Personality and Student Performance on Evaluation Methods Used in Business Administration Courses

    ERIC Educational Resources Information Center

    Lakhal, Sawsen; Sévigny, Serge; Frenette, Éric

    2015-01-01

    The objective of this study was to verify whether personality (Big Five model) influences performance on the evaluation methods used in business administration courses. A sample of 169 students enrolled in two compulsory undergraduate business courses responded to an online questionnaire. As it is difficult within the same course to assess…

  11. The Use of Quantitative Methods as an Aid to Decision Making in Educational Administration.

    ERIC Educational Resources Information Center

    Alkin, Marvin C.

    Three quantitative methods are outlined, with suggestions for application to particular problem areas of educational administration: (1) The Leontief input-output analysis, incorporating a "transaction table" for displaying relationships between economic outputs and inputs, mainly applicable to budget analysis and planning; (2) linear programing,…
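
    For the Leontief analysis mentioned in item (1), the core computation is x = (I - A)^-1 d, the gross output x each sector must produce to satisfy final demand d; a two-sector toy example with invented coefficients:

      import numpy as np

      A = np.array([[0.2, 0.3],    # technical coefficients: input from sector i per unit output of sector j
                    [0.1, 0.4]])
      d = np.array([100.0, 50.0])  # final demand by sector
      x = np.linalg.solve(np.eye(2) - A, d)
      print(x)                     # gross output required from each sector (~[166.7, 200.0])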

  12. Methods for the Design and Administration of Web-based Surveys

    PubMed Central

    Schleyer, Titus K. L.; Forrest, Jane L.

    2000-01-01

    This paper describes the design, development, and administration of a Web-based survey to determine the use of the Internet in clinical practice by 450 dental professionals. The survey blended principles of a controlled mail survey with data collection through a Web-based database application. The survey was implemented as a series of simple HTML pages and tested with a wide variety of operating environments. The response rate was 74.2 percent. Eighty-four percent of the participants completed the Web-based survey, and 16 percent used e-mail or fax. Problems identified during survey administration included incompatibilities/technical problems, usability problems, and a programming error. The cost of the Web-based survey was 38 percent less than that of an equivalent mail survey. A general formula for calculating breakeven points between electronic and hardcopy surveys is presented. Web-based surveys can significantly reduce turnaround time and cost compared with mail surveys and may enhance survey item completion rates. PMID:10887169
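
    The record says a general breakeven formula is presented but does not reproduce it; a plausible fixed-plus-variable-cost form, offered purely as an assumption, is:

      def breakeven_n(fixed_web, per_resp_web, fixed_mail, per_resp_mail):
          # N* where fixed_web + per_resp_web*N = fixed_mail + per_resp_mail*N
          return (fixed_web - fixed_mail) / (per_resp_mail - per_resp_web)

      print(breakeven_n(2000.0, 0.50, 200.0, 6.00))  # ~327 responses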

  13. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference...) None of the administrator of the TRS User Registration Database, the administrator of the VRS...

  14. Methods and Management: NIH Administrators, Federal Oversight, and the Framingham Heart Study

    PubMed Central

    Patel, Sejal S.

    2012-01-01

    Summary This article explores the 1965 controversy over the Framingham Heart Study in the midst of growing oversight into the management of science at the National Institutes of Health (NIH). It describes how, beginning in the early 1960s, federal overseers demanded that NIH administrators adopt particular management styles in administering programs and how these growing pressures led administrators to favor investigative pursuits that allowed for easy prospective accounting of program payoffs, especially those based on experimental methods designed to examine discrete interventions or outcomes of interest. In light of this changing managerial culture within the NIH, the Framingham study and other population laboratories—with their bases in observation and in open-ended study designs—became harder for NIH administrators to justify and defend. PMID:22643985

  15. Optimal Dose and Method of Administration of Intravenous Insulin in the Management of Emergency Hyperkalemia: A Systematic Review

    PubMed Central

    Harel, Ziv; Kamel, Kamel S.

    2016-01-01

    Background and Objectives Hyperkalemia is a common electrolyte disorder that can result in fatal cardiac arrhythmias. Despite the importance of insulin as a lifesaving intervention in the treatment of hyperkalemia in an emergency setting, there is no consensus on the dose or the method (bolus or infusion) of its administration. Our aim was to review data in the literature to determine the optimal dose and route of administration of insulin in the management of emergency hyperkalemia. Design, Setting, Participants, & Measurements We searched several databases from their date of inception through February 2015 for eligible articles published in any language. We included any study that reported on the use of insulin in the management of hyperkalemia. Results We identified eleven studies. In seven studies, 10 units of regular insulin was administered (bolus in five studies, infusion in two studies), in one study 12 units of regular insulin was infused over 30 minutes, and in three studies 20 units of regular insulin was infused over 60 minutes. The majority of included studies were biased. There was no statistically significant difference in mean decrease in serum potassium (K+) concentration at 60 minutes between studies in which insulin was administered as an infusion of 20 units over 60 minutes and studies in which 10 units of insulin was administered as a bolus (0.79±0.25 mmol/L versus 0.78±0.25 mmol/L, P = 0.98) or studies in which 10 units of insulin was administered as an infusion (0.79±0.25 mmol/L versus 0.39±0.09 mmol/L, P = 0.1). Almost one fifth of the study population experienced an episode of hypoglycemia. Conclusion The limited data available in the literature shows no statistically significant difference between the different regimens of insulin used to acutely lower serum K+ concentration. Accordingly, 10 units of short acting insulin given intravenously may be used in cases of hyperkalemia. Alternatively, 20 units of short acting insulin may be

  16. [Method of traditional Chinese medicine formula design based on 3D-database pharmacophore search and patent retrieval].

    PubMed

    He, Yu-su; Sun, Zhi-yi; Zhang, Yan-ling

    2014-11-01

    By using the pharmacophore model of mineralocorticoid receptor antagonists as a starting point, the experiment studies the method of traditional Chinese medicine formula design for anti-hypertensives. Pharmacophore models were generated by the 3D-QSAR pharmacophore (Hypogen) program of DS3.5, based on a training set composed of 33 mineralocorticoid receptor antagonists. The best pharmacophore model consisted of two hydrogen-bond acceptors, three hydrophobic features and four excluded volumes. Its correlation coefficients for the training and test sets, N, and CAI value were 0.9534, 0.6748, 2.878, and 1.119. According to the database screening, 1700 active compounds from 86 source plants were obtained. Because traditional theory lacks an available anti-hypertensive medication strategy, this article takes advantage of patent retrieval in the world traditional medicine patent database in order to design drug formulae. Finally, two formulae were obtained for anti-hypertensive use. PMID:25850277

  17. A Quality System Database

    NASA Technical Reports Server (NTRS)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  18. Method applied to the background analysis of energy data to be considered for the European Reference Life Cycle Database (ELCD).

    PubMed

    Fazio, Simone; Garraín, Daniel; Mathieux, Fabrice; De la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda

    2015-01-01

    Under the framework of the European Platform on Life Cycle Assessment, the European Reference Life-Cycle Database (ELCD - developed by the Joint Research Centre of the European Commission), provides core Life Cycle Inventory (LCI) data from front-running EU-level business associations and other sources. The ELCD contains energy-related data on power and fuels. This study describes the methods to be used for the quality analysis of energy data for European markets (available in third-party LC databases and from authoritative sources) that are, or could be, used in the context of the ELCD. The methodology was developed and tested on the energy datasets most relevant for the EU context, derived from GaBi (the reference database used to derive datasets for the ELCD), Ecoinvent, E3 and Gemis. The criteria for the database selection were based on the availability of EU-related data, the inclusion of comprehensive datasets on energy products and services, and the general approval of the LCA community. The proposed approach was based on the quality indicators developed within the International Reference Life Cycle Data System (ILCD) Handbook, further refined to facilitate their use in the analysis of energy systems. The overall Data Quality Rating (DQR) of the energy datasets can be calculated by summing up the quality rating (ranging from 1 to 5, where 1 represents very good, and 5 very poor quality) of each of the quality criteria indicators, divided by the total number of indicators considered. The quality of each dataset can be estimated for each indicator, and then compared across the different databases/sources. The results can be used to highlight the weaknesses of each dataset and to guide further improvements that enhance the data quality with regard to the established criteria. This paper describes the application of the methodology to two exemplary datasets, in order to show the potential of the methodological approach. The analysis helps LCA
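
    The DQR arithmetic described (per-indicator ratings averaged over the indicators considered) reduces to a one-liner; the indicator names below are illustrative, not the ILCD's exact list.

      ratings = {"technological": 2, "geographical": 1, "time-related": 3,
                 "completeness": 2, "methodological": 2}   # 1 = very good ... 5 = very poor

      dqr = sum(ratings.values()) / len(ratings)
      print(dqr)  # 2.0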

  19. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.
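
    A toy sketch of the shelf-to-pane mapping the claims describe: the fields placed on the columns and rows shelves partition the data into panes, one per value combination. Field names and data are invented.

      from itertools import product

      def build_panes(records, columns_field, rows_field):
          col_vals = sorted({r[columns_field] for r in records})
          row_vals = sorted({r[rows_field] for r in records})
          return {(c, rw): [r for r in records
                            if r[columns_field] == c and r[rows_field] == rw]
                  for c, rw in product(col_vals, row_vals)}

      data = [{"year": 2010, "region": "East", "sales": 5},
              {"year": 2010, "region": "West", "sales": 3},
              {"year": 2011, "region": "East", "sales": 7}]
      print(build_panes(data, "year", "region"))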

  20. Computer systems and methods for the query and visualization of multidimensional databases

    SciTech Connect

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.

  1. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  2. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  3. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  4. A semantic data dictionary method for database schema integration in CIESIN

    NASA Astrophysics Data System (ADS)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear part of this mission includes providing a link between the various global change data sets, in particular the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a "join" on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system will be overcoming the heterogeneity, which falls into two broad categories. "Database system" heterogeneity involves differences in data models and packages. "Data semantic" heterogeneity involves differences in terminology between disciplines, which translates into data semantic issues and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
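
    The emphysema/air-quality "join" becomes mechanical once a data dictionary maps each source's field names onto shared terms; a toy version, with all field names hypothetical:

      DICTIONARY = {
          "health_db": {"cnty": "county", "yr": "year", "cases": "emphysema_cases"},
          "air_db":    {"fips": "county", "year": "year", "pm10": "air_quality"},
      }

      def normalize(record, source):
          # rename source-specific fields to the dictionary's common terms
          return {DICTIONARY[source].get(k, k): v for k, v in record.items()}

      def semantic_join(health_rows, air_rows):
          air = {(r["county"], r["year"]): r
                 for r in (normalize(a, "air_db") for a in air_rows)}
          for h in (normalize(r, "health_db") for r in health_rows):
              match = air.get((h["county"], h["year"]))
              if match:
                  yield {**h, **match}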

  5. Methods for long-term 17β-estradiol administration to mice.

    PubMed

    Ingberg, E; Theodorsson, A; Theodorsson, E; Strom, J O

    2012-01-01

    Rodent models constitute a cornerstone in the elucidation of the effects and biological mechanisms of 17β-estradiol. However, a thorough assessment of the methods for long-term administration of 17β-estradiol to mice is lacking. The fact that 17β-estradiol has been demonstrated to exert different effects depending on dose emphasizes the need for validated administration regimens. Therefore, 169 female C57BL/6 mice were ovariectomized and administered 17β-estradiol using one of two commonly used subcutaneous methods; slow-release pellets (0.18 mg, 60-day release pellets; 0.72 mg, 90-day release pellets) and silastic capsules (with/without convalescence period, silastic laboratory tubing, inner/outer diameter: 1.575/3.175 mm, filled with a 14 mm column of 36 μg 17β-estradiol/mL sesame oil), or a novel peroral method (56 μg 17β-estradiol/day/kg body weight in the hazelnut cream Nutella). Forty animals were used as ovariectomized and intact controls. Serum samples were obtained weekly for five weeks and 17β-estradiol concentrations were measured using radioimmunoassay. The peroral method resulted in steady concentrations within the physiological range (except on one occasion) and the silastic capsules produced predominantly physiological concentrations, although exceeding the range by at most a factor of three during the first three weeks. The 0.18 mg pellet yielded initial concentrations an order of magnitude higher than the physiological range, which then decreased drastically, and the 0.72 mg pellet produced between 18 and 40 times higher concentrations than the physiological range during the entire experiment. The peroral method and silastic capsules described in this article constitute reliable modes of administration of 17β-estradiol, superior to the widely used commercial pellets. PMID:22137913

  6. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE) at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
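
    A small sketch of two of the compared approaches using scikit-learn; MICE itself is not shown here, and the trait matrix is simulated rather than drawn from the inventory data.

      import numpy as np
      from sklearn.impute import KNNImputer

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))            # rows: trees/plots, cols: 5 traits
      X[rng.random(X.shape) < 0.3] = np.nan    # 30% missing at random

      # species/column-mean baseline: replace each missing value by the column mean
      X_mean = np.where(np.isnan(X), np.nanmean(X, axis=0), X)

      # kNN imputation; extra predictors (climate, forest structure, spatial
      # coordinates) would simply be appended as further columns before fitting
      X_knn = KNNImputer(n_neighbors=5).fit_transform(X)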

  7. Updating the 2001 National Land Cover Database Impervious Surface Products to 2006 using Landsat Imagery Change Detection Methods

    USGS Publications Warehouse

    Xian, G.; Homer, C.

    2010-01-01

    A prototype method was developed to update the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 to a nominal date of 2006. NLCD 2001 is widely used as a baseline for national land cover and impervious cover conditions. To enable the updating of this database in an optimal manner, methods are designed to be accomplished by individual Landsat scene. Using conservative change thresholds based on land cover classes, areas of change and no-change were segregated from change vectors calculated from normalized Landsat scenes from 2001 and 2006. By sampling from NLCD 2001 impervious surface in unchanged areas, impervious surface predictions were estimated for changed areas within an urban extent defined by a companion land cover classification. Methods were developed and tested for national application across six study sites containing a variety of urban impervious surface. Results show the vast majority of impervious surface change associated with urban development was captured, with overall RMSE from 6.86 to 13.12% for these areas. Changes of urban development density were also evaluated by characterizing the categories of change by percentile for impervious surface. This prototype method provides a relatively low cost, flexible approach to generate updated impervious surface using NLCD 2001 as the baseline. © 2010 Elsevier Inc.

  8. MapReduce Implementation of a Hybrid Spectral Library-Database Search Method for Large-Scale Peptide Identification

    SciTech Connect

    Kalyanaraman, Anantharaman; Cannon, William R.; Latt, Benjamin K.; Baxter, Douglas J.

    2011-11-01

    A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours, for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs.
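
    The map/reduce decomposition has this general shape: map tasks score each spectrum against a database partition, and the reduce step keeps the best-scoring peptide per spectrum. The scoring function and record layout are stand-ins, not MSPolygraph's hybrid scorer.

      from collections import defaultdict

      def map_phase(spectra, db_partition, score):
          # emit (spectrum id, (score, peptide)) pairs for one partition
          for s in spectra:
              for candidate in db_partition:
                  yield s["id"], (score(s, candidate), candidate["peptide"])

      def reduce_phase(keyed_scores):
          # keep the best-scoring candidate per spectrum
          best = defaultdict(lambda: (float("-inf"), None))
          for spectrum_id, scored in keyed_scores:
              if scored[0] > best[spectrum_id][0]:
                  best[spectrum_id] = scored
          return dict(best)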

  9. Global Database on Donation and Transplantation: goals, methods and critical issues (www.transplant-observatory.org).

    PubMed

    Mahillo, Beatriz; Carmona, Mar; Álvarez, Marina; Noel, Luc; Matesanz, Rafael

    2013-04-01

    The Global Database on Donation and Transplantation represents the most comprehensive source to date of worldwide data concerning activities in organ donation and transplantation derived from official sources, as well as information on legal and organizational aspects. The objectives are to collect, analyse and disseminate this kind of information for the WHO Member States and to facilitate a network of focal persons in the field of transplantation. They are responsible for providing the legislative and organizational aspects and the annual activity practices through a specific questionnaire. 104 out of the 194 WHO Member States, covering 90% of the global population, contribute to this project. Although we know the numerous limitations and biases that result from different interpretations of the questions, based on cultural factors and language, there is no other similar approach to collecting information on donation and transplantation practices all over the world. The knowledge of demand for transplantation, availability of deceased and living donor organs and the access to transplantation is essential to monitor global trends in transplantation needs and donor organ availability. Information regarding the existence of regulatory oversight is fundamental to ensure the ethical practice of organ donation and transplantation. PMID:23477800

  10. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    PubMed

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used Serum Creatinine method (SrCrM) of GFR estimation also requires the use of online calculators, which cannot be used without internet access. We have developed user-friendly software "GFR estimation software" which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We have used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We have used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs mathematical calculations correctly and is user-friendly. This software also enables storage and easy retrieval of the raw data, patients' information and calculated GFR for further processing and comparison. This is user-friendly software to calculate the GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis.
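
    As one concrete example of the serum creatinine equations listed, the Cockcroft-Gault formula (SCr in mg/dL); Russell's plasma-sampling formula and the other equations are omitted here:

      def cockcroft_gault(age_yr, weight_kg, scr_mg_dl, female):
          # creatinine clearance in mL/min
          crcl = ((140 - age_yr) * weight_kg) / (72.0 * scr_mg_dl)
          return crcl * 0.85 if female else crcl

      print(round(cockcroft_gault(60, 70, 1.1, female=False), 1))  # 70.7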

  11. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods.

    PubMed

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used Serum Creatinine method (SrCrM) of GFR estimation also requires the use of online calculators, which cannot be used without internet access. We have developed user-friendly software "GFR estimation software" which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We have used Microsoft Windows(®) as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access(®) as the database tool to develop this software. We have used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs mathematical calculations correctly and is user-friendly. This software also enables storage and easy retrieval of the raw data, patients' information and calculated GFR for further processing and comparison. This is user-friendly software to calculate the GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis. PMID:26097422

  12. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods

    PubMed Central

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of the complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used Serum Creatinine method (SrCrM) of GFR estimation also requires the use of online calculators, which cannot be used without internet access. We have developed user-friendly software “GFR estimation software” which gives the options to estimate GFR by the plasma sampling method as well as SrCrM. We have used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. We have used Russell's formula for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine have been done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs mathematical calculations correctly and is user-friendly. This software also enables storage and easy retrieval of the raw data, patients' information and calculated GFR for further processing and comparison. This is user-friendly software to calculate the GFR by various plasma sampling methods and blood parameters. This software is also a good system for storing the raw and processed data for future analysis. PMID:26097422

  13. Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases

    PubMed Central

    Elgendi, Mohamed

    2013-01-01

    Current state-of-the-art automatic QRS detection methods show high robustness and almost negligible error rates. In return, these methods are usually based on machine-learning approaches that require substantial computational resources. However, simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. Here, a typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design. PMID:24066054
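
    A sketch of the two-moving-average idea on a squared ECG signal; the window lengths follow typical QRS (~120 ms) and beat (~600 ms) durations, and the preprocessing is simplified relative to the paper's knowledge-based calibration.

      import numpy as np

      def moving_avg(x, w):
          return np.convolve(x, np.ones(w) / w, mode="same")

      def detect_qrs(ecg, fs):
          y = (ecg - np.mean(ecg)) ** 2                    # crude detrend + energy
          ma_qrs = moving_avg(y, max(1, int(0.12 * fs)))   # short window tracks QRS
          ma_beat = moving_avg(y, max(1, int(0.60 * fs)))  # long window tracks the beat
          blocks = ma_qrs > ma_beat                        # candidate QRS regions
          return np.flatnonzero(np.diff(blocks.astype(int)) == 1)  # block onsets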

  14. A data-based comparison of flood frequency analysis methods used in France

    NASA Astrophysics Data System (ADS)

    Kochanek, K.; Renard, B.; Arnaud, P.; Aubert, Y.; Lang, M.; Cipriani, T.; Sauquet, E.

    2014-02-01

    Flood frequency analysis (FFA) aims at estimating quantiles with large return periods for an extreme discharge variable. Many FFA implementations are used in operational practice in France. These implementations range from the estimation of a pre-specified distribution to continuous simulation approaches using a rainfall simulator coupled with a rainfall-runoff model. This diversity of approaches raises questions regarding the limits of each implementation and calls for a nation-wide comparison of their predictive performances. This paper presents the results of a national comparison of the main FFA implementations used in France. More accurately, eight implementations are considered, corresponding to the local, regional and local-regional estimation of Gumbel and Generalized Extreme Value (GEV) distributions, as well as the local and regional versions of a continuous simulation approach. A data-based comparison framework is applied to these eight competitors to evaluate their predictive performances in terms of reliability and stability, using daily flow data from more than 1000 gauging stations in France. Results from this comparative exercise suggest that two implementations dominate their competitors in terms of predictive performances, namely the local version of the continuous simulation approach and the local-regional estimation of a GEV distribution. More specific conclusions include the following: (i) the Gumbel distribution is not suitable for Mediterranean catchments, since this distribution demonstrably leads to an underestimation of flood quantiles; (ii) the local estimation of a GEV distribution is not recommended, because the difficulty in estimating the shape parameter results in frequent predictive failures; (iii) all the purely regional implementations evaluated in this study displayed a quite poor reliability, suggesting that prediction in completely ungauged catchments remains a challenge.
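
    The local GEV estimation step, sketched with scipy on synthetic annual maxima: fit the distribution, then read the T-year quantile from F(q) = 1 - 1/T.

      import numpy as np
      from scipy.stats import genextreme

      annual_max = np.random.default_rng(1).gumbel(loc=100, scale=30, size=60)
      c, loc, scale = genextreme.fit(annual_max)           # shape, location, scale
      for T in (10, 100, 1000):
          q = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
          print(f"{T}-year flood quantile: {q:.1f}")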

  15. A data-based comparison of flood frequency analysis methods used in France

    NASA Astrophysics Data System (ADS)

    Kochanek, K.; Renard, B.; Arnaud, P.; Aubert, Y.; Lang, M.; Cipriani, T.; Sauquet, E.

    2013-09-01

    Many flood frequency analysis (FFA) implementations are used in operational practice in France. These implementations range from the estimation of a pre-specified distribution to continuous simulation approaches using a rainfall simulator coupled with a rainfall-runoff model. This diversity of approaches raises questions regarding the limits of each implementation and calls for a nation-wide comparison of their predictive performances. This paper presents the results of a national comparison of the main FFA implementations used in France. More accurately, eight implementations are considered, corresponding to the local, regional and local-regional estimation of Gumbel and Generalized Extreme Value (GEV) distributions, as well as the local and regional versions of a continuous simulation approach. A data-based comparison framework is applied to these eight competitors to evaluate their predictive performances in terms of reliability and stability, using daily flow data from more than one thousand gauging stations in France. Results from this comparative exercise suggest that two implementations dominate their competitors in terms of predictive performances, namely the local version of the continuous simulation approach and the local-regional estimation of a GEV distribution. More specific conclusions include the following: (i) the Gumbel distribution is not suitable for Mediterranean catchments, since this distribution demonstrably leads to an underestimation of flood quantiles; (ii) the local estimation of a GEV distribution is not recommended, because the difficulty in estimating the shape parameter results in frequent predictive failures; (iii) all the purely regional implementations evaluated in this study displayed a quite poor reliability, suggesting that prediction in completely ungauged catchments remains a challenge.

  16. Merging Children’s Oncology Group Data with an External Administrative Database Using Indirect Patient Identifiers: A Report from the Children’s Oncology Group

    PubMed Central

    Li, Yimei; Hall, Matt; Fisher, Brian T.; Seif, Alix E.; Huang, Yuan-Shung; Bagatell, Rochelle; Getz, Kelly D.; Alonzo, Todd A.; Gerbing, Robert B.; Sung, Lillian; Adamson, Peter C.; Gamis, Alan; Aplenc, Richard

    2015-01-01

    Purpose Clinical trials data from National Cancer Institute (NCI)-funded cooperative oncology group trials could be enhanced by merging with external data sources. Merging without direct patient identifiers would provide additional patient privacy protections. We sought to develop and validate a matching algorithm that uses only indirect patient identifiers. Methods We merged the data from two Phase III Children’s Oncology Group (COG) trials for de novo acute myeloid leukemia (AML) with the Pediatric Health Information Systems (PHIS). We developed a stepwise matching algorithm that used indirect identifiers including treatment site, gender, birth year, birth month, enrollment year and enrollment month. Results from the stepwise algorithm were compared against the direct merge method that used date of birth, treatment site, and gender. The indirect merge algorithm was developed on AAML0531 and validated on AAML1031. Results Of 415 patients enrolled on the AAML0531 trial at PHIS centers, we successfully matched 378 (91.1%) patients using the indirect stepwise algorithm. Comparison to the direct merge result suggested that 362 (95.7%) matches identified by the indirect merge algorithm were concordant with the direct merge result. When validating the indirect stepwise algorithm using the AAML1031 trial, we successfully matched 157 out of 165 patients (95.2%) and 150 (95.5%) of the indirectly merged matches were concordant with the directly merged matches. Conclusions These data demonstrate that patients enrolled on COG clinical trials can be successfully merged with PHIS administrative data using a stepwise algorithm based on indirect patient identifiers. The merged data sets can be used as a platform for comparative effectiveness and cost effectiveness studies. PMID:26606521
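
    The stepwise relaxation of match keys described above can be sketched compactly. The following Python/pandas fragment is a minimal illustration under assumed, hypothetical column names (trial_id, phis_id, site, gender, birth_year, birth_month, enroll_year, enroll_month); the paper's exact step definitions and tie-breaking rules are not reproduced here.

    import pandas as pd

    # Hypothetical key combinations, coarsening from step to step.
    STEPS = [
        ["site", "gender", "birth_year", "birth_month", "enroll_year", "enroll_month"],
        ["site", "gender", "birth_year", "enroll_year", "enroll_month"],
        ["site", "gender", "birth_year", "enroll_year"],
    ]

    def stepwise_match(trial: pd.DataFrame, phis: pd.DataFrame) -> pd.DataFrame:
        """Accept only unambiguous 1:1 matches at each step; unmatched trial
        patients fall through to the next, coarser set of indirect identifiers."""
        accepted = []
        unmatched = trial
        for keys in STEPS:
            cand = unmatched.merge(phis, on=keys)
            # Keep trial patients who hit exactly one PHIS record ...
            cand = cand[cand.groupby("trial_id")["phis_id"].transform("nunique") == 1]
            # ... and PHIS records claimed by exactly one trial patient.
            cand = cand.drop_duplicates("phis_id", keep=False)
            accepted.append(cand[["trial_id", "phis_id"]])
            unmatched = unmatched[~unmatched["trial_id"].isin(cand["trial_id"])]
        return pd.concat(accepted, ignore_index=True)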

  17. A noninvasive method for evaluating portal circulation by administration of ²⁰¹Tl per rectum

    SciTech Connect

    Tonami, N.; Nakajima, K.; Hisada, K.; Tanaka, N.; Kobayashi, K.

    1982-11-01

    A new method for evaluating portal systemic circulation by administration of ²⁰¹Tl per rectum was performed in 13 control subjects and in 65 patients with various liver diseases. In normal controls, the liver was visualized on the 0-5-min image whereas the images of other organs such as the heart, spleen, and lungs were very poor. In patients with liver cirrhosis associated with portal-systemic shunt, and in many other patients with hepatocellular damage, the liver was not so clearly visualized, whereas radioactivity in other organs, especially the heart, became evident. The heart-to-liver uptake ratio at 20 min after administration (H/L ratio) was significantly higher in liver cirrhosis than in normals and patients with chronic hepatitis (p < 0.001). The patients with esophageal varices showed a significantly higher H/L ratio compared with that in cirrhotic patients without esophageal varices (p < 0.001). The H/L ratio also showed a significant difference (p < 0.01) between Stage 1 and Stage 3 esophageal varices. Since there were many other patients with hepatocellular damage who had high H/L ratios similar to those in liver cirrhosis, the effect that hepatocellular damage has on the liver uptake of ²⁰¹Tl is also considered. Our present data suggest that this noninvasive method seems to be useful in evaluating portal-to-systemic shunting.

  18. A noninvasive method for evaluating portal circulation by administration of Tl-201 per rectum

    SciTech Connect

    Tonami, N.; Nakajima, K.; Hisada, K.; Tanaka, N.; Kobayashi, K.

    1982-11-01

    A new method for evaluating portal systemic circulation by administration of Tl-201 per rectum was performed in 13 control subjects and in 65 patients with various liver diseases. In normal controls, the liver was visualized on the 0-5-min image whereas the images of other organs such as the heart, spleen, and lungs were very poor. In patients with liver cirrhosis associated with portal-systemic shunt, and in many other patients with hepatocellular damage, the liver was not so clearly visualized, whereas radioactivity in other organs, especially the heart, became evident. The heart-to-liver uptake ratio at 20 min after administration (H/L ratio) was significantly higher in liver cirrhosis than in normals and patients with chronic hepatitis (p<0.001). The patients with esophageal varices showed a significantly higher H/L ratio compared with that in cirrhotic patients without esophageal varices (p<0.001). The H/L ratio also showed a significant difference (p<0.01) between Stage 1 and Stage 3 esophageal varices. Since there were many other patients with hepatocellular damage who had high H/L ratios similar to those in liver cirrhosis, the effect that hepatocellular damage has on the liver uptake of Tl-201 is also considered. Our present data suggest that this noninvasive method seems to be useful in evaluating portal-to-systemic shunting.

  19. Investigation of photodynamic therapy optimization for port wine stain using modulation of photosensitizer administration methods.

    PubMed

    Wang, Ying; Zuo, Zhaohui; Liao, Xiaohua; Gu, Ying; Qiu, Haixia; Zeng, Jing

    2013-12-01

    To raise the photosensitizer concentration during photodynamic therapy, two new methods of photosensitizer administration were investigated. The first method involves the slow intravenous injection of photosensitizer throughout the first 15 min of irradiation, and the second involves 30 min of fomentation before photosensitizer injection and irradiation. The fluorescence spectra of port wine stain skin were monitored and the therapeutic effect correlated index was calculated with a previously published spectral algorithm. Thirty cases were divided into group A (slow injection of photosensitizer during the first 15 min), group B (fomentation), and group C (control group, traditional injection method), with 10 cases in each group. To analyze the effect of these two new methods, the change in therapeutic effect correlated index values between the two photodynamic therapy sessions was calculated for each patient, and the photodynamic therapy outcomes were compared. The results showed that the change in therapeutic effect correlated index in group A was slightly more pronounced than that in the control group, while the change in group B was similar to that in the control group. Slow injection of photosensitizer during photodynamic therapy therefore has the potential to increase the photosensitizer concentration during treatment, whereas fomentation before photodynamic therapy shows no such potential. New administration methods remain to be explored.

  20. Data Rods: High Speed, Time-Series Analysis of Massive Cryospheric Data Sets Using Object-Oriented Database Methods

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Gallaher, D. W.; Grant, G.; Lv, Q.

    2011-12-01

    Change over time is the central driver of climate change detection. The goal is to diagnose the underlying causes and make projections into the future. In an effort to optimize this process we have developed the Data Rod model, an object-oriented approach that provides the ability to query grid cell changes and their relationships to neighboring grid cells through time. The time series data is organized in time-centric structures called "data rods." A single data rod can be pictured as the multi-spectral data history at one grid cell: a vertical column of data through time. This resolves the long-standing problem of managing time-series data and opens new possibilities for temporal data analysis. This structure enables rapid time-centric analysis at any grid cell across multiple sensors and satellite platforms. Collections of data rods can be spatially and temporally filtered, statistically analyzed, and aggregated for use with pattern matching algorithms. Likewise, individual image pixels can be extracted to generate multi-spectral imagery at any spatial and temporal location. The Data Rods project has created a series of prototype databases to store and analyze massive datasets containing multi-modality remote sensing data. Using object-oriented technology, this method overcomes the operational limitations of traditional relational databases. To demonstrate the speed and efficiency of time-centric analysis using the Data Rods model, we have developed a sea ice detection algorithm. This application determines the concentration of sea ice in a small spatial region across a long temporal window. If performed using traditional analytical techniques, this task would typically require extensive data downloads and spatial filtering. Using Data Rods databases, the exact spatio-temporal data set is immediately available: no extraneous data is downloaded, and all selected data querying occurs transparently on the server side. Moreover, fundamental statistical
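
    To make the data-rod idea concrete, here is a toy in-memory sketch in Python: one time-ordered column of observations per grid cell, with a time-window query against a single cell. The actual project used object-oriented database technology rather than in-memory structures, so this is illustrative only.

    from bisect import bisect_left, bisect_right
    from collections import defaultdict

    class DataRods:
        """Toy data-rod store: one time-ordered column of values per grid cell."""

        def __init__(self):
            self._rods = defaultdict(list)  # (row, col) -> sorted [(time, value), ...]

        def insert(self, cell, time, value):
            rod = self._rods[cell]
            rod.insert(bisect_left(rod, (time,)), (time, value))

        def query(self, cell, t0, t1):
            """All observations at one grid cell within [t0, t1]."""
            rod = self._rods[cell]
            return rod[bisect_left(rod, (t0,)):bisect_right(rod, (t1, float("inf")))]

    rods = DataRods()
    rods.insert((10, 20), "2007-01-02", 0.47)
    rods.insert((10, 20), "2007-01-01", 0.42)
    print(rods.query((10, 20), "2007-01-01", "2007-01-31"))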

  1. Comparing subjective image quality measurement methods for the creation of public databases

    NASA Astrophysics Data System (ADS)

    Redi, Judith; Liu, Hantao; Alers, Hani; Zunino, Rodolfo; Heynderickx, Ingrid

    2010-01-01

    The Single Stimulus (SS) method is often chosen to collect subjective data testing no-reference objective metrics, as it is straightforward to implement and well standardized. At the same time, it exhibits some drawbacks: spread between different assessors is relatively large, and the measured ratings depend on the quality range spanned by the test samples, hence the results from different experiments cannot easily be merged. The Quality Ruler (QR) method has been proposed to overcome these inconveniences. This paper compares the performance of the SS and QR method for pictures impaired by Gaussian blur. The research goal is, on one hand, to analyze the advantages and disadvantages of both methods for quality assessment and, on the other, to make quality data of blur impaired images publicly available. The obtained results show that the confidence intervals of the QR scores are narrower than those of the SS scores. This indicates that the QR method enhances consistency across assessors. Moreover, QR scores exhibit a higher linear correlation with the distortion applied. In summary, for the purpose of building datasets of subjective quality, the QR approach seems promising from the viewpoint of both consistency and repeatability.

  2. Modelos para la Unificacion de Conceptos, Metodos y Procedimientos Administrativos (Guidelines for Uniform Administrative Concepts, Methods, and Procedures).

    ERIC Educational Resources Information Center

    Serrano, Jorge A., Ed.

    These documents, discussed and approved during the first meeting of the university administrators affiliated with the Federation of Private Universities of Central America and Panama (FUPAC), seek to establish uniform administrative concepts, methods, and procedures, particularly with respect to budgetary matters. The documents define relevant…

  3. La Administradora: A Mixed Methods Study of the Resilience of Mexican American Women Administrators at Hispanic Serving Institutions

    ERIC Educational Resources Information Center

    Sanchez-Zamora, Sabrina Suzanne

    2013-01-01

    This mixed methods study explored the resilience of Mexican American women administrators at Hispanic Serving Institutions (HSIs). The women administrators that were considered in this study included department chairs, deans, and vice presidents in a four-year public HSI. There is an underrepresentation of Mexican American women in higher…

  4. Comparative study of multimodal intra-subject image registration methods on a publicly available database

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Ghayoor, Ali; Johnson, Hans J.; Sonka, Milan

    2016-03-01

    This work reports on a comparative study of five manual and automated methods for intra-subject pair-wise registration of images from different modalities. The study includes a variety of inter-modal image registrations (MR-CT, PET-CT, PET-MR) utilizing different methods, including two manual point-based techniques using rigid and similarity transformations, one automated point-based approach based on the Iterative Closest Point (ICP) algorithm, and two automated intensity-based methods using mutual information (MI) and normalized mutual information (NMI). These techniques were employed for inter-modal registration of brain images of 9 subjects from a publicly available dataset, and the results were evaluated qualitatively via checkerboard images and quantitatively using root mean square error and MI criteria. In addition, for each inter-modal registration, a paired t-test was performed on the quantitative results in order to find any significant difference between the results of the studied registration techniques.
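
    As an illustration of the intensity-based criterion used here, the following Python sketch computes mutual information from a joint histogram of two co-registered images. It is a standard textbook formulation, not the specific implementation compared in the study; a registration routine would maximize this value over transform parameters.

    import numpy as np

    def mutual_information(a, b, bins=32):
        """Mutual information of two images, from their joint intensity histogram."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0  # skip empty bins to avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(0)
    mr = rng.random((64, 64))
    ct = 0.5 * mr + 0.5 * rng.random((64, 64))  # partially dependent "second modality"
    print(mutual_information(mr, ct))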

  5. A new database of source time functions (STFs) extracted from the SCARDEC method

    NASA Astrophysics Data System (ADS)

    Vallée, Martin; Douet, Vincent

    2016-08-01

    The SCARDEC method (Vallée et al., 2011) offers natural access to earthquake source time functions (STFs), together with the first-order earthquake source parameters (seismic moment, depth and focal mechanism). This article first presents some new approaches and related implementations developed in order to automatically provide broadband STFs with the SCARDEC method, both for moderate and very large earthquakes. The updated method has been applied to all earthquakes above magnitude 5.8 contained in the NEIC-PDE catalog since 1992, providing a new consistent catalog of source parameters associated with STFs. This represents today a large catalog (2782 events on 2014/12/31) that we plan to update on a regular basis. It is made available through a web interface whose functionalities are described here.

  6. Open Rotor Tone Shielding Methods for System Noise Assessments Using Multiple Databases

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Thomas, Russell H.; Lopes, Leonard V.; Burley, Casey L.; Van Zante, Dale E.

    2014-01-01

    Advanced aircraft designs such as the hybrid wing body, in conjunction with open rotor engines, may allow for significant improvements in the environmental impact of aviation. System noise assessments allow for the prediction of the aircraft noise of such designs while they are still in the conceptual phase. Due to the significant computational requirements of the prediction methods, these assessments still rely on experimental data to account for the interaction of the open rotor tones with the hybrid wing body airframe. Recently, multiple aircraft system noise assessments have been conducted for hybrid wing body designs with open rotor engines. These assessments utilized measured benchmark data from a Propulsion Airframe Aeroacoustic interaction effects test. The measured data demonstrated airframe shielding of open rotor tonal and broadband noise with legacy F7/A7 open rotor blades. Two methods are proposed for improving the use of these data on general open rotor designs in a system noise assessment. The first, direct difference, is a simple octave band subtraction which does not account for tone distribution within the rotor acoustic signal. The second, tone matching, is a higher-fidelity process incorporating additional physical aspects of the problem, where isolated rotor tones are matched by their directivity to determine tone-by-tone shielding. A case study is conducted with the two methods to assess how well each reproduces the measured data and identify the merits of each. Both methods perform similarly for system level results and successfully approach the experimental data for the case study. The tone matching method provides additional tools for assessing the quality of the match to the data set. Additionally, a potential path to improve the tone matching method is provided.
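
    The direct difference method amounts to band-by-band arithmetic. The Python sketch below illustrates the idea under invented octave-band levels in dB; it is not the assessment code itself. The per-band shielding delta measured in the benchmark test is simply added to the new design's isolated spectrum, which is precisely why tone distribution within a band is not accounted for.

    import numpy as np

    def direct_difference(isolated_new, isolated_ref, installed_ref):
        """Apply measured octave-band shielding to a new rotor spectrum (levels in dB).

        delta = installed_ref - isolated_ref is the per-band attenuation from the
        benchmark test; adding it band-by-band ignores where tones fall in a band.
        """
        delta = np.asarray(installed_ref) - np.asarray(isolated_ref)
        return np.asarray(isolated_new) + delta

    # Three invented octave-band levels: new isolated rotor, reference isolated, reference installed.
    print(direct_difference([100.0, 98.0, 95.0], [102.0, 99.0, 97.0], [96.0, 94.0, 93.0]))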

  7. Rate of bleeding-related episodes in adult patients with primary immune thrombocytopenia: a retrospective cohort study using a large administrative medical claims database in the US

    PubMed Central

    Altomare, Ivy; Cetin, Karynsa; Wetten, Sally; Wasser, Jeffrey S

    2016-01-01

    Background Immune thrombocytopenia (ITP) is a rare disorder characterized by low platelet counts and an increased tendency to bleed. The goal of ITP therapy is to treat or prevent bleeding. Actual rates of bleeding are unknown. Clinical trial data may not reflect real-world bleeding rates because of the inclusion of highly refractory patients and more frequent use of rescue therapy. Methods We used administrative medical claims data in the US to examine the occurrence of bleeding-related episodes (BREs) – a composite end point including bleeding and/or rescue therapy use – in adults diagnosed with primary ITP (2008–2012). BRE rates were calculated overall and by ITP phase and splenectomy status. Patients were followed from ITP diagnosis until death, disenrollment from the health plan, or June 30, 2013, whichever came first. Results We identified 6,651 adults diagnosed with primary ITP over the study period (median age: 53 years; 59% female). During 13,064 patient-years of follow-up, 3,768 patients (57%) experienced ≥1 BRE (1.08 BREs per patient-year; 95% confidence interval: 1.06–1.10). The majority (58%) of BREs consisted of rescue therapy use only. Common bleeding types were gastrointestinal hemorrhage, hematuria, ecchymosis, and epistaxis. Intracranial hemorrhage was reported in 74 patients (1%). Just over 7% of patients underwent splenectomy. Newly diagnosed and splenectomized patients had elevated BRE rates. Conclusion We provide current real-world estimates of BRE rates in adults with primary ITP. The majority of ITP patients experienced ≥1 BRE, and over half were defined by rescue therapy use alone. This demonstrates the importance of examining both bleeding and rescue therapy use to fully assess disease burden. PMID:27382333
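
    For readers wanting to reproduce the form of the headline estimate, the sketch below computes an incidence rate per patient-year with a normal-approximation confidence interval. The event count is back-calculated from the abstract's rate and person-time, and the authors' exact CI method is not stated, so treat this as illustrative only.

    import math

    def rate_with_ci(events, person_years, z=1.96):
        """Incidence rate per person-year with a normal-approximation 95% CI."""
        rate = events / person_years
        se = math.sqrt(events) / person_years
        return rate, rate - z * se, rate + z * se

    # Event count back-calculated from the abstract (~1.08 BREs/patient-year over
    # 13,064 patient-years); prints approximately (1.08, 1.06, 1.10).
    print(rate_with_ci(round(1.08 * 13064), 13064))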

  8. The EUROCARE-4 database on cancer survival in Europe: data standardisation, quality control and methods of statistical analysis.

    PubMed

    De Angelis, Roberta; Francisci, Silvia; Baili, Paolo; Marchesi, Francesca; Roazzi, Paolo; Belot, Aurélien; Crocetti, Emanuele; Pury, Pierre; Knijn, Arnold; Coleman, Michel; Capocaccia, Riccardo

    2009-04-01

    This paper describes the collection, standardisation and checking of cancer survival data included in the EUROCARE-4 database. Methods for estimating relative survival are also described. Incidence and vital status data on newly diagnosed European cancer cases were received from 93 cancer registries in 23 countries, covering 151,400,000 people (35% of the participating country population). The third revision of the International Classification of Diseases for Oncology was used to specify tumour topography and morphology. Records were extensively checked for consistency and compatibility using multiple routines; flagged records were sent back for correction. An algorithm assigned standardised sequence numbers to multiple cancers. Only first malignant cancers were used to estimate relative survival from registry, year, sex and age-specific life tables. Age-adjusted and Europe-wide survival were also estimated. The database contains 13,814,573 cases diagnosed in 1978-2002; 92% malignant. A negligible proportion of records was excluded for major errors. Of 5,753,934 malignant adult cases diagnosed in 1995-2002, 5.3% were second or later cancers, 2.7% were known from death certificates only and 0.4% were discovered at autopsy. The remaining 5,278,670 cases entered the survival analyses, 90% of these had microscopic confirmation and 1.3% were censored alive after less than five years' follow-up. These indicators suggest satisfactory data quality that has improved since EUROCARE-3.
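
    The core quantity estimated from the life tables is the ratio of observed to expected survival. The toy Python sketch below shows that ratio only, with made-up numbers; EUROCARE's actual estimators additionally handle censoring, cohort definitions and age adjustment.

    def relative_survival(observed, expected):
        """Relative survival: observed cohort survival divided by the survival
        expected from matched general-population life tables."""
        return [o / e for o, e in zip(observed, expected)]

    # Hypothetical cumulative survival proportions at years 1-5.
    print(relative_survival([0.80, 0.70, 0.64, 0.60, 0.57],
                            [0.98, 0.96, 0.94, 0.92, 0.90]))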

  9. Analysis and comparison of 2D fingerprints: insights into database screening performance using eight fingerprint methods.

    PubMed

    Duan, Jianxin; Dixon, Steven L; Lowrie, Jeffrey F; Sherman, Woody

    2010-09-01

    Virtual screening is a widely used strategy in modern drug discovery and 2D fingerprint similarity is an important tool that has been successfully applied to retrieve active compounds from large datasets. However, it is not always straightforward to select an appropriate fingerprint method and associated settings for a given problem. Here, we applied eight different fingerprint methods, as implemented in the new cheminformatics package Canvas, on a well-validated dataset covering five targets. The fingerprint methods include Linear, Dendritic, Radial, MACCS, MOLPRINT2D, Pairwise, Triplet, and Torsion. We find that most fingerprints have similar retrieval rates on average; however, each has special characteristics that distinguish its performance on different query molecules and ligand sets. For example, some fingerprints exhibit a significant ligand size dependency whereas others are more robust with respect to variations in the query or active compounds. In cases where little information is known about the active ligands, MOLPRINT2D fingerprints produce the highest average retrieval of actives. When multiple queries are available, we find that a fingerprint averaged over all query molecules is generally superior to fingerprints derived from single queries. Finally, a complementarity metric is proposed to determine which fingerprint methods can be combined to improve screening results.
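
    Retrieval experiments like these typically rank a database by fingerprint similarity to the query; the usual comparison metric is the Tanimoto coefficient, sketched below for set-based fingerprints. This is the generic similarity calculation, not Canvas-specific code.

    def tanimoto(fp_a, fp_b):
        """Tanimoto (Jaccard) similarity between two set-based fingerprints."""
        if not fp_a and not fp_b:
            return 0.0
        return len(fp_a & fp_b) / len(fp_a | fp_b)

    # Two hypothetical feature-bit sets sharing 3 of 5 distinct features: 0.6.
    print(tanimoto({1, 5, 9, 12}, {1, 5, 9, 30}))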

  10. A Comparison of Bibliographic Instruction Methods on CD-ROM Databases.

    ERIC Educational Resources Information Center

    Davis, Dorothy F.

    1993-01-01

    Describes a study of four methods used to instruct college students on searching PsycLIT on CD-ROM: (1) lecture/demonstration; (2) lecture/demonstration using a liquid crystal display (LCD) panel; (3) video; and (4) a computer-based tutorial. Performance data are analyzed, and factors to consider when developing a CD-ROM bibliographic instruction…

  11. Computer networks for financial activity management, control and statistics of databases of economic administration at the Joint Institute for Nuclear Research

    NASA Astrophysics Data System (ADS)

    Tyupikova, T. V.; Samoilov, V. N.

    2003-04-01

    Modern information technologies drive the natural sciences toward further development, but this development must be accompanied by an evolving infrastructure that creates favorable conditions for science and a financial base, and that allows new research to be documented and legally protected. Any scientific development entails accounting and legal protection. In this report, we consider a new direction in the software, organization and control of shared databases, using as an example the electronic document handling system that operates in several departments of the Joint Institute for Nuclear Research.

  12. Description of two waterborne disease outbreaks in France: a comparative study with data from cohort studies and from health administrative databases.

    PubMed

    Mouly, D; Van Cauteren, D; Vincent, N; Vaissiere, E; Beaudeau, P; Ducrot, C; Gallay, A

    2016-02-01

    Waterborne disease outbreaks (WBDO) of acute gastrointestinal illness (AGI) are a public health concern in France. Their occurrence is probably underestimated due to the lack of a specific surveillance system. The French health insurance database provides an interesting opportunity to improve the detection of these events. A specific algorithm to identify AGI cases from drug payment reimbursement data in the health insurance database has been previously developed. The purpose of our comparative study was to retrospectively assess the ability of the health insurance data to describe WBDO. Data from the health insurance database were compared with the data from cohort studies conducted in two WBDO in 2010 and 2012. The temporal distribution of cases, the day of the peak and the duration of the epidemic, as measured using the health insurance data, were similar to the data from one of the two cohort studies. However, the health insurance data accounted for only 54 cases, compared to the estimated 252 cases in the cohort study. The accuracy of using health insurance data to describe WBDO depends on the medical consultation rate in the affected population; since not everyone with AGI consults a physician, data analysis underestimates the total number of AGI cases. However, this data source can be considered for the development of a WBDO detection system in France, given its ability to describe an epidemic signal.

  13. A new database of Source Time Functions (STFs) extracted from the SCARDEC method

    NASA Astrophysics Data System (ADS)

    Vallée, Martin; Douet, Vincent

    2016-04-01

    The SCARDEC method (Vallée et al., 2011) offers natural access to earthquake source time functions (STFs), together with the first-order earthquake source parameters (seismic moment, depth and focal mechanism). We first present here some new approaches and related implementations developed in order to automatically provide broadband STFs with the SCARDEC method, both for moderate (down to magnitude 5.8) and very large earthquakes. The updated method has been applied to all the earthquakes since 1992, providing a new consistent catalog of source parameters associated with STFs. Applications are expected to be varied, as STFs offer quantitative information on the source process, helping fundamental research on earthquake mechanics or more applied studies related to seismic hazard. On the other hand, they can also be seen as a tool for Earth structure analyses, where the excitation of the medium at the source has to be known. The catalog now contains 2889 events (including earthquakes till 2014/12/31), and we plan to update it on a regular basis. It is made available through a web interface whose functionalities are described here.

  14. Facilitators and Barriers to Safe Medication Administration to Hospital Inpatients: A Mixed Methods Study of Nurses’ Medication Administration Processes and Systems (the MAPS Study)

    PubMed Central

    McLeod, Monsey; Barber, Nicholas; Franklin, Bryony Dean

    2015-01-01

    Context Research has documented the problem of medication administration errors and their causes. However, little is known about how nurses administer medications safely or how existing systems facilitate or hinder medication administration; this represents a missed opportunity for implementation of practical, effective, and low-cost strategies to increase safety. Aim To identify system factors that facilitate and/or hinder successful medication administration focused on three inter-related areas: nurse practices and workarounds, workflow, and interruptions and distractions. Methods We used a mixed-methods ethnographic approach involving observational fieldwork, field notes, participant narratives, photographs, and spaghetti diagrams to identify system factors that facilitate and/or hinder successful medication administration in three inpatient wards, each from a different English NHS trust. We supplemented this with quantitative data on interruptions and distractions among other established medication safety measures. Findings Overall, 43 nurses on 56 drug rounds were observed. We identified a median of 5.5 interruptions and 9.6 distractions per hour. We identified three interlinked themes that facilitated successful medication administration in some situations but which also acted as barriers in others: (1) system configurations and features, (2) behaviour types among nurses, and (3) patient interactions. Some system configurations and features acted as a physical constraint for parts of the drug round, however some system effects were partly dependent on nurses’ inherent behaviour; we grouped these behaviours into ‘task focused’, and ‘patient-interaction focused’. The former contributed to a more streamlined workflow with fewer interruptions while the latter seemed to empower patients to act as a defence barrier against medication errors by being: (1) an active resource of information, (2) a passive information resource, and/or (3) a

  15. Identification of crystals deposited in brain and kidney after xylitol administration by biochemical, histochemical, and electron diffraction methods

    PubMed Central

    Evans, G. W.; Phillips, Gael; Mukherjee, T. M.; Snow, M. R.; Lawrence, J. R.; Thomas, D. W.

    1973-01-01

    The positive identification of crystals of calcium oxalate occurring in brain and kidney after xylitol administration is described. Biochemical, histochemical, conventional light and electron microscopical methods, including selected area electron diffraction, were used to characterize the crystals. PMID:4693896

  16. Business Architecture Development at Public Administration - Insights from Government EA Method Engineering Project in Finland

    NASA Astrophysics Data System (ADS)

    Valtonen, Katariina; Leppänen, Mauri

    Governments worldwide are concerned with the efficient production of services for their customers. To improve the quality of services and to make service production more efficient, information and communication technology (ICT) is widely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning which embraces issues from the strategic to the technological level. In this planning the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to a large number of stakeholders, a wide set of customers, and solid and hierarchical structures of organizations. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and reflect the issues that emerged against the current e-government literature. We bring forth insights into and suggestions for government BA and its development.

  17. Expanded image database of pistachio x-ray images and classification by conventional methods

    NASA Astrophysics Data System (ADS)

    Keagy, Pamela M.; Schatzki, Thomas F.; Le, Lan Chau; Casasent, David P.; Weber, David

    1996-12-01

    In order to develop sorting methods for insect damaged pistachio nuts, a large data set of pistachio x-ray images (6,759 nuts) was created. Both film and linescan sensor images were acquired; the nuts were dissected and their internal conditions coded using the U.S. Grade standards and definitions for pistachios. A subset of 1199 good and 686 insect damaged nuts was used to calculate and test discriminant functions. Statistical parameters of image histograms were evaluated for inclusion by forward stepwise discrimination. Using three variables in the discriminant function, 89% of test set nuts were correctly identified. Comparable data for 6 human subjects ranged from 67 to 92%. If the loss of good nuts is held to 1% by requiring a high probability to discard a nut as insect damaged, approximately half of the insect damage present in clean pistachio nuts may be detected and removed by x-ray inspection.
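
    Forward stepwise discrimination of the kind described can be sketched with scikit-learn, pairing a linear discriminant classifier with forward feature selection down to three variables. The data below are random stand-ins for the histogram statistics, so the printed accuracy is meaningless; only the workflow mirrors the paper.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import train_test_split

    # Random stand-ins for per-nut histogram statistics (features) and labels
    # (0 = good, 1 = insect damaged); sizes echo the paper's 1199 + 686 subset.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1885, 10))
    y = (rng.random(1885) < 686 / 1885).astype(int)

    lda = LinearDiscriminantAnalysis()
    selector = SequentialFeatureSelector(lda, n_features_to_select=3, direction="forward")
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    selector.fit(X_tr, y_tr)
    lda.fit(selector.transform(X_tr), y_tr)
    print("test accuracy:", lda.score(selector.transform(X_te), y_te))
    # Holding good-nut loss near 1% means discarding a nut only when
    # lda.predict_proba(...)[:, 1] exceeds a high cutoff.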

  18. Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database

    PubMed Central

    Turner, Erick H.; Knoepflmacher, Daniel; Shapley, Lee

    2012-01-01

    Background Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration (FDA), can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy. Methods and Findings FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection (risperidone LAI), and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four (17%) were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials (0.23, 95% confidence interval 0.07 to 0.39) was less than half that for the published trials (0.47, 95% confidence interval 0.40 to 0.54), a difference that was significant. Conclusions The magnitude of publication bias found for antipsychotics was less than that found

  19. Data-based methods and algorithms for the analysis of sandbar behavior with exogenous variables

    NASA Astrophysics Data System (ADS)

    Múnera, Sebastián; Osorio, Andrés F.; Velásquez, Juan D.

    2014-11-01

    Sandbars are natural features generated in nearshore zones by the interaction between the sea and the coast. The short-term processes that drive sandbar behavior are waves and sediment transport. The interaction between waves and the coast is highly nonlinear and, traditionally, process-based models (e.g., evolution models) have been used for modelling and analyzing sandbar behavior in the short term. However, medium- to long-term predictions are not always possible with these models due to their exponential error accumulation. Data-driven models emerge as an alternative to process-based models: they do not require insight into the physics of the system, but instead extract knowledge from patterns found in the data. In this paper, we apply two data-driven techniques, the EMD (empirical mode decomposition) method and ARNN (autoregressive neural networks), to sandbar and wave time series from the coast of Cartagena de Indias, Colombia. The former is used for analyzing the relationship between sandbar and wave conditions in a graphical way; the latter is used for deriving nonlinear simple/partial cross/auto-correlation coefficients. Evidence of nonlinear dependencies is detected between the present state of sandbar location and the past states of wave conditions.

  20. Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database.

    PubMed

    van Ginneken, Bram; Stegmann, Mikkel B; Loog, Marco

    2006-02-01

    The task of segmenting the lung fields, the heart, and the clavicles in standard posterior-anterior chest radiographs is considered. Three supervised segmentation methods are compared: active shape models, active appearance models and a multi-resolution pixel classification method that employs a multi-scale filter bank of Gaussian derivatives and a k-nearest-neighbors classifier. The methods have been tested on a publicly available database of 247 chest radiographs, in which all objects have been manually segmented by two human observers. A parameter optimization for active shape models is presented, and it is shown that this optimization improves performance significantly. It is demonstrated that the standard active appearance model scheme performs poorly, but large improvements can be obtained by including areas outside the objects into the model. For lung field segmentation, all methods perform well, with pixel classification giving the best results: a paired t-test showed no significant performance difference between pixel classification and an independent human observer. For heart segmentation, all methods perform comparably, but significantly worse than a human observer. Clavicle segmentation is a hard problem for all methods; best results are obtained with active shape models, but human performance is substantially better. In addition, several hybrid systems are investigated. For heart segmentation, where the separate systems perform comparably, significantly better performance can be obtained by combining the results with majority voting. As an application, the cardio-thoracic ratio is computed automatically from the segmentation results. Bland and Altman plots indicate that all methods perform well when compared to the gold standard, with confidence intervals from pixel classification and active appearance modeling very close to those of a human observer. All results, including the manual segmentations, have been made publicly available to facilitate
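
    Of the three methods, the pixel classifier is the most direct to sketch: per-pixel features from a multi-scale Gaussian-derivative filter bank, fed to a k-nearest-neighbors classifier. The Python sketch below uses synthetic stand-in images and assumed parameter choices (scales, k), not those of the paper.

    import numpy as np
    from scipy import ndimage
    from sklearn.neighbors import KNeighborsClassifier

    def filter_bank(image, sigmas=(1, 2, 4)):
        """Per-pixel features: Gaussian derivatives at several scales."""
        feats = []
        for s in sigmas:
            for order in [(0, 0), (0, 1), (1, 0), (0, 2), (2, 0), (1, 1)]:
                feats.append(ndimage.gaussian_filter(image, sigma=s, order=order))
        return np.stack(feats, axis=-1).reshape(-1, len(feats))

    # Synthetic stand-ins for a labelled training radiograph and a test image.
    rng = np.random.default_rng(0)
    train_img = rng.random((64, 64))
    train_mask = (ndimage.gaussian_filter(train_img, 4) > 0.5).astype(int)  # fake structure labels
    test_img = rng.random((64, 64))

    knn = KNeighborsClassifier(n_neighbors=15)
    knn.fit(filter_bank(train_img), train_mask.ravel())
    pred_mask = knn.predict(filter_bank(test_img)).reshape(test_img.shape)
    print(pred_mask.mean())  # fraction of pixels labelled as the structure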

  1. Djeen (Database for Joomla!’s Extensible Engine): a research information management system for flexible multi-technology project administration

    PubMed Central

    2013-01-01

    Background With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. Findings We developed Djeen (Database for Joomla!’s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data with the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Conclusion Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material. PMID:23742665

  2. Threshold detection for the generalized Pareto distribution: Review of representative methods and application to the NOAA NCDC daily rainfall database

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Mamalakis, Antonios; Puliga, Michelangelo; Deidda, Roberto

    2016-04-01

    In extreme excess modeling, one fits a generalized Pareto (GP) distribution to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as nonparametric methods that are intended to locate the changing point between extreme and nonextreme regions of the data, graphical methods where one studies the dependence of GP-related metrics on the threshold level u, and Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. Here we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 over-centennial daily rainfall records from the NOAA-NCDC database. We find that nonparametric methods are generally not reliable, while methods that are based on GP asymptotic properties lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e., on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on preasymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results.
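
    A minimal version of the graphical (threshold-stability) approach reviewed above can be written with scipy: fit the GP distribution to excesses over a range of candidate thresholds and look for the region where the shape and modified-scale estimates stabilize. The data here are synthetic and the parameter values illustrative, not taken from the study.

    import numpy as np
    from scipy import stats

    def fit_gpd_above(data, u):
        """Fit a GP distribution to excesses above threshold u (location pinned at 0)."""
        excesses = data[data > u] - u
        shape, _, scale = stats.genpareto.fit(excesses, floc=0)
        return shape, scale, excesses.size

    # Synthetic "daily rainfall" tail; real records would be read from the NCDC files.
    rain = stats.genpareto.rvs(0.15, scale=8, size=20000, random_state=1)
    for u in [2, 4, 6, 8, 10, 12]:
        shape, scale, n = fit_gpd_above(rain, u)
        # Above a valid threshold, the shape and the modified scale (scale - shape*u)
        # should stabilize across u.
        print(f"u={u:>2}  n={n:>5}  shape={shape:+.3f}  modified scale={scale - shape * u:.2f}")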

  3. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  4. Updating the 2001 National Land Cover Database land cover classification to 2006 by using Landsat imagery change detection methods

    USGS Publications Warehouse

    Xian, G.; Homer, C.; Fry, J.

    2009-01-01

    The recent release of the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001, which represents the nation's land cover status based on a nominal date of 2001, is widely used as a baseline for national land cover conditions. To enable the updating of this land cover information in a consistent and continuous manner, a prototype method was developed to update land cover by an individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season in 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, land cover classifications at the full NLCD resolution for 2006 areas of change were completed by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain several metropolitan areas including Seattle, Washington; San Diego, California; Sioux Falls, South Dakota; Jackson, Mississippi; and Manchester, New Hampshire. Results from the five study areas show that the vast majority of land cover change was captured and updated with overall land cover classification accuracies of 78.32%, 87.5%, 88.57%, 78.36%, and 83.33% for these areas. The method optimizes mapping efficiency and has the potential to provide users a flexible method to generate updated land cover at national and regional scales by using NLCD 2001 as the baseline. © 2009 Elsevier Inc.
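
    The change-vector step in the protocol reduces to per-pixel arithmetic on the normalized image pair, as in the short Python sketch below (synthetic arrays and a single illustrative threshold; the actual procedure sets conservative class-specific thresholds per Anderson Level I class).

    import numpy as np

    def change_mask(img_2001, img_2006, threshold):
        """Per-pixel change-vector magnitude between two co-registered, normalized
        multi-band images (bands last), thresholded into change/no-change."""
        diff = img_2006.astype(float) - img_2001.astype(float)
        magnitude = np.sqrt((diff ** 2).sum(axis=-1))
        return magnitude > threshold

    rng = np.random.default_rng(0)
    t2001 = rng.random((100, 100, 6))
    t2006 = t2001.copy()
    t2006[40:60, 40:60] += 0.5  # a synthetic patch of land cover change
    print(change_mask(t2001, t2006, threshold=0.8).sum(), "changed pixels")
    # Unchanged pixels keep their NLCD 2001 label; changed pixels are reclassified
    # using training data sampled from the unchanged areas.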

  5. Administrative Support and Alternatively Certified Teachers: A Mixed Methods Study on New Teacher Support and Retention

    ERIC Educational Resources Information Center

    Anderson, Erin M.

    2012-01-01

    A non-experimental study was conducted to examine the perceived administrative support needs of alternatively certified teachers and determine their impact on teacher retention. The study sought to identify the most valued administrative support needs of alternatively-certified teachers; to compare those needs by gender and tier teaching level;…

  6. Superovulation with a single administration of FSH in aluminum hydroxide gel: a novel superovulation method for cattle

    PubMed Central

    KIMURA, Koji

    2016-01-01

    Superovulation (SOV) is a necessary technique to produce large numbers of embryos for embryo transfer. In conventional methods, follicle-stimulating hormone (FSH) is administered to donor cattle twice daily for 3 to 4 days. As this method is labor intensive and stressful for the cattle, an improved method has long been sought. We previously developed a novel and simple SOV method, in which the intramuscular injection of a single dose of FSH in aluminum hydroxide gel (AH-gel) induced the growth of multiple follicles, ovulation and the production of multiple embryos. Here we show that AH-gel can efficiently adsorb FSH and release it effectively in the presence of BSA, a major interstitial protein. When a single intramuscular administration of the FSH and AH-gel mixture was given to cattle, multiple follicular growth, ovulation and embryo production were induced. However, the treatments caused indurations at the administration sites in the muscle. To reduce the muscle damage, we investigated alternative administration routes and different amounts of aluminum in the gel. By administering the FSH in AH-gel subcutaneously rather than intramuscularly, the amount of aluminum in the gel could be reduced, thus reducing the size of the induration. Moreover, repeated administrations of FSH with AH-gel did not affect the superovulatory response. These results indicate that a single administration of FSH with AH-gel is an effective, novel and practical method for SOV treatment. PMID:27396385

  7. Databases and in silico tools for vaccine design.

    PubMed

    He, Yongqun; Xiang, Zuoshuang

    2013-01-01

    In vaccine design, databases and in silico tools play different but complementary roles. Databases collect experimentally verified vaccines and vaccine components, and in silico tools provide computational methods to predict and design new vaccines and vaccine components. Vaccine-related databases include databases of vaccines and vaccine components. In the USA, the Food and Drug Administration (FDA) maintains a database of licensed human vaccines, and the US Department of Agriculture keeps a database of licensed animal vaccines. Databases of vaccine clinical trials and vaccines in research also exist. The important vaccine components include vaccine antigens, vaccine adjuvants, vaccine vectors, and vaccine preservatives. The vaccine antigens can be whole proteins or immune epitopes. Various in silico vaccine design tools are also available. The Vaccine Investigation and Online Information Network (VIOLIN; http://www.violinet.org ) is a comprehensive vaccine database and analysis system. The VIOLIN database includes various types of vaccines and vaccine components. VIOLIN also includes Vaxign, a Web-based in silico vaccine design program based on the reverse vaccinology strategy. Vaccine information and resources can be integrated with the Vaccine Ontology (VO). This chapter introduces databases and in silico tools that facilitate vaccine design, especially those in the VIOLIN system.

  8. Database in Artificial Intelligence.

    ERIC Educational Resources Information Center

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  9. Different profiles of quercetin metabolites in rat plasma: comparison of two administration methods.

    PubMed

    Kawai, Yoshichika; Saito, Satomi; Nishikawa, Tomomi; Ishisaka, Akari; Murota, Kaeko; Terao, Junji

    2009-03-23

    The bioavailability of polyphenols in humans and rodents has been discussed regarding their biological activity. We found different metabolite profiles of quercetin in rat plasma between two administration procedures. A single intragastric administration (50 mg/kg) resulted in the appearance of a variety of metabolites in the plasma, whereas only a major fraction was detected by free access (1% quercetin). The methylated/non-methylated metabolites ratio was much higher in the free access group. Mass spectrometric analyses showed that the fraction from free access contained highly conjugated quercetin metabolites such as sulfo-glucuronides of quercetin and methylquercetin. The metabolite profile of human plasma after an intake of onion was similar to that with intragastric administration in rats. In vitro oxidation of human low-density lipoprotein showed that methylation of the catechol moiety of quercetin significantly attenuated the antioxidative activity. These results might provide information about the bioavailability of quercetin when conducting animal experiments. PMID:19270373

  10. Different profiles of quercetin metabolites in rat plasma: comparison of two administration methods.

    PubMed

    Kawai, Yoshichika; Saito, Satomi; Nishikawa, Tomomi; Ishisaka, Akari; Murota, Kaeko; Terao, Junji

    2009-03-23

    The bioavailability of polyphenols in humans and rodents has been discussed regarding their biological activity. We found different metabolite profiles of quercetin in rat plasma between two administration procedures. A single intragastric administration (50 mg/kg) resulted in the appearance of a variety of metabolites in the plasma, whereas only a major fraction was detected by free access (1% quercetin). The methylated/non-methylated metabolites ratio was much higher in the free access group. Mass spectrometric analyses showed that the fraction from free access contained highly conjugated quercetin metabolites such as sulfo-glucuronides of quercetin and methylquercetin. The metabolite profile of human plasma after an intake of onion was similar to that with intragastric administration in rats. In vitro oxidation of human low-density lipoprotein showed that methylation of the catechol moiety of quercetin significantly attenuated the antioxidative activity. These results might provide information about the bioavailability of quercetin when conducting animal experiments.

  11. Development of the method and U.S. normalization database for Life Cycle Impact Assessment and sustainability metrics.

    PubMed

    Bare, Jane; Gloria, Thomas; Norris, Gregory

    2006-08-15

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative contributions by substance and life cycle impact category. Normalization thus can significantly influence LCA-based decisions when tradeoffs exist. The U.S. Environmental Protection Agency (EPA) has developed a normalization database based on the spatial scale of the 48 continental U.S. states, Hawaii, Alaska, the District of Columbia, and Puerto Rico, with a one-year reference time frame. Data within the normalization database were compiled based on the impact methodologies and lists of stressors used in TRACI, the EPA's Tool for the Reduction and Assessment of Chemical and other environmental Impacts. The new normalization database published within this article may be used for LCIA case studies within the United States, and can be used to assist in the further development of a global normalization database. The underlying data analyzed for the development of this database are included to allow the development of normalization data consistent with other impact assessment methodologies as well. PMID:16955915
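
    Mechanically, normalization divides each characterized impact result by the corresponding reference value for the chosen region and year, as in the Python sketch below. The numbers are invented placeholders, not TRACI reference values.

    # Invented placeholder values; TRACI's actual normalization references differ.
    impact = {"global warming (kg CO2-eq)": 3.2e4, "acidification (mol H+-eq)": 1.1e3}
    reference = {"global warming (kg CO2-eq)": 2.4e4, "acidification (mol H+-eq)": 7.0e3}

    normalized = {category: impact[category] / reference[category] for category in impact}
    print(normalized)  # dimensionless shares showing which categories dominate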

  12. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  13. Faculty and Administrator Perspectives of Merit Pay Compensation Systems in Private Higher Education: A Mixed Methods Analysis

    ERIC Educational Resources Information Center

    Power, Anne L.

    2013-01-01

    The purpose of this explanatory sequential mixed methods study is to explore faculty and administrator perspectives of faculty merit pay compensation systems in private, higher education institutions. The study focuses on 10 small, private, four-year institutions which are religiously affiliated. All institutions are located in Nebraska, Iowa, and…

  14. The Relative Value of Skills, Knowledge, and Teaching Methods in Explaining Master of Business Administration (MBA) Program Return on Investment

    ERIC Educational Resources Information Center

    van Auken, Stuart; Wells, Ludmilla Gricenko; Chrysler, Earl

    2005-01-01

    In this article, the authors provide insight into alumni perceptions of Master of Business Administration (MBA) program return on investment (ROI). They sought to assess the relative value of skills, knowledge, and teaching methods in explaining ROI. By developing insight into the drivers of ROI, the real utility of MBA program ingredients can be…

  15. Administrators' Methods of Upward Influence and Perceptions of Their Supervisors' Leadership Styles.

    ERIC Educational Resources Information Center

    Chacko, Harsha E.

    Just as superiors influence subordinates, subordinates may influence superiors. This study scrutinizes the relationship between the techniques of influence used by subordinates and the way in which the subordinates view their superiors' leadership styles. Questionnaires were sent to 250 randomly selected administrators of hospitality education…

  16. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  17. A Novel Forensic Tool for the Characterization and Comparison of Printing Ink Evidence: Development and Evaluation of a Searchable Database Using Data Fusion of Spectrochemical Methods.

    PubMed

    Trejos, Tatiana; Torrione, Peter; Corzo, Ruthmara; Raeva, Ana; Subedi, Kiran; Williamson, Rhett; Yoo, Jong; Almirall, Jose

    2016-05-01

    A searchable printing ink database was designed and validated as a tool to improve the chemical information gathered from the analysis of ink evidence. The database contains 319 samples from printing sources that represent some of the global diversity in toner, inkjet, offset, and intaglio inks. Five analytical methods were used to generate data to populate the searchable database including FTIR, SEM-EDS, LA-ICP-MS, DART-MS, and Py-GC-MS. The search algorithm based on partial least-squares discriminant analysis generates a similarity "score" used for the association between similar samples. The performance of a particular analytical method to associate similar inks was found to be dependent on the ink type with LA-ICP-MS performing best, followed by SEM-EDS and DART-MS methods, while FTIR and Py-GC-MS were less useful in association but were still useful for classification purposes. Data fusion of data collected from two complementary methods (i.e., LA-ICP-MS and DART-MS) improves the classification and association of similar inks. PMID:27122411

  19. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
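
    The identification step described in this patent abstract is essentially nearest-neighbor matching of feature vectors. A minimal Python sketch follows; the feature names and stored values are hypothetical, invented for illustration only.

    ```python
    # Hypothetical sketch of minimum-distance load identification: each load
    # type stores a feature vector, and an unknown load is assigned the type
    # whose vector is nearest in Euclidean distance.
    import numpy as np

    # Load feature database: type -> vector of at least four features,
    # e.g. (real power, reactive power, current THD, crest factor).
    load_feature_db = {
        "incandescent_lamp": np.array([60.0, 1.0, 0.02, 1.41]),
        "pc_power_supply":   np.array([150.0, 40.0, 0.85, 2.50]),
        "induction_motor":   np.array([750.0, 320.0, 0.05, 1.45]),
    }

    def identify_load(feature_vector):
        """Return the load type whose stored vector is closest (Euclidean)."""
        distances = {
            load_type: np.linalg.norm(feature_vector - stored)
            for load_type, stored in load_feature_db.items()
        }
        return min(distances, key=distances.get)

    # Features extracted from the sensed voltage/current of an unknown load.
    print(identify_load(np.array([155.0, 38.0, 0.80, 2.40])))  # pc_power_supply
    ```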

  20. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    SciTech Connect

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
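
    The lookup described here reduces to finding the best-matching unit of a trained self-organizing map. A minimal sketch, assuming the map has already been trained and its neurons labeled; the weights and labels below are invented for illustration.

    ```python
    # Hypothetical sketch of the SOM lookup described above: each neuron holds
    # a weight vector and a load-type label; identification finds the neuron
    # whose weight vector has minimal distance to the load feature vector.
    import numpy as np

    neuron_weights = np.array([
        [60.0, 1.0, 0.02, 1.41],     # neuron 0
        [150.0, 40.0, 0.85, 2.50],   # neuron 1
        [760.0, 315.0, 0.05, 1.44],  # neuron 2
        [740.0, 325.0, 0.06, 1.46],  # neuron 3
    ])
    neuron_labels = ["lamp", "pc_power_supply", "motor", "motor"]

    def identify(feature_vector):
        # Best-matching unit: neuron with minimal Euclidean distance.
        bmu = np.argmin(np.linalg.norm(neuron_weights - feature_vector, axis=1))
        return neuron_labels[bmu]

    print(identify(np.array([745.0, 330.0, 0.05, 1.45])))  # -> motor
    ```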

  1. A method to implement fine-grained access control for personal health records through standard relational database queries.

    PubMed

    Sujansky, Walter V; Faus, Sam A; Stone, Ethan; Brennan, Patricia Flatley

    2010-10-01

    Online personal health records (PHRs) enable patients to access, manage, and share certain of their own health information electronically. This capability creates the need for precise access-control mechanisms that restrict the sharing of data to that intended by the patient. The authors describe the design and implementation of an access-control mechanism for PHR repositories that is modeled on the eXtensible Access Control Markup Language (XACML) standard, but intended to reduce the cognitive and computational complexity of XACML. The authors implemented the mechanism entirely in a relational database system using ANSI-standard SQL statements. Based on a set of access-control rules encoded as relational table rows, the mechanism determines via a single SQL query whether a user who accesses patient data from a specific application is authorized to perform a requested operation on a specified data object. Testing of this query on a moderately large database has demonstrated execution times consistently below 100 ms. The authors include the details of the implementation, including algorithms, examples, and a test database, as supplementary materials.
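
    A minimal sketch of the general idea, with rules stored as table rows and a single SQL query deciding each request, using Python's sqlite3. The schema and the deny-overrides combining rule here are assumptions for illustration, not the authors' actual design.

    ```python
    # Minimal sketch: access-control rules as relational rows, one SQL query
    # per authorization decision. Schema and rule semantics are assumed.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE access_rules (
        user_id TEXT, app_id TEXT, operation TEXT, object_type TEXT,
        effect TEXT CHECK (effect IN ('permit','deny')))""")
    conn.executemany("INSERT INTO access_rules VALUES (?,?,?,?,?)", [
        ("dr_smith", "portal", "read",  "lab_result", "permit"),
        ("dr_smith", "portal", "write", "lab_result", "deny"),
    ])

    def is_authorized(user_id, app_id, operation, object_type):
        # Deny-overrides: any matching 'deny' rule wins; no match means deny.
        row = conn.execute(
            """SELECT COALESCE(MIN(CASE effect WHEN 'deny' THEN 0 ELSE 1 END), 0)
               FROM access_rules
               WHERE user_id=? AND app_id=? AND operation=? AND object_type=?""",
            (user_id, app_id, operation, object_type)).fetchone()
        return bool(row[0])

    print(is_authorized("dr_smith", "portal", "read", "lab_result"))   # True
    print(is_authorized("dr_smith", "portal", "write", "lab_result"))  # False
    ```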

  2. [A method for identifying people with a high level of frailty by using a population database, Varese, Italy].

    PubMed

    Pisani, Salvatore; Gambino, Maria; Balconi, Lorena; Degli Stefani, Cristina; Speziali, Sabina; Bonarrigo, Domenico

    2016-01-01

    For over 10 years, the Lombardy Region (Italy) has operated a system for classifying all persons registered with the healthcare system (the database of persons registered with a general practitioner) according to their use of major healthcare services (hospitalizations, outpatient consultations, pharmaceuticals) and whether they are exempt from copayment fees for disease-specific medications and healthcare services. The present study was conducted by the local health authorities of the province of Varese (Lombardy Region, Italy) with 894,039 persons registered in the database, of whom 258,770 (28.9%) had at least one chronic condition, 104,731 (11.7%) had multiple chronic conditions, and 195,296 (21.8%) were elderly. The aim was to evaluate death rates in different subgroups of patients entered in the database, including persons with chronic diseases and elderly persons. Standardized mortality rates were calculated for the year 2012. Compared with the general population, the relative risk for mortality was 4.1 (95% confidence interval 4.0-4.2) in the elderly and 1.3 (95% confidence interval 1.3-1.4) in chronic patients. This confirms that elderly persons have a higher level of frailty than patients with chronic conditions. Mortality was 28 times higher in elderly persons over 74 years of age affected by high-cost conditions such as cancer and cardiac disease than in the general population.

  3. Pediatric immunization-related safety incidents in primary care: A mixed methods analysis of a national database

    PubMed Central

    Rees, Philippa; Edwards, Adrian; Powell, Colin; Evans, Huw Prosser; Carter, Ben; Hibbert, Peter; Makeham, Meredith; Sheikh, Aziz; Donaldson, Liam; Carson-Stevens, Andrew

    2015-01-01

    Background: Children are scheduled to receive 18–20 immunizations before their 18th birthday in England and Wales; this approximates to 13 million vaccines administered per annum. Each immunization represents a potential opportunity for immunization-related error, and effective immunization is imperative to maintain the public health benefit of immunization. Using data from a national reporting system, this study aimed to characterize pediatric immunization-related safety incident reports from primary care in England and Wales between 2002 and 2013. Methods: A cross-sectional mixed methods study was undertaken. This comprised reading the free text of incident reports and applying codes to describe incident type, potential contributory factors, harm severity, and incident outcomes. A subsequent thematic analysis was undertaken to interpret the most commonly occurring codes, such as those describing the incident, events leading up to it, and reported contributory factors, within the contexts they were described. Results: We identified 1745 reports, and most (n = 1077, 61.7%) described harm outcomes, including three deaths, 67 reports of moderate harm, and 1007 reports of low harm. Failure of timely vaccination was the potential cause of three child deaths from meningitis and pneumonia, and was described in a further 113 reports. Vaccine administration incidents included the wrong number of doses (n = 476, 27.3%), wrong timing (n = 294, 16.8%), and wrong vaccine (n = 249, 14.3%). Documentation failures were frequently implicated. Socially and medically vulnerable children were commonly described. Conclusion: This is the largest examination of reported contributory factors for immunization-related patient safety incidents in children. Our findings suggest investments in IT infrastructure to support data linkage and identification of risk predictors, development of consultation models that promote the role of parents in mitigating safety incidents, and improvement…

  4. Comparison of INSURE method with conventional mechanical ventilation after surfactant administration in preterm infants with respiratory distress syndrome: therapeutic challenge.

    PubMed

    Nayeri, Fatemeh Sadat; Esmaeilnia Shirvani, Tahereh; Aminnezhad, Majid; Amini, Elaheh; Dalili, Hossein; Moghimpour Bijani, Faezeh

    2014-01-01

    Administration of endotracheal surfactant, followed by mechanical ventilation, is potentially the main treatment for neonates suffering from RDS (Respiratory Distress Syndrome). Late and severe complications may develop as a consequence of using mechanical ventilation. In this study, the conventional method for treatment of RDS is compared with the INSURE method (INtubation, SUrfactant administration, and Extubation): surfactant administration, use of mechanical ventilation for a brief period, and NCPAP (Nasal Continuous Positive Airway Pressure). A randomized clinical trial was performed, including all newborn infants with diagnosed RDS and a gestational age of 35 weeks or less who were admitted to the NICU of Valiasr hospital. The patients were divided randomly into CMV (Conventional Mechanical Ventilation) and INSURE groups. In the first group (CMV group), surfactant was administered followed by long-term mechanical ventilation. In the second group (INSURE group), surfactant was administered followed by a short period of mechanical ventilation; the infants were then extubated and placed on NCPAP. The comparison covered duration of mechanical ventilation and oxygen therapy, IVH (Intraventricular Hemorrhage), PDA (Patent Ductus Arteriosus), air-leak syndromes, BPD (Broncho-Pulmonary Dysplasia), and mortality rate. The need for mechanical ventilation on the 5th day of admission was decreased by 43% (P=0.005) in the INSURE group in comparison to the CMV group. A decline (P=0.01) in the incidence of IVH and PDA was also achieved. Pneumothorax, chronic pulmonary disease, and mortality rates were not significantly different between the two groups (P=0.25, P=0.14, P=0.25, respectively). This study indicated that the INSURE method of treating RDS decreases the need for mechanical ventilation and oxygen therapy in preterm neonates. Moreover, relevant complications such as IVH and PDA were reduced. Thus, it seems rational to perform…

  5. Development of an Aerodynamic Analysis Method and Database for the SLS Service Module Panel Jettison Event Utilizing Inviscid CFD and MATLAB

    NASA Technical Reports Server (NTRS)

    Applebaum, Michael P.; Hall, Leslie H.; Eppard, William M.; Purinton, David C.; Campbell, John R.; Blevins, John A.

    2015-01-01

    This paper describes the development, testing, and utilization of an aerodynamic force and moment database for the Space Launch System (SLS) Service Module (SM) panel jettison event. The database is a combination of inviscid Computational Fluid Dynamic (CFD) data and MATLAB code written to query the data at input values of vehicle/SM panel parameters and return the aerodynamic force and moment coefficients of the panels as they are jettisoned from the vehicle. The database encompasses over 5000 CFD simulations with the panels either in the initial stages of separation where they are hinged to the vehicle, in close proximity to the vehicle, or far enough from the vehicle that body interference effects are neglected. A series of viscous CFD check cases were performed to assess the accuracy of the Euler solutions for this class of problem and good agreement was obtained. The ultimate goal of the panel jettison database was to create a tool that could be coupled with any 6-Degree-Of-Freedom (DOF) dynamics model to rapidly predict SM panel separation from the SLS vehicle in a quasi-unsteady manner. Results are presented for panel jettison simulations that utilize the database at various SLS flight conditions. These results compare favorably to an approach that directly couples a 6-DOF model with the Cart3D Euler flow solver and obtains solutions for the panels at exact locations. This paper demonstrates a method of using inviscid CFD simulations coupled with a 6-DOF model that provides adequate fidelity to capture the physics of this complex multiple moving-body panel separation event.
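
    The database-query step described above amounts to interpolating tabulated CFD force and moment coefficients at the requested flight condition. A simplified, hypothetical analogue in Python is sketched below (the actual database is MATLAB-based and spans more independent variables); the grid values are invented.

    ```python
    # Hedged sketch: look up an aerodynamic coefficient from gridded CFD data
    # by interpolation, as a 6-DOF integrator would at each time step.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    mach = np.array([0.8, 1.2, 1.6])        # vehicle Mach number
    sep  = np.array([0.0, 0.5, 1.0, 2.0])   # panel separation distance (norm.)
    # Axial-force coefficient CA tabulated on the (mach, sep) grid (made-up).
    ca_table = np.array([[0.40, 0.35, 0.30, 0.28],
                         [0.55, 0.48, 0.41, 0.38],
                         [0.50, 0.44, 0.38, 0.35]])
    ca = RegularGridInterpolator((mach, sep), ca_table)

    # Inside a 6-DOF time step the integrator would call this each update:
    print(ca([[1.0, 0.75]]))  # CA at Mach 1.0, separation 0.75
    ```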

  6. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is smart: it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.
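
    The location-assignment step described here is a simple query-and-update pattern. A hypothetical sketch in Python with sqlite3 follows; the table and column names are invented, not the system's actual schema.

    ```python
    # Hypothetical sketch of the box-assignment step: find storage locations
    # not currently in use and assign incoming boxes to them.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE locations (location_id TEXT PRIMARY KEY, in_use INTEGER DEFAULT 0);
    CREATE TABLE boxes (box_id TEXT PRIMARY KEY, location_id TEXT);
    INSERT INTO locations (location_id) VALUES ('A-01'), ('A-02'), ('B-07');
    """)

    def assign_boxes(box_ids):
        free = [row[0] for row in conn.execute(
            "SELECT location_id FROM locations WHERE in_use = 0 LIMIT ?",
            (len(box_ids),))]
        if len(free) < len(box_ids):
            raise RuntimeError("not enough free storage locations")
        for box, loc in zip(box_ids, free):
            conn.execute("INSERT INTO boxes VALUES (?, ?)", (box, loc))
            conn.execute("UPDATE locations SET in_use = 1 WHERE location_id = ?", (loc,))
        return dict(zip(box_ids, free))

    print(assign_boxes(["BOX-2024-001", "BOX-2024-002"]))
    ```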

  7. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access). These solubilities are compiled from 18 volumes of the International Union for Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters, and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  8. A General Method for Evaluating Deep Brain Stimulation Effects on Intravenous Methamphetamine Self-Administration

    PubMed Central

    Batra, Vinita; Guerin, Glenn F.; Goeders, Nicholas E.; Wilden, Jessica A.

    2016-01-01

    Substance use disorders, particularly to methamphetamine, are devastating, relapsing diseases that disproportionally affect young people. There is a need for novel, effective and practical treatment strategies that are validated in animal models. Neuromodulation, including deep brain stimulation (DBS) therapy, refers to the use of electricity to influence pathological neuronal activity and has shown promise for psychiatric disorders, including drug dependence. DBS in clinical practice involves the continuous delivery of stimulation into brain structures using an implantable pacemaker-like system that is programmed externally by a physician to alleviate symptoms. This treatment will be limited in methamphetamine users due to challenging psychosocial situations. Electrical treatments that can be delivered intermittently, non-invasively and remotely from the drug-use setting will be more realistic. This article describes the delivery of intracranial electrical stimulation that is temporally and spatially separate from the drug-use environment for the treatment of IV methamphetamine dependence. Methamphetamine dependence is rapidly developed in rodents using an operant paradigm of intravenous (IV) self-administration that incorporates a period of extended access to drug and demonstrates both escalation of use and high motivation to obtain drug. PMID:26863392

  9. The use of databases and registries to enhance colonoscopy quality.

    PubMed

    Logan, Judith R; Lieberman, David A

    2010-10-01

    Administrative databases, registries, and clinical databases are designed for different purposes and therefore have different advantages and disadvantages in providing data for enhancing quality. Administrative databases provide the advantages of size, availability, and generalizability, but are subject to constraints inherent in the coding systems used and from data collection methods optimized for billing. Registries are designed for research and quality reporting but require significant investment from participants for secondary data collection and quality control. Electronic health records contain all of the data needed for quality research and measurement, but that data is too often locked in narrative text and unavailable for analysis. National mandates for electronic health record implementation and functionality will likely change this landscape in the near future. PMID:20889074

  10. Estimating the administrative cost of regulatory noncompliance: a pilot method for quantifying the value of prevention.

    PubMed

    Emery, R J; Charlton, M A; Mathis, J L

    2000-05-01

    Routine regulatory inspections provide a valuable independent quality assurance review of radiation protection programs that ultimately serves to improve overall program performance. But when an item of noncompliance is noted, regardless of its significance or severity, the ensuing notice of violation (NOV) results in an added cost to both the permit holder and the regulatory authority. Such added costs may be tangible, in the form of added work to process and resolve the NOV, or intangible, in the form of damage to organizational reputation or worker morale. If the portion of the tangible costs incurred by a regulatory agency for issuing NOVs could be quantified, the analysis could aid in the identification of agency resources that might be dedicated to other areas such as prevention. Ideally, any prevention activities would reduce the overall number of NOVs issued without impacting the routine inspection process. In this study, the administrative costs of NOV issuance and resolution were estimated by obtaining data from the professional staff of the Texas Department of Health, Bureau of Radiation Control (TDH-BRC). Based on a focus group model, the data indicate that approximately $106,000 in TDH-BRC personnel resources was expended to process and resolve the 6,800 NOVs issued in Texas during 1997 inspection activities, or roughly $16 per NOV. The study's findings imply that an incremental decrease in the number of NOVs issued would result in a corresponding savings of agency resources. Suggested prevention activities that might be financed through any resource savings include the dissemination of common-violation data to permit holders or training to improve correspondence with regulatory agencies. The significance of this exercise is that any savings experienced by an agency could enhance permittee compliance without impacting the routine inspection process.

  11. 76 FR 74804 - Federal Housing Administration (FHA) First Look Sales Method Under the Neighborhood Stabilization...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... Relay Service at (800) 877-8339. SUPPLEMENTARY INFORMATION: On July 15, 2010, at 75 FR 41255, HUD... Look Sales Method (see 75 FR 41226, first column). Through this notice, HUD announces that it has... Stabilization Programs (NSP) Technical Assistance: Availability of Universal Name and Address...

  12. Advanced Methods in Distance Education: Applications and Practices for Educators, Administrators and Learners

    ERIC Educational Resources Information Center

    Dooley, Kim E.; Lindner, James R.; Dooley, Larry M.

    2005-01-01

    Courses and programs being delivered at a distance require a unique set of professional competencies. Whether using synchronous or asynchronous methods of instruction, systematic instructional design can help stimulate motivation, increase interaction and social presence, and authenticate learning outcomes. Principles of adult learning, including…

  13. A Case Study of Qualitative Research: Methods and Administrative Impact. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schoen, Jane; Warner, Sean

    A case study in program evaluation that demonstrates the effectiveness of qualitative research methods is presented. Over a 5-year period, the Union for Experimenting Colleges and Universities in Ohio offered a baccalaureate program (University Without Walls) to local employees of a national manufacturing firm. The institutional research office…

  14. Perception of Teachers and Administrators on the Teaching Methods That Influence the Acquisition of Generic Skills

    ERIC Educational Resources Information Center

    Audu, R.; Bin Kamin, Yusri; Bin Musta'amal, Aede Hatib; Bin Saud, Muhammad Sukri; Hamid, Mohd. Zolkifli Abd.

    2014-01-01

    This study is designed to identify the most significant teaching methods that influence the acquisition of generic skills of mechanical engineering trades students at technical college level. Descriptive survey research design was utilized in carrying out the study. One hundred and ninety (190) respondents comprised of mechanical engineering…

  15. A global database of lake surface temperatures collected by in situ and satellite methods from 1985–2009

    USGS Publications Warehouse

    Sharma, Sapna; Gray, Derek; Read, Jordan S.; O'Reilly, Catherine; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie; Hook, Simon; Lenters, John; Livingstone, David M.; McIntyre, Peter B.; Adrian, Rita; Allan, Mathew; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John E.; Baron, Jill S.; Brookes, Justin D.; Chen, Yuwei; Daly, Robert; Ewing, Kye; de Eyto, Elvira; Dokulil, Martin; Hamilton, David B.; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyn; Hetherington, Amy; Higgins, Scott; Hixson, Eric; Izmest'eva, Lyubov; Jones, Benjamin M.; Kangur, Kulli; Kasprzak, Peter; Kraemer, Benjamin; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P.; Paterson, Andrew; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A.; Salmaso, Nico; Samal, Nihar R.; Schindler, Daniel E.; Schladow, Geoffrey; Schmidt, Silke R.; Schultz, Tracey; Silow, Eugene A.; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A.; Williamson, Craig E.; Woo, Kara H.

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985–2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues.

  16. A global database of lake surface temperatures collected by in situ and satellite methods from 1985–2009

    PubMed Central

    Sharma, Sapna; Gray, Derek K; Read, Jordan S; O’Reilly, Catherine M; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie E; Hook, Simon; Lenters, John D; Livingstone, David M; McIntyre, Peter B; Adrian, Rita; Allan, Mathew G; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John; Baron, Jill S; Brookes, Justin; Chen, Yuwei; Daly, Robert; Dokulil, Martin; Dong, Bo; Ewing, Kye; de Eyto, Elvira; Hamilton, David; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyne; Hetherington, Amy L; Higgins, Scott N; Hixson, Eric; Izmest’eva, Lyubov R; Jones, Benjamin M; Kangur, Külli; Kasprzak, Peter; Köster, Olivier; Kraemer, Benjamin M; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P; Paterson, Andrew M; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A; Salmaso, Nico; Samal, Nihar R; Schindler, Daniel E; Schladow, Geoffrey; Schmidt, Silke R; Schultz, Tracey; Silow, Eugene A; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A; Williamson, Craig E; Woo, Kara H

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985–2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues. PMID:25977814

  18. A global database of lake surface temperatures collected by in situ and satellite methods from 1985-2009.

    PubMed

    Sharma, Sapna; Gray, Derek K; Read, Jordan S; O'Reilly, Catherine M; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie E; Hook, Simon; Lenters, John D; Livingstone, David M; McIntyre, Peter B; Adrian, Rita; Allan, Mathew G; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John; Baron, Jill S; Brookes, Justin; Chen, Yuwei; Daly, Robert; Dokulil, Martin; Dong, Bo; Ewing, Kye; de Eyto, Elvira; Hamilton, David; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyne; Hetherington, Amy L; Higgins, Scott N; Hixson, Eric; Izmest'eva, Lyubov R; Jones, Benjamin M; Kangur, Külli; Kasprzak, Peter; Köster, Olivier; Kraemer, Benjamin M; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P; Paterson, Andrew M; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A; Salmaso, Nico; Samal, Nihar R; Schindler, Daniel E; Schladow, Geoffrey; Schmidt, Silke R; Schultz, Tracey; Silow, Eugene A; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A; Williamson, Craig E; Woo, Kara H

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985-2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues. PMID:25977814

  19. Databases as an information service

    NASA Technical Reports Server (NTRS)

    Vincent, D. A.

    1983-01-01

    The relationship of databases to information services, and the range of information services users and their needs for information is explored and discussed. It is argued that for database information to be valuable to a broad range of users, it is essential that access methods be provided that are relatively unstructured and natural to information services users who are interested in the information contained in databases, but who are not willing to learn and use traditional structured query languages. Unless this ease of use of databases is considered in the design and application process, the potential benefits from using database systems may not be realized.

  20. Effect of nicotine and tobacco administration method on the mechanical properties of healing bone following closed fracture.

    PubMed

    Hastrup, Sidsel Gaarn; Chen, Xinqian; Bechtold, Joan E; Kyle, Richard F; Rahbek, Ole; Keyler, Daniel E; Skoett, Martin; Soeballe, Kjeld

    2010-09-01

    We previously showed different effects of tobacco and nicotine on fracture healing, but due to pump reservoir limits, the maximum exposure period was 4 weeks. To allow flexibility in pre- and post-fracture exposure periods, the objective of this study was to compare a new oral administration route for nicotine to the established pump method. Four groups were studied: (1) pump saline, (2) pump saline + oral tobacco, (3) pump saline/nicotine + oral tobacco, and (4) pump saline + oral nicotine/tobacco. Sprague-Dawley rats (n = 84) received a transverse femoral fracture stabilized with an intramedullary pin 1 week after initiating dosing. After 3 weeks, no difference was found in torsional strength or stiffness between oral nicotine/tobacco or pump nicotine + tobacco, while energy absorption with oral nicotine/tobacco was greater than pump nicotine + tobacco (p < 0.05). Compared to saline control, strength for oral nicotine/tobacco was higher than control (p < 0.05), and stiffnesses for pump nicotine + tobacco and oral nicotine/tobacco were higher than control (p < 0.05). No differences in energy were found for either nicotine-tobacco group compared to saline control. Mean serum cotinine (a stable nicotine metabolite) was different between pump and oral nicotine at 1 and 4 weeks, but all groups were in the range of 1-2 pack/day smokers. In summary, relevant serum cotinine levels can be reached in rats with oral nicotine, and, in the presence of tobacco, nicotine can influence mechanical aspects of fracture healing, dependent on administration method.

  1. Drinking Water Treatability Database (Database)

    EPA Science Inventory

    The drinking Water Treatability Database (TDB) will provide data taken from the literature on the control of contaminants in drinking water, and will be housed on an interactive, publicly-available USEPA web site. It can be used for identifying effective treatment processes, rec...

  2. Challenges of the Administrative Consultation Wiki Research Project as a Learning and Competences Development Method for MPA Students

    ERIC Educational Resources Information Center

    Kovac, Polonca; Stare, Janez

    2015-01-01

    Administrative Consultation Wiki (ACW) is a project that has run under the auspices of the Faculty of Administration and the Ministry of Public Administration in Slovenia since 2009. A crucial component thereof is the involvement of students of Master of Public Administration (MPA) degree programs, to offer them an opportunity to develop competences in…

  3. Treatment Patterns, Costs, and Survival among Medicare-Enrolled Elderly Patients Diagnosed with Advanced Stage Gastric Cancer: Analysis of a Linked Population-Based Cancer Registry and Administrative Claims Database

    PubMed Central

    Karve, Sudeep; Liepa, Astra M; Hess, Lisa M; Kaye, James A; Calingaert, Brian

    2015-01-01

    Purpose: To assess real-world treatment patterns, health care utilization, costs, and survival among Medicare enrollees with locally advanced/unresectable or metastatic gastric cancer receiving standard first-line chemotherapy. Materials and Methods: This was a retrospective analysis of the Surveillance, Epidemiology, and End Results-Medicare linked database (2000-2009). The inclusion criteria were as follows: (1) first diagnosed with locally advanced/unresectable or metastatic gastric cancer between July 1, 2000 and December 31, 2007 (first diagnosis defined the index date); (2) ≥65 years of age at index; (3) continuously enrolled in Medicare Parts A and B from 6 months before index through the end of follow-up, defined by death or the database end date (December 31, 2009), whichever occurred first; and (4) received first-line treatment with fluoropyrimidine and/or a platinum chemotherapy agent. Results: In total, 2,583 patients met the inclusion criteria. The mean age at index was 74.8±6.0 years. Over 90% of patients died during follow-up, with a median survival of 361 days for the overall post-index period and 167 days for the period after the completion of first-line chemotherapy. The mean total gastric cancer-related cost per patient over the entire post-index follow-up period was 70,808±56,620 United States dollars (USD). Following the completion of first-line chemotherapy, patients receiving further cancer-directed treatment had USD 25,216 additional disease-related costs versus patients receiving supportive care only (P<0.001). Conclusions: The economic burden of advanced gastric cancer is substantial. Extrapolating based on published incidence estimates and staging distributions, the estimated total disease-related lifetime cost to Medicare for the roughly 22,200 patients expected to be diagnosed with this disease in 2014 approaches USD 300 million. PMID:26161282

  4. A novel method for local administration of strontium from implant surfaces.

    PubMed

    Forsgren, Johan; Engqvist, Håkan

    2010-05-01

    This study proves that a film of strontianite (SrCO3) can successfully be formed on a bioactive surface of sodium titanate when exposed to a strontium acetate solution. This strontianite film is believed to enable local release of strontium ions from implant surfaces and thus stimulate bone formation in vivo. Depending on the method, different types of films with different strontium ion release rates were achieved, and the results point to the possibility of tailoring the rate and amount of strontium released from the surface. Strontium has earlier been shown to be highly involved in the formation of new bone, as it stimulates the replication of osteoblasts and decreases the activity of osteoclasts. The benefit of strontium has, for example, been demonstrated in studies where the number of vertebral compression fractures in osteoporotic persons was drastically reduced in patients receiving therapeutic doses of strontium. It is therefore suggested here that the bone healing process around an implant may be improved if strontium is administered locally at the site of the implant. The films described in this paper were produced by a simple immersion process in which alkali-treated titanium was exposed to an aqueous solution containing strontium acetate. By heating the samples at different times during the process, different release rates of strontium ions were achieved when the samples were exposed to simulated body fluid. The strontium-containing films also promoted precipitation of bone-like apatite when exposed to a simulated body fluid. PMID:20162327

  5. A Mixed Method Study Measuring the Perceptions of Administrators, Classroom Teachers and Professional Staff on the Use of iPads in a Midwest School District

    ERIC Educational Resources Information Center

    Beckerle, Andrea Laux

    2013-01-01

    The purpose of this mixed methods study was to assess the perceptions of classroom teachers, administrators and professional support staff in one Midwest school district regarding the usefulness and effectiveness of the iPad device as an instructional and support tool within the classroom. The need to address classroom teacher, administrator and…

  6. A novel method for detecting inpatient pediatric asthma encounters using administrative data.

    PubMed

    Knighton, Andrew J; Flood, Andrew; Harmon, Brian; Smith, Patti; Crosby, Carrie; Payne, Nathaniel R

    2014-08-01

    Multiple methods for detecting asthma encounters are used today in public surveillance, quality reporting, and clinical research. Failure to detect asthma encounters can make it difficult to measure the scope and effectiveness of hospital- or community-based interventions important in comparative effectiveness research and accountable care. Given the pairing of asthma with certain respiratory conditions, the objective of this study was to develop and test an asthma detection algorithm with specificity and sensitivity using 2 criteria: (1) principal discharge diagnosis and (2) asthma diagnosis code position. A medical record review was conducted (n=191) as the gold standard for identifying asthma encounters given objective criteria. The study team observed that for certain principal respiratory diagnoses (n=110), the observed odds ratio that encounters were for asthma when asthma was coded in the second or third code position was not significantly different than when asthma was coded as the principal diagnosis: 0.36 (P=0.42) and 0.18 (P=0.14), respectively. In contrast, the observed odds ratio was significantly different when asthma was coded in the fourth or fifth position (P<0.001). This difference remained after adjusting for covariates. Including encounters with asthma in 1 of the first 3 positions increased the detection sensitivity to 0.84 [95% confidence interval (CI): 0.76-0.92] while increasing the false positive rate to 0.19 [95% CI: 0.07-0.31]. Use of the proposed algorithm significantly improved the reporting accuracy [0.83, 95% CI: 0.76-0.90] over use of (1) the principal diagnosis alone [0.55, 95% CI: 0.46-0.64] or (2) all encounters with asthma [0.66, 95% CI: 0.57-0.75]. Bed days resulting from asthma encounters increased 64% over use of the principal diagnosis alone. Given these findings, an algorithm using certain respiratory principal diagnoses and asthma diagnosis code position can reliably improve asthma encounter detection for population-based health…
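
    A hedged reconstruction in Python of the kind of detection rule the study describes; the qualifying-diagnosis list below is illustrative, not the study's actual list.

    ```python
    # Hedged reconstruction: an encounter counts as asthma if an asthma ICD-9
    # code (493.xx) appears in one of the first three diagnosis positions AND
    # the principal diagnosis is asthma itself or a qualifying respiratory
    # condition. The qualifying list is illustrative only.
    QUALIFYING_PRINCIPAL = {"466.11", "486", "465.9"}  # e.g. bronchiolitis, pneumonia, URI

    def is_asthma_encounter(codes):
        """codes: ICD-9 diagnosis codes, position 0 = principal diagnosis."""
        if not codes:
            return False
        asthma_in_top_three = any(c.startswith("493") for c in codes[:3])
        principal_qualifies = codes[0].startswith("493") or codes[0] in QUALIFYING_PRINCIPAL
        return asthma_in_top_three and principal_qualifies

    print(is_asthma_encounter(["486", "493.92"]))    # True: asthma in position 2
    print(is_asthma_encounter(["558.9", "493.92"]))  # False: principal not respiratory
    ```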

  7. Towards an enhanced use of soil databases for assessing water availability in (sub)tropical regions using fractal-based methods

    NASA Astrophysics Data System (ADS)

    Botula Manyala, Y.; Baetens, J.; Baert, G.; Van Ranst, E.; Cornelis, W.

    2012-12-01

    Following the completion of numerous elaborate soil surveys in many (sub)tropical regions of the African continent during the past decades, vast databases with soil properties of the prevailing soil orders in these regions have been assembled in order to support agricultural stakeholders throughout crucial decision-making processes. Unfortunately, even though soil hydraulic properties are of primary interest for designing sustainable farming practices and guiding crop choice and irrigation scheduling, a substantial share of the soil surveys is restricted to the collection of soil chemical properties. This bias principally originates from the fact that soil chemical characteristics like pH, organic carbon/matter (OC/OM), cation exchange capacity (CEC), and base saturation (BS) can be determined readily. On the other hand, determination of the hydraulic properties of a soil in the field or in the lab is much more time consuming, particularly for the soil-water retention curve (SWRC), which is generally considered one of the most important physical properties since it constitutes the footprint of a soil. Owing to the incompleteness of most soil databases in (sub)tropical regions, either much valuable information is discarded, because the assessment of meaningful indices in land evaluation, such as the soil available water capacity (AWC) and the hydraulic conductivity, is based merely upon those soil samples for which hydraulic properties were measured, or one has to resort to pedotransfer functions (PTFs). The latter are equations for deducing hydraulic properties of a soil from physico-chemical data that are commonly available in soil survey reports (sand, silt, clay, OC/OM, CEC, etc.). Yet, such PTFs are only locally applicable because their derivation rests on statistical or machine learning techniques and has no physical basis. Recently, however, physically based, and hence globally applicable, fractal methods have been put forward for assessing a soil's SWRC based upon its…
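
    To make the fractal idea concrete, one widely cited fractal form of the SWRC (after Tyler and Wheatcroft, 1990) is sketched below in Python. This illustrates the model family the abstract refers to, not necessarily the authors' exact formulation; the parameter values are invented.

    ```python
    # Example of a fractal soil-water retention curve:
    #   theta(psi) = theta_s * (psi_a / psi) ** (3 - D)   for psi >= psi_a
    # where D is the fractal dimension and psi_a the air-entry suction.
    def fractal_swrc(psi, theta_s=0.45, psi_a=1.0, fractal_dim=2.85):
        """Volumetric water content at matric suction psi (units of psi_a)."""
        if psi <= psi_a:              # wetter than air entry: saturated
            return theta_s
        return theta_s * (psi_a / psi) ** (3.0 - fractal_dim)

    for suction in (0.5, 10.0, 100.0, 1000.0):
        print(suction, round(fractal_swrc(suction), 3))
    ```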

  8. Water pollution in the pulp and paper industry: Treatment methods. (Latest citations from the NTIS database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning waste treatment methods for the pulp and paper industry. Some of these methods are: sedimentation, flotation, filtration, coagulation, adsorption, and general concentration processes. (Contains a minimum of 142 citations and includes a subject term index and title list.)

  9. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  10. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  11. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  12. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  13. Databases and data mining

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  14. [Changes in features of diabetes care in Hungary in the period of years 2001-2014. Aims and methods of the database analysis of the National Health Insurance Fund].

    PubMed

    Jermendy, György; Kempler, Péter; Abonyi-Tóth, Zsolt; Rokszin, György; Wittmann, István

    2016-08-01

    In the last couple of years, database analyses have become increasingly popular among clinical-epidemiological investigations. In Hungary, the National Health Insurance Fund serves as the central database of all medical attendances in state departments and of drug prescriptions purchased in pharmacies. Data from in- and outpatient departments, as well as from pharmacies, are regularly collected in this database, which is public and accessible on request. The aim of this retrospective study was to investigate the database of the National Health Insurance Fund in order to analyze diabetes-associated morbidity and mortality in the period 2001-2014. Moreover, data on therapeutic costs, features of hospitalizations, and practice of antidiabetic treatment were examined. The authors report here on the method of the database analysis. It is to be hoped that the upcoming results of this investigation will add new data to current knowledge about diabetes care in Hungary. Orv. Hetil., 2016, 157(32), 1259-1265.

  16. Quality assurance of specialised treatment of eating disorders using large-scale Internet-based collection systems: methods, results and lessons learned from designing the Stepwise database.

    PubMed

    Birgegård, Andreas; Björck, Caroline; Clinton, David

    2010-01-01

    Computer-based quality assurance of specialist eating disorder (ED) care is a possible way of meeting demands for evaluating the real-life effectiveness of treatment, in a large-scale, cost-effective and highly structured way. The Internet-based Stepwise system combines clinical utility for patients and practitioners, and provides research-quality naturalistic data. Stepwise was designed to capture relevant variables concerning EDs and general psychiatric status, and the database can be used for both clinical and research purposes. The system comprises semi-structured diagnostic interviews, clinical ratings and self-ratings, automated follow-up schedules, as well as administrative functions to facilitate registration compliance. As of June 2009, the system is in use at 20 treatment units and comprises 2776 patients. Diagnostic distribution (including subcategories of eating disorder not otherwise specified) and clinical characteristics are presented, as well as data on registration compliance. Obstacles and keys to successful implementation of the Stepwise system are discussed, including possible gains and on-going challenges inherent in large-scale, Internet-based quality assurance. PMID:20589767

  17. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored, and there is efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  18. Development and characterization of amphotericin B nanosuspensions for oral administration through a simple top-down method.

    PubMed

    Yang, Zhiwen; Liu, Min; Chen, Jian; Fang, Weijun; Zhang, Yanli; Yuan, Man; Gao, Jie

    2014-01-01

    Amphotericin B (AmB) is regarded as a life-saving drug in treating severe systemic fungal infections. However, its poor solubility and permeability limit its oral administration. The main purpose of this study is to evaluate AmB nanosuspensions for enhancing its solubility for oral application. A magnetic stirring method with very low energy input, as a top-down technology, was first used to prepare the drug nanosuspensions. Sodium deoxycholate and carbomer were screened as the stabilizers through a single-factor experiment. Under the optimum conditions, AmB nanosuspensions were spherically shaped with an average particle size of 348.9±21.2 nm. X-ray diffraction analysis and differential scanning calorimetry confirmed that the initial crystalline state was preserved after particle size reduction. The saturated solubility and dissolution rate of the AmB nanosuspensions showed better dissolution properties than the raw drug. A stability study demonstrated that the AmB nanosuspensions remained stable at 4 °C for 30 days. In conclusion, AmB nanosuspensions potentially improve oral absorption, and the magnetic stirring method can serve as an effective method to produce AmB nanosuspensions.

  19. Analysing factors related to slipping, stumbling, and falling accidents at work: Application of data mining methods to Finnish occupational accidents and diseases statistics database.

    PubMed

    Nenonen, Noora

    2013-03-01

    The utilisation of data mining methods has become common in many fields. In occupational accident analysis, however, these methods are still rarely exploited. This study applies methods of data mining (decision tree and association rules) to the Finnish national occupational accidents and diseases statistics database to analyse factors related to slipping, stumbling, and falling (SSF) accidents at work from 2006 to 2007. SSF accidents at work constitute a large proportion (22%) of all accidents at work in Finland. In addition, they are more likely to result in longer periods of incapacity for work than other workplace accidents. The most important factor influencing whether or not an accident at work is related to SSF is the specific physical activity of movement. In addition, the risk of SSF accidents at work seems to depend on the occupation and the age of the worker. The results were in line with previous research. Hence the application of data mining methods was considered successful. The results did not reveal anything unexpected though. Nevertheless, because of the capability to illustrate a large dataset and relationships between variables easily, data mining methods were seen as a useful supplementary method in analysing occupational accident data.
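
    A minimal sketch of the decision-tree part of such an analysis, using scikit-learn on invented records; the Finnish database's actual variables and values are not reproduced here.

    ```python
    # Hedged sketch: one-hot encode categorical accident attributes and fit a
    # decision tree to predict whether an accident is an SSF accident.
    # All records below are invented for illustration.
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier

    records = pd.DataFrame({
        "physical_activity": ["movement", "handling", "movement", "tool_work"],
        "occupation":        ["nurse", "builder", "cleaner", "machinist"],
        "age_group":         ["55-64", "25-34", "45-54", "35-44"],
        "is_ssf":            [1, 0, 1, 0],
    })
    X = pd.get_dummies(records.drop(columns="is_ssf"))
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, records["is_ssf"])
    print(dict(zip(X.columns, tree.feature_importances_)))
    ```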

  20. Validation of White-Matter Lesion Change Detection Methods on a Novel Publicly Available MRI Image Database.

    PubMed

    Lesjak, Žiga; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2016-10-01

    Changes of white-matter lesions (WMLs) are good predictors of the progression of neurodegenerative diseases like multiple sclerosis (MS). Based on longitudinal magnetic resonance (MR) imaging, the changes can be monitored, and the need for their accurate and reliable quantification has led to the development of several automated MR image analysis methods. However, an objective comparison of the methods is difficult, because publicly unavailable validation datasets with ground truth and different sets of performance metrics were used. In this study, we acquired longitudinal MR datasets of 20 MS patients, in which brain regions were extracted, spatially aligned, and intensity normalized. Two expert raters then delineated and jointly revised the WML changes on subtracted baseline and follow-up MR images to obtain ground-truth WML segmentations. The main contribution of this paper is an objective, quantitative, and systematic evaluation of two unsupervised and one supervised intensity-based change detection methods on the publicly available datasets with ground-truth segmentations, using common pre- and post-processing steps and common evaluation metrics. In addition, different combinations of the two main steps of the studied change detection methods, i.e. dissimilarity map construction and its segmentation, were tested to identify the best performing combination. PMID:27207310
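
    The two main steps named at the end of the abstract (dissimilarity map construction, then its segmentation) can be sketched in a few lines of Python; the synthetic images and the simple absolute-difference map with a mean-plus-3-sigma threshold are illustrative assumptions, not the methods evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        baseline = rng.normal(0.0, 0.05, (64, 64))   # co-registered, normalized baseline
        followup = baseline.copy()
        followup[20:30, 20:30] += 0.5                # simulated WML change

        # Step 1: dissimilarity map (here, simple absolute difference).
        dissimilarity = np.abs(followup - baseline)
        # Step 2: segmentation of the map (here, a global threshold).
        threshold = dissimilarity.mean() + 3 * dissimilarity.std()
        change_mask = dissimilarity > threshold
        print(change_mask.sum(), "pixels flagged as changed")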

  2. Mapping the literature of nursing administration

    PubMed Central

    Galganski, Carol J.

    2006-01-01

    Objectives: As part of Phase I of a project to map the literature of nursing, sponsored by the Nursing and Allied Health Resources Section of the Medical Library Association, this study identifies the core literature cited in nursing administration and the indexing services that provide access to the core journals. The results of this study will assist librarians and end users searching for information related to this nursing discipline, as well as database producers who might consider adding specific titles to their indexing services. Methods: Using the common methodology described in the overview article, five source journals for nursing administration were identified and selected for citation analysis over a three-year period, 1996 to 1998, to identify the most frequently cited titles according to Bradford's Law of Scattering. From this core of most productive journal titles, the bibliographic databases that provide the best access to these titles were identified. Results: Results reveal that nursing administration literature relies most heavily on journal articles and on those titles identified as core nursing administrative titles. When the indexing coverage of nine services is compared, PubMed/MEDLINE and CINAHL provide the most comprehensive coverage of this nursing discipline. Conclusions: No one indexing service adequately covers this nursing discipline. Researchers needing comprehensive coverage in this area must search more than one database to effectively research their projects. While PubMed/MEDLINE and CINAHL provide more coverage for this discipline than the other indexing services, none is sufficiently broad in scope to provide indexing of nursing, health care management, and medical literature in a single file. Nurse administrators using the literature to research current work issues need to review not only the nursing titles covered by CINAHL but should also include the major weekly medical titles, core titles in health care administration, and

  3. Performance evaluation of an automatic segmentation method of cerebral arteries in MRA images by use of a large image database

    NASA Astrophysics Data System (ADS)

    Uchiyama, Yoshikazu; Asano, Tatsunori; Hara, Takeshi; Fujita, Hiroshi; Kinosada, Yasutomi; Asano, Takahiko; Kato, Hiroki; Kanematsu, Masayuki; Hoshi, Hiroaki; Iwama, Toru

    2009-02-01

    The detection of cerebrovascular diseases such as unruptured aneurysm, stenosis, and occlusion is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray-level transformation to calibrate voxel values. To adjust for variations in patient positioning, registration was subsequently employed to maximize the overlap of the vessel regions in the target and reference images. The vessel regions were then segmented from the background using gray-level thresholding and region-growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. Acceptable segmentation of the cerebral arteries was attained in 97.1% (829/854) of the MRA studies. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.
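
    As an illustration of the gray-level thresholding plus region-growing stage described above, here is a self-contained Python sketch on a toy 2-D image; the thresholds, seed, and image are invented, and the real pipeline operates on 3-D MRA volumes with additional calibration and registration steps.

        from collections import deque
        import numpy as np

        def region_grow(img, seed, low, high):
            """Grow a region from `seed`, adding axis-neighbours whose
            values fall inside [low, high]."""
            grown = np.zeros(img.shape, dtype=bool)
            queue = deque([seed])
            while queue:
                idx = queue.popleft()
                if grown[idx] or not (low <= img[idx] <= high):
                    continue
                grown[idx] = True
                for axis in range(img.ndim):
                    for step in (-1, 1):
                        nbr = list(idx)
                        nbr[axis] += step
                        if 0 <= nbr[axis] < img.shape[axis]:
                            queue.append(tuple(nbr))
            return grown

        img = np.zeros((32, 32)); img[10:20, 10:20] = 200  # bright "vessel" blob
        mask = region_grow(img, seed=(15, 15), low=150, high=255)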

  4. Continuous and passive environmental radon monitoring: Measuring methods and health effects. (Latest citations from the INSPEC database). Published Search

    SciTech Connect

    1996-05-01

    The bibliography contains citations concerning continuous and passive radon (Rn) monitoring, measurement methods and equipment, and health effects from Rn concentration in air, water, and soils. Citations discuss the design, development, and evaluation of monitoring and detection devices, including alpha spectroscopy and dosimetry, track detecting and scintillation, thermoluminescent, electret, and electrode collection. Sources of Rn concentration levels found in building materials, ventilation systems, soils, and ground water are examined. Lung cancer-associated risks from Rn radiation exposure are explored. Radon monitoring in mining operations is excluded. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  5. A LC-MS-MS method for determination of low doxazosin concentrations in plasma after oral administration to dogs.

    PubMed

    Erceg, Marijana; Cindric, Mario; Pozaic Frketic, Lidija; Vertzoni, Maria; Cetina-Cizmek, Biserka; Reppas, Christos

    2010-02-01

    A rapid and sensitive reversed-phase liquid chromatography-tandem mass spectrometry (LC-MS-MS) method was developed for the determination of doxazosin in canine plasma. The samples are prepared by precipitation of proteins using a mixture of methanol and acetonitrile, followed by freezing and evaporation of the organic solvent. The remaining dry residue is redissolved in mobile phase and analyzed by LC-MS-MS with positive electrospray ionization using the selected reaction monitoring mode. An XTerra MS C(18) column, a mobile phase composed of acetonitrile and 2 mM ammonium acetate with gradient elution, and a flow rate of 400 microL/min are employed. The elution times for prazosin (internal standard) and doxazosin are approximately 8 and 10 min, respectively. Calibration curves are linear in the 1-20 ng/mL concentration range. The limits of detection and quantification are 0.4 ng/mL and 1.2 ng/mL, respectively. Recovery is higher than 94%. Intra- and inter-day relative standard deviations are below 7% and 8%, respectively. The method was applied to the determination of doxazosin plasma levels following a single administration of doxazosin base and doxazosin mesylate tablets (2 mg dose) to dogs in the fed state. The results indicate possible superiority of the mesylate salt in terms of the plasma input rate of doxazosin.
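
    The linearity claim for the 1-20 ng/mL range is the kind of check that is easy to reproduce; the short Python sketch below fits a calibration line to made-up peak-area ratios (the numbers are illustrative, not the study's data).

        import numpy as np

        conc  = np.array([1, 2, 5, 10, 15, 20], dtype=float)   # ng/mL
        ratio = np.array([0.11, 0.21, 0.52, 1.03, 1.55, 2.04]) # analyte/IS peak-area ratio

        slope, intercept = np.polyfit(conc, ratio, 1)
        pred = slope * conc + intercept
        r2 = 1 - ((ratio - pred) ** 2).sum() / ((ratio - ratio.mean()) ** 2).sum()
        print(f"slope={slope:.4f}, intercept={intercept:.4f}, r^2={r2:.4f}")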

  6. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Background: Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Method: Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g., name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results: Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited, since information about the databases was not available from public sources. Conclusion: Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed. PMID:26560127

  7. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  8. Tri-party agreement databases, access mechanism and procedures. Revision 2

    SciTech Connect

    Brulotte, P.J.

    1996-01-01

    This document contains the information required for the Washington State Department of Ecology (Ecology) and the U.S. Environmental Protection Agency (EPA) to access databases related to the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement). It identifies the procedure required to obtain access to the Hanford Site computer networks and the Tri-Party Agreement related databases. It addresses security requirements, access methods, database availability dates, database access procedures, and the minimum computer hardware and software configurations required to operate within the Hanford Site networks. This document supersedes any previous agreements, including the Administrative Agreement to Provide Computer Access to U.S. Environmental Protection Agency (EPA) and the Administrative Agreement to Provide Computer Access to Washington State Department of Ecology (Ecology), agreements that were signed by the U.S. Department of Energy (DOE), Richland Operations Office (RL) in June 1990. Access approval to EPA and Ecology is extended by RL to include all Tri-Party Agreement relevant databases named in this document via the documented access method and date. Access to databases and systems not listed in this document will be granted as determined necessary and negotiated among Ecology, EPA, and RL through the Tri-Party Agreement Project Managers. The Tri-Party Agreement Project Managers are the primary points of contact for all activities to be carried out under the Tri-Party Agreement Action Plan. Access to the Tri-Party Agreement related databases and systems does not provide or imply any ownership, whether public or private, of either the database or the system on behalf of Ecology or EPA. Access to identified systems and databases does not include access to network/system administrative control information, network maps, etc.

  9. 78 FR 8684 - Fifteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint with EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... Federal Aviation Administration Fifteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  10. 78 FR 25134 - Sixteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... Federal Aviation Administration Sixteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  11. 78 FR 51809 - Seventeenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-21

    ... Federal Aviation Administration Seventeenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  12. School Administrator and Parent Perceptions of School, Family, and Community Partnerships in Middle School: A Mixed Methods Study

    ERIC Educational Resources Information Center

    LeBlanc, Jackie

    2011-01-01

    The purpose of this research was to identify, analyze, and compare the perceptions of parents and school administrators in regard to school-family partnerships in three middle schools in the State of Louisiana. The study investigated the similarities and dissimilarities between parent and school administrator perceptions, probed to determine…

  13. Method for the Compound Annotation of Conjugates in Nontargeted Metabolomics Using Accurate Mass Spectrometry, Multistage Product Ion Spectra and Compound Database Searching.

    PubMed

    Ogura, Tairo; Bamba, Takeshi; Tai, Akihiro; Fukusaki, Eiichiro

    2015-01-01

    Owing to biotransformation, xenobiotics are often found in conjugated form in biological samples such as urine and plasma. Liquid chromatography coupled with accurate mass spectrometry and multistage collision-induced dissociation provides spectral information on these metabolites in complex materials. Unfortunately, compound databases typically do not contain a sufficient number of records for such conjugates. We report here on the development of a novel protocol, referred to as ChemProphet, to annotate compounds, including conjugates, using compound databases such as PubChem and ChemSpider. The annotation of conjugates involves three steps: (1) recognition of the type and number of conjugates in the sample; (2) compound search and annotation of the deconjugated form; and (3) in silico evaluation of the candidate conjugate. ChemProphet assigns a spectrum to each candidate by automatically exploring the substructures corresponding to the observed product ion spectrum. When finished, it ranks each candidate according to a calculated score reflecting its relative likelihood. We assessed our protocol by annotating a benchmark dataset comprising product ion spectra for 102 compounds, annotating a commercially available standard of quercetin 3-glucuronide, and conducting a model experiment using urine from mice that had been administered a green tea extract. The results show that the ChemProphet approach makes it possible to annotate not only deconjugated molecules but also conjugated molecules, using an automatic interpretation method based on deconjugation involving multistage collision-induced dissociation and in silico calculated conjugation.
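
    The deconjugation idea behind steps 1 and 2 can be sketched numerically in Python: subtract the exact mass added by a known conjugate group and look the remainder up in a compound database. The mass shifts below are standard monoisotopic values, but the one-entry lookup table is a stand-in for a real PubChem/ChemSpider query, and ChemProphet's spectrum-based scoring step is omitted.

        # Monoisotopic mass added by common conjugate groups (Da).
        CONJUGATE_SHIFTS = {"glucuronide": 176.0321, "sulfate": 79.9568}
        # Stand-in for a compound database: neutral monoisotopic mass -> name.
        LOCAL_DB = {302.0427: "quercetin"}

        def annotate(neutral_mass, tol=0.005):
            hits = []
            for conj, shift in CONJUGATE_SHIFTS.items():
                core = neutral_mass - shift          # candidate deconjugated mass
                for db_mass, name in LOCAL_DB.items():
                    if abs(core - db_mass) <= tol:
                        hits.append(f"{name} {conj}")
            return hits

        print(annotate(478.0748))   # ~ neutral mass of quercetin 3-glucuronide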

  14. Overlap in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Hood, William W.; Wilson, Concepcion S.

    2003-01-01

    Examines the topic of Fuzzy Set Theory to determine the overlap of coverage in bibliographic databases. Highlights include examples of comparisons of database coverage; frequency distribution of the degree of overlap; records with maximum overlap; records unique to one database; intra-database duplicates; and overlap in the top ten databases.…

  15. Estimation of National Colorectal-Cancer Incidence Using Claims Databases

    PubMed Central

    Quantin, C.; Benzenine, E.; Hägi, M.; Auverlot, B.; Abrahamowicz, M.; Cottenet, J.; Fournier, E.; Binquet, C.; Compain, D.; Monnet, E.; Bouvier, A. M.; Danzon, A.

    2012-01-01

    Background. The aim of the study was to assess the accuracy of the colorectal-cancer incidence estimated from administrative data. Methods. We selected potential incident colorectal-cancer cases in 2004-2005 French administrative data, using two alternative algorithms. The first was based only on diagnostic and procedure codes, whereas the second considered the past history of the patient. Results of both methods were assessed against two corresponding local cancer registries, acting as “gold standards.” We then constructed a multivariable regression model to estimate the corrected total number of incident colorectal-cancer cases from the whole national administrative database. Results. The first algorithm provided an estimated local incidence very close to that given by the regional registries (646 versus 645 incident cases) and had good sensitivity and positive predictive values (about 75% for both). The second algorithm overestimated the incidence by about 50% and had a poor positive predictive value of about 60%. The estimation of national incidence obtained by the first algorithm differed from that observed in 14 registries by only 2.34%. Conclusion. This study shows the usefulness of administrative databases for countries with no national cancer registry and suggests a method for correcting the estimates provided by these data. PMID:22792103
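
    In spirit, the first algorithm is a code-based selection over claims rows; the Python sketch below flags patients carrying both a colorectal-cancer diagnostic code and a relevant procedure code. The ICD-10 range C18-C20 does denote colorectal cancer, but the procedure codes and data here are invented placeholders, not the codes used in the French study.

        import pandas as pd

        claims = pd.DataFrame({
            "patient_id": [1, 1, 2, 3],
            "dx_code":    ["C18", "C18", "I10", "C20"],
            "proc_code":  ["COLECTOMY", "CHEMO", "ECG", "COLECTOMY"],
        })
        crc_dx   = claims["dx_code"].isin(["C18", "C19", "C20"])
        crc_proc = claims["proc_code"].isin(["COLECTOMY", "CHEMO"])
        incident_ids = claims.loc[crc_dx & crc_proc, "patient_id"].unique()
        print(incident_ids)   # patients selected as potential incident cases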

  16. Applicability of large databases in outcomes research.

    PubMed

    Malay, Sunitha; Shauver, Melissa J; Chung, Kevin C

    2012-07-01

    Outcomes research serves as a mechanism to assess the quality of care, cost effectiveness of treatment, and other aspects of health care. The use of administrative databases in outcomes research is increasing in all medical specialties, including hand surgery. However, the real value of databases can be maximized with a thorough understanding of their contents, advantages, and limitations. We performed a literature review pertaining to databases in medical, surgical, and epidemiologic research, with special emphasis on orthopedic and hand surgery. This article provides an overview of the available database resources for outcomes research, their potential value to hand surgeons, and suggestions to improve their effective use. PMID:22522104

  18. Designing and Using Databases for School Improvement.

    ERIC Educational Resources Information Center

    Bernhardt, Victoria L.

    This guide is designed for school and district administrators and teachers who want to use data to improve their schools, for college and university instructors who teach school administrators, and for support personnel who teach graduate-level education courses. The guide explains how to define the scope of a needed database, how to ready the…

  19. Asbestos Exposure Assessment Database

    NASA Technical Reports Server (NTRS)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data include sample types, sample durations, crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and the names of the industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data have been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  20. Investigating potential for effects of environmental endocrine disrupters on wild populations of amphibians in UK and Japan: status of historical databases and review of methods.

    PubMed

    Pickford, Daniel B; Larroze, Severine; Takase, Minoru; Mitsui, Naoko; Tooi, Osamu; Santo, Noriaki

    2007-01-01

    Concern over global declines among amphibians has resulted in increased interest in the effects of environmental contaminants on amphibian populations, and more recently, this has stimulated research on the potential adverse effects of environmental endocrine disrupters in amphibians. Laboratory studies of the effects of single chemicals on endocrine-relevant endpoints in amphibian, mainly anuran, models are valuable in characterizing sensitivity at the individual level and may yield useful bioassays for screening chemicals for endocrine toxicity (for example, thyroid-disrupting activity). Nevertheless, in the UK and Japan, as in many other countries, it has yet to be demonstrated unequivocally that the exposure of native amphibians to endocrine-disrupting environmental contaminants results in adverse effects at the population level. Assessing the potential for such effects is likely to require an ecoepidemiological approach to investigate associations between predicted or actual exposure of amphibians to (endocrine-disrupting) environmental contaminants and biologically meaningful responses at the population level. In turn, this demands recent but relatively long-term population trend data. We review two potential sources of such data for widespread UK anurans that could be used in such investigations: records for common frogs and common toads in several databases maintained by the Biological Records Centre (UK Government Centre for Ecology and Hydrology), and adult toad count data from 'Toads on Roads' schemes registered with the UK wildlife charity 'Froglife'. There were few abundance data in the BRC databases that could be used for this purpose, while count data from the Toads on Roads schemes are potentially confounded by the effects of local topology on detection probabilities and the operation of nonchemical anthropogenic stressors. For Japan, local and regional surveys of amphibians and national ecological censuses gathering amphibian data were reviewed to

  1. Database Research for Pediatric Infectious Diseases.

    PubMed

    Kronman, Matthew P; Gerber, Jeffrey S; Newland, Jason G; Hersh, Adam L

    2015-06-01

    Multiple electronic and administrative databases are available for the study of pediatric infectious diseases. In this review, we identify research questions well suited to investigations using these databases and highlight their advantages, including their relatively low cost, efficiency, and ability to detect rare outcomes. We discuss important limitations, including those inherent in observational study designs and the potential for misclassification of exposures and outcomes, and identify strategies for addressing these limitations. We provide examples of commonly used databases and discuss methodologic considerations in undertaking studies using large databases. Last, we propose a checklist for use in planning or evaluating studies of pediatric infectious diseases that employ electronic databases, and we outline additional practical considerations regarding the cost of and how to access commonly used databases. PMID:26407414

  2. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  3. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  4. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  5. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of...

  6. Software Application for Supporting the Education of Database Systems

    ERIC Educational Resources Information Center

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in an Oracle Database Management System environment. The application has two parts: one is the database schema and its content, and the other is a C# application. The schema is used to administer and store the tasks and the…

  7. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of...

  8. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of...

  9. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  10. Fast decision tree-based method to index large DNA-protein sequence databases using hybrid distributed-shared memory programming model.

    PubMed

    Jaber, Khalid Mohammad; Abdullah, Rosni; Rashid, Nur'Aini Abdul

    2014-01-01

    In recent times, the size of biological databases has increased significantly, with continuous growth in the number of users and the rate of queries, such that some databases have reached the terabyte size. There is, therefore, an increasing need to access databases at the fastest rates possible. In this paper, the decision tree indexing model was parallelised (PDTIM), using a hybrid of distributed and shared memory on a resident database, with horizontal and vertical growth through the Message Passing Interface (MPI) and POSIX Threads (PThreads), to accelerate the index building time. The PDTIM was implemented using 1, 2, 4 and 5 processors on 1, 2, 3 and 4 threads, respectively. The results show that the hybrid technique improved the speedup compared with a sequential version. It can be concluded from the results that the proposed PDTIM is appropriate for large data sets in terms of index building time.
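
    The overall pattern (partition the sequence database, build per-partition indexes in parallel, merge) can be sketched with Python's multiprocessing as a simplified stand-in for the paper's MPI/PThread hybrid; the k-mer index below is illustrative and is not the paper's decision-tree structure.

        from multiprocessing import Pool

        K = 3  # k-mer length

        def index_partition(seqs):
            idx = {}
            for sid, seq in seqs:
                for i in range(len(seq) - K + 1):
                    idx.setdefault(seq[i:i + K], set()).add(sid)
            return idx

        if __name__ == "__main__":
            db = [(0, "ACGTACGT"), (1, "TTACGTTA"), (2, "GGGACGTA"), (3, "ACGTTTTT")]
            chunks = [db[0::2], db[1::2]]            # horizontal partitioning
            with Pool(2) as pool:
                partial = pool.map(index_partition, chunks)
            merged = {}
            for p in partial:                        # merge per-partition indexes
                for kmer, ids in p.items():
                    merged.setdefault(kmer, set()).update(ids)
            print(sorted(merged["ACG"]))             # sequence ids containing "ACG"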

  11. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single-particle analysis, 2-D crystallography, helical reconstruction, and tomography are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy-to-use object-oriented database with a highly flexible infrastructure originally targeted at transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and it does not require professional database administration. It includes a full-featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over half a million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  12. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application, and other information. It provides corresponding information on older refrigerants to assist manufacturers and users of alternative refrigerants in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in the research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing the compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents; they are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  13. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This presented paper offers an elementary description of database characteristics and then provides a survey of databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  14. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  15. MCS Systems Administration Toolkit

    2001-09-30

    This package contains a number of systems administration utilities to assist a team of system administrators in managing a computer environment by automating routine tasks and centralizing information. Included are utilities to help install software on a network of computers and programs to make an image of a disk drive, to manage and distribute configuration files for a number of systems, and to run self-tests on systems, as well as an example of using a database to manage host information and various utilities.

  16. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool not only to aid in studying the mitochondrion but also in studying the associated diseases.

  17. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, papers, etc. (any text information that can be computer stored). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities, it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships, and associations. The database tomography process would also be a powerful component in the areas of competitive intelligence, national security intelligence, and patent analysis. User interests and involvement cannot be overemphasized.
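
    The first two stages named above (word frequency, then word proximity) are straightforward to sketch in Python; the window size and toy text below are arbitrary choices for illustration, not parameters from the published algorithm.

        from collections import Counter

        text = ("database tomography extracts themes from text databases "
                "word frequency and word proximity reveal themes").split()

        freq = Counter(text)                      # word-frequency analysis
        window = 3                                # proximity window (arbitrary)
        cooc = Counter()
        for i, w in enumerate(text):
            for other in text[i + 1:i + window]:  # words near w
                cooc[tuple(sorted((w, other)))] += 1

        print(freq.most_common(3))
        print(cooc.most_common(3))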

  18. [Changes in features of diabetes care in Hungary in the period of years 2001-2014. Aims and methods of the database analysis of the National Health Insurance Fund].

    PubMed

    Jermendy, György; Kempler, Péter; Abonyi-Tóth, Zsolt; Rokszin, György; Wittmann, István

    2016-08-01

    In the last couple of years, database analyses have become increasingly popular in clinical-epidemiological investigations. In Hungary, the National Health Insurance Fund serves as the central database of all medical attendances in state departments and purchases of drug prescriptions in pharmacies. Data from in- and outpatient departments, as well as those from pharmacies, are regularly collected in this database, which is public and accessible on request. The aim of this retrospective study was to investigate the database of the National Health Insurance Fund in order to analyze diabetes-associated morbidity and mortality in the period 2001-2014. Moreover, data on therapeutic costs, features of hospitalizations, and the practice of antidiabetic treatment were examined. The authors now report on the method of the database analysis. It is to be hoped that the upcoming results of this investigation will add new data to recent knowledge about diabetes care in Hungary. Orv. Hetil., 2016, 157(32), 1259-1265. PMID:27499284

  19. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  20. Data mining in forensic image databases

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien

    2002-07-01

    Forensic image databases appear in a wide variety. The oldest computer database is that of fingerprints. Other examples are databases of shoeprints, handwriting, cartridge cases, toolmarks, drug tablets, and faces. In these databases, searches are conducted on shape, color, and other forensic features. A wide variety of methods exists for searching the images in these databases. The result is a list of candidates that must be compared manually. The challenge in forensic science is to combine the information acquired: the combination of the shape of a partial shoe print with information on a cartridge case can result in stronger evidence. It is expected that by searching the combination of these databases with other databases (e.g., network traffic information), more crimes will be solved. Searching in image databases is still difficult, as can be seen with databases of faces. Due to lighting conditions and alteration of the face by aging, it is nearly impossible for an image-searching method to place the right face in top position from a database of one million faces without using other information. The methods for data mining images in databases (e.g., the MPEG-7 framework) are discussed, and expectations for future developments are presented in this study.

  1. Alternative method of oral administration by peanut butter pellet formulation results in target engagement of BACE1 and attenuation of gavage-induced stress responses in mice.

    PubMed

    Gonzales, C; Zaleska, M M; Riddell, D R; Atchison, K P; Robshaw, A; Zhou, H; Sukoff Rizzo, S J

    2014-11-01

    Development of novel therapeutic agents aimed at treating neurodegenerative disorders such as Alzheimer's and Parkinson's diseases requires chronic, and preferably oral, dosing in appropriate preclinical rodent models. Since many of these disease models involve transgenic mice that are frequently aged and fragile, the commonly used oro-gastric gavage method of drug administration often confounds measured outcomes due to repeated stress and high attrition rates caused by esophageal complications. We employed a novel drug formulation in a peanut butter (PB) pellet readily consumed by mice and compared the stress response, as measured by plasma corticosterone levels, relative to oral administration via traditional gavage. Acute gavage produced significant elevations in plasma corticosterone comparable to those observed in mice subjected to stress-induced hyperthermia. In contrast, corticosterone levels following consumption of PB pellets were similar to levels in naive mice and significantly lower than in mice subjected to traditional gavage. Following sub-chronic administration, corticosterone levels remained significantly higher in mice subjected to gavage, relative to mice administered PB pellets or naive controls. Furthermore, chronic 30-day dosing of a BACE inhibitor administered via PB pellets to PSAPP mice resulted in the expected plasma drug exposure and Aβ40 lowering consistent with drug treatment, demonstrating target engagement. Taken together, this alternative method of oral administration by drug formulated in PB pellets results in the expected pharmacokinetics and pharmacodynamics with attenuated stress levels, and is devoid of the detrimental effects of repetitive oral gavage. PMID:25242810

  2. A Curated Database of Rodent Uterotrophic Bioactivity

    PubMed Central

    Kleinstreuer, Nicole C.; Ceger, Patricia C.; Allen, David G.; Strickland, Judy; Chang, Xiaoqing; Hamm, Jonathan T.; Casey, Warren M.

    2015-01-01

    Background: Novel in vitro methods are being developed to identify chemicals that may interfere with estrogen receptor (ER) signaling, but the results are difficult to put into biological context because of reliance on reference chemicals established using results from other in vitro assays and because of the lack of high-quality in vivo reference data. The Organisation for Economic Co-operation and Development (OECD)-validated rodent uterotrophic bioassay is considered the “gold standard” for identifying potential ER agonists. Objectives: We performed a comprehensive literature review to identify and evaluate data from uterotrophic studies and to analyze study variability. Methods: We reviewed 670 articles with results from 2,615 uterotrophic bioassays using 235 unique chemicals. Study descriptors, such as species/strain, route of administration, dosing regimen, lowest effect level, and test outcome, were captured in a database of uterotrophic results. Studies were assessed for adherence to six criteria that were based on uterotrophic regulatory test guidelines. Studies meeting all six criteria (458 bioassays on 118 unique chemicals) were considered guideline-like (GL) and were subsequently analyzed. Results: The immature rat model was used for 76% of the GL studies. Active outcomes were more prevalent across rat models (74% active) than across mouse models (36% active). Of the 70 chemicals with at least two GL studies, 18 (26%) had discordant outcomes and were classified as both active and inactive. Many discordant results were attributable to differences in study design (e.g., injection vs. oral dosing). Conclusions: This uterotrophic database provides a valuable resource for understanding in vivo outcome variability and for evaluating the performance of in vitro assays that measure estrogenic activity. Citation: Kleinstreuer NC, Ceger PC, Allen DG, Strickland J, Chang X, Hamm JT, Casey WM. 2016. A curated database of rodent uterotrophic bioactivity. Environ

  3. OSSI-PET: Open-Access Database of Simulated [(11)C]Raclopride Scans for the Inveon Preclinical PET Scanner: Application to the Optimization of Reconstruction Methods for Dynamic Studies.

    PubMed

    Garcia, Marie-Paule; Charil, Arnaud; Callaghan, Paul; Wimberley, Catriona; Busso, Florian; Gregoire, Marie-Claude; Bardies, Manuel; Reilhac, Anthonin

    2016-07-01

    A wide range of medical imaging applications benefits from the availability of realistic ground-truth data. In the case of positron emission tomography (PET), ground-truth data are crucial for validating processing algorithms and assessing their performance. The design of such ground-truth data often relies on Monte-Carlo simulation techniques. Since the creation of a large dataset is not trivial in terms of both computing time and realism, we propose the OSSI-PET database containing 350 simulated [(11)C]Raclopride dynamic scans for rats, created specifically for the Inveon pre-clinical PET scanner. The originality of this database lies in the availability of several groups of scans with controlled biological variations in the striata. Besides, each group consists of a large number of realizations (i.e., noise replicates). We present the construction methodology of this database using rat pharmacokinetic and anatomical models. A first application using the OSSI-PET database is presented. Several commonly used reconstruction techniques were compared in terms of image quality, accuracy, and variability of the activity estimates and of the computed kinetic parameters. The results showed that the OP-OSEM3D iterative reconstruction method outperformed the other tested methods. Analytical methods such as FBP2D and 3DRP also produced satisfactory results; however, FORE followed by OSEM2D reconstruction should be avoided. Beyond illustrating the potential of the database, this application will help scientists understand the different sources of noise and bias that can occur at the different steps in the processing and will be very useful for choosing appropriate reconstruction methods and parameters. PMID:26863655

  5. Administrative Synergy

    ERIC Educational Resources Information Center

    Hewitt, Kimberly Kappler; Weckstein, Daniel K.

    2012-01-01

    One of the biggest obstacles to overcome in creating and sustaining an administrative professional learning community (PLC) is time. Administrators are constantly deluged by the tyranny of the urgent. It is a Herculean task to carve out time for PLCs, but it is imperative to do so. In this article, the authors describe how an administrative PLC…

  6. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... section, the term “Administrator” shall refer to each of the TRS Numbering administrator, the administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... entity that is impartial and not an affiliate of any Internet-based TRS provider. (2) Neither...

  7. Use of administrative data in healthcare research.

    PubMed

    Mazzali, Cristina; Duca, Piergiorgio

    2015-06-01

    Health research based on administrative data and the availability of regional or national administrative databases has been increasing in recent years. We will discuss the general characteristics of administrative data and specific aspects of their use for health research purposes, indicating their advantages and disadvantages. Some fields of application will be discussed and described through examples.

  8. The space transportation resources (STR) database description

    NASA Astrophysics Data System (ADS)

    Sandubrae, Jeffrey A.; Roberts, Heather A.; Lee Varnado, C.

    1999-01-01

    This paper describes the on-going development of the Space Transportation Resources (STR) database. In January 1998, the Advanced Space Transportation Program Office of the National Aeronautics and Space Administration (NASA) Marshall Space Flight Center (MSFC) tasked Science Applications International Corporation (SAIC) to develop an interactive database of space transportation resources. A summary of important milestones and the requirements associated with the database development is provided. Descriptions of the types of data contained in the database, and synopses of procedures for viewing and searching data, are presented. SAIC has a continuing responsibility to update data to ensure currency and to modify interactive routines to reflect customer direction and appropriate user comments. The Space Transportation Resources database is available in two forms: on CD-ROM and on the worldwide web at http://str.saic.com.

  9. Evaluation of a revised U.S. Food and Drug Administration method for the detection of Cronobacter in powdered infant formula: a collaborative study.

    PubMed

    Chen, Yi; Noe, Kathy E; Thompson, Sandra; Elems, Carol A; Brown, Eric W; Lampel, Keith A; Hammack, Thomas S

    2012-06-01

    A revised U.S. Food and Drug Administration (FDA) method for the isolation and detection of Cronobacter from powdered infant formula was recently developed, which combines real-time PCR, chromogenic agars, and RAPID ID 32E biochemical tests. This method provides an expedient analysis within 24 to 48 h. A collaborative validation study involving four different laboratories was conducted to compare the revised FDA method with the reference FDA method using casein- and soy-based powdered infant formula inoculated with different Cronobacter strains. Valid results from 216 test portions and controls from the collaborating laboratories were obtained and showed that the revised FDA method performed significantly better than the reference FDA method. Newly revised PCR protocols and VITEK 2 were also evaluated for integration into the complete detection procedure.

  10. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  11. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high-quality models needed to estimate population...

  12. Household Products Database: Pesticides

    MedlinePlus

    Information is extracted from the Consumer Product Information Database ©2001-2015 by DeLima Associates. All rights reserved.

  13. Peroral administration of 5-bromo-2-deoxyuridine in drinking water is not a reliable method for labeling proliferating S-phase cells in rats.

    PubMed

    Ševc, Juraj; Matiašová, Anna; Smoleková, Ivana; Jendželovský, Rastislav; Mikeš, Jaromír; Tomášová, Lenka; Kútna, Viera; Daxnerová, Zuzana; Fedoročko, Peter

    2015-01-01

    In rodents, peroral (p.o.) administration of 5-bromo-2'-deoxyuridine (BrdU) dissolved in drinking water is a widely used method for labeling newly formed cells over a prolonged time period. Despite the broad applicability of this method, the pharmacokinetics of BrdU in rats or mice after p.o. administration remains unknown. Moreover, the p.o. route of administration may be limited by the relatively low amount of BrdU consumed over 24 h and the characteristic drinking pattern of rats, with water intake observed predominantly during the dark phase. Therefore, we investigated the reliability of staining proliferating S-phase cells with BrdU after p.o. administration (1 mg/ml) to rats under both in vitro and in vivo conditions. Flow cytometric analysis of tumor cells co-cultivated with sera from experimental animals exposed to BrdU dissolved in drinking water or 25% orange juice revealed that the concentration of BrdU in the blood sera of rats throughout the day was below the detection limits of our assay. Ingested BrdU was only sufficient to label approximately 4.2±0.3% (water) or 4.2±0.3% (25% juice) of all S-phase cells. Analysis of data from the in vivo conditions indicates that only 7.6±3.3% or 15.5±2.3% of all S-phase cells in the dentate gyrus of the hippocampus were labeled in animals administered drinking water containing BrdU during the light and dark phases of the day, respectively. In addition, the intensity of BrdU-positive nuclei in animals receiving p.o. administration of BrdU was significantly lower than in control animals intraperitoneally injected with BrdU. Our data indicate that the conventional approach of p.o. administration of BrdU in drinking water to rats provides highly inaccurate information about the number of proliferating cells in target tissues. Therefore, other administration routes, such as osmotic mini-pumps, should be considered for labeling proliferating cells over a prolonged time period.

  14. Aviation Safety Issues Database

    NASA Technical Reports Server (NTRS)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders, such as the Commercial Aviation Safety Team (CAST), have already used the database. This broader interest was the genesis for making the database publicly accessible and writing this report.

  15. Scopus database: a review

    PubMed Central

    Burnham, Judy F

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs. PMID:16522216

  16. Scopus database: a review.

    PubMed

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs.

  17. Mission and Assets Database

    NASA Technical Reports Server (NTRS)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  18. Leadership Styles of Nursing Home Administrators and Their Association with Staff Turnover

    ERIC Educational Resources Information Center

    Donoghue, Christopher; Castle, Nicholas G.

    2009-01-01

    Purpose: The purpose of this study was to examine the associations between nursing home administrator (NHA) leadership style and staff turnover. Design and Methods: We analyzed primary data from a survey of 2,900 NHAs conducted in 2005. The Online Survey Certification and Reporting database and the Area Resource File were utilized to extract…

  19. EMU Lessons Learned Database

    NASA Technical Reports Server (NTRS)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of a failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. FIARs related to the EMU have been completed in the Excel version, and focus has now shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  20. Ketoconazole ion-exchange fiber complex: a novel method to reduce the individual difference of bioavailability in oral administration caused by gastric anacidity.

    PubMed

    Xin, Che; Li-hong, Wang; Jing, Yuan; Yang, Yang; Yue, Yuan; Qi-fang, Wang; San-ming, Li

    2013-01-01

    Water-insoluble, faintly alkaline drugs often present potential absorption problems in the gastrointestinal tract after oral administration in patients with gastric anacidity. The purpose of the present study was to develop a novel method to improve the absorption of water-insoluble, faintly alkaline drugs after peroral administration. The method is based on ion exchange with ion-exchange fibers. The water-insoluble, faintly alkaline drug ketoconazole was used as a model drug. Ketoconazole and the active groups of the ion-exchange fibers combined into ion pairs through an acid-base reaction. This drug carrier did not release drug in deionized water, but in aqueous solutions containing other ions it released the drug by ion exchange. As confirmed by X-ray diffraction and differential scanning calorimetry (DSC), the ketoconazole bound to the ion-exchange fibers was in a highly dispersed, molecular-level state. The improved dissolution of the ketoconazole ion-exchange fiber complexes likely originates from this highly dispersed state. Furthermore, owing to this highly dispersed state, the ketoconazole ion-exchange fiber complexes significantly decreased the individual difference in absorption after oral administration of ketoconazole caused by fluctuation of the acidity of the gastric fluid.

  1. Database and knowledge base integration in decision support systems.

    PubMed Central

    Johansson, B.; Shahsavar, N.; Ahlfeldt, H.; Wigertz, O.

    1996-01-01

    Since decision support systems (DSS) in medicine are often linked to clinical databases, it is important to find methods that facilitate DSS developers' work of implementing database queries in the knowledge base (KB). This paper presents a method for linking clinical databases to a KB with Arden Syntax modules. The method is based on a query meta database that includes templates for SQL queries. During knowledge module authoring, the medical expert refers only to a code in the query meta database. Our method uses standard tools, so it can be implemented on different platforms and linked to different clinical databases. PMID:8947666
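
    To illustrate the idea, the sketch below implements a toy query meta database in Python: a knowledge module refers only to a short code, which the DSS resolves to an SQL template at run time. The codes, templates, and table names are illustrative assumptions, not the schema used by the authors.

        # Minimal sketch of a query meta database mapping knowledge-module
        # codes to SQL templates (all names below are illustrative).
        QUERY_TEMPLATES = {
            "LAST_SERUM_K": ("SELECT value FROM lab_results "
                             "WHERE patient_id = :pid AND analyte = 'K' "
                             "ORDER BY taken_at DESC LIMIT 1"),
            "LAST_CREATININE": ("SELECT value FROM lab_results "
                                "WHERE patient_id = :pid AND analyte = 'CREA' "
                                "ORDER BY taken_at DESC LIMIT 1"),
        }

        def resolve_query(code):
            """Return the SQL template referenced by a knowledge module code."""
            try:
                return QUERY_TEMPLATES[code]
            except KeyError:
                raise KeyError(f"no SQL template registered for code {code!r}")

        # The module author writes only the code; the DSS binds :pid and runs it.
        print(resolve_query("LAST_SERUM_K"))

    Keeping the SQL out of the knowledge modules is the point of the design: the same module can run against different clinical databases by swapping the template table.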

  2. ORNL RAIL & BARGE DB. Network Database

    SciTech Connect

    Johnson, P.

    1991-07-01

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Railroad Administration (FRA) rail database. The database consists of 96 subnetworks. Each subnetwork represents an individual railroad, a waterway system, or a composite group of small railroads. Two subnetworks represent waterways: one barge/intercoastal, the other coastal merchant marine with access through the Great Lakes/Saint Lawrence Seaway, Atlantic and Gulf Coasts, the Panama Canal, and Pacific Coast. Two other subnetworks represent small shortline railroads and terminal railroad operations. One subnetwork is maintained for the representation of Amtrak operations. The remaining 91 subnetworks represent individual or corporate groups of railroads. Coordinate locations are included as part of the database. The rail portion of the database is similar to the original FRA rail network. The waterway coordinates are greatly enhanced in the current release. Inland waterway representation was extracted from the 1:2,000,000 United States Geological Survey data. An important aspect of the database is the transfer file, which identifies where two railroads interline traffic between their systems. Also included are locations where rail/waterway intermodal transfers could occur. Other files in the database include a translation table between Association of American Railroads (AAR) codes and the 96 subnetworks in the database, a list of names of the 96 subnetworks, and a file of names for a large proportion of the nodes in the network.

  3. ORNL RAIL & BARGE DB. Network Database

    SciTech Connect

    Johnson, P.

    1992-03-16

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Railroad Administration (FRA) rail database. The database consists of 96 subnetworks. Each subnetwork represents an individual railroad, a waterway system, or a composite group of small railroads. Two subnetworks represent waterways: one barge/intercoastal, the other coastal merchant marine with access through the Great Lakes/Saint Lawrence Seaway, Atlantic and Gulf Coasts, the Panama Canal, and Pacific Coast. Two other subnetworks represent small shortline railroads and terminal railroad operations. One subnetwork is maintained for the representation of Amtrak operations. The remaining 91 subnetworks represent individual or corporate groups of railroads. Coordinate locations are included as part of the database. The rail portion of the database is similar to the original FRA rail network. The waterway coordinates are greatly enhanced in the current release. Inland waterway representation was extracted from the 1:2,000,000 United States Geological Survey data. An important aspect of the database is the transfer file, which identifies where two railroads interline traffic between their systems. Also included are locations where rail/waterway intermodal transfers could occur. Other files in the database include a translation table between Association of American Railroads (AAR) codes and the 96 subnetworks in the database, a list of names of the 96 subnetworks, and a file of names for a large proportion of the nodes in the network.

  4. The NCBI Taxonomy database.

    PubMed

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI; it provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system, and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.

  5. Cloud point extraction-HPLC method for the determination and pharmacokinetic study of aristolochic acids in rat plasma after oral administration of Aristolochiae Fructus.

    PubMed

    Ren, Gang; Huang, Qun; Wu, Jiangang; Yuan, Jinbin; Yang, Gaihong; Yan, Zhihong; Yao, Shouzhuo

    2014-03-15

    Based on cloud-point extraction (CPE), a high performance liquid chromatography (HPLC) method was developed and validated for the determination of aristolochic acids (AAs) in rat plasma after oral administration of Aristolochiae Fructus (AF). The non-ionic surfactant Genapol X-080, an environmentally friendly solvent, was used for the micelle-mediated extraction. Various factors influencing the CPE process were investigated and optimized. AAs were extracted from rat plasma after adding 1 ml of 4.5% (v/v) surfactant in the presence of 0.2 mol/l HCl and 20 mg NaCl; the incubation temperature and time were 50°C and 10 min, respectively. Baseline separation of the AAs in rat plasma was obtained under the optimized chromatography conditions. The detection limit (LOD) was as low as 10 ng/ml. The intra-day and inter-day precisions were less than 7.8%, the accuracies were within ±5.5%, and the average recovery factors were in the range of 94.5-105.4%. In comparison with liquid-liquid extraction, the CPE method has a comparable LOD and higher recoveries. The proposed CPE-HPLC method was specific, sensitive, and reliable, and could be an effective tool for the determination of AAs in biological matrixes. With this method, the pharmacokinetics of AAs after oral administration of AF to rats were investigated successfully.

  6. Administrative Ecology

    ERIC Educational Resources Information Center

    McGarity, Augustus C., III; Maulding, Wanda

    2007-01-01

    This article discusses how all four facets of administrative ecology help dispel the claims about the "impossibility" of the superintendency. These are personal ecology, professional ecology, organizational ecology, and community ecology. Using today's superintendency as an administrative platform, current literature describes a preponderance of…

  7. Administrative Support.

    ERIC Educational Resources Information Center

    Doran, Dorothy; And Others

    This guide is intended to assist business education teachers in administrative support courses. The materials presented are based on the Arizona validated occupational competencies and tasks for the occupations of receptionist, secretary, and administrative assistant. Word processing skills have been infused into each of the three sections. The…

  8. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  9. Databases and registers: useful tools for research, no studies.

    PubMed

    Curbelo, Rafael J; Loza, Estíbaliz; de Yébenes, Maria Jesús García; Carmona, Loreto

    2014-04-01

    There are many misunderstandings about databases. Database is a commonly misused term in reference to any set of data entered into a computer. However, true databases serve a main purpose, organising data. They do so by establishing several layers of relationships; databases are hierarchical. Databases commonly organise data over different levels and over time, where time can be measured as the time between visits, or between treatments, or adverse events, etc. In this sense, medical databases are closely related to longitudinal observational studies, as databases allow the introduction of data on the same patient over time. Basically, we could establish four types of databases in medicine, depending on their purpose: (1) administrative databases, (2) clinical databases, (3) registers, and (4) study-oriented databases. But a database is a useful tool for a large variety of studies, not a type of study itself. Different types of databases serve very different purposes, and a clear understanding of the different research designs mentioned in this paper would prevent many of the databases we launch from being just a lot of work and very little science. PMID:24509895

  10. A Chronostratigraphic Relational Database Ontology

    NASA Astrophysics Data System (ADS)

    Platon, E.; Gary, A.; Sikora, P.

    2005-12-01

    A chronostratigraphic research database was donated by British Petroleum to the Stratigraphy Group at the Energy and Geoscience Institute (EGI), University of Utah. These data consist of over 2,000 measured sections representing over three decades of research into the application of the graphic correlation method. The data are global and include both microfossil (foraminifera, calcareous nannoplankton, spores, pollen, dinoflagellate cysts, etc.) and macrofossil data. The objective of the donation was to make the research data available to the public in order to encourage additional chronostratigraphy studies, specifically regarding graphic correlation. As part of the National Science Foundation's Cyberinfrastructure for the Geosciences (GEON) initiative, these data have been made available to the public at http://css.egi.utah.edu. To encourage further research using the graphic correlation method, EGI has developed a software package, StrataPlot, that will soon be publicly available from the GEON website as a standalone software download. The EGI chronostratigraphy research database, although relatively large, has many data holes relative to some paleontological disciplines and geographical areas, so the challenge becomes how to expand the data available for chronostratigraphic studies using graphic correlation. There are several public or soon-to-be public databases available for chronostratigraphic research, but they have their own data structures and modes of presentation. The heterogeneous nature of these database schemas hinders their integration and makes it difficult for the user to retrieve and consolidate potentially valuable chronostratigraphic data. The integration of these data sources would facilitate rapid and comprehensive data searches, thus helping advance studies in chronostratigraphy. The GEON project will host a number of databases within the geology domain, some of which contain biostratigraphic data. Ontologies are being developed to provide

  11. Comparison of arterial pressure and plasma ANG II responses to three methods of subcutaneous ANG II administration

    PubMed Central

    Kuroki, Marcos T.; Fink, Gregory D.

    2014-01-01

    Angiotensin II (ANG II)-induced hypertension is a commonly studied model of experimental hypertension, particularly in rodents, and is often generated by subcutaneous delivery of ANG II using Alzet osmotic minipumps chronically implanted under the skin. We have observed that, in a subset of animals subjected to this protocol, mean arterial pressure (MAP) begins to decline gradually starting the second week of ANG II infusion, resulting in a blunting of the slow pressor response and reduced final MAP. We hypothesized that this variability in the slow pressor response to ANG II was mainly due to factors unique to Alzet pumps. To test this, we compared the pressure profile and changes in plasma ANG II levels during subcutaneous ANG II administration (150 ng·kg−1·min−1) using either Alzet minipumps, iPrecio implantable pumps, or a Harvard external infusion pump. At the end of 14 days of ANG II, MAP was highest in the iPrecio group (156 ± 3 mmHg) followed by Harvard (140 ± 3 mmHg) and Alzet (122 ± 3 mmHg) groups. The rate of the slow pressor response, measured as daily increases in pressure averaged over days 2–14 of ANG II, was similar between iPrecio and Harvard groups (2.7 ± 0.4 and 2.2 ± 0.4 mmHg/day) but was significantly blunted in the Alzet group (0.4 ± 0.4 mmHg/day) due to a gradual decline in MAP in a subset of rats. We also found differences in the temporal profile of plasma ANG II between infusion groups. We conclude that the gradual decline in MAP observed in a subset of rats during ANG II infusion using Alzet pumps is mainly due to pump-dependent factors when applied in this particular context. PMID:24993045

  12. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    SciTech Connect

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from
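
    As a rough illustration of how the per-radiologist XML annotations could be tallied, the Python sketch below counts marks by category. The element and attribute names here are hypothetical stand-ins, not the actual LIDC/IDRI XML schema, which must be consulted before adapting this.

        import xml.etree.ElementTree as ET
        from collections import Counter

        def count_marks(xml_path):
            """Tally lesion marks by category across all reading sessions."""
            counts = Counter()
            root = ET.parse(xml_path).getroot()
            for session in root.iter("readingSession"):   # one per radiologist
                for lesion in session.iter("lesion"):     # hypothetical tag
                    counts[lesion.get("category")] += 1   # e.g. "nodule>=3mm"
            return counts

        # print(count_marks("LIDC-0001.xml"))  # hypothetical file name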

  13. Searching NCBI databases using Entrez.

    PubMed

    Gibney, Gretchen; Baxevanis, Andreas D

    2011-06-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two basic protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An alternate protocol builds upon the first basic protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The support protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.
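
    A minimal sketch of the kind of text-based search the protocols describe, issued directly against the public NCBI E-utilities HTTP interface that underlies Entrez; the query term is only an example.

        import json
        import urllib.parse
        import urllib.request

        BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
        params = urllib.parse.urlencode({
            "db": "pubmed",                            # any Entrez database name
            "term": "Cronobacter AND infant formula",  # example query only
            "retmode": "json",
            "retmax": 5,
        })

        with urllib.request.urlopen(f"{BASE}?{params}") as resp:
            result = json.load(resp)["esearchresult"]

        print(result["count"], result["idlist"])       # hit count, first PMIDs

    The same request with db="protein" or db="taxonomy" illustrates the cross-database uniformity that Entrez exploits.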

  14. Searching NCBI databases using Entrez.

    PubMed

    Baxevanis, Andreas D

    2008-12-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two Basic Protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An Alternate Protocol builds upon the first Basic Protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The Support Protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.

  15. A Web-based database for pathology faculty effort reporting.

    PubMed

    Dee, Fred R; Haugen, Thomas H; Wynn, Philip A; Leaven, Timothy C; Kemp, John D; Cohen, Michael B

    2008-04-01

    To ensure appropriate mission-based budgeting and equitable distribution of funds for faculty salaries, our compensation committee developed a pathology-specific effort reporting database. Principles included the following: (1) measurement should be done by web-based databases; (2) most entry should be done by departmental administration or be relational to other databases; (3) data entry categories should be aligned with funding streams; and (4) units of effort should be equal across categories of effort (service, teaching, research). MySQL was used for all data transactions (http://dev.mysql.com/downloads), and scripts were constructed using PERL (http://www.perl.org). Data are accessed with forms that correspond to fields in the database. The committee's work resulted in a novel database using pathology value units (PVUs) as a standard quantitative measure of effort for activities in an academic pathology department. The most common calculation was to estimate the number of hours required for a specific task, divide by 2080 hours (a Medicare year) and then multiply by 100. Other methods included assigning a baseline PVU for program, laboratory, or course directorship with an increment for each student or staff in that unit. With these methods, a faculty member should acquire approximately 100 PVUs. Some outcomes include (1) plotting PVUs versus salary to identify outliers for salary correction, (2) quantifying effort in activities outside the department, (3) documenting salary expenditure for unfunded research, (4) evaluating salary equity by plotting PVUs versus salary by sex, and (5) aggregating data by category of effort for mission-based budgeting and long-term planning.
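
    The core PVU arithmetic described above is simple enough to state in a few lines of Python; the activity names and hour estimates below are hypothetical examples, not figures from the paper.

        MEDICARE_YEAR_HOURS = 2080   # the "Medicare year" named above

        def task_pvu(hours_per_year):
            """PVUs earned by an activity requiring the given annual hours."""
            return hours_per_year / MEDICARE_YEAR_HOURS * 100

        # Hypothetical activity estimates (hours/year) for one faculty member.
        activities = {"sign-out service": 900,
                      "course directorship": 200,
                      "unfunded research": 300}
        total = sum(task_pvu(h) for h in activities.values())
        print(f"{total:.1f} PVUs")   # ~67.3; remaining effort would come
                                     # from other categories toward ~100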

  16. Effectiveness of different corticosterone administration methods to elevate corticosterone serum levels, induce depressive-like behavior, and affect neurogenesis levels in female rats.

    PubMed

    Kott, J M; Mooney-Leber, S M; Shoubah, F A; Brummelte, S

    2016-01-15

    High levels of chronic stress or stress hormones are associated with depressive-like behavior in animal models. However, slight elevations in corticosterone (CORT) - the major stress hormone in rodents - have also been associated with improved performance, albeit in a sex-dependent manner. Some of the discrepancies in the literature regarding the effects of high CORT levels may be due to different administration methods. The current study aims to compare the effects of ∼40 mg/kg CORT given either via subcutaneous injection, through an implanted pellet, or in the drinking water for ∼21 days on CORT serum levels, depressive-like behavior in the forced swim test (FST), and neurogenesis levels in the dentate gyrus (DG) in adult female rats. We found that animals exposed to the daily injections showed elevated CORT levels throughout the administration period, while the pellet animals showed only a transient increase, and the drinking-water animals revealed no elevation of CORT in serum. In addition, only the injection group exhibited higher levels of immobility in the FST. Interestingly, animals receiving CORT via injection or drinking water had lower numbers of doublecortin-positive cells in the ventral DG one week after the last CORT administration compared to animals implanted with a CORT pellet. These results will contribute to the growing literature on the effects of chronic CORT exposure and may help to clarify some of the discrepancies among previous studies, particularly in females.

  17. Implementing a Microcomputer Database Management System.

    ERIC Educational Resources Information Center

    Manock, John J.; Crater, K. Lynne

    1985-01-01

    Current issues in selecting, structuring, and implementing microcomputer database management systems in research administration offices are discussed, and their capabilities are illustrated with the system used by the University of North Carolina at Wilmington. Trends in microcomputer technology and their likely impact on research administration…

  18. Development and Validation of a HPLC Method for the Determination of Cyclosporine A in New Bioadhesive Nanoparticles for Oral Administration

    PubMed Central

    Pecchio, M.; Salman, H.; Irache, J. M.; Renedo, M. J.; Dios-Viéitez, M. C.

    2014-01-01

    A simple and reliable high performance liquid chromatography method was developed and validated for the rapid determination of cyclosporine A in new pharmaceutical dosage forms based on the use of poly(methylvinylether-co-maleic anhydride) nanoparticles. The chromatographic separation was achieved using an Ultrabase C18 column (250×4.6 mm, 5 μm), which was kept at 75°. The gradient mobile phase consisted of acetonitrile and water with a flow rate of 1 ml/min. The effluent was monitored at 205 nm using a diode array detector. The method exhibited linearity over the assayed concentration range (22-250 μg/ml) and demonstrated good intraday and interday precision and accuracy (relative standard deviations were less than 6.5%, and the deviation from theoretical values was below 5.5%). The detection limit was 1.36 μg/ml. This method was also applied to the quantitative analysis of cyclosporine A released from poly(methylvinylether-co-maleic anhydride) nanoparticles. PMID:24843186

  19. ITS-90 Thermocouple Database

    National Institute of Standards and Technology Data Gateway

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  20. 2010 Worldwide Gasification Database

    DOE Data Explorer

    The 2010 Worldwide Gasification Database describes the current world gasification industry and identifies near-term planned capacity additions. The database lists gasification projects and includes information (e.g., plant location, number and type of gasifiers, syngas capacity, feedstock, and products). The database reveals that the worldwide gasification capacity has continued to grow for the past several decades and is now at 70,817 megawatts thermal (MWth) of syngas output at 144 operating plants with a total of 412 gasifiers.

  1. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG, and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation for, and later upgrades of, remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  2. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as valuable research tools for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Assuring that the data collection process does not contribute inaccuracies can help to assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting it by implementing high security levels. A properly designed database provides you with access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application.

  3. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as valuable research tools for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Assuring that the data collection process does not contribute inaccuracies can help to assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting it by implementing high security levels. A properly designed database provides you with access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application. PMID:27197511

  4. Databases for Microbiologists

    DOE PAGES

    Zhulin, Igor B.

    2015-05-26

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  5. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  6. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the FileMaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well, and the data can be exported to more powerful relational databases. This paper describes the development, capabilities, and use of the LDEF databases and explains how to get copies of the databases for your own research.

  7. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in both GIS and a Database Management System (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used in designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
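
    A hedged sketch of the DBMS-side management described above: the DBA grants role-based permissions, and routine changes are wrapped in transactions so the database can be recovered consistently. The statements follow standard SQL, but the role, table, and column names are illustrative assumptions.

        # Role names, table names, and columns are illustrative assumptions.
        GRANTS = [
            "GRANT SELECT ON karst_features TO role_public;",
            "GRANT SELECT, INSERT, UPDATE ON karst_features TO role_editor;",
        ]

        UPDATE_TXN = """
        BEGIN;
        INSERT INTO karst_features (feature_type, county, lat, lon)
        VALUES ('sinkhole', 'Fillmore', 43.67, -92.10);
        COMMIT;
        """

        for stmt in GRANTS:          # in practice run by the DBA's SQL client
            print(stmt)
        print(UPDATE_TXN)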

  8. An Introduction to Database Management Systems.

    ERIC Educational Resources Information Center

    Warden, William H., III; Warden, Bette M.

    1984-01-01

    Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…

  9. Physics-Based GOES Satellite Product for Use in NREL's National Solar Radiation Database: Preprint

    SciTech Connect

    Sengupta, M.; Habte, A.; Gotseff, P.; Weekley, A.; Lopez, A.; Molling, C.; Heidinger, A.

    2014-07-01

    The National Renewable Energy Laboratory (NREL), the University of Wisconsin, and the National Oceanic and Atmospheric Administration are collaborating to investigate the integration of the Satellite Algorithm for Shortwave Radiation Budget (SASRAB) products into future versions of NREL's 4-km by 4-km gridded National Solar Radiation Database (NSRDB). This paper describes a method to select an improved clear-sky model that could replace the current SASRAB global horizontal irradiance and direct normal irradiances reported during clear-sky conditions.

  10. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators; a database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate to how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
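
    A minimal sketch, assuming hypothetical names throughout, of the linking scheme described above: each Basic Event carries a metadata key that ties it to exactly one data-source row, and the manipulations (here just a stressing factor) live with the data rather than in the model.

        # Hypothetical data-source table, keyed by a unique metadata field.
        DATA_SOURCES = {
            # key: (raw failure rate per hour, stressing factor, citation)
            "DS-0042": (2.0e-6, 1.5, "MIL-HDBK-217F, relay, ground fixed"),
            "DS-0107": (5.0e-7, 1.0, "vendor test report"),
        }

        # Hypothetical Basic Events, each linked to a source by its key.
        BASIC_EVENTS = [
            {"name": "RELAY_K1_FAILS", "source_key": "DS-0042"},
            {"name": "VALVE_V2_STUCK", "source_key": "DS-0107"},
        ]

        for event in BASIC_EVENTS:
            rate, stress, cite = DATA_SOURCES[event["source_key"]]
            event["rate"] = rate * stress   # manipulation applied at the source
            print(event["name"], event["rate"], "//", cite)

    The same key scheme gives the traceability the abstract emphasizes: from any Basic Event, one lookup recovers the citation and every calculation applied to it.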

  11. Automated tools for cross-referencing large databases. Final report

    SciTech Connect

    Clapp, N E; Green, P L; Bell, D

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It has been concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. The ability to provide these improvements to the local office staff would require the provision of additional computers, software, and electronic data links within each of the offices and the establishment of approved methods of accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain if existing laws would allow such data transfers, and if not, what changed or new laws would be required. The benefits, in both cost and efficiency, to the state of Tennessee of having electronically-enhanced welfare system administration and control are expected to result in a rapid return on investment.

  12. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

    Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle these data effectively, and multiple research groups should be able to pool and share them in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof-of-principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors; rather, it discusses the development and utilization of the databases involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access database, the Thoracic Oncology Research Program (TORP) Velos database, and the TORP REDCap database for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made freely available to other institutions that have implemented their own databases patterned on them. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed: of the 70 patients who were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved

  13. HPLC method for comparative study on tissue distribution in rat after oral administration of salvianolic acid B and phenolic acids from Salvia miltiorrhiza.

    PubMed

    Xu, Man; Fu, Gang; Qiao, Xue; Wu, Wan-Ying; Guo, Hui; Liu, Ai-Hua; Sun, Jiang-Hao; Guo, De-An

    2007-10-01

    A sensitive and selective high-performance liquid chromatography method was developed and validated to determine the prototype of salvianolic acid B and the metabolites of phenolic acids (protocatechuic acid, vanillic acid and ferulic acid) in rat tissues after oral administration of total phenolic acids and salvianolic acid B extracted from the roots of Salvia miltiorrhiza, respectively. The tissue samples were treated with a simple liquid-liquid extraction prior to HPLC. Analysis of the extract was performed on a reverse-phase C(18) column with a mobile phase consisting of acetonitrile and 0.05% trifluoroacetic acid. The calibration curves for the four phenolic acids were linear in the given concentration ranges. The intra-day and inter-day relative standard deviations in the measurement of quality control samples were less than 10%, and the accuracies were in the range of 88-115%. The average recoveries from all the tissues ranged from 78.0 to 111.8%. This method was successfully applied to evaluate the distribution of the four phenolic acids in rat tissues after oral administration of total phenolic acids of Salvia miltiorrhiza or salvianolic acid B, and the possible metabolic pathway was illustrated. PMID:17549679

  14. Creating a VAPEPS database: A VAPEPS tutorial

    NASA Technical Reports Server (NTRS)

    Graves, George

    1989-01-01

    A procedural method is outlined for creating a Vibroacoustic Payload Environment Prediction System (VAPEPS) Database. The method of presentation employs flowcharts of sequential VAPEPS Commands used to create a VAPEPS Database. The commands are accompanied by explanatory text to the right of the command in order to minimize the need for repetitive reference to the VAPEPS user's manual. The method is demonstrated by examples of varying complexity. It is assumed that the reader has acquired a basic knowledge of the VAPEPS software program.

  15. Video Databases: An Emerging Tool in Business Education

    ERIC Educational Resources Information Center

    MacKinnon, Gregory; Vibert, Conor

    2014-01-01

    A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…

  16. Improved method for calibrating the visible and near-infrared channels of the National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer.

    PubMed

    Che, N; Price, J C

    1993-12-20

    Two procedures are used to establish the calibration of the visible and near-infrared channels of the National Oceanic and Atmospheric Administration-11 (NOAA-11) Advanced Very High Resolution Radiometer (AVHRR). The first procedure, for the visible channel, uses satellite data, ground measurements of atmospheric conditions during satellite overpass, and historical surface reflectance values at White Sands Missile Range (WSMR) in New Mexico. The second procedure, for the near-infrared channel, uses knowledge of the reflective properties of the WSMR and of a low-reflectance area (as determined from the first method), and yields satellite gain values without requiring ground measurements of atmospheric conditions. The accuracy of the gain values is estimated at ±7% for the two methods. The WSMR combines accessibility, a wide range of surface reflectances, and generally good observing conditions, making it a desirable location for satellite calibration.
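
    Schematically, a vicarious calibration of this kind reduces to a ratio: a site of known reflectance and known atmosphere yields a predicted at-sensor signal, and the gain is the dark-corrected counts divided by that prediction. The sketch below assumes a linear sensor response; the numbers are placeholders, not WSMR values.

        def estimate_gain(counts, dark_count, predicted_radiance):
            """Counts per unit radiance, assuming a linear sensor response."""
            return (counts - dark_count) / predicted_radiance

        gain = estimate_gain(counts=612.0, dark_count=40.0,
                             predicted_radiance=450.0)   # placeholder units
        print(f"gain = {gain:.3f} counts per radiance unit")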

  17. BioImaging Database

    SciTech Connect

    David Nix, Lisa Simirenko

    2006-10-25

    The BioImaging Database (BID) is a relational database developed to store the data and meta-data for the 3D gene expression in early Drosophila embryo development on a cellular level. The schema was written to be used with the MySQL DBMS but with minor modifications can be used on any SQL-compliant relational DBMS.

  18. Biological Macromolecule Crystallization Database

    National Institute of Standards and Technology Data Gateway

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  19. Online Database Searching Workbook.

    ERIC Educational Resources Information Center

    Littlejohn, Alice C.; Parker, Joan M.

    Designed primarily for use by first-time searchers, this workbook provides an overview of online searching. Following a brief introduction which defines online searching, databases, and database producers, five steps in carrying out a successful search are described: (1) identifying the main concepts of the search statement; (2) selecting a…

  20. Ionic Liquids Database- (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 Ionic Liquids Database- (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from the publications on experimental investigations of thermodynamic, and transport properties of ionic liquids as well as binary and ternary mixtures containing ionic liquids.

  1. HIV Structural Database

    National Institute of Standards and Technology Data Gateway

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  2. Morchella MLST database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Welcome to the Morchella MLST database. This dedicated database was set up at the CBS-KNAW Biodiversity Center by Vincent Robert in February 2012, using BioloMICS software (Robert et al., 2011), to facilitate DNA sequence-based identifications of Morchella species via the Internet. The current datab...

  3. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  4. First Look: TRADEMARKSCAN Database.

    ERIC Educational Resources Information Center

    Fernald, Anne Conway; Davidson, Alan B.

    1984-01-01

    Describes database produced by Thomson and Thomson and available on Dialog which contains over 700,000 records representing all active federal trademark registrations and applications for registrations filed in United States Patent and Trademark Office. A typical record, special features, database applications, learning to use TRADEMARKSCAN, and…

  5. Dictionary as Database.

    ERIC Educational Resources Information Center

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  6. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  7. Build Your Own Database.

    ERIC Educational Resources Information Center

    Jacso, Peter; Lancaster, F. W.

    This book is intended to help librarians and others to produce databases of better value and quality, especially if they have had little previous experience in database construction. Drawing upon almost 40 years of experience in the field of information retrieval, this book emphasizes basic principles and approaches rather than in-depth and…

  8. Knowledge Discovery in Databases.

    ERIC Educational Resources Information Center

    Norton, M. Jay

    1999-01-01

    Knowledge discovery in databases (KDD) revolves around the investigation and creation of knowledge, processes, algorithms, and mechanisms for retrieving knowledge from data collections. The article is an introductory overview of KDD. The rationale and environment of its development and applications are discussed. Issues related to database design…

  9. Database Searching by Managers.

    ERIC Educational Resources Information Center

    Arnold, Stephen E.

    Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…

  10. Assignment to database industry

    NASA Astrophysics Data System (ADS)

    Abe, Kohichiroh

    Various kinds of databases are considered to be an essential part of future large-scale systems. Information provision based solely on databases is also expected to grow as the market matures. This paper discusses how these circumstances have arisen and how they will develop from now on.

  11. Extending SGML to Accommodate Database Functions: A Methodological Overview.

    ERIC Educational Resources Information Center

    Sengupta, Arijit; Dillon, Andrew

    1997-01-01

    Presents a method for augmenting a Standard Generalized Markup Language (SGML) document repository with database functionality. Introduces an implementation method for a complex-object modeling technique, and describes interface techniques tailored for text databases and concepts for a Structured Document Database Management System (SDDBMS)…

  12. Cascadia Tsunami Deposit Database

    USGS Publications Warehouse

    Peters, Robert; Jaffe, Bruce; Gelfenbaum, Guy; Peterson, Curt

    2003-01-01

    The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have been compiled from 52 studies, documenting 59 sites from northern California to Vancouver Island, British Columbia that contain known or potential tsunami deposits. Bibliographical references are provided for all sites included in the database. Cascadia tsunami deposits are usually seen as anomalous sand layers in coastal marsh or lake sediments. The studies cited in the database use numerous criteria based on sedimentary characteristics to distinguish tsunami deposits from sand layers deposited by other processes, such as river flooding and storm surges. Several studies cited in the database contain evidence for more than one tsunami at a site. Data categories include age, thickness, layering, grainsize, and other sedimentological characteristics of Cascadia tsunami deposits. The database documents the variability observed in tsunami deposits found along the Cascadia margin.

  13. A dynamic two-dimensional polyacrylamide gel electrophoresis database: the mycobacterial proteome via Internet.

    PubMed

    Mollenkopf, H J; Jungblut, P R; Raupach, B; Mattow, J; Lamer, S; Zimny-Arndt, U; Schaible, U E; Kaufmann, S H

    1999-08-01

    Proteome analysis by two-dimensional polyacrylamide gel electrophoresis (2-D PAGE) and mass spectrometry, in combination with protein chemical methods, is a powerful approach for the analysis of the protein composition of complex biological samples. Data organization is imperative for efficient handling of the vast amount of information generated. Thus we have constructed a 2-D PAGE database to store and compare protein patterns of cell-associated and culture-supernatant proteins of different mycobacterial strains. In accordance with the guidelines for federated 2-DE databases, we developed a program that generates a dynamic 2-D PAGE database for the World-Wide-Web to organise and publish, via the internet, our results from proteome analysis of different Mycobacterium tuberculosis as well as Mycobacterium bovis BCG strains. The uniform resource locator for the database is http://www.mpiib-berlin.mpg.de/2D-PAGE and can be read with a Java compatible browser. The interactive hypertext markup language documents displayed are generated dynamically in each individual session from a relational data file, a 2-D gel image file and a map file describing the protein spots as polygons. The program consists of common gateway interface scripts written in PERL, minimizing the administrative workload of the database. Furthermore, the database facilitates not only interactive use, but also worldwide active participation of other scientific groups with their own data, requiring only minimal computer hardware and knowledge of information technology.
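
    The mechanism this record describes, spot polygons stored alongside a gel image and rendered into clickable pages at request time, is compact enough to sketch. The original system used Perl CGI scripts; the Python below is only an illustration of how a polygon map file becomes an HTML image map, with invented spot coordinates, file names, and URL scheme.

      # Hypothetical spot data: (protein name, polygon vertices in gel-image pixels).
      spots = [
          ("GroEL", [(102, 40), (118, 38), (120, 55), (104, 57)]),
          ("Ag85B", [(210, 90), (228, 88), (230, 104), (212, 106)]),
      ]

      def spot_page(gel_image_url, spots):
          """Render one session's gel view as an HTML image map."""
          areas = []
          for name, poly in spots:
              coords = ",".join(f"{x},{y}" for x, y in poly)
              areas.append(f'  <area shape="poly" coords="{coords}" '
                           f'href="/protein/{name}" alt="{name}">')
          return (f'<img src="{gel_image_url}" usemap="#gel">\n'
                  '<map name="gel">\n' + "\n".join(areas) + '\n</map>')

      print(spot_page("/images/mtb_2d_gel.png", spots))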

  14. Computer Developments and the Administrator, Part II.

    ERIC Educational Resources Information Center

    Findlay, A. W.

    1979-01-01

    The computer is seen as a powerful tool that offers great potential for administrators in tertiary institutions. The impact of the computer on processing data, database concept, computer communication, utilization of operations data for analytical reports, use of management science tools, and evolution in administrative data processing are…

  15. Hazard Analysis Database Report

    SciTech Connect

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from the results of the hazard evaluations, and (2) Hazard Topography Database: Data from the system familiarization and hazard identification.

  16. Hazard Analysis Database Report

    SciTech Connect

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  17. Database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1991-01-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be improved considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that the researchers need not be confined only to the contents of the database. Another way in which the database may help the researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.

  18. Paleoepidemiologic investigation of Legionnaires disease at Wadsworth Veterans Administration Hospital by using three typing methods for comparison of legionellae from clinical and environmental sources.

    PubMed Central

    Edelstein, P H; Nakahama, C; Tobin, J O; Calarco, K; Beer, K B; Joly, J R; Selander, R K

    1986-01-01

    Multilocus enzyme electrophoresis, monoclonal antibody typing for Legionella pneumophila serogroup 1, and plasmid analysis were used to type 89 L. pneumophila strains isolated from nosocomial cases of Legionnaires disease at the Veterans Administration Wadsworth Medical Center (VAWMC) and from the hospital environment. Twelve L. pneumophila clinical isolates, obtained from patients at non-VAWMC hospitals, were also typed by the same methods to determine typing specificity. Seventy-nine percent of 33 VAWMC L. pneumophila serogroup 1 clinical isolates and 70% of 23 environmental isolates were found in only one of the five monoclonal subgroups. Similar clustering was found for the other two typing methods, with excellent correlation between all methods. Enzyme electrophoretic typing divided the isolates into the greatest number of distinct groups, resulting in the identification of 10 different L. pneumophila types and 5 types not belonging to L. pneumophila, which probably constitute an undescribed Legionella species; 7 clinical and 34 environmental VAWMC isolates and 2 non-VAWMC clinical isolates were found to be members of the new species. Twelve different plasmid patterns were found; 95% of VAWMC clinical isolates contained plasmids. Major VAWMC epidemic-bacterial types were common in the hospital potable-water distribution system and cooling towers. Strains of L. pneumophila which persisted after disinfection of contaminated environmental sites were of a different type from the prechlorination strains. All three typing methods were useful in the epidemiologic analysis of the VAWMC outbreak. PMID:3711303

  19. Administrative IT

    ERIC Educational Resources Information Center

    Grayson, Katherine, Ed.

    2006-01-01

    When it comes to Administrative IT solutions and processes, best practices range across the spectrum. Enterprise resource planning (ERP), student information systems (SIS), and tech support are prominent and continuing areas of focus. But widespread change can also be accomplished via the implementation of campuswide document imaging and sharing,…

  20. ADMINISTRATIVE CLIMATE.

    ERIC Educational Resources Information Center

    BRUCE, ROBERT L.; CARTER, G.L., JR.

    IN THE COOPERATIVE EXTENSION SERVICE, STYLES OF LEADERSHIP PROFOUNDLY AFFECT THE QUALITY OF THE SERVICE RENDERED. ACCORDINGLY, MAJOR INFLUENCES ON ADMINISTRATIVE CLIMATE AND EMPLOYEE PRODUCTIVITY ARE EXAMINED IN ESSAYS ON (1) SOURCES OF JOB SATISFACTION AND DISSATISFACTION, (2) MOTIVATIONAL THEORIES BASED ON JOB-RELATED SATISFACTIONS AND NEEDS,…

  1. Engineering Administration.

    ERIC Educational Resources Information Center

    Naval Personnel Program Support Activity, Washington, DC.

    This book is intended to acquaint naval engineering officers with their duties in the engineering department. Standard shipboard organizations are analyzed in connection with personnel assignments, division operations, and watch systems. Detailed descriptions are included for the administration of directives, ship's bills, damage control, training…

  2. Method and system for normalizing biometric variations to authenticate users from a public database and that ensures individual biometric data privacy

    SciTech Connect

    Strait, R.S.; Pearson, P.K.; Sengupta, S.K.

    2000-03-14

    A password system comprises a set of codewords spaced apart from one another by a Hamming distance (HD) that exceeds twice the variability that can be projected for a series of biometric measurements for a particular individual and that is less than the HD that can be encountered between two individuals. To enroll an individual, a biometric measurement is taken and exclusive-ORed with a random codeword to produce a reference value. To verify the individual later, a biometric measurement is taken and exclusive-ORed with the reference value to reproduce the original random codeword or its approximation. If the reproduced value is not a codeword, the nearest codeword to it is found, and the bits that were corrected to produce the codeword are also toggled in the biometric measurement taken, recovering the measurement and the codeword generated during enrollment. The correction scheme can be implemented by any conventional error correction code such as a Reed-Muller code R(m,n). In the implementation using a hand geometry device, an R(2,5) code has been used in this invention. Such codeword and biometric measurement can then be used to see if the individual is an authorized user. Conventional Diffie-Hellman public key encryption schemes and hashing procedures can then be used to secure the communications lines carrying the biometric information and to secure the database of authorized users.

  3. Method and system for normalizing biometric variations to authenticate users from a public database and that ensures individual biometric data privacy

    DOEpatents

    Strait, Robert S.; Pearson, Peter K.; Sengupta, Sailes K.

    2000-01-01

    A password system comprises a set of codewords spaced apart from one another by a Hamming distance (HD) that exceeds twice the variability that can be projected for a series of biometric measurements for a particular individual and that is less than the HD that can be encountered between two individuals. To enroll an individual, a biometric measurement is taken and exclusive-ORed with a random codeword to produce a "reference value." To verify the individual later, a biometric measurement is taken and exclusive-ORed with the reference value to reproduce the original random codeword or its approximation. If the reproduced value is not a codeword, the nearest codeword to it is found, and the bits that were corrected to produce the codeword are also toggled in the biometric measurement taken, recovering the measurement and the codeword generated during enrollment. The correction scheme can be implemented by any conventional error correction code such as a Reed-Muller code R(m,n). In the implementation using a hand geometry device, an R(2,5) code has been used in this invention. Such codeword and biometric measurement can then be used to see if the individual is an authorized user. Conventional Diffie-Hellman public key encryption schemes and hashing procedures can then be used to secure the communications lines carrying the biometric information and to secure the database of authorized users.
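
    The enrollment and verification steps described in the two records above are what is now commonly called a fuzzy commitment scheme, and they are small enough to sketch directly. This is a minimal illustration under stated assumptions, not the patented implementation: a simple repetition code stands in for the Reed-Muller R(2,5) code, and the bit lengths and helper names are invented.

      import secrets

      def repeat_encode(bits, r=5):
          """A trivial error-correcting code: write each message bit r times."""
          return [b for bit in bits for b in [bit] * r]

      def repeat_decode(code, r=5):
          """Majority-vote each block of r bits back to one message bit."""
          return [1 if sum(code[i:i + r]) * 2 > r else 0
                  for i in range(0, len(code), r)]

      def xor(a, b):
          return [x ^ y for x, y in zip(a, b)]

      def enroll(biometric_bits, r=5):
          """XOR the biometric with a random codeword; store only the result."""
          message = [secrets.randbelow(2) for _ in range(len(biometric_bits) // r)]
          reference = xor(biometric_bits, repeat_encode(message, r))
          return reference, message        # the message acts as the secret key

      def verify(measurement_bits, reference, r=5):
          """XOR a noisy re-measurement with the reference, then error-correct."""
          return repeat_decode(xor(measurement_bits, reference), r)

      # Toy run: a 20-bit "biometric", re-measured with two flipped bits.
      enrolled = [1,0,1,1,0, 0,1,0,1,1, 1,1,0,0,1, 0,0,1,1,0]
      reference, secret = enroll(enrolled)
      noisy = enrolled[:]; noisy[3] ^= 1; noisy[14] ^= 1
      assert verify(noisy, reference) == secret   # small variations are corrected

    Because the stored reference is the XOR of the biometric with a random codeword, it reveals neither value on its own, which is the privacy property both records claim.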

4. International Comparisons Database

    National Institute of Standards and Technology Data Gateway

    International Comparisons Database (Web, free access)   The International Comparisons Database (ICDB) serves the U.S. and the Inter-American System of Metrology (SIM) with information based on Appendices B (International Comparisons), C (Calibration and Measurement Capabilities) and D (List of Participating Countries) of the Comité International des Poids et Mesures (CIPM) Mutual Recognition Arrangement (MRA). The official source of the data is the BIPM key comparison database. The ICDB provides access to results of comparisons of measurements and standards organized by the consultative committees of the CIPM and the Regional Metrology Organizations.

  5. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  6. JICST Factual Database

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuaki; Shimura, Kazuki; Monma, Yoshio; Sakamoto, Masao; Morishita, Hiroshi; Kanazawa, Kenji

    The Japan Information Center of Science and Technology (JICST) started the on-line service of the JICST/NRIM Materials Strength Database for Engineering Steels and Alloys (JICST ME) in March 1990. This database has been developed under joint research between JICST and the National Research Institute for Metals (NRIM). It provides material strength data (creep, fatigue, etc.) for engineering steels and alloys. Users can search and display the data on-line, analyze the retrieved data statistically, and plot the results on a graphic display. The database system and the data in JICST ME are described.

  7. Hybrid Terrain Database

    NASA Technical Reports Server (NTRS)

    Arthur, Trey

    2006-01-01

    A prototype hybrid terrain database is being developed in conjunction with other databases and with hardware and software that constitute subsystems of aerospace cockpit display systems (known in the art as synthetic vision systems) that generate images to increase pilots' situation awareness and eliminate poor visibility as a cause of aviation accidents. The basic idea is to provide a clear view of the world around an aircraft by displaying computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information.

  8. Knowledge Abstraction in Chinese Chess Endgame Databases

    NASA Astrophysics Data System (ADS)

    Chen, Bo-Nian; Liu, Pangfeng; Hsu, Shun-Chin; Hsu, Tsan-Sheng

    Retrograde analysis is a well-known approach to constructing endgame databases. However, endgame databases are too large to be loaded into the main memory of a computer during tournaments. In this paper, a novel knowledge abstraction strategy is proposed to compress endgame databases. The goal is to obtain succinct knowledge for practical endgames. A specialized goal-oriented search method is described and applied to the important endgame KRKNMM. Combining a search algorithm with a small body of knowledge handles endgame positions up to a limited depth, but with a high degree of correctness.
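
    Retrograde analysis itself, the database-construction step this record starts from, is easy to demonstrate at toy scale. The sketch below solves a small subtraction game (remove 1 to 3 stones; the player with no move loses) by propagating win/loss values backward from the terminal position; real endgame databases apply the same backward induction over chess-like predecessor relations. The game and all names here are invented for illustration.

      from collections import deque

      MAX_N, TAKES = 20, (1, 2, 3)

      def successors(n):
          return [n - t for t in TAKES if n - t >= 0]

      def predecessors(n):
          return [n + t for t in TAKES if n + t <= MAX_N]

      # value[n]: 'WIN' or 'LOSS' for the player to move; dist[n]: plies to the end.
      value, dist = {0: 'LOSS'}, {0: 0}      # terminal: no stones left, no moves
      unresolved = {n: len(successors(n)) for n in range(MAX_N + 1)}
      queue = deque([0])

      while queue:
          n = queue.popleft()
          for p in predecessors(n):
              if p in value:
                  continue
              if value[n] == 'LOSS':         # one move into a LOSS makes p a WIN
                  value[p], dist[p] = 'WIN', dist[n] + 1
                  queue.append(p)
              else:                          # p is a LOSS only if all moves hit WINs
                  unresolved[p] -= 1
                  if unresolved[p] == 0:
                      value[p] = 'LOSS'
                      dist[p] = 1 + max(dist[s] for s in successors(p))
                      queue.append(p)

      print(value[17], dist[17])   # positions with n % 4 == 0 are losses to move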

  9. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  10. Time Dependent Antinociceptive Effects of Morphine and Tramadol in the Hot Plate Test: Using Different Methods of Drug Administration in Female Rats

    PubMed Central

    Gholami, Morteza; Saboory, Ehsan; Mehraban, Sogol; Niakani, Afsaneh; Banihabib, Nafiseh; Azad, Mohamad-Reza; Fereidoni, Javid

    2015-01-01

    Morphine and tramadol, which have analgesic effects, can be administered acutely or chronically. This study investigated the effect of these drugs at various times using different methods of administration (intraperitoneal, oral, acute, and chronic). Sixty adult female rats were divided into six groups. They received saline, morphine, or tramadol (20 to 125 mg/kg) daily for 15 days. A hot plate test was performed on the rats on the 1st, 8th, and 15th days. After drug withdrawal, the hot plate test was repeated on the 17th, 19th, and 22nd days. There was a significant correlation between the day, drug, group, and their interaction (P<0.001). On the 1st day (d1), both morphine and tramadol increased the hot plate time compared with the saline groups (P<0.001), while there was no significant difference among the administration methods of morphine and/or tramadol. On the 8th day (d8), morphine and tramadol produced their most powerful analgesic effect compared with the other experimental days (P<0.001). On the 15th day (d15), their effects were diminished compared with d8. After drug withdrawal, the analgesic effects of morphine and tramadol disappeared. It can be concluded that the analgesic effect of morphine and tramadol increases with repeated use. Thereafter, it may gradually decrease and return to a level comparable to d1. The present data also indicate that although the analgesic effect of morphine and tramadol is dose- and time-dependent, chronic exposure may not lead to altered nociceptive responses later in life. PMID:25561936

  11. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern.

  12. Nuclear Science References Database

    SciTech Connect

    Pritychenko, B.; Běták, E.; Singh, B.; Totans, J.

    2014-06-15

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  13. Hawaii bibliographic database

    USGS Publications Warehouse

    Wright, T.L.; Takahashi, T.J.

    1998-01-01

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  14. Navigating public microarray databases.

    PubMed

    Penkett, Christopher J; Bähler, Jürg

    2004-01-01

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources.

  15. Chemical Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.

  16. TREATABILITY DATABASE DESCRIPTION

    EPA Science Inventory

    The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities, first responders to spills or emergencies, treatment process designers, research organizations, academics, regulato...

  17. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  18. Requirements Management Database

    2009-08-13

    This application is a simplified and customized version of the RBA and CTS databases to capture federal, site, and facility requirements, link to actions that must be performed to maintain compliance with their contractual and other requirements.

  19. Open access intrapartum CTG database

    PubMed Central

    2014-01-01

    Background Cardiotocography (CTG) is a monitoring of fetal heart rate and uterine contractions. Since 1960 it has been routinely used by obstetricians to assess fetal well-being. Many attempts to introduce methods of automatic signal processing and evaluation have appeared during the last 20 years; however, no significant progress similar to that in the domain of adult heart rate variability, where open access databases are available (e.g. MIT-BIH), is visible. Based on a thorough review of the relevant publications, presented in this paper, the shortcomings of the current state are obvious. A lack of common ground for clinicians and technicians in the field hinders clinically usable progress. Our open access database of digital intrapartum cardiotocographic recordings aims to change that. Description The intrapartum CTG database consists of a total of 552 intrapartum recordings, which were acquired between April 2010 and August 2012 at the obstetrics ward of the University Hospital in Brno, Czech Republic. All recordings were stored in electronic form in the OB TraceVue® system. The recordings were selected from 9164 intrapartum recordings with clinical as well as technical considerations in mind. All recordings are at most 90 minutes long and start a maximum of 90 minutes before delivery. The time relation of CTG to delivery is known, as is the length of the second stage of labor, which does not exceed 30 minutes. The majority of recordings (all but 46 cesarean sections) are, deliberately, from vaginal deliveries. All recordings have available biochemical markers as well as some more general clinical features. A full description of the database and the reasoning behind the selection of the parameters is presented in the paper. Conclusion A new open-access CTG database is introduced which should give the research community common ground for comparison of results on a reasonably large database. We anticipate that after reading the paper, the reader will understand the

  20. Information Release Administration Database (IRAD), Software Design Description (SDD)

    SciTech Connect

    CAREY, D.S.

    2000-05-10

    The IRAD system is a client-server system written in Paradox for DOS. This system will be replaced with a Visual Basic and SQL Server system in order to update the technology, eliminate obsolete functions, and automate the manual interfaces.

  1. Steam Properties Database

    National Institute of Standards and Technology Data Gateway

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  2. Database computing in HEP

    SciTech Connect

    Day, C.T.; Loken, S.; MacFarlane, J.F. ); May, E.; Lifka, D.; Lusk, E.; Price, L.E. ); Baden, A. . Dept. of Physics); Grossman, R.; Qin, X. . Dept. of Mathematics, Statistics and Computer Science); Cormell, L.; Leibold, P.; Liu, D

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  3. Database computing in HEP

    NASA Technical Reports Server (NTRS)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  4. Querying genomic databases

    SciTech Connect

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.

  5. Drinking Water Database

    NASA Technical Reports Server (NTRS)

    Murray, ShaTerea R.

    2004-01-01

    This summer I had the opportunity to work in the Environmental Management Office (EMO) under the Chemical Sampling and Analysis Team, or CS&AT. This team's mission is to support Glenn Research Center (GRC) and EMO by providing chemical sampling and analysis services and expert consulting. Services include sampling and chemical analysis of water, soil, fuels, oils, paint, insulation materials, etc. One of this team's major projects is the Drinking Water Project. This project covers Glenn's water coolers and ten percent of its sinks every two years. For the past two summers an intern had been putting together a database for this team to record the tests they had performed. She had successfully created a database but hadn't worked out all the quirks. So this summer William Wilder (an intern from Cleveland State University) and I worked together to perfect her database. We began by finding out exactly what every member of the team thought about the database and what, if anything, they would change. After collecting this data we both had to take some courses in Microsoft Access in order to fix the problems. Next we looked at exactly how the database worked from the outside inward. Then we began trying to change the database, but we quickly found out that this would be virtually impossible.

  6. The Halophile protein database.

    PubMed

    Sharma, Naveen; Farooqi, Mohammad Samir; Chaturvedi, Krishna Kumar; Lal, Shashi Bhushan; Grover, Monendra; Rai, Anil; Pandey, Pankaj

    2014-01-01

    Halophilic archaea/bacteria adapt to different salt concentrations, namely extreme, moderate and low. These types of adaptations may occur as a result of modification of protein structure and other changes in different cell organelles. Thus proteins may play an important role in the adaptation of halophilic archaea/bacteria to saline conditions. The Halophile protein database (HProtDB) is a systematic attempt to document the biochemical and biophysical properties of proteins from halophilic archaea/bacteria which may be involved in the adaptation of these organisms to saline conditions. In this database, various physicochemical properties such as molecular weight, theoretical pI, amino acid composition, atomic composition, estimated half-life, instability index, aliphatic index and grand average of hydropathicity (Gravy) have been listed. These physicochemical properties play an important role in identifying the protein structure, bonding pattern and function of the specific proteins. This database is a comprehensive, manually curated, non-redundant catalogue of proteins. It currently contains the properties of 59,897 proteins extracted from 21 different strains of halophilic archaea/bacteria. The database can be accessed through the URL below. Database URL: http://webapp.cabgrid.res.in/protein/
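
    Two of the listed properties are simple enough to compute directly from a sequence, which shows the kind of derivation behind such a database. The sketch below computes molecular weight and the GRAVY score from standard published constants (average residue masses and Kyte-Doolittle hydropathy values); the example peptide is invented, and nothing here is drawn from HProtDB itself.

      # Kyte-Doolittle hydropathy values per residue.
      KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
            'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
            'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
            'Y': -1.3, 'V': 4.2}

      # Average residue masses in daltons (one water per bond already removed).
      MW = {'A': 71.08, 'R': 156.19, 'N': 114.10, 'D': 115.09, 'C': 103.14,
            'Q': 128.13, 'E': 129.12, 'G': 57.05, 'H': 137.14, 'I': 113.16,
            'L': 113.16, 'K': 128.17, 'M': 131.19, 'F': 147.18, 'P': 97.12,
            'S': 87.08, 'T': 101.10, 'W': 186.21, 'Y': 163.18, 'V': 99.13}
      WATER = 18.02

      def gravy(seq):
          """Grand average of hydropathicity: mean Kyte-Doolittle value."""
          return sum(KD[aa] for aa in seq) / len(seq)

      def mol_weight(seq):
          """Sum of residue masses plus one water for the free termini."""
          return sum(MW[aa] for aa in seq) + WATER

      seq = "MKTAYIAKQR"   # hypothetical peptide
      print(f"GRAVY = {gravy(seq):+.3f}, MW = {mol_weight(seq):.1f} Da")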

  7. Crude Oil Analysis Database

    DOE Data Explorer

    Shay, Johanna Y.

    The composition and physical properties of crude oil vary widely from one reservoir to another within an oil field, as well as from one field or region to another. Although all oils consist of hydrocarbons and their derivatives, the proportions of various types of compounds differ greatly. This makes some oils more suitable than others for specific refining processes and uses. To take advantage of this diversity, one needs access to information in a large database of crude oil analyses. The Crude Oil Analysis Database (COADB) currently satisfies this need by offering 9,056 crude oil analyses. Of these, 8,500 are United States domestic oils. The database contains results of analysis of the general properties and chemical composition, as well as the field, formation, and geographic location of the crude oil sample. [Taken from the Introduction to COAMDATA_DESC.pdf, part of the zipped software and database file at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the zipped file to your PC. When opened, it will contain PDF documents and a large Excel spreadsheet. It will also contain the database in Microsoft Access 2002.

  8. Open systems and databases

    SciTech Connect

    Martire, G.S. ); Nuttall, D.J.H. )

    1993-05-01

    This paper is part of a series of papers invited by the IEEE POWER CONTROL CENTER WORKING GROUP concerning the changing designs of modern control centers. Papers invited by the Working Group discuss the following issues: Benefits of Openness, Criteria for Evaluating Open EMS Systems, Hardware Design, Configuration Management, Security, Project Management, Databases, SCADA, Inter- and Intra-System Communications, and Man-Machine Interfaces. The goal of this paper is to provide an introduction to the issues pertaining to open systems and databases. The intent is to assist understanding of some of the underlying factors that affect choices that must be made when selecting a database system for use in a control room environment. This paper describes and compares the major database information models which are in common use for database systems and provides an overview of SQL. A case for the control center community to follow the workings of the non-formal standards bodies is presented along with possible uses and the benefits of commercially available databases within the control center. The reasons behind the emergence of industry supported standards organizations such as the Open Software Foundation (OSF) and SQL Access are presented.

  9. The comprehensive peptaibiotics database.

    PubMed

    Stoppacher, Norbert; Neumann, Nora K N; Burgstaller, Lukas; Zeilinger, Susanne; Degenkolb, Thomas; Brückner, Hans; Schuhmacher, Rainer

    2013-05-01

    Peptaibiotics are nonribosomally biosynthesized peptides which, by definition, contain the marker amino acid α-aminoisobutyric acid (Aib) and possess antibiotic properties. Known since 1958, peptaibiotics have been described and investigated in constantly increasing numbers, with a particular emphasis on hypocrealean fungi. Starting from the existing online 'Peptaibol Database', first published in 1997, an exhaustive literature survey of all known peptaibiotics was carried out and resulted in a list of 1043 peptaibiotics. The gathered information was compiled and used to create the new 'The Comprehensive Peptaibiotics Database', which is presented here. The database was devised as a software tool based on Microsoft (MS) Access. It is freely available from the internet at http://peptaibiotics-database.boku.ac.at and can easily be installed and operated on any computer offering a Windows XP/7 environment. It provides useful information on characteristic properties of the peptaibiotics included, such as peptide category, group name of the microheterogeneous mixture to which the peptide belongs, amino acid sequence, sequence length, producing fungus, peptide subfamily, molecular formula, and monoisotopic mass. All these characteristics can be used and combined for automated search within the database, which makes The Comprehensive Peptaibiotics Database a versatile tool for the retrieval of valuable information about peptaibiotics. Sequence data have been considered up to December 14, 2012. PMID:23681723

  10. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  11. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  13. Signal detection in FDA AERS database using Dirichlet process.

    PubMed

    Hu, Na; Huang, Lan; Tiwari, Ram C

    2015-08-30

    In the past two decades, data mining methods for signal detection have been developed for drug safety surveillance, using large post-market safety data. Several of these methods assume that the number of reports for each drug-adverse event combination is a Poisson random variable with mean proportional to the unknown reporting rate of the drug-adverse event pair. Here, a Bayesian method based on the Poisson-Dirichlet process (DP) model is proposed for signal detection from large databases, such as the Food and Drug Administration's Adverse Event Reporting System (AERS) database. Instead of using a parametric distribution as a common prior for the reporting rates, as is the case with existing Bayesian or empirical Bayesian methods, a nonparametric prior, namely, the DP, is used. The precision parameter and the baseline distribution of the DP, which characterize the process, are modeled hierarchically. The performance of the Poisson-DP model is compared with some other models, through an intensive simulation study using a Bayesian model selection and frequentist performance characteristics such as type-I error, false discovery rate, sensitivity, and power. For illustration, the proposed model and its extension to address a large number of zero counts are used to analyze statin drugs for signals using the 2006-2011 AERS data. PMID:25924820
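
    The Poisson setup these methods share is easy to make concrete. The sketch below is not the paper's Dirichlet-process model; it uses the simplest conjugate alternative, a single Gamma(alpha, beta) prior on each pair's reporting-rate ratio, so the posterior mean (alpha + n) / (beta + E) shrinks noisy observed/expected ratios toward the prior. All counts, drug names, and the flagging threshold are invented.

      def posterior_rate(n_obs, n_expected, alpha=1.0, beta=1.0):
          """Posterior mean reporting-rate ratio under a Gamma(alpha, beta) prior."""
          return (alpha + n_obs) / (beta + n_expected)

      # (drug, adverse event): observed count, count expected under independence.
      pairs = {
          ("statinX", "myopathy"):       (40, 12.0),
          ("statinX", "headache"):       (55, 60.0),
          ("statinX", "rhabdomyolysis"): (3, 0.4),
      }

      for pair, (n, e) in sorted(pairs.items(),
                                 key=lambda kv: -posterior_rate(*kv[1])):
          score = posterior_rate(n, e)
          flag = "<- potential signal" if score > 2.0 else ""
          print(f"{pair}: shrunken rate ratio = {score:.2f} {flag}")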

  14. Linking Clinical Research Data to Population Databases

    PubMed Central

    Edelman, Linda S.; Guo, Jia-Wen; Fraser, Alison; Beck, Susan L.

    2014-01-01

    Background Most clinical nursing research is limited to funded study periods. Researchers can study relationships between study measures and long-term outcomes if clinical research data can be linked to population databases. Objectives The objective was to describe the feasibility of linking research participant data to data from population databases in order to study long-term poststudy outcomes. As an exemplar, participants were linked from a completed oncology nursing research trial to outcomes data in two state population databases. Methods Participant data from a previously completed symptom management study were linked to the Utah Population Database and the Utah Emergency Department Database. The final dataset contained demographic, cancer diagnosis and treatment, and baseline data from the oncology study linked to poststudy long-term outcomes from the population databases. Results One hundred twenty-nine of 144 (89.6%) study participants were linked to their individual data in the population databases. Of those, 73% were linked to hospitalization records, 60% to emergency department visit records, and 28% were identified as having died. Discussion Study participant data were successfully linked to population database data to describe poststudy emergency department visits, hospitalizations, and mortality. The results suggest that data linkage success can be improved if researchers include linkage and human subjects protection plans related to linkage in the initial study design. PMID:24165220

  15. QIS: A Framework for Biomedical Database Federation

    PubMed Central

    Marenco, Luis; Wang, Tzuu-Yi; Shepherd, Gordon; Miller, Perry L.; Nadkarni, Prakash

    2004-01-01

    Query Integrator System (QIS) is a database mediator framework intended to address robust data integration from continuously changing heterogeneous data sources in the biosciences. Currently in the advanced prototype stage, it is being used on a production basis to integrate data from neuroscience databases developed for the SenseLab project at Yale University with external neuroscience and genomics databases. The QIS framework uses standard technologies and is intended to be deployable by administrators with a moderate level of technological expertise: It comes with various tools, such as interfaces for the design of distributed queries. The QIS architecture is based on a set of distributed network-based servers, data source servers, integration servers, and ontology servers, that exchange metadata as well as mappings of both metadata and data elements to elements in an ontology. Metadata version difference determination coupled with decomposition of stored queries is used as the basis for partial query recovery when the schema of data sources alters. PMID:15298995

  16. QIS: A framework for biomedical database federation.

    PubMed

    Marenco, Luis; Wang, Tzuu-Yi; Shepherd, Gordon; Miller, Perry L; Nadkarni, Prakash

    2004-01-01

    Query Integrator System (QIS) is a database mediator framework intended to address robust data integration from continuously changing heterogeneous data sources in the biosciences. Currently in the advanced prototype stage, it is being used on a production basis to integrate data from neuroscience databases developed for the SenseLab project at Yale University with external neuroscience and genomics databases. The QIS framework uses standard technologies and is intended to be deployable by administrators with a moderate level of technological expertise: It comes with various tools, such as interfaces for the design of distributed queries. The QIS architecture is based on a set of distributed network-based servers, data source servers, integration servers, and ontology servers, that exchange metadata as well as mappings of both metadata and data elements to elements in an ontology. Metadata version difference determination coupled with decomposition of stored queries is used as the basis for partial query recovery when the schema of data sources alters.

  17. Great Basin paleontological database

    USGS Publications Warehouse

    Zhang, N.; Blodgett, R.B.; Hofstra, A.H.

    2008-01-01

    The U.S. Geological Survey has constructed a paleontological database for the Great Basin physiographic province that can be served over the World Wide Web for data entry, queries, displays, and retrievals. It is similar to the web-database solution that we constructed for Alaskan paleontological data (www.alaskafossil.org). The first phase of this effort was to compile a paleontological bibliography for Nevada and portions of adjacent states in the Great Basin, which has recently been completed. In addition, we are also compiling paleontological reports (known as E&R reports) of the U.S. Geological Survey, which are another extensive source of legacy data for this region. Initial population of the database benefited from a recently published conodont data set and is otherwise focused on Devonian and Mississippian localities because strata of this age host important sedimentary exhalative (sedex) Au, Zn, and barite resources and enormous Carlin-type Au deposits. In addition, these strata are the most important petroleum source rocks in the region, and record the transition from extension to contraction associated with the Antler orogeny, the Alamo meteorite impact, and biotic crises associated with global oceanic anoxic events. The finished product will provide an invaluable tool for future geologic mapping, paleontological research, and mineral resource investigations in the Great Basin, making paleontological data acquired over nearly the past 150 yr readily available over the World Wide Web. A description of the structure of the database and the web interface developed for this effort are provided herein. This database is being used as a model for a National Paleontological Database (which we are currently developing for the U.S. Geological Survey) as well as for other paleontological databases now being developed in other parts of the globe. © 2008 Geological Society of America.

  18. GOTTCHA Database, Version 1

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- which is the most computationally intensive phase -- as an all-vs-all comparison is made, the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection are screened for. If such an overlap exists, the region which

  19. GOTTCHA Database, Version 1

    SciTech Connect

    Freitas, Tracey; Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- which is the most computationally intensive phase -- as an all-vs-all comparison is made, the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection are screened for. If such an overlap exists, the region which intersects is
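
    The three phases are straightforward to mimic at toy scale. In the sketch below, Python sets stand in for the sorted, prefix-indexed bit vectors of the real implementation: each genome is decomposed into valid 24-mers, and subtracting every other genome's k-mers leaves the unique signature k-mers. The sequences are invented and vastly shorter than real genomes.

      K = 24

      def valid_kmers(seq, k=K):
          """Overlapping k-mers, skipping windows with ambiguous bases (masking)."""
          seq = seq.upper()
          return {seq[i:i + k] for i in range(len(seq) - k + 1)
                  if set(seq[i:i + k]) <= set("ACGT")}

      genomes = {
          "taxonA": "ACGT" * 20 + "TTTTGGGGCCCCAAAATTTTGGGG",
          "taxonB": "ACGT" * 20 + "CCCCTTTTAAAAGGGGCCCCTTTT",
      }

      kmer_sets = {name: valid_kmers(seq) for name, seq in genomes.items()}

      signatures = {}
      for name, kmers in kmer_sets.items():
          shared = set().union(*(s for other, s in kmer_sets.items()
                                 if other != name))          # Intersection phase
          signatures[name] = kmers - shared                  # Subtraction phase

      for name, sig in signatures.items():
          print(name, len(kmer_sets[name]), "k-mers,", len(sig), "signatures")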

  20. Solutions for medical databases optimal exploitation

    PubMed Central

    Branescu, I; Purcarea, VL; Dobrescu, R

    2014-01-01

    The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage an existing performance-enhancing technique, known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistical support for data warehousing techniques. The transformations have low computational complexity in practice and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user, and proposes a flexible "multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769
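
    Practical pre-aggregation itself is simple to illustrate: aggregate the fact rows once at a fine granularity, then answer coarser roll-up queries from that small summary instead of rescanning the raw records. The sketch below does this in plain Python over an invented ward-to-department hierarchy; it shows the general idea, not the paper's transformations.

      from collections import Counter

      raw_admissions = [                    # (ward, diagnosis) fact rows
          ("cardio-1", "MI"), ("cardio-1", "MI"), ("cardio-2", "arrhythmia"),
          ("neuro-1", "stroke"), ("neuro-1", "stroke"), ("neuro-1", "migraine"),
      ]
      ward_to_dept = {"cardio-1": "cardiology", "cardio-2": "cardiology",
                      "neuro-1": "neurology"}

      # One pass over the facts: the pre-aggregated cube at (ward, diagnosis).
      cube = Counter(raw_admissions)

      def rollup_by_department(cube):
          """Answer a coarser query from the aggregate, not from raw rows."""
          out = Counter()
          for (ward, diagnosis), n in cube.items():
              out[ward_to_dept[ward], diagnosis] += n
          return out

      print(rollup_by_department(cube))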

  1. Incomplete data sets: coping with inadequate databases.

    PubMed

    Albert, R H; Horwitz, W

    1995-01-01

    Three problems arise in handling numerical values in databases: bad data, missing data, and sloppy data. The effects of bad data are mitigated by using statistical subterfuges such as robust statistics or outlier removal. Missing data are replaced by creating a substitute through interpolation or by using statistics appropriate to unbalanced designs. Sloppy, semiquantitative data are relegated to innocuous positions by using nonparametric, rank, or attribute statistics. These techniques are illustrated by the telephone directory, a database of carcinogenicity test results, and a database of precision parameters derived from method performance (collaborative) studies.
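
    Each of the three remedies named above fits in a few lines. The sketch below runs an invented assay column through a robust median/MAD outlier screen (bad data), linear interpolation between the nearest known neighbors (missing data), and a fall-back to ranks (sloppy semiquantitative data); the cutoff of 5 MAD units is a conventional choice, not taken from the paper.

      import statistics

      values = [9.8, 10.1, None, 10.4, 97.0, 10.0, None, 9.9]   # None = missing

      # 1. Bad data: drop points far from the median, measured in MAD units.
      present = [v for v in values if v is not None]
      med = statistics.median(present)
      mad = statistics.median(abs(v - med) for v in present) or 1e-9
      clean = [v if v is None or abs(v - med) / mad < 5 else None for v in values]

      # 2. Missing data: interpolate linearly between nearest known neighbors.
      def interpolate(xs):
          xs = xs[:]
          known = [i for i, v in enumerate(xs) if v is not None]
          for i, v in enumerate(xs):
              if v is None:
                  lo = max((k for k in known if k < i), default=None)
                  hi = min((k for k in known if k > i), default=None)
                  if lo is None:
                      xs[i] = xs[hi]
                  elif hi is None:
                      xs[i] = xs[lo]
                  else:
                      w = (i - lo) / (hi - lo)
                      xs[i] = xs[lo] * (1 - w) + xs[hi] * w
          return xs

      filled = interpolate(clean)

      # 3. Sloppy data: fall back to ranks, which ignore magnitude errors.
      ranks = [sorted(filled).index(v) + 1 for v in filled]
      print(filled, ranks, sep="\n")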

  2. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  3. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  4. Using the Reactome Database

    PubMed Central

    Haw, Robin

    2012-01-01

    There is considerable interest in the bioinformatics community in creating pathway databases. The Reactome project (a collaboration between the Ontario Institute for Cancer Research, Cold Spring Harbor Laboratory, New York University Medical Center and the European Bioinformatics Institute) is one such pathway database and collects structured information on all the biological pathways and processes in the human. It is an expert-authored and peer-reviewed, curated collection of well-documented molecular reactions that span the gamut from simple intermediate metabolism to signaling pathways and complex cellular events. This information is supplemented with likely orthologous molecular reactions in mouse, rat, zebrafish, worm and other model organisms. This unit describes how to use the Reactome database to learn the steps of a biological pathway; navigate and browse through the Reactome database; identify the pathways in which a molecule of interest is involved; use the Pathway and Expression analysis tools to search the database for and visualize possible connections within user-supplied experimental data set and Reactome pathways; and the Species Comparison tool to compare human and model organism pathways. PMID:22700314

  5. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James L.; Christiansen, Eric L.; Lear, Dana M.

    2011-01-01

    With three missions outstanding, the Shuttle Hypervelocity Impact Database has nearly 3000 entries. The data is divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. Details and insights on the contents of the database, including examples of descriptive statistics, will be provided. Post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will also be discussed. Potential enhancements to the database structure and availability of the data for other researchers will be addressed in the Future Work section. A related database of returned surfaces from the International Space Station will also be introduced.

  6. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James I.; Christiansen, Eric I.; Lear, Dana M.

    2011-01-01

    With three flights remaining on the manifest, the Shuttle Hypervelocity Impact Database has over 2800 entries. The data is currently divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. The paper will provide details and insights on the contents of the database, including examples of descriptive statistics using the impact data. A discussion of post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will be presented. Future work to be discussed will include possible enhancements to the database structure and availability of the data for other researchers. A related database of ISS returned surfaces that is under development will also be introduced.

  7. 75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... Federal Aviation Administration Passenger Facility Charge Database System for Air Carrier Reporting AGENCY... interested parties of the availability of the Passenger Facility Charge (PFC) database system to report PFC... public agency. The FAA has developed a national PFC database system in order to more easily track the...

  8. Computer Databases: A Survey; Part 1: General and News Databases.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1986-01-01

    Descriptions and evaluations of 13 databases devoted to computer information are presented by type under four headings: bibliographic databases; daily news services; online computer magazines; and specialized computer industry databases. Information on database producers, starting date of file, update frequency, vendors, and prices is summarized…

  9. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous amount of various geoscience databases. Unfortunately the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps avoid these drawbacks, but they can hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); the contributor that sent the result. Each contributor has their own profile, which makes it possible to estimate the reliability of the data. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales and each registered user can create their own scale. The results can also be extracted into a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Latitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised on login. This is the minimal set of features that is required to connect a value of a parameter with a position and see the results. All the complicated data
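
    The upload format quoted above is a simple semicolon-separated record. As an illustration only, the following sketch parses one such row with the Python standard library; the sample station and the dictionary field names are invented.

    ```python
    import csv, io

    sample = "Lake Ladoga N3;60.123456;031.654321;lake;pH;7.4;2011-08-15\n"
    fields = ["station", "lat", "lon", "station_type", "parameter", "value", "date"]

    for row in csv.reader(io.StringIO(sample), delimiter=";"):
        record = dict(zip(fields, row))
        # Coordinates and the measured value are numeric; the rest stay text.
        record["lat"], record["lon"] = float(record["lat"]), float(record["lon"])
        record["value"] = float(record["value"])
        print(record)
    ```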

  10. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134a, R-141b, R-142b, R-143a, R-152a, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses polyalkylene glycol (PAG), ester, and other lubricants. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits.

  11. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

    Medical databases deal with dynamic, heterogeneous and fuzzy data. The modeling of such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows for the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288
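
    The core idea of mapping a semantic model into relational tables can be sketched briefly. The fragment below is not GSM-Explorer's generator; it is a hypothetical Python sketch in which an invented generalization/specialization model (Person/Patient) is emitted as SQL DDL where each specialization table shares the parent's key.

    ```python
    # Invented two-entity model: Patient specializes Person.
    model = {
        "Person":  {"specializes": None,     "attrs": {"name": "TEXT"}},
        "Patient": {"specializes": "Person", "attrs": {"blood_type": "TEXT"}},
    }

    def to_sql(model):
        for entity, spec in model.items():
            cols = ["id INTEGER PRIMARY KEY"]
            cols += [f"{a} {t}" for a, t in spec["attrs"].items()]
            parent = spec["specializes"]
            if parent:   # specialization: same key, enforced by a foreign key
                cols.append(f"FOREIGN KEY (id) REFERENCES {parent}(id)")
            yield f"CREATE TABLE {entity} (\n  " + ",\n  ".join(cols) + "\n);"

    print("\n".join(to_sql(model)))
    ```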

  12. Protein Structure Databases.

    PubMed

    Laskowski, Roman A

    2016-01-01

    Web-based protein structure databases come in a wide variety of types and levels of information content. Those having the most general interest are the various atlases that describe each experimentally determined protein structure and provide useful links, analyses, and schematic diagrams relating to its 3D structure and biological function. Also of great interest are the databases that classify 3D structures by their folds as these can reveal evolutionary relationships which may be hard to detect from sequence comparison alone. Related to these are the numerous servers that compare folds-particularly useful for newly solved structures, and especially those of unknown function. Beyond these are a vast number of databases for the more specialized user, dealing with specific families, diseases, structural features, and so on. PMID:27115626

  13. Mouse genome database 2016.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  14. Mouse genome database 2016

    PubMed Central

    Bult, Carol J.; Eppig, Janan T.; Blake, Judith A.; Kadin, James A.; Richardson, Joel E.

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  15. Mouse genome database 2016.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  16. National Ambient Radiation Database

    SciTech Connect

    Dziuban, J.; Sears, R.

    2003-02-25

    The U.S. Environmental Protection Agency (EPA) recently developed a searchable database and website for the Environmental Radiation Ambient Monitoring System (ERAMS) data. This site contains nationwide radiation monitoring data for air particulates, precipitation, drinking water, surface water and pasteurized milk. This site provides location-specific as well as national information on environmental radioactivity across several media. It provides high quality data for assessing public exposure and environmental impacts resulting from nuclear emergencies and provides baseline data during routine conditions. The database and website are accessible at www.epa.gov/enviro/. This site contains (1) a query for the general public, which is easy to use -- it limits the amount of information provided, but includes the ability to graph the data with risk benchmarks; (2) a query for a more technical user, which allows access to all of the data in the database; and (3) background information on ERAMS.

  17. Simultaneous quantification of flavonoids in blood plasma by a high-performance liquid chromatography method after oral administration of Blumea balsamifera leaf extracts in rats.

    PubMed

    Nessa, Fazilatun; Ismail, Zhari; Mohamed, Nornisah; Karupiah, Sundram

    2013-03-01

    The leaves of Blumea balsamifera are used as a folk medicine for kidney stone diseases in South-East Asia. Phytochemical investigation revealed that the leaves contain a number of flavonoids. In view of this, the present work aimed at the quantification and preliminary pharmacokinetic investigation of five flavonoids, viz. dihydroquercetin-7,4′-dimethyl ether (I), dihydroquercetin-4′-methyl ether (II), 5,7,3′,5′-tetrahydroxyflavanone (III), blumeatin (IV) and quercetin (V), in rat plasma following oral administration (0.5 g/kg) of B. balsamifera leaf extract in rats. Quantification was achieved by using a validated, reproducible high-performance liquid chromatographic method. The mean recoveries of I, II, III, IV and V were 90.6, 93.4, 93.5, 91.2 and 90.3%, respectively. The limit of quantification was 25 ng/mL for I and IV, 10 ng/mL for II and III, and 100 ng/mL for V. The within-day and day-to-day precisions for all the compounds were <10%. The validated HPLC method was applied to pharmacokinetic studies, and the main pharmacokinetic parameters were: t1/2 (hr) 5.8, 4.3, 2.9, 5.7 and 7.3; Cmax (ng/mL) 594.9, 1542.9, 1659.9, 208.9 and 3040.4; Tmax (hr) 4.7, 1.0, 1.0, 3.5 and 2.3; AUC0-∞ (ng hr/mL) 5040, 5893, 9260, 1064 and 27233 for I, II, III, IV and V, respectively. The developed method was suitable for pharmacokinetic studies, and this preliminary study also revealed significant absorption after oral dosing in rats. PMID:23455210

  18. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  19. Thermodynamics of Enzyme-Catalyzed Reactions Database

    National Institute of Standards and Technology Data Gateway

    SRD 74 Thermodynamics of Enzyme-Catalyzed Reactions Database (Web, free access)   The Thermodynamics of Enzyme-Catalyzed Reactions Database contains thermodynamic data on enzyme-catalyzed reactions that have been recently published in the Journal of Physical and Chemical Reference Data (JPCRD). For each reaction the following information is provided: the reference for the data, the reaction studied, the name of the enzyme used and its Enzyme Commission number, the method of measurement, the data and an evaluation thereof.

  20. Selective access and editing in a database

    NASA Technical Reports Server (NTRS)

    Maluf, David A. (Inventor); Gawdiak, Yuri O. (Inventor)

    2010-01-01

    Method and system for providing selective access to different portions of a database by different subgroups of database users. Where N users are involved, up to 2^N - 1 distinguishable access subgroups in a group space can be formed, where no two access subgroups have the same members. Two or more members of a given access subgroup can edit, substantially simultaneously, a document accessible to each member.
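
    The 2^N - 1 count is simply the number of non-empty subsets of N users, which a bitmask enumeration makes concrete. The sketch below is illustrative only; the user names are invented, and no claim is made about how the patented system actually represents subgroups.

    ```python
    users = ["ana", "ben", "carol"]          # N = 3, invented users
    N = len(users)

    # Each bit of the mask says whether the corresponding user is a member,
    # so masks 1 .. 2**N - 1 enumerate every non-empty access subgroup.
    subgroups = []
    for mask in range(1, 2**N):
        subgroups.append({users[i] for i in range(N) if mask & (1 << i)})

    print(len(subgroups))                    # 7 == 2**3 - 1
    print(subgroups[4])                      # mask 5 = 0b101 -> {'ana', 'carol'}
    ```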

  1. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database, built with a demand-oriented database design method (CHIDB), which brings together ground-based spectra, standardized hyperspectral cubes and spectral analysis to meet the needs of several applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; some data mining ideas and functions were integrated into CHIDB to make it better suited to serving the agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET Framework and designed with an MVC architecture comprising five main functional modules: Data Importer/Exporter, Image/Spectrum Viewer, Data Processor, Parameter Extractor, and On-line Analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and advanced spectral image data processing techniques are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.

  2. The Ribosomal Database Project.

    PubMed Central

    Maidak, B L; Larsen, N; McCaughey, M J; Overbeek, R; Olsen, G J; Fogel, K; Blandy, J; Woese, C R

    1994-01-01

    The Ribosomal Database Project (RDP) is a curated database that offers ribosome-related data, analysis services, and associated computer programs. The offerings include phylogenetically ordered alignments of ribosomal RNA (rRNA) sequences, derived phylogenetic trees, rRNA secondary structure diagrams, and various software for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (rdp.life.uiuc.edu), electronic mail (server@rdp.life.uiuc.edu) and gopher (rdpgopher.life.uiuc.edu). The electronic mail server also provides ribosomal probe checking, approximate phylogenetic placement of user-submitted sequences, screening for the chimeric nature of newly sequenced rRNAs, and automated alignment. PMID:7524021

  3. Database Management System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc., a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff of the NASA system they had originally created: a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  4. The Genopolis Microarray Database

    PubMed Central

    Splendiani, Andrea; Brandizi, Marco; Even, Gael; Beretta, Ottavio; Pavelka, Norman; Pelizzola, Mattia; Mayhaus, Manuel; Foti, Maria; Mauri, Giancarlo; Ricciardi-Castagnoli, Paola

    2007-01-01

    Background Gene expression databases are key resources for microarray data management and analysis, and the importance of a proper annotation of their content is well understood. Public repositories as well as microarray database systems that can be implemented by single laboratories exist. However, there is not yet a tool that can easily support a collaborative environment where different users with different rights of access to data can interact to define a common, highly coherent content. The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments related to a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic and macrophage cell functions and host-parasite interactions. Results The Genopolis Database system allows the community to build an object-based, MIAME-compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamic definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows precise control of the visibility of the database content to different subgroups in the community and facilitates exports of its content to public repositories. It provides an interactive user interface for data analysis: this allows users to visualize data matrices based on functional lists and sample characterization, and to navigate to other data matrices defined by similarity of expression values as well as functional characterizations of the genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users. Conclusion The Genopolis Database supports a community in building a common coherent knowledge base and analysing it. This fills a gap between a local

  5. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers the user to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; at present, engines from three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operation, give some insight into the redesigned software engineering, and outline its near-future evolution.

  6. Low-Budget Graphic Databases.

    ERIC Educational Resources Information Center

    Mahoney, Dan

    1994-01-01

    Explains the use of a standard text-based database program (i.e., dBase III) to run external programs that display graphic files during a database session and reduces costs normally encountered when preparing a computer to run a graphical database. An example is given of a simple database with two fields. (LRW)

  7. TREC Document Database: Disk 4

    National Institute of Standards and Technology Data Gateway

    NIST TREC Document Database: Disk 4 (PC database for purchase)   NIST TREC Document Databases (Special Database 22) are distributed for the development and testing of information retrieval (IR) systems and related natural language processing research. The document collections consist of the full text of various newspaper and newswire articles plus government proceedings.

  8. TREC Document Database: Disk 5

    National Institute of Standards and Technology Data Gateway

    NIST TREC Document Database: Disk 5 (PC database for purchase)   NIST TREC Document Databases (Special Database 23) are distributed for the development and testing of information retrieval (IR) systems and related natural language processing research. The document collections consist of the full text of various newspaper and newswire articles plus government proceedings.

  9. Barriers and Facilitators to Career Advancement by Top-Level, Entry-Level and Non-Administrative Women in Public School Districts: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Ahmed, Eman Ibrahim El-Desouki

    2011-01-01

    The purpose of this study was to investigate the barriers and facilitators to career advancement among women administrators occupying top-level positions, those occupying entry-level positions and those in non-administrative positions in both rural and urban public school districts in central Pennsylvania. The need to increase the awareness of the…

  10. Administrative Methods for Reducing Crime in Primary and Secondary Schools: A Regression Analysis of the U.S. Department of Education School Survey of Crime and Safety

    ERIC Educational Resources Information Center

    Noonan, James H.

    2011-01-01

    Since the 1999 Columbine High School shooting school administrators have been tasked with creating positive education environments while also maximizing the safety of the students and staff. However, limited resources require school administrators to only employ safety policies which are actually effective in reducing crime. In order to help…

  11. The peptaibiotics database--a comprehensive online resource.

    PubMed

    Neumann, Nora K N; Stoppacher, Norbert; Zeilinger, Susanne; Degenkolb, Thomas; Brückner, Hans; Schuhmacher, Rainer

    2015-05-01

    In this work, we present the 'Peptaibiotics Database' (PDB), a comprehensive online resource, which intends to cover all Aib-containing non-ribosomal fungal peptides currently described in the scientific literature. This database shall extend and update the recently published 'Comprehensive Peptaibiotics Database' and currently consists of 1,297 peptaibiotic sequences. In a literature survey, a total of 235 peptaibiotic sequences published between January 2013 and June 2014 were compiled and added to the list of 1,062 peptides in the recently published 'Comprehensive Peptaibiotics Database'. The presented database is intended as a public resource freely accessible to the scientific community at peptaibiotics-database.boku.ac.at. The search options of the previously published repository and the presentation of sequence motif searches have been extended significantly. All of the available search options can be combined to create complex database queries. As a public repository, the presented database enables the easy upload of new peptaibiotic sequences and the correction of existing information. In addition, an administrative interface for maintenance of the content of the database has been implemented, and the design of the database can be easily extended to store additional information to accommodate future needs of the 'peptaibiomics community'.

  12. Iodine in food- and dietary supplement-composition databases.

    PubMed

    Pehrsson, Pamela R; Patterson, Kristine Y; Spungen, Judith H; Wirtz, Mark S; Andrews, Karen W; Dwyer, Johanna T; Swanson, Christine A

    2016-09-01

    The US Food and Drug Administration (FDA) and the Nutrient Data Laboratory (NDL) of the USDA Agricultural Research Service have worked independently on determining the iodine content of foods and dietary supplements and are now harmonizing their efforts. The objective of the current article is to describe the harmonization plan and the results of initial iodine analyses accomplished under that plan. For many years, the FDA's Total Diet Study (TDS) has measured iodine concentrations in selected foods collected in 4 regions of the country each year. For more than a decade, the NDL has collected and analyzed foods as part of the National Food and Nutrient Analysis Program; iodine analysis is now being added to the program. The NDL recently qualified a commercial laboratory to conduct iodine analysis of foods by an inductively coupled plasma mass spectrometry (ICP-MS) method. Co-analysis of a set of samples by the commercial laboratory using the ICP-MS method and by the FDA laboratory using its standard colorimetric method yielded comparable results. The FDA recently reviewed historical TDS data for trends in the iodine content of selected foods, and the NDL analyzed samples of a limited subset of those foods for iodine. The FDA and the NDL are working to combine their data on iodine in foods and to produce an online database that can be used for estimating iodine intake from foods in the US population. In addition, the NDL continues to analyze dietary supplements for iodine and, in collaboration with the NIH Office of Dietary Supplements, to publish the data online in the Dietary Supplement Ingredient Database. The goal is to provide, through these 2 harmonized databases and the continuing TDS focus on iodine, improved tools for estimating iodine intake in population studies.

  13. Iodine in food- and dietary supplement-composition databases.

    PubMed

    Pehrsson, Pamela R; Patterson, Kristine Y; Spungen, Judith H; Wirtz, Mark S; Andrews, Karen W; Dwyer, Johanna T; Swanson, Christine A

    2016-09-01

    The US Food and Drug Administration (FDA) and the Nutrient Data Laboratory (NDL) of the USDA Agricultural Research Service have worked independently on determining the iodine content of foods and dietary supplements and are now harmonizing their efforts. The objective of the current article is to describe the harmonization plan and the results of initial iodine analyses accomplished under that plan. For many years, the FDA's Total Diet Study (TDS) has measured iodine concentrations in selected foods collected in 4 regions of the country each year. For more than a decade, the NDL has collected and analyzed foods as part of the National Food and Nutrient Analysis Program; iodine analysis is now being added to the program. The NDL recently qualified a commercial laboratory to conduct iodine analysis of foods by an inductively coupled plasma mass spectrometry (ICP-MS) method. Co-analysis of a set of samples by the commercial laboratory using the ICP-MS method and by the FDA laboratory using its standard colorimetric method yielded comparable results. The FDA recently reviewed historical TDS data for trends in the iodine content of selected foods, and the NDL analyzed samples of a limited subset of those foods for iodine. The FDA and the NDL are working to combine their data on iodine in foods and to produce an online database that can be used for estimating iodine intake from foods in the US population. In addition, the NDL continues to analyze dietary supplements for iodine and, in collaboration with the NIH Office of Dietary Supplements, to publish the data online in the Dietary Supplement Ingredient Database. The goal is to provide, through these 2 harmonized databases and the continuing TDS focus on iodine, improved tools for estimating iodine intake in population studies. PMID:27534627

  14. Neutron monitor database in real time

    NASA Astrophysics Data System (ADS)

    Kozlov, Valery; Kudela, Karel; Starodubtsev, Sergei; Turpanov, Alexey; Usoskin, Ilya; Yanke, Victor

    2003-09-01

    A first distributed real-time cosmic ray database using measurements of several neutron monitors is presented. The aim of the project is to develop a unified database with data from different neutron monitors collected together in a unified format, and to provide users with several commonly used data access methods. The database contains the original cosmic ray data as well as all housekeeping and technical data necessary for scientific data analysis. Currently the database includes the Lomnicky Stit, Moscow, Oulu, Tixie Bay and Yakutsk stations, and it is open to other neutron monitors. The main database server is located in IKFIA SB RAS (Yakutsk), but there will be several mirrors of the database. The database and all its mirrors are updated on a near-real-time (1 hour) basis. The data access software includes a WWW interface, Perl scripts and a C library, which may be linked to a user program. The most frequently used functions are implemented so that the database is usable without knowledge of SQL. A draft data representation standard is suggested, based on the common practice of the neutron monitor community. The database engine is the freely distributed, open-source PostgreSQL server coupled with a set of replication tools developed at the Bioengineering division of the IRCCS E. Medea, Italy.

  15. Continuous and passive environmental radon monitoring: Measuring methods and health effects. (Latest citations from the INSPEC: Information services for the Physics and Engineering Communities database). Published Search

    SciTech Connect

    1993-08-01

    The bibliography contains citations concerning continuous and passive radon (Rn) monitoring, measurement methods and equipment, and health effects from Rn concentration in air, water, and soils. Citations discuss the design, development, and evaluation of monitoring and detection devices, including alpha spectroscopy and dosimetry, track detecting and scintillation, thermoluminescent, electret, and electrode collection. Sources of Rn concentration levels found in building materials, ventilation systems, soils, and ground water are examined. Lung cancer-associated risks from Rn radiation exposure are explored. Radon monitoring in mining operations is excluded. (Contains a minimum of 210 citations and includes a subject term index and title list.)

  16. The CEBAF Element Database

    SciTech Connect

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
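
    An introspective schema of the kind described, where new types and properties require only new rows rather than ALTER TABLE, resembles the familiar entity-attribute-value pattern. The sketch below is a loose illustration of that pattern using sqlite3 from the Python standard library; the table layout and the quadrupole element are invented, not the actual CED schema.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE element  (id INTEGER PRIMARY KEY, name TEXT, type TEXT);
        CREATE TABLE property (element_id INTEGER REFERENCES element(id),
                               name TEXT, value TEXT);
    """)

    # Define a new type and its properties on the fly -- purely by inserting
    # rows; no table structure changes are needed.
    con.execute("INSERT INTO element VALUES (1, 'MQB1S01', 'Quadrupole')")
    con.executemany("INSERT INTO property VALUES (1, ?, ?)",
                    [("length_m", "0.3"), ("field_gradient", "4.2")])

    for row in con.execute("""SELECT e.name, p.name, p.value
                              FROM element e
                              JOIN property p ON p.element_id = e.id"""):
        print(row)
    ```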

  17. Triatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 117 Triatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  18. Hydrocarbon Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  19. Diatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 114 Diatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.

  20. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  1. The Ribosomal Database Project

    NASA Technical Reports Server (NTRS)

    Olsen, G. J.; Overbeek, R.; Larsen, N.; Marsh, T. L.; McCaughey, M. J.; Maciukenas, M. A.; Kuan, W. M.; Macke, T. J.; Xing, Y.; Woese, C. R.

    1992-01-01

    The Ribosomal Database Project (RDP) compiles ribosomal sequences and related data, and redistributes them in aligned and phylogenetically ordered form to its user community. It also offers various software packages for handling, analyzing and displaying sequences. In addition, the RDP offers (or will offer) certain analytic services. At present the project is in an intermediate stage of development.

  2. Weathering Database Technology

    ERIC Educational Resources Information Center

    Snyder, Robert

    2005-01-01

    Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…

  3. Patent Family Databases.

    ERIC Educational Resources Information Center

    Simmons, Edlyn S.

    1985-01-01

    Reports on retrieval of patent information online and includes definition of patent family, basic and equivalent patents, "parents and children" applications, designated states, patent family databases--International Patent Documentation Center, World Patents Index, APIPAT (American Petroleum Institute), CLAIMS (IFI/Plenum). A table noting country…

  4. LQTS gene LOVD database.

    PubMed

    Zhang, Tao; Moss, Arthur; Cong, Peikuan; Pan, Min; Chang, Bingxi; Zheng, Liangrong; Fang, Quan; Zareba, Wojciech; Robinson, Jennifer; Lin, Changsong; Li, Zhongxiang; Wei, Junfang; Zeng, Qiang; Qi, Ming

    2010-11-01

    The Long QT Syndrome (LQTS) is a group of genetically heterogeneous disorders that predisposes young individuals to ventricular arrhythmias and sudden death. LQTS is mainly caused by mutations in genes encoding subunits of cardiac ion channels (KCNQ1, KCNH2, SCN5A, KCNE1, and KCNE2). Many other genes involved in LQTS have been described recently (KCNJ2, AKAP9, ANK2, CACNA1C, SCNA4B, SNTA1, and CAV3). We created an online database (http://www.genomed.org/LOVD/introduction.html) that provides information on variants in LQTS-associated genes. As of February 2010, the database contains 1738 unique variants in 12 genes. A total of 950 variants are considered pathogenic, 265 are possibly pathogenic, 131 are unknown/unclassified, and 292 have no known pathogenicity. In addition to these mutations collected from published literature, we also submitted information on gene variants, including one possibly novel pathogenic mutation in the KCNH2 splice site found in ten Chinese families with documented arrhythmias. The remote user is able to search the data and is encouraged to submit new mutations into the database. The LQTS database will become a powerful tool for both researchers and clinicians. PMID:20809527

  5. LQTS gene LOVD database.

    PubMed

    Zhang, Tao; Moss, Arthur; Cong, Peikuan; Pan, Min; Chang, Bingxi; Zheng, Liangrong; Fang, Quan; Zareba, Wojciech; Robinson, Jennifer; Lin, Changsong; Li, Zhongxiang; Wei, Junfang; Zeng, Qiang; Qi, Ming

    2010-11-01

    The Long QT Syndrome (LQTS) is a group of genetically heterogeneous disorders that predisposes young individuals to ventricular arrhythmias and sudden death. LQTS is mainly caused by mutations in genes encoding subunits of cardiac ion channels (KCNQ1, KCNH2, SCN5A, KCNE1, and KCNE2). Many other genes involved in LQTS have been described recently (KCNJ2, AKAP9, ANK2, CACNA1C, SCNA4B, SNTA1, and CAV3). We created an online database (http://www.genomed.org/LOVD/introduction.html) that provides information on variants in LQTS-associated genes. As of February 2010, the database contains 1738 unique variants in 12 genes. A total of 950 variants are considered pathogenic, 265 are possibly pathogenic, 131 are unknown/unclassified, and 292 have no known pathogenicity. In addition to these mutations collected from published literature, we also submitted information on gene variants, including one possibly novel pathogenic mutation in the KCNH2 splice site found in ten Chinese families with documented arrhythmias. The remote user is able to search the data and is encouraged to submit new mutations into the database. The LQTS database will become a powerful tool for both researchers and clinicians.

  6. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. The AMMA database therefore aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations; these outputs are processed in the same way as the satellite products. Before accessing the data, every user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives, and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris, and OMP, Toulouse). Users can access the data of both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register, and to read and sign the data use charter when a user visits for the first time; - Data access interface: a friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... The request can
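
    Since the satellite and model products above are stored as CF-convention NetCDF, a short read/write example may help. The sketch uses the third-party netCDF4 Python package; the file name, variable names and values are invented, and the attributes shown are only a minimal nod to the CF convention, not AMMA's actual product layout.

    ```python
    from netCDF4 import Dataset

    # Write a tiny CF-style file (invented example data).
    with Dataset("demo_amma.nc", "w") as ds:
        ds.createDimension("time", 3)
        time = ds.createVariable("time", "f8", ("time",))
        time.units = "days since 2006-01-01"        # CF-style units attribute
        precip = ds.createVariable("precip", "f4", ("time",))
        precip.units = "mm"
        precip.long_name = "daily precipitation (invented)"
        time[:] = [0, 1, 2]
        precip[:] = [0.0, 12.5, 3.2]

    # Read it back the way a database user might.
    with Dataset("demo_amma.nc") as ds:
        v = ds.variables["precip"]
        print(v.long_name, v.units, v[:])
    ```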

  7. JDD, Inc. Database

    NASA Technical Reports Server (NTRS)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). The company provides facilities support for Fort Riley in Fort Riley, Kansas, and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers' compensation, safety and health training) relating to job safety. My summer assignment was not considered "groundbreaking research" like many other summer interns have done in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using a Microsoft Excel program to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data (training field index, employees who were present at these training courses and who was absent) from the training certification courses. Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  8. Benchmark Database on Isolated Small Peptides Containing an Aromatic Side Chain: Comparison Between Wave Function and Density Functional Theory Methods and Empirical Force Field

    SciTech Connect

    Valdes, Haydee; Pluhackova, Kristyna; Pitonak, Michal; Rezac, Jan; Hobza, Pavel

    2008-03-13

    A detailed quantum chemical study on five peptides (WG, WGG, FGG, GGF and GFA) containing the residues phenylalanyl (F), glycyl (G), tryptophyl (W) and alanyl (A), of which F and W are of aromatic character, is presented. When investigating isolated small peptides, the dispersion interaction is the dominant attractive force in the intramolecular interaction between the peptide backbone and the aromatic side chain. Consequently, an accurate theoretical study of these systems requires the use of a methodology that properly covers the London dispersion forces. For this reason we have assessed the performance of the MP2, SCS-MP2, MP3, TPSS-D, PBE-D, M06-2X, BH&H, TPSS, B3LYP and tight-binding DFT-D methods and the ff99 empirical force field against CCSD(T)/complete basis set (CBS) limit benchmark data. All the DFT techniques with a '-D' suffix have been augmented by an empirical dispersion energy, while the M06-2X functional was parameterized to cover the London dispersion energy. For the systems studied here we have concluded that the use of the ff99 force field is not recommended, mainly due to problems concerning the assignment of reliable atomic charges. Tight-binding DFT-D is efficient as a screening tool providing reliable geometries. Among the DFT functionals, M06-2X and TPSS-D show the best performance, which is explained by the fact that both procedures cover the dispersion energy. The B3LYP and TPSS functionals, which do not cover this energy, fail systematically. Both the electronic energies and the geometries obtained by means of the wave-function theory methods compare satisfactorily with the CCSD(T)/CBS benchmark data.

  9. Tautomerism in large databases

    PubMed Central

    Sitzmann, Markus; Ihlenfeldt, Wolf-Dietrich

    2010-01-01

    We have used the Chemical Structure DataBase (CSDB) of the NCI CADD Group, an aggregated collection of over 150 small-molecule databases totaling 103.5 million structure records, to conduct tautomerism analyses on one of the largest currently existing sets of real (i.e. not computer-generated) compounds. This analysis was carried out using calculable chemical structure identifiers developed by the NCI CADD Group, based on hash codes available in the chemoinformatics toolkit CACTVS and a newly developed scoring scheme to define a canonical tautomer for any encountered structure. CACTVS’s tautomerism definition, a set of 21 transform rules expressed in SMIRKS line notation, was used, which takes a comprehensive stance as to the possible types of tautomeric interconversion included. Tautomerism was found to be possible for more than 2/3 of the unique structures in the CSDB. A total of 680 million tautomers were calculated from, and including, the original structure records. Tautomerism overlap within the same individual database (i.e. at least one other entry was present that was really only a different tautomeric representation of the same compound) was found at an average rate of 0.3% of the original structure records, with values as high as nearly 2% for some of the databases in CSDB. Projected onto the set of unique structures (by FICuS identifier), this still occurred in about 1.5% of the cases. Tautomeric overlap across all constituent databases in CSDB was found for nearly 10% of the records in the collection. PMID:20512400
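
    The original analysis relied on CACTVS hash codes and a 21-rule SMIRKS tautomerism definition, which are not reproduced here. As a rough, freely available analogue, the sketch below uses RDKit's tautomer enumerator to collapse two tautomeric drawings of one example compound (2-hydroxypyridine/2-pyridone) to a single canonical form, the same kind of normalization used to detect tautomeric overlap between records.

    ```python
    from rdkit import Chem
    from rdkit.Chem.MolStandardize import rdMolStandardize

    enumerator = rdMolStandardize.TautomerEnumerator()

    # 2-hydroxypyridine and 2-pyridone: two tautomers of the same compound.
    for smiles in ("Oc1ccccn1", "O=c1cccc[nH]1"):
        mol = Chem.MolFromSmiles(smiles)
        canonical = enumerator.Canonicalize(mol)
        print(smiles, "->", Chem.MolToSmiles(canonical))
    # Both inputs print the same canonical SMILES, i.e. they would receive
    # the same registry key in a tautomer-aware structure database.
    ```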

  10. MEROPS: the peptidase database.

    PubMed

    Rawlings, N D; Barrett, A J

    1999-01-01

    The MEROPS database (http://www.bi.bbsrc.ac.uk/Merops/Merops.htm) provides a catalogue and structure-based classification of peptidases (i.e. all proteolytic enzymes). This is a large group of proteins (approximately 2% of all gene products) that is of particular importance in medicine and biotechnology. An index of the peptidases by name or synonym gives access to a set of files termed PepCards, each of which provides information on a single peptidase. Each card file contains information on classification and nomenclature, and hypertext links to the relevant entries in online databases for human genetics, protein and nucleic acid sequence data and tertiary structure. Another index provides access to the PepCards by organism name so that the user can retrieve all known peptidases from a particular species. The peptidases are classified into families on the basis of statistically significant similarities between the protein sequences in the part termed the 'peptidase unit' that is most directly responsible for activity. Families that are thought to have common evolutionary origins and are known or expected to have similar tertiary folds are grouped into clans. The MEROPS database provides sets of files called FamCards and ClanCards describing the individual families and clans. Each FamCard document provides links to other databases for sequence motifs and secondary and tertiary structures, and shows the distribution of the family across the major kingdoms of living creatures. Release 3.03 of MEROPS contains 758 peptidases, 153 families and 22 clans. We suggest that the MEROPS database provides a model for a way in which a system of classification for a functional group of proteins can be developed and used as an organizational framework around which to assemble a variety of related information.
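
    The clan/family/peptidase organization described above is, structurally, a simple two-level hierarchy. A minimal sketch of that hierarchy in Python; the class and field names are illustrative, not the actual MEROPS schema:

    ```python
    # Sketch of the clan -> family -> peptidase hierarchy MEROPS describes.
    # Names and fields are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Peptidase:
        name: str
        organism: str
        peptidase_unit: str  # sequence region most responsible for activity

    @dataclass
    class Family:  # grouped by sequence similarity of peptidase units
        identifier: str
        members: list[Peptidase] = field(default_factory=list)

    @dataclass
    class Clan:  # families with a presumed common origin and fold
        identifier: str
        families: list[Family] = field(default_factory=list)

    family_a1 = Family("A1", [Peptidase("pepsin A", "Homo sapiens", "...")])
    clan_aa = Clan("AA", [family_a1])
    print(clan_aa.families[0].members[0].name)  # -> pepsin A
    ```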

  11. Dental health services research utilizing comprehensive clinical databases and information technology.

    PubMed

    Hayden, W J

    1997-01-01

    Marketplace pressures for accountability in dentistry have exposed the weaknesses of dental delivery systems in information generation, management, and analysis. Without this type of information, dentistry is unable to quantify and document the outcomes of the dental care services it provides. The Pew Health Professions Commission and the Institute of Medicine both suggest that dental schools should be among the leaders in the development and teaching of dental information capabilities, as well as the source of fundamental dental health services research. This paper argues that dental schools are the logical location for the development of valid, reliable, and acceptable health services research methods and databases. It describes the development of an insurance claims database to demonstrate the types of investigations possible, as well as the weaknesses and shortcomings of purely administrative data. PMID:9024342

  12. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    NASA Technical Reports Server (NTRS)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper presents the current concept of using extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system yet still compatible with the various systems using a variety of different structures. Testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011, with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will therefore be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of JWST development, 24 distinct, geographically dispersed laboratories will have local database tools built on an XML database. Each laboratory's database tools will be used to export and import data both locally and to a central database system, to feed data into the database certification process, and to produce various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One challenge for the database is to be flexible enough to allow individual items to be upgraded, added, or changed without affecting the entire ground system. Using XML should also make it possible to alter the import and export formats needed by the various elements, to track the verification/validation of each database item, to let many organizations provide database inputs, and to merge the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open-source and commercial technology. Often this leads to a greater reliance on Commercial-Off-The-Shelf (COTS) products, which is often limiting.
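
    The core idea, ground-system-independent definitions that any laboratory's tool can import, can be illustrated with a toy XML fragment and Python's standard-library parser. All element and attribute names below are invented for illustration; they are not the JWST database schema:

    ```python
    # Hypothetical sketch: command/telemetry definitions kept in XML so any
    # lab's tool can import them. Element/attribute names are invented.
    import xml.etree.ElementTree as ET

    doc = """
    <database version="0.1">
      <telemetry mnemonic="TEMP_01" type="float" units="K">
        <description>Primary mirror segment temperature</description>
      </telemetry>
      <command mnemonic="HTR_ON">
        <description>Enable heater circuit</description>
      </command>
    </database>
    """

    root = ET.fromstring(doc)
    for item in root:
        desc = item.findtext("description", default="")
        print(item.tag, item.get("mnemonic"), "-", desc)

    # Because the definitions are plain XML, each laboratory can transform
    # them into whatever local format its ground-system tools require.
    ```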

  13. DVD Database Astronomical Manuscripts in Georgia

    NASA Astrophysics Data System (ADS)

    Simonia, I.; Simonia, Ts.; Abuladze, T.; Chkhikvadze, N.; Samkurashvili, L.; Pataridze, K.

    2016-06-01

    Little-known and unknown Georgian, Persian, and Arabic astronomical manuscripts of the IX–XIX centuries are kept in the centers, archives, and libraries of Georgia. These manuscripts take the form of treatises, handbooks, texts, tables, and fragments, and comprise various theories, cosmological models, star catalogs, calendars, and methods of observation. We investigated this large body of material and published the DVD database Astronomical Manuscripts in Georgia. This unique database contains information about astronomical manuscripts as original works; it also contains descriptions of Georgian translations of Byzantine, Arabic, and other sources. The present paper describes the results obtained and the DVD database itself. Copies of the published DVD database are kept in the collections of the libraries of Ilia State University, Georgia; the Royal Observatory, Edinburgh, UK; the Library of Congress, USA; and other centers.

  14. An Algorithm for Building an Electronic Database

    PubMed Central

    Cohen, Wess A.; Gayle, Lloyd B.

    2016-01-01

    Objective: We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm gives future researchers a road map for setting up, maintaining, and using an electronic database to improve evidence-based care and future clinical outcomes. Methods: The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details, entered via standardized drop-down menus. A printed form based on the Microsoft Access template was given to each surgeon to complete after each case, and a member of the health care team then entered the case information into the database. Results: Straightforward, HIPAA-compliant data input fields made data collection and transcription easy and efficient. Collecting a wide variety of data gave us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. Conclusion: We have proposed a reproducible method by which institutions can create a database that allows senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software. PMID:26816555
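
    The paper's database was built in Microsoft Access; the same pattern, a standardized-field case table filled prospectively and queried retrospectively, can be sketched with Python's built-in sqlite3 module. Table and column names are illustrative, not the authors' schema:

    ```python
    # Sketch of a prospectively maintained case table; illustrative schema only.
    import sqlite3

    conn = sqlite3.connect("outcomes.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS cases (
            case_id        INTEGER PRIMARY KEY,
            surgeon        TEXT NOT NULL,
            procedure_name TEXT NOT NULL,   -- from a standardized pick list
            complication   TEXT DEFAULT 'none',
            follow_up_days INTEGER
        )
    """)
    conn.execute(
        "INSERT INTO cases (surgeon, procedure_name, complication, follow_up_days) "
        "VALUES (?, ?, ?, ?)",
        ("surgeon_a", "procedure_x", "none", 30),
    )
    conn.commit()

    # Retrospective analysis of prospectively collected rows:
    for row in conn.execute("SELECT surgeon, COUNT(*) FROM cases GROUP BY surgeon"):
        print(row)
    conn.close()
    ```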

  15. The GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database: one enables exploration of the data via interactive maps (a web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet or retrieve glacier features in vector format. All components of the system are implemented using open-source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web-mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time which data appear on the map; for example, only ASTER scenes or glacier outlines from 2002, or from autumn of any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
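
    What OGC compliance buys a client is that map retrieval is a plain HTTP request with standardized parameters. A sketch of a WMS GetMap URL in Python; the endpoint and layer name are placeholders, not the actual GLIMS service:

    ```python
    # Sketch of an OGC WMS GetMap request; endpoint and layer are placeholders.
    from urllib.parse import urlencode

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": "glacier_outlines",   # placeholder layer name
        "SRS": "EPSG:4326",
        "BBOX": "86.5,27.5,87.5,28.5",  # lon/lat box, roughly the Himalaya
        "WIDTH": "800",
        "HEIGHT": "800",
        "FORMAT": "image/png",
    }
    url = "https://example.org/glims/wms?" + urlencode(params)
    print(url)

    # Any WMS client (or another web site) can issue the same request and
    # overlay the returned PNG on its own basemap.
    ```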

  16. Open geochemical database

    NASA Astrophysics Data System (ADS)

    Zhilin, Denis; Ilyin, Vladimir; Bashev, Anton

    2010-05-01

    We regard "geochemical data" as data on chemical parameters of the environment, linked to the geographical position of the corresponding point. The rapid development of the global positioning system (GPS) and of measuring instruments allows huge amounts of geochemical data to be collected quickly. At present these data are published in scientific journals in text format, which hampers searching for information about particular places and meta-analysis of the data collected by different researchers; part of the information is never published at all. To make the data available and easy to find, it seems reasonable to develop an open database of geochemical information, accessible via the Internet, and to link the data with maps or space images, for example from the GoogleEarth service. For this purpose an open geochemical database is being developed (http://maps.sch192.ru). Any registered user can upload geochemical data (position, type of parameter, and value of the parameter) and edit them. Every user (including unregistered users) can (a) extract the values of parameters fulfilling desired conditions and (b) see the points, linked to a GoogleEarth space image, colored according to the value of a selected parameter, and can then process the extracted values in any way. The database contains the following data types: authors, points, seasons, and parameters. An author is a person who publishes the data; every author can declare his or her own profile. A point is characterized by its geographical position and the type of the object (e.g. river, lake, etc.). Values of parameters are linked to a point, an author, and the season in which they were obtained. A user can choose a parameter to place on a GoogleEarth space image and a scale by which to color the points on the image according to the value of that parameter. Currently (December, 2009) the database is under construction, but several functions (uploading data on pH and electrical conductivity and placing colored points onto a GoogleEarth space image) are
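
    The planned Google Earth linkage amounts to emitting KML placemarks colored by parameter value. A minimal sketch, with invented data and color thresholds:

    ```python
    # Sketch: one KML placemark per measurement, colored by value.
    # Data and thresholds are invented for illustration.

    def color_for(ph):
        # KML colors are aabbggrr hex; red for acidic, blue for basic (rough cut).
        return "ff0000ff" if ph < 7.0 else "ffff0000"

    points = [  # (longitude, latitude, pH)
        (37.6, 55.7, 6.4),
        (37.7, 55.8, 7.9),
    ]

    placemarks = "".join(
        f"""
      <Placemark>
        <Style><IconStyle><color>{color_for(ph)}</color></IconStyle></Style>
        <Point><coordinates>{lon},{lat}</coordinates></Point>
      </Placemark>"""
        for lon, lat, ph in points
    )

    kml = ('<?xml version="1.0"?>\n'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
           f'{placemarks}\n</Document></kml>')

    with open("geochem.kml", "w") as f:
        f.write(kml)  # open this file in Google Earth to see the colored points
    ```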

  17. Knowledge Discovery from Databases: An Introductory Review.

    ERIC Educational Resources Information Center

    Vickery, Brian

    1997-01-01

    Introduces new procedures being used to extract knowledge from databases and discusses rationales for developing knowledge discovery methods. Methods are described for such techniques as classification, clustering, and the detection of deviations from pre-established norms. Examines potential uses of knowledge discovery in the information field.…
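
    As a concrete illustration of one method the review names, clustering groups records without predefined classes. A minimal runnable sketch using scikit-learn on synthetic data:

    ```python
    # Clustering in its simplest runnable form; data are synthetic.
    import numpy as np
    from sklearn.cluster import KMeans

    # Two obvious groups of 2-D records.
    X = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
                  [8.0, 8.2], [7.9, 8.1], [8.1, 7.9]])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # e.g. [0 0 0 1 1 1]: records grouped with no predefined classes
    ```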

  18. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others, as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  19. The apoptosis database.

    PubMed

    Doctor, K S; Reed, J C; Godzik, A; Bourne, P E

    2003-06-01

    The apoptosis database is a public resource for researchers and students interested in the molecular biology of apoptosis. The resource provides functional annotation, literature references, diagrams/images, and alternative nomenclatures on a set of proteins having 'apoptotic domains'. These are the distinctive domains that are often, if not exclusively, found in proteins involved in apoptosis. The initial choice of proteins to be included is defined by apoptosis experts and bioinformatics tools. Users can browse through the web-accessible lists of domains, proteins containing these domains, and their associated homologs. The database can also be searched by sequence homology using the Basic Local Alignment Search Tool (BLAST), by text-word matches of the annotation, and by identifiers for specific records. The resource is available at http://www.apoptosis-db.org and is updated on a regular basis.

  20. Medical Image Databases

    PubMed Central

    Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James

    1997-01-01

    Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on the design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338