Science.gov

Sample records for administrative databases methods

  1. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  2. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  3. CDS - Database Administrator's Guide

    NASA Astrophysics Data System (ADS)

    Day, J. P.

    This guide aims to instruct the CDS database administrator in: o The CDS file system. o The CDS index files. o The procedure for assimilating a new CDS tape into the database. It is assumed that the administrator has read SUN/79.

  4. Redis database administration tool

    2013-02-13

    MyRedis is a product of the Lorenz subproject under the ASC Scientific Data Management effort. MyRedis is a web-based utility designed to allow easy administration of instances of Redis databases. It can be used to view and manipulate data as well as run commands directly against a variety of different Redis hosts.

  5. DRUG ENFORCEMENT ADMINISTRATION REGISTRATION DATABASE

    EPA Science Inventory

    The Drug Enforcement Administration (DEA), as part of its efforts to control the abuse and misuse of controlled substances and chemicals used in producing some over-the-counter drugs, maintains databases of individuals registered to handle these substances. These databases are av...

  6. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  7. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  8. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  9. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  10. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  11. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  12. TWRS information locator database system administrator's manual

    SciTech Connect

    Knutson, B.J., Westinghouse Hanford

    1996-09-13

    This document is a guide for use by the Tank Waste Remediation System (TWRS) Information Locator Database (ILD) System Administrator. The TWRS ILD System is an inventory of information used in the TWRS Systems Engineering process to represent the TWRS Technical Baseline. The inventory is maintained in the form of a relational database developed in Paradox 4.5.

  13. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer a TV bands database. Each database administrator shall: (a) Maintain a database...

  14. A Database System for Course Administration.

    ERIC Educational Resources Information Center

    Benbasat, Izak; And Others

    1982-01-01

    Describes a computer-assisted testing system which produces multiple-choice examinations for a college course in business administration. The system uses SPIRES (Stanford Public Information REtrieval System) to manage a database of questions and related data, mark-sense cards for machine grading tests, and ACL (6) (Audit Command Language) to…

  15. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  16. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  17. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  18. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit...

  19. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  20. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  21. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  22. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  23. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  24. VIEWCACHE: An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nick; Sellis, Timoleon

    1991-01-01

    The objective is to illustrate the concept of incremental access to distributed databases. An experimental database management system, ADMS, which has been developed at the University of Maryland, in College Park, uses VIEWCACHE, a database access method based on incremental search. VIEWCACHE is a pointer-based access method that provides a uniform interface for accessing distributed databases and catalogues. The compactness of the pointer structures formed during database browsing and the incremental access method allow the user to search and do inter-database cross-referencing with no actual data movement between database sites. Once the search is complete, the set of collected pointers pointing to the desired data is dereferenced.
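
    To make the pointer-based idea concrete, here is a minimal Python sketch of incremental cross-database search: browsing and cross-referencing happen over compact (site, key) pointers, and only the final result set is dereferenced. The record layouts and site names are invented for illustration; this is not the ADMS/VIEWCACHE implementation.

    ```python
    # Sketch: cross-reference two databases via pointers; dereference last.
    # All tables, keys, and fields below are hypothetical.

    site_a = {1: {"star": "Vega", "mag": 0.03}, 2: {"star": "Sirius", "mag": -1.46}}
    site_b = {10: {"star": "Vega", "dist_ly": 25}, 11: {"star": "Deneb", "dist_ly": 2615}}

    def build_pointer_index(db, site_name):
        # A "view cache": compact (site, key) pointers keyed by the join attribute.
        return {rec["star"]: (site_name, key) for key, rec in db.items()}

    index_a = build_pointer_index(site_a, "a")
    index_b = build_pointer_index(site_b, "b")

    # Inter-database cross-referencing with no row data movement:
    # intersect on the join attribute, collecting pointer pairs only.
    matches = [(index_a[s], index_b[s]) for s in index_a.keys() & index_b.keys()]

    # Once the search is complete, dereference the collected pointers.
    sites = {"a": site_a, "b": site_b}
    for (sa, ka), (sb, kb) in matches:
        print(sites[sa][ka], sites[sb][kb])
    ```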

  25. Database Administration: Concepts, Tools, Experiences, and Problems.

    ERIC Educational Resources Information Center

    Leong-Hong, Belkis; Marron, Beatrice

    The concepts of data base administration, the role of the data base administrator (DBA), and computer software tools useful in data base administration are described in order to assist data base technologists and managers. A study of DBA's in the Federal Government is detailed in terms of the functions they perform, the software tools they use,…

  26. Creating a resource database for nursing service administration.

    PubMed

    Clougherty, J; McCloskey, J C; Johnson, M; Casula, M; Gardner, D; Kelly, K; Maas, M; Delaney, C; Blegen, M

    1991-01-01

    In response to the current information explosion in nursing service administration (NSA), the authors felt a need to collect and organize available resources for use by their faculty and graduate students. An electronic database was developed to facilitate the use of the collected print and software resources. This article describes the creation of the NSA Resource Database from the time the need for it was realized to its completion. There is discussion regarding the criteria used for writing the database, what the database screens look like and why, and what the database contains. The article also discusses the use and users of the NSA Resource Database to date. PMID:2036589

  27. Identifying Primary Spontaneous Pneumothorax from Administrative Databases: A Validation Study

    PubMed Central

    Frechette, Eric; Guidolin, Keegan; Seyam, Ayman; Choi, Yun-Hee; Jones, Sarah; McClure, J. Andrew; Winick-Ng, Jennifer; Welk, Blayne; Malthaner, Richard A.

    2016-01-01

    Introduction. Primary spontaneous pneumothorax (PSP) is a disorder commonly encountered in healthy young individuals. The current version of the International Classification of Diseases (ICD-10) does not differentiate between PSP and secondary pneumothorax (SP), which complicates the conduct of epidemiological studies on the subject. Objective. To validate the accuracy of an algorithm that identifies cases of PSP from administrative databases. Methods. The charts of 150 patients who presented to the emergency room (ER) with a recorded main diagnosis of pneumothorax were reviewed to determine the type of pneumothorax that occurred. The corresponding hospital administrative data collected during previous hospitalizations and ER visits were processed through the proposed algorithm. The results were compared over two different age groups. Results. There were 144 cases of pneumothorax correctly coded (96%). The results obtained from the PSP algorithm demonstrated a significantly higher sensitivity (97% versus 81%, p = 0.038) and positive predictive value (87% versus 46%, p < 0.001) in patients under 40 years of age than in older patients. Conclusions. The proposed algorithm is adequate to identify cases of PSP from administrative databases in the age group classically associated with the disease. This makes it possible to use the algorithm in large population-based studies.
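
    The validation statistics quoted above follow directly from a confusion matrix once chart review supplies the gold standard. A minimal sketch with made-up labels (not the study's data):

    ```python
    # Sensitivity and positive predictive value of a case-finding algorithm
    # against a chart-review gold standard. The five records are invented.

    def sensitivity_ppv(truth, predicted):
        """truth/predicted: parallel lists of booleans (True = PSP case)."""
        tp = sum(t and p for t, p in zip(truth, predicted))
        fn = sum(t and not p for t, p in zip(truth, predicted))
        fp = sum(p and not t for t, p in zip(truth, predicted))
        sens = tp / (tp + fn) if tp + fn else float("nan")
        ppv = tp / (tp + fp) if tp + fp else float("nan")
        return sens, ppv

    truth     = [True, True, False, True, False]   # chart review
    predicted = [True, True, True, False, False]   # algorithm output
    print(sensitivity_ppv(truth, predicted))       # (0.667, 0.667)
    ```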

  28. Regulatory administrative databases in FDA's Center for Biologics Evaluation and Research: convergence toward a unified database.

    PubMed

    Smith, Jeffrey K

    2013-04-01

    Regulatory administrative database systems within the Food and Drug Administration's (FDA) Center for Biologics Evaluation and Research (CBER) are essential to supporting its core mission, as a regulatory agency. Such systems are used within FDA to manage information and processes surrounding the processing, review, and tracking of investigational and marketed product submissions. This is an area of increasing interest in the pharmaceutical industry and has been a topic at trade association conferences (Buckley 2012). Such databases in CBER are complex, not for the type or relevance of the data to any particular scientific discipline but because of the variety of regulatory submission types and processes the systems support using the data. Commonalities among different data domains of CBER's regulatory administrative databases are discussed. These commonalities have evolved enough to constitute real database convergence and provide a valuable asset for business process intelligence. Balancing review workload across staff, exploring areas of risk in review capacity, process improvement, and presenting a clear and comprehensive landscape of review obligations are just some of the opportunities of such intelligence. This convergence has been occurring in the presence of usual forces that tend to drive information technology (IT) systems development toward separate stovepipes and data silos. CBER has achieved a significant level of convergence through a gradual process, using a clear goal, agreed upon development practices, and transparency of database objects, rather than through a single, discrete project or IT vendor solution. This approach offers a path forward for FDA systems toward a unified database. PMID:23269527

  29. Validity of breast, lung and colorectal cancer diagnoses in administrative databases: a systematic review protocol

    PubMed Central

    Abraha, Iosief; Giovannini, Gianni; Serraino, Diego; Fusco, Mario; Montedori, Alessandro

    2016-01-01

    Introduction Breast, lung and colorectal cancers constitute the most common cancers worldwide and their epidemiology, related health outcomes and quality indicators can be studied using administrative healthcare databases. To constitute a reliable source for research, administrative healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases 9th and 10th revision codes to identify breast, lung and colorectal cancer diagnoses in administrative healthcare databases. Methods and analysis This review protocol has been developed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. We will search the following databases: MEDLINE, EMBASE, Web of Science and the Cochrane Library, using appropriate search strategies. We will include validation studies that used administrative data to identify breast, lung and colorectal cancer diagnoses or studies that evaluated the validity of breast, lung and colorectal cancer codes in administrative data. The following inclusion criteria will be used: (1) the presence of a reference standard case definition for the disease of interest; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc) and (3) the use of data source from an administrative database. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. Ethics and dissemination Ethics approval is not required. We will submit results of this study to a peer-reviewed journal for publication. The results will serve as a guide to identify appropriate case definitions and algorithms of breast, lung and colorectal cancers for researchers involved in validating administrative healthcare databases as well as for

  30. [Bias and confounding: pharmacoepidemiological study using administrative database].

    PubMed

    Nojiri, Shuko

    2015-01-01

    The provision of health care frequently generates digitized data such as hospital-based electronic records, medication prescription records, and claims data; research using these sources is collectively termed "administrative database research". The data sources and analytical opportunities involved create risks that can lead to misinterpretation or bias in the results. This review serves as an introduction to the concepts of bias and confounding to help researchers conduct methodologically sound pharmacoepidemiologic research projects using administrative databases. Beyond general considerations for observational studies, there are several issues unique to database research that should be addressed. The risks of uninterpretable or biased results can be minimized by: providing a robust description of the data tables used, focusing on why and how they were created; measuring and reporting the accuracy of the diagnostic and procedural codes used; and properly accounting for any time-dependent nature of variables. The hallmark of good research is rigorously careful analysis and interpretation. The promised value of real-world evidence from databases in medical decision-making must be balanced against the inherent limitations of observational data with respect to bias and confounding. Researchers should aim to avoid bias in the design of a study, adjust for confounding, and discuss the effects of residual bias on the results. PMID:26028416

  31. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    PubMed Central

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but

  32. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
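
    As a rough illustration of the query-feedback idea, the sketch below refines a selectivity estimator by least-squares curve fitting over pairs of (predicate constant, selectivity observed when the query actually ran). The data and the quadratic model are assumptions for illustration, not the ADMS design.

    ```python
    import numpy as np

    # Feedback pairs: constant c in "WHERE x <= c" and the selectivity
    # observed after execution (invented numbers).
    feedback_c   = np.array([10.0, 25.0, 40.0, 60.0, 80.0, 95.0])
    observed_sel = np.array([0.08, 0.21, 0.44, 0.63, 0.85, 0.97])

    # Least-squares fit of a low-degree polynomial; re-fit as new
    # feedback arrives, so estimates track the actual value distribution.
    estimate = np.poly1d(np.polyfit(feedback_c, observed_sel, deg=2))

    # The optimizer would call this when costing a new plan (a real
    # system would also clamp the result to [0, 1]).
    print(f"estimated selectivity of x <= 50: {estimate(50):.2f}")
    ```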

  33. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information to support the harmonization and standardization of GMO analysis methods at the global level is needed. Results The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, which have been grouped by different strategies (screen-, gene-, construct-, and event-specific), and it also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integration, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to this database, and newly submitted information will be released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier. PMID:18522755

  34. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis...

  35. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis...

  36. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis...

  37. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis...

  38. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false TRS User Registration Database and... Services and Related Customer Premises Equipment for Persons With Disabilities § 64.615 TRS User Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall...

  39. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false TRS User Registration Database and... Services and Related Customer Premises Equipment for Persons With Disabilities § 64.615 TRS User Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall...

  40. Comparison of scientific and administrative database management systems

    NASA Technical Reports Server (NTRS)

    Stoltzfus, J. C.

    1983-01-01

    Some characteristics found to be different for scientific and administrative data bases are identified and some of the corresponding generic requirements for data base management systems (DBMS) are discussed. The requirements discussed are especially stringent for either the scientific or administrative data bases. For some, no commercial DBMS is fully satisfactory, and the data base designer must invent a suitable approach. For others, commercial systems are available with elegant solutions, and a wrong choice would mean an expensive work-around to provide the missing features. It is concluded that selection of a DBMS must be based on the requirements for the information system. There is no unique distinction between scientific and administrative data bases or DBMS. The distinction comes from the logical structure of the data, and understanding the data and their relationships is the key to defining the requirements and selecting an appropriate DBMS for a given set of applications.

  41. Planning the future of JPL's management and administrative support systems around an integrated database

    NASA Technical Reports Server (NTRS)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal and without consistency in design approach over the past twenty years. These systems are now proving to be inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential for providing the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, revolving around the development of an integrated management and administrative database, are discussed.

  42. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…

  43. Case Method: Its Potential for Training Administrators.

    ERIC Educational Resources Information Center

    Nagel, Greta K.

    1991-01-01

    The case method should be used in both preservice and inservice training for administrators to strengthen training programs and help administrators develop practical human relations skills, learn stress reduction and burnout prevention strategies, learn team-building, and develop critical and reflective thinking skills. (14 references) (MLH)

  44. Development of an Ada programming support environment database SEAD (Software Engineering and Ada Database) administration manual

    NASA Technical Reports Server (NTRS)

    Liaw, Morris; Evesson, Donna

    1988-01-01

    Software Engineering and Ada Database (SEAD) was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities which are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. SEAD data is organized into five major areas: information regarding education and training resources which are relevant to the life cycle of Ada-based software engineering projects such as those in the Space Station program; research publications relevant to NASA projects such as the Space Station Program and conferences relating to Ada technology; the latest progress reports on Ada projects completed or in progress both within NASA and throughout the free world; Ada compilers and other commercial products that support Ada software development; and reusable Ada components generated both within NASA and from elsewhere in the free world. This classified listing of reusable components shall include descriptions of tools, libraries, and other components of interest to NASA. Sources for the data include technical newsletters and periodicals, conference proceedings, the Ada Information Clearinghouse, product vendors, and project sponsors and contractors.

  45. Connecting the Library's Patron Database to Campus Administrative Software: Simplifying the Library's Accounts Receivable Process

    ERIC Educational Resources Information Center

    Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth

    2010-01-01

    This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…

  46. Evidence-based decision-making 6: Utilization of administrative databases for health services research.

    PubMed

    Chowdhury, Tanvir Turin; Hemmelgarn, Brenda

    2015-01-01

    Health-care systems require reliable information on which to base health-care planning and make decisions, as well as to evaluate their policy impact. Administrative data provide important information about health services use, expenditures, clinical outcomes, and may be used to assess quality of care. With increased digitalization and accessibility of administrative databases, these data are more readily available for health service research purposes, aiding evidence-based decision-making. This chapter discusses the utility of administrative data for population-based studies of health and health care. PMID:25694328

  47. Burden of Diabetes Mellitus Estimated with a Longitudinal Population-Based Study Using Administrative Databases

    PubMed Central

    Scalone, Luciana; Cesana, Giancarlo; Furneri, Gianluca; Ciampichini, Roberta; Beck-Peccoz, Paolo; Chiodini, Virginio; Mangioni, Silvia; Orsi, Emanuela; Fornari, Carla; Mantovani, Lorenzo Giovanni

    2014-01-01

    Objective To assess the epidemiologic and economic burden of diabetes mellitus (DM) from a longitudinal population-based study. Research Design and Methods Lombardy Region includes 9.9 million individuals. Its DM population was identified through a data warehouse (DENALI), which matches, with probabilistic linkage, demographic, clinical and economic data from different healthcare administrative databases. All individuals who, during the year 2000, had a hospital discharge with an ICD-9-CM code 250.XX, and/or two consecutive prescriptions of drugs for diabetes (ATC code A10XXXX) within one year, and/or an exemption from healthcare co-payments specific to DM, were selected and followed up to 9 years. We calculated prevalence, mortality and healthcare costs (hospitalizations, drugs and outpatient examinations/visits) from the National Health Service's perspective. Results We identified 312,223 eligible subjects. The study population (51% male) had a mean age of 66 (from 0.03 to 105.12) years at the index date. Prevalence ranged from 0.4% among subjects aged ≤45 years to 10.1% among those >85 years old. Overall, 43.4 deaths per 1,000 patients per year were estimated, significantly (p<0.001) higher in men than women. Overall, 3,315€/patient-year were spent on average: hospitalizations were the cost driver (54.2% of total cost). Drugs contributed 31.5%, and outpatient claims represented 14.3% of total costs. Thirty-five percent of hospital costs were attributable to cerebro-/cardiovascular reasons, 6% to other complications of DM, and 4% to DM as a main diagnosis. Cardiovascular drugs contributed 33.5% of total drug costs, 21.8% was attributable to class A (16.7% to class A10) and 4.3% to class B (2.4% to class B01) drugs. Conclusions Merging different administrative databases can provide a wealth of data from large populations observed over long time periods. DENALI appears to be an efficient instrument to obtain accurate estimates of the burden of diseases such as
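
    The selection rule in the Methods section is a small disjunction of criteria. A hedged sketch, with hypothetical record layouts rather than the actual DENALI schema:

    ```python
    from datetime import date, timedelta

    def is_dm_case(discharge_codes, rx_events, has_dm_exemption):
        """Subject qualifies with an ICD-9-CM 250.xx discharge, OR two A10*
        prescriptions within one year, OR a DM copayment exemption."""
        if any(code.startswith("250") for code in discharge_codes):
            return True
        if has_dm_exemption:
            return True
        a10_dates = sorted(d for d, atc in rx_events if atc.startswith("A10"))
        return any(later - earlier <= timedelta(days=365)
                   for earlier, later in zip(a10_dates, a10_dates[1:]))

    print(is_dm_case(
        discharge_codes=["428.0"],                       # no 250.xx discharge
        rx_events=[(date(2000, 2, 1), "A10BA02"),        # two A10 scripts
                   (date(2000, 9, 15), "A10BA02")],      # within one year
        has_dm_exemption=False,
    ))  # True
    ```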

  48. Ninety-day readmissions after degenerative cervical spine surgery: A single-center administrative database study

    PubMed Central

    Akamnonu, Chibuikem; Goldstein, Jeffrey A.; Errico, Thomas J.; Bendo, John A.

    2015-01-01

    Background Unplanned hospital readmissions result in significant clinical and financial burdens to patients and the healthcare system. Readmission rates and causes have been investigated using large administrative databases which have certain limitations in data reporting and coding. The objective of this study was to provide a description of 90 day post-discharge readmissions following surgery for common degenerative cervical spine pathologies at a large-volume tertiary hospital. The study also compared the readmission rates of patients who underwent anterior- and posterior-approach procedures. Methods The administrative records from a single-center, high-volume tertiary institution were queried using ICD-9 codes for common cervical pathology over a three year period to determine the rate and causes of readmissions within the 90 days following the index surgery. Results A total of 768 patients underwent degenerative cervical spine surgery during the three year study period. Within 90 days of discharge, 24 (3.13%) patients were readmitted; 16 (2.06%) readmissions were planned for lumbar surgery; 8 (1.04%) readmissions were unplanned. 640 patients underwent procedures involving an anterior approach and 128 patients underwent procedures involving a posterior approach. There were 14 (2.17%) planned readmissions in the anterior group and 2 (1.5%) in the posterior group. The unplanned readmission rate was 0.63% (4 patients) and 3.13% (4 patients) in the anterior and posterior groups, respectively (p = 0.0343). Conclusion The 90 day post-discharge unplanned readmission rate that followed elective degenerative cervical spine surgery was 1.04%. The unplanned readmission rate associated with posterior-approach procedures (3.13%) was significantly higher than that of anterior-approach procedures (0.63%). Level of evidence: IV PMID:26114088

  49. Administrative Databases in Orthopaedic Research: Pearls and Pitfalls of Big Data.

    PubMed

    Patel, Alpesh A; Singh, Kern; Nunley, Ryan M; Minhas, Shobhit V

    2016-03-01

    The drive for evidence-based decision-making has highlighted the shortcomings of traditional orthopaedic literature. Although high-quality, prospective, randomized studies in surgery are the benchmark in orthopaedic literature, they are often limited by size, scope, cost, time, and ethical concerns and may not be generalizable to larger populations. Given these restrictions, there is a growing trend toward the use of large administrative databases to investigate orthopaedic outcomes. These datasets afford the opportunity to identify large numbers of patients across a broad spectrum of comorbidities, providing information regarding disparities in care and outcomes, preoperative risk stratification parameters for perioperative morbidity and mortality, and national epidemiologic rates and trends. Although there is power in these databases in terms of their impact, potential problems include administrative data that are at risk of clerical inaccuracies, recording bias secondary to financial incentives, temporal changes in billing codes, a lack of numerous clinically relevant variables and orthopaedic-specific outcomes, and the absolute requirement of an experienced epidemiologist and/or statistician when evaluating results and controlling for confounders. Despite these drawbacks, administrative database studies are fundamental and powerful tools in assessing outcomes on a national scale and will likely be of substantial assistance in the future of orthopaedic research. PMID:26836377

  50. Using administrative databases in the surveillance of depressive disorders--case definitions.

    PubMed

    Alaghehbandan, Reza; Macdonald, Don; Barrett, Brendan; Collins, Kayla; Chen, Yue

    2012-12-01

    The objective of this study was to assess the usefulness of provincial administrative databases in carrying out surveillance on depressive disorders. Electronic medical records (EMRs) at 3 family practice clinics in St. John's, NL, Canada, were audited; 253 depressive disorder cases and 257 patients not diagnosed with a depressive disorder were selected. The EMR served as the "gold standard," which was then compared to the same patients investigated through various case definitions applied against the provincial hospital and physician administrative databases. Variables used in the development of the case definitions were depressive disorder diagnoses (either in hospital or physician claims data), date of diagnosis, and service provider type [general practitioner (GP) vs. psychiatrist]. Of the 120 case definitions investigated, 26 were found to have a kappa statistic greater than 0.6, of which 5 case definitions were considered the most appropriate for surveillance of depressive disorders. Of the 5 definitions, the following case definition, with 77.5% sensitivity and 93% specificity, was found to be the most valid: ([≥1 hospitalization OR ≥1 psychiatrist visit related to depressive disorders at any time] OR ≥2 GP visits related to depressive disorders within the first 2 years of diagnosis). This study found that provincial administrative databases may be useful for carrying out surveillance on depressive disorders among the adult population. The approach used in this study was simple and resulted in rather reasonable sensitivity and specificity. PMID:22788998
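
    The best-performing case definition is essentially a small decision rule. A sketch of how it might be coded; field names are illustrative rather than the provincial databases' schema, and the earliest depression-related contact is used as a proxy for the diagnosis date:

    ```python
    from datetime import date, timedelta

    def is_depression_case(hospitalizations, psychiatrist_visits, gp_visits):
        """Dates of depression-related contacts, per the case definition:
        (>=1 hospitalization OR >=1 psychiatrist visit, any time) OR
        >=2 GP visits within the first 2 years of diagnosis."""
        if hospitalizations or psychiatrist_visits:
            return True
        if not gp_visits:
            return False
        diagnosis = min(gp_visits)  # proxy for the diagnosis date
        within_two_years = [d for d in gp_visits
                            if d - diagnosis <= timedelta(days=730)]
        return len(within_two_years) >= 2

    print(is_depression_case([], [], [date(2001, 3, 1), date(2002, 1, 10)]))  # True
    print(is_depression_case([], [], [date(2001, 3, 1), date(2004, 1, 10)]))  # False
    ```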

  51. System, method and apparatus for generating phrases from a database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term or a sequence of terms or multiple individual terms or multiple sequences of terms or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
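
    A toy sketch of the general idea: build a simple contextual model of a text database (here, bigram successor counts) and extend a query term into contextually related phrases. This simplification is for illustration only; it is not the patented method.

    ```python
    from collections import Counter, defaultdict

    docs = [
        "engine failure during takeoff roll",
        "engine failure after takeoff",
        "hydraulic failure during landing",
    ]

    # Contextual model: which terms follow which, and how often.
    successors = defaultdict(Counter)
    for doc in docs:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            successors[a][b] += 1

    def generate_phrase(term, length=3):
        # Greedily extend the query term with its most frequent successor.
        phrase = [term]
        while len(phrase) < length and successors[phrase[-1]]:
            phrase.append(successors[phrase[-1]].most_common(1)[0][0])
        return " ".join(phrase)

    print(generate_phrase("engine"))  # "engine failure during"
    ```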

  52. Continuous regional arterial infusion for acute pancreatitis: a propensity score analysis using a nationwide administrative database

    PubMed Central

    2013-01-01

    Introduction Although continuous regional arterial infusion (CRAI) of a protease inhibitor and an antibiotic may be effective in patients with severe acute pancreatitis, CRAI has not yet been validated in large patient populations. We therefore evaluated the effectiveness of CRAI based on data from a national administrative database covering 1,032 Japanese hospitals. Methods In-hospital mortality, length of stay and costs were compared in the CRAI and non-CRAI groups, using propensity score analysis to adjust for treatment selection bias. Results A total of 17,415 eligible patients with acute pancreatitis were identified between 1 July and 30 September 2011, including 287 (1.6%) patients who underwent CRAI. One-to-one propensity-score matching generated 207 pairs with well-balanced baseline characteristics. In-hospital mortality rates were similar in the CRAI and non-CRAI groups (7.7% vs. 8.7%; odds ratio, 0.88; 95% confidence interval, 0.44–1.78, P = 0.720). CRAI was associated with significantly longer median hospital stay (29 vs. 18 days, P < 0.001), significantly higher median total cost (21,800 vs. 12,600 United States dollars, P < 0.001), and a higher rate of interventions for infectious complications, such as endoscopic/surgical necrosectomy or percutaneous drainage (2.9% vs. 0.5%, P = 0.061). Conclusions CRAI was not effective in reducing in-hospital mortality rate in patients with acute pancreatitis, but was associated with longer hospital stay and higher costs. Randomized controlled trials in large numbers of patients are required to further evaluate CRAI for this indication. PMID:24088324
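
    One-to-one propensity-score matching of the kind used here can be sketched briefly. The covariates, toy treatment assignment, and greedy nearest-neighbour matching below are illustrative assumptions; the snippet does not describe the authors' exact matching procedure.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    X = rng.normal(size=(n, 3))            # covariates (e.g. age, severity scores)
    treated = rng.integers(0, 2, size=n)   # 1 = CRAI, 0 = non-CRAI (toy labels)

    # Propensity score: estimated probability of treatment given covariates.
    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    treated_idx = np.flatnonzero(treated == 1)
    controls = list(np.flatnonzero(treated == 0))

    pairs = []
    for t in treated_idx:
        if not controls:
            break
        nearest = min(controls, key=lambda c: abs(ps[c] - ps[t]))
        pairs.append((t, nearest))
        controls.remove(nearest)           # match each control at most once

    print(f"{len(pairs)} matched pairs")
    ```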

  53. The Association of Lacking Insurance With Outcomes of Severe Sepsis: Retrospective Analysis of an Administrative Database

    PubMed Central

    Kumar, Gagan; Taneja, Amit; Majumdar, Tilottama; Jacobs, Elizabeth R.; Whittle, Jeff; Nanchal, Rahul

    2016-01-01

    Objective Patients with severe sepsis have high mortality that is improved by timely, often expensive, treatments. Patients without insurance are more likely to delay seeking care; they may also receive less intense care. Design We performed a retrospective analysis of an administrative database, the Healthcare Cost and Utilization Project's Nationwide Inpatient Sample, to test whether mortality is more likely among uninsured patients hospitalized for severe sepsis. Patients None. Interventions We used International Classification of Diseases, 9th Revision, Clinical Modification, codes indicating sepsis and organ system failure to identify hospitalizations for severe sepsis among patients aged 18–64 between 2000 and 2008. We excluded patients with end-stage renal disease or solid organ transplants because very few are uninsured. We performed multivariate logistic regression modeling to examine the association of insurance status and in-hospital mortality, adjusted for patient and hospital characteristics. We performed subgroup analysis to examine whether the impact of insurance status varied by geographical region; by patient age, sex, or race; or by hospital characteristics such as teaching status, size, or ownership. We used similar methods to examine the impact of insurance status on the use of certain procedures, length of stay, and discharge destination. Measurements and Main Results There were 1,600,269 discharges with severe sepsis from 2000 through 2008 in the age group 18–64 years. Uninsured people, who accounted for 7.5% of admissions with severe sepsis, had higher adjusted odds of mortality (odds ratio, 1.43; 95% CI, 1.37–1.47) than privately insured people. The higher mortality in uninsured was present in all subgroups and was similar in each year from 2000 to 2008. After adjustment, uninsured individuals had a slightly shorter length of stay than insured people and were less likely to receive five of the six interventions we examined. They were
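
    An adjusted odds ratio like the one reported above is obtained by exponentiating a logistic-regression coefficient. A sketch on simulated data (not the Nationwide Inpatient Sample), assuming the statsmodels package:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    uninsured = rng.integers(0, 2, size=n)
    age = rng.uniform(18, 64, size=n)
    # True log-odds: exp(0.36) is roughly the 1.43 odds ratio quoted above.
    logit = -3 + 0.36 * uninsured + 0.02 * (age - 40)
    death = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(np.column_stack([uninsured, age]))
    fit = sm.Logit(death, X).fit(disp=0)

    odds_ratio = np.exp(fit.params[1])           # coefficient on uninsured
    ci_low, ci_high = np.exp(fit.conf_int()[1])  # its 95% CI
    print(f"adjusted OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
    ```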

  54. Quantifying limitations in chemotherapy data in administrative health databases: implications for measuring the quality of colorectal cancer care.

    PubMed

    Urquhart, Robin; Rayson, Daniel; Porter, Geoffrey A; Grunfeld, Eva

    2011-08-01

    Reliable chemotherapy data are critical to evaluate the quality of care for patients with colorectal cancer who are treated with curative intent. In Canada, limitations in the availability and completeness of chemotherapy data exist in many administrative health databases. In this paper, we discuss these limitations and present findings from a chart review in Nova Scotia that quantifies the completeness of chemotherapy capture in existing databases. The results demonstrate that even basic information on cancer treatment in administrative databases can be insufficient to perform the types of analyses that most decision-makers require for quality-of-care measurement. PMID:22851984

  55. Quantifying Limitations in Chemotherapy Data in Administrative Health Databases: Implications for Measuring the Quality of Colorectal Cancer Care

    PubMed Central

    Rayson, Daniel; Porter, Geoffrey A.; Grunfeld, Eva

    2011-01-01

    Reliable chemotherapy data are critical to evaluate the quality of care for patients with colorectal cancer who are treated with curative intent. In Canada, limitations in the availability and completeness of chemotherapy data exist in many administrative health databases. In this paper, we discuss these limitations and present findings from a chart review in Nova Scotia that quantifies the completeness of chemotherapy capture in existing databases. The results demonstrate that even basic information on cancer treatment in administrative databases can be insufficient to perform the types of analyses that most decision-makers require for quality-of-care measurement. PMID:22851984

  56. Method for Rapid Protein Identification in a Large Database

    PubMed Central

    Zhang, Wenli; Zhao, Xiaofang

    2013-01-01

    Protein identification is an integral part of proteomics research. The available tools to identify proteins in tandem mass spectrometry experiments are not optimized to face current challenges in terms of identification scale and speed, owing to the exponential growth of the protein database and the accelerated generation of mass spectrometry data, as well as the demand for nonspecific digestion and post-translational modifications in complex-sample identification. As a result, a rapid method is required to mitigate such complexity and computation challenges. This paper thus aims to present an open method that does not assume enzyme and modification specificity on a large database. This paper designs and develops a distributed program to facilitate the use of computer resources. With this optimization, nearly linear speedup and real-time support are achieved on a large database with nonspecific digestion, thus enabling testing with two classical large protein databases in a 20-blade cluster. This work aids in the discovery of more significant biological results, such as modification sites, and enables the identification of more complex samples, such as metaproteomics samples. PMID:24000323

  57. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 431.15 Methods of administration. A State plan must provide for methods of administration that...

  58. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Methods of administration. 441.105 Section 441.105... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods of administration. The agency must have methods of administration to ensure that its responsibilities under...

  59. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 441.105 Section 441.105... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods of administration. The agency must have methods of administration to ensure that its responsibilities under...

  60. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 431.15 Methods of administration. A State plan must provide for methods of administration that...

  61. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 431.15 Methods of administration. A State plan must provide for methods of administration that...

  62. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Methods of administration. 441.105 Section 441.105... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods of administration. The agency must have methods of administration to ensure that its responsibilities under...

  63. 34 CFR 361.12 - Methods of administration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 2 2012-07-01 2012-07-01 false Methods of administration. 361.12 Section 361.12... State Plan and Other Requirements for Vocational Rehabilitation Services Administration § 361.12 Methods... applicable, employs methods of administration found necessary by the Secretary for the proper and...

  64. [Administrative databases of the Local Health Unit: possible use for clinical governance of chronic kidney disease].

    PubMed

    Degli Esposti, Luca; Sturani, Alessandra; Quintaliani, Giuseppe; Buda, Stefano; Degli Esposti, Ezio

    2014-01-01

    Nowadays a large amount of medical data is available, although the data are not always homogeneous: they arise from different settings and are used for different purposes. Aggregating these data could give a huge boost to epidemiology and, in particular, to nephrology. In many parts of Italy there are plans to reorganize hospital health care as well as community-based care. In this framework, the role of nephrology is being evaluated without data to support the ongoing decisions; linkage of the data stored in the administrative and clinical databases of the Local Health Unit could therefore contribute to the planning of nephrology services (among others), in order to ensure the best cost-effectiveness possible for each local context. PMID:25030017

  65. How to Select the Best Method For Database Applications, Database Independent Platform or Not?

    NASA Astrophysics Data System (ADS)

    Bulbul, Halil Ibrahim; Guneri Sahin, Yasar

    Software developers generally want to create software that can run with many different DBMSs (a database-independent platform), but this idea cannot always be carried over to real-world applications. The key difficulty is deciding which approach (dependent on a specific DBMS or independent of the DBMS) should be selected; each has particular advantages and disadvantages. In practice, the decision depends on constraints related to budget, human resources, project duration, and so on. In addition, one of the most important factors is the choice of CASE tool to be used for the software project. This study addresses how to decide which approach is suitable for a specific application. Possible development scenarios that may be used in a project are also presented, along with coefficient factors that help developers see which approach may be more appropriate for their application.
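
    One common realization of the database-independent option is to code against a thin repository interface and portable SQL, so that swapping the DBMS touches a single module; the price is foregoing engine-specific features. A minimal sketch (class and table names are illustrative):

    ```python
    import sqlite3

    class StudentRepository:
        """Data access isolated behind one interface; swap the connection
        (sqlite3, psycopg2, ...) to change DBMS."""

        def __init__(self, conn):
            self.conn = conn

        def add(self, name):
            # sqlite3 allows execute() on the connection; other DB-API
            # drivers need an explicit cursor, and parameter markers vary
            # ('?' vs '%s'): typical costs a DBMS-independent design absorbs.
            self.conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

        def all_names(self):
            return [row[0] for row in self.conn.execute("SELECT name FROM students")]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT)")
    repo = StudentRepository(conn)
    repo.add("Ada")
    print(repo.all_names())  # ['Ada']
    ```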

  66. A Proposed Framework of Test Administration Methods

    ERIC Educational Resources Information Center

    Thompson, Nathan A.

    2008-01-01

    The widespread application of personal computers to educational and psychological testing has substantially increased the number of test administration methodologies available to testing programs. Many of these mediums are referred to by their acronyms, such as CAT, CBT, CCT, and LOFT. The similarities between the acronyms and the methods…

  67. [Use of administrative and economic methods in managing veterinary affairs].

    PubMed

    Kostadinov, I

    1981-01-01

    The possibilities of using administrative and economic methods in the administration of the veterinary service are outlined. It is stressed that the two types of methods are applied in combination. Administrative methods are to be employed in the control of epizootics, including zoonoses, and in veterinary and sanitary control, i.e., surveillance and inspection. Both types of methods are equally relevant to the control of parasitic diseases with an epizootic course of development, as well as to the control of diseases caused by opportunistic pathogens. Economic methods prevail in the control of noninfectious diseases and in the administration of bioproduct and drug production. PMID:7344292

  68. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 2 2011-10-01 2011-10-01 false Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC ASSISTANCE PROGRAMS §...

  9. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC ASSISTANCE PROGRAMS §...

  10. Distance correlation methods for discovering associations in large astrophysical databases

    SciTech Connect

    Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P. E-mail: mrichards@astro.psu.edu

    2014-01-20

    High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
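
    As an illustration only (not the authors' code), the following minimal Python sketch computes the distance correlation coefficient from double-centered pairwise distance matrices and contrasts it with the Pearson coefficient on a toy V-shaped relation; the data here are synthetic, not COMBO-17 measurements.

        import numpy as np

        def distance_correlation(x, y):
            """Sample distance correlation of two 1-D samples."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
            b = np.abs(y[:, None] - y[None, :])
            # Double-center: subtract row and column means, add grand mean.
            A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
            B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
            dcov2 = (A * B).mean()                # squared distance covariance
            return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

        rng = np.random.default_rng(0)
        x = rng.uniform(-1, 1, 500)
        y = np.abs(x) + rng.normal(0, 0.05, 500)  # V-shaped, nonlinear relation
        print(np.corrcoef(x, y)[0, 1])            # Pearson: near zero
        print(distance_correlation(x, y))         # distance correlation: clearly positive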

  11. Combining state administrative databases and provider records to assess the quality of care for children enrolled in Medicaid.

    PubMed

    Cotter, J J; Smith, W R; Rossiter, L F; Pugh, C B; Bramble, J D

    1999-01-01

    Our objective was to assess the capability of state administrative health care databases to evaluate quality of care, as measured by immunization rates, for a Medicaid managed care population. Data on 5599 two-year-olds were obtained from a Medicaid claims database, a health department database, and the records of the children's assigned providers. The study was conducted on 1 managed care program in 1 state. Test performance ratio analyses were used to assess the relative accuracy and contribution of each source of administrative data. We found that of the 67,188 doses needed, 45,511 (68%) were documented as administered per at least 1 of the data sources. Medicaid claims data alone accounted for 18% of immunized children, while health department data used by itself accounted for 12%. Together, these 2 sources identified 34% of immunized children. Large administrative databases, such as Medicaid claims and data from a health department, while valuable sources of information on quality, may underestimate outcomes such as immunization rates. Assessments of the quality of health care should rely on a combination of administrative data and providers' records as sources of information. PMID:10446671
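
    As an illustration of why a single administrative source undercounts, the following toy Python sketch combines per-child dose documentation from three hypothetical sources by set union; all names and doses are invented.

        claims = {"child_a": {"DTaP1", "DTaP2"}, "child_b": {"MMR1"}}
        health_dept = {"child_a": {"DTaP2", "MMR1"}, "child_b": set()}
        provider = {"child_a": {"DTaP3"}, "child_b": {"MMR1", "DTaP1"}}

        for child in sorted(claims.keys() | health_dept.keys() | provider.keys()):
            union = (claims.get(child, set())
                     | health_dept.get(child, set())
                     | provider.get(child, set()))
            # The union documents more doses than any single source alone.
            print(child, "claims only:", len(claims.get(child, set())),
                  "all sources combined:", len(union))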

  12. In-hospital mortality following lung cancer resection: nationwide administrative database.

    PubMed

    Pagès, Pierre-Benoit; Cottenet, Jonathan; Mariet, Anne-Sophie; Bernard, Alain; Quantin, Catherine

    2016-06-01

    Our aim was to determine the effect of a national strategy for quality improvement in cancer management (the "Plan Cancer") according to time period and to assess the influence of type and volume of hospital activity on in-hospital mortality (IHM) within a large national cohort of patients operated on for lung cancer. From January 2005 to December 2013, 76 235 patients were included in the French Administrative Database. Patient characteristics, hospital volume of activity and hospital type were analysed over three periods: 2005-2007, 2008-2010 and 2011-2013. Global crude IHM was 3.9%: 4.3% during 2005-2007, 4% during 2008-2010 and 3.5% during 2011-2013 (p<0.01). 296, 259 and 209 centres performed pulmonary resections in 2005-2007, 2008-2010 and 2011-2013, respectively (p<0.01). The risk of death was higher in centres performing <13 resections per year than in centres performing >43 resections per year (adjusted (a)OR 1.48, 95% CI 1.197-1.834). The risk of death was lower in the period 2011-2013 than in the period 2008-2010 (aOR 0.841, 95% CI 0.764-0.926). Adjustment variables (age, sex, Charlson score and type of resection) were significantly linked to IHM, whereas the type of hospital was not. The French national strategy for quality improvement seems to have induced a significant decrease in IHM. PMID:26965293

  13. Lessons from an enterprise-wide technical and administrative database using CASE and GUI front-ends

    SciTech Connect

    Chan, A.; Crane, G.; MacGregor, I.; Meyer, S.

    1995-07-01

    An enterprise-wide database built via Oracle*CASE is a hot topic. The authors describe the PEP-II/BABAR Project-Wide Database, and the lessons learned in delivering and developing this system with a small team averaging two and one half people. They also give some details of providing World Wide Web (WWW) access to the information, and using Oracle*CASE and Oracle Forms 4. The B Factory at the Stanford Linear Accelerator Center (SLAC) is a project built to study the physics of matter and anti-matter. It consists of two accelerator storage rings (PEP-II) and a detector (BABAR)--a project of approximately $250 million with collaboration by many labs worldwide. Foremost among these lessons is that the support and vision of management are key to the successful design and implementation of an enterprise-wide database. The authors faced the challenge of integrating both administrative and technical data into one CASE enterprise design. The goal, defined at the project's inception in late 1992, was to use a central database as a tool for the collaborating labs to: (1) track quality assurance during construction of the accelerator storage rings and detectors; (2) track down problems faster when they develop; and (3) facilitate the construction process. The focus of the project database, therefore, is on technical data which is less well-defined than administrative data.

  14. Enhancing Clinical Content and Race/Ethnicity Data in Statewide Hospital Administrative Databases: Obstacles Encountered, Strategies Adopted, and Lessons Learned

    PubMed Central

    Pine, Michael; Kowlessar, Niranjana M; Salemi, Jason L; Miyamura, Jill; Zingmond, David S; Katz, Nicole E; Schindler, Joe

    2015-01-01

    Objectives Eight grant teams used Agency for Healthcare Research and Quality infrastructure development research grants to enhance the clinical content of and improve race/ethnicity identifiers in statewide all-payer hospital administrative databases. Principal Findings Grantees faced common challenges, including recruiting data partners and ensuring their continued effective participation, acquiring and validating the accuracy and utility of new data elements, and linking data from multiple sources to create internally consistent enhanced administrative databases. Successful strategies to overcome these challenges included aggressively engaging with providers of critical sources of data, emphasizing potential benefits to participants, revising requirements to lessen burdens associated with participation, maintaining continuous communication with participants, being flexible when responding to participants’ difficulties in meeting program requirements, and paying scrupulous attention to preparing data specifications and creating and implementing protocols for data auditing, validation, cleaning, editing, and linking. In addition to common challenges, grantees also had to contend with unique challenges from local environmental factors that shaped the strategies they adopted. Conclusions The creation of enhanced administrative databases to support comparative effectiveness research is difficult, particularly in the face of numerous challenges with recruiting data partners such as competing demands on information technology resources. Excellent communication, flexibility, and attention to detail are essential ingredients in accomplishing this task. Additional research is needed to develop strategies for maintaining these databases when initial funding is exhausted. PMID:26119470

  15. System, Method and Apparatus for Discovering Phrases in a Database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase discovery is a method of identifying sequences of terms in a database. First, a selection of one or more relevant sequences of terms, such as relevant text, is provided. Next, several shorter sequences of terms, such as phrases, are extracted from the provided relevant sequences of terms. The extracted sequences of terms are then reduced through a culling process. A gathering process then emphasizes the more relevant of the extracted and culled sequences of terms and de-emphasizes the more generic of them. The gathering process can also include iteratively retrieving additional selections of relevant sequences (e.g., text), extracting and culling additional sequences of terms (e.g., phrases), emphasizing and de-emphasizing extracted and culled sequences of terms, and accumulating all gathered sequences of terms. The resulting gathered sequences of terms are then output.
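
    As a rough illustration of the extract-cull-gather pipeline (a toy Python sketch, not the patented implementation), the following code extracts two-term phrases, culls one-off sequences, and ranks phrases by emphasizing those frequent in relevant text relative to generic text; the texts and thresholds are invented.

        from collections import Counter

        def extract_phrases(text, n=2):
            """Extract contiguous n-term sequences (phrases)."""
            terms = text.lower().split()
            return [" ".join(terms[i:i + n]) for i in range(len(terms) - n + 1)]

        def cull(phrases, min_count=2):
            """Keep phrases that recur, dropping one-off sequences."""
            return {p: c for p, c in Counter(phrases).items() if c >= min_count}

        def gather(relevant_counts, generic_counts):
            """Emphasize phrases frequent in relevant text, de-emphasize generic ones."""
            return sorted(relevant_counts,
                          key=lambda p: relevant_counts[p] / (1 + generic_counts.get(p, 0)),
                          reverse=True)

        relevant = "engine failure on approach engine failure warning light on approach"
        generic = "the report describes the light and the approach in general terms"
        ranked = gather(cull(extract_phrases(relevant)), Counter(extract_phrases(generic)))
        print(ranked[:5])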

  16. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Bertino, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.

  17. A Dynamic Integration Method for Borderland Database using OSM data

    NASA Astrophysics Data System (ADS)

    Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.

    2013-11-01

    Spatial data are fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. Because the spatial region used in borderland research usually covers the borderland regions of several neighboring countries, the data are difficult for any single research institution or government to acquire. VGI has proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, so VGI is a reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource, but the OSM data model differs substantially from traditional authoritative geographic information, so OSM data need to be converted to the scientist's customized data model. Because the real world changes quickly, the converted data also need to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a procedure is presented for selecting, from the OSM whole-world daily diff file, the objects in the research area that changed over a given period, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms described above, we implemented automatic (or semiautomatic) integration and updating of the borderland database. The developed system was intensively tested.
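
    As an illustration of the change-selection step (a minimal sketch under assumed inputs, not the authors' system), the following Python code filters an OSM osmChange daily diff for nodes that fall inside a borderland bounding box and were changed in a given period; the diff content below is invented.

        import xml.etree.ElementTree as ET

        def changed_in_region(diff_xml, bbox, t0, t1):
            """Return (action, node id) pairs inside bbox changed in [t0, t1]."""
            min_lat, min_lon, max_lat, max_lon = bbox
            changed = []
            for action in ET.fromstring(diff_xml):   # <create>/<modify>/<delete>
                for node in action.iter("node"):
                    lat, lon = float(node.get("lat")), float(node.get("lon"))
                    ts = node.get("timestamp", "")
                    if (min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
                            and t0 <= ts <= t1):     # ISO timestamps sort lexically
                        changed.append((action.tag, node.get("id")))
            return changed

        diff = """<osmChange>
          <modify><node id="42" lat="41.1" lon="20.1" timestamp="2013-06-02T00:00:00Z"/></modify>
          <create><node id="43" lat="55.0" lon="30.0" timestamp="2013-06-02T00:00:00Z"/></create>
        </osmChange>"""
        print(changed_in_region(diff, (40.0, 19.0, 42.0, 21.0),
                                "2013-06-01T00:00:00Z", "2013-06-30T23:59:59Z"))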

  18. 34 CFR 361.12 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true Methods of administration. 361.12 Section 361.12 Education Regulations of the Offices of the Department of Education (Continued) OFFICE OF SPECIAL EDUCATION... State Plan and Other Requirements for Vocational Rehabilitation Services Administration § 361.12...

  19. Validity of Diagnostic Codes for Acute Stroke in Administrative Databases: A Systematic Review

    PubMed Central

    McCormick, Natalie; Bhole, Vidula; Lacaille, Diane; Avina-Zubieta, J. Antonio

    2015-01-01

    Objective To conduct a systematic review of studies reporting on the validity of International Classification of Diseases (ICD) codes for identifying stroke in administrative data. Methods MEDLINE and EMBASE were searched (inception to February 2015) for studies: (a) Using administrative data to identify stroke; or (b) Evaluating the validity of stroke codes in administrative data; and (c) Reporting validation statistics (sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), or Kappa scores) for stroke, or data sufficient for their calculation. Additional articles were located by hand search (up to February 2015) of original papers. Studies solely evaluating codes for transient ischaemic attack were excluded. Data were extracted by two independent reviewers; article quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. Results Seventy-seven studies published from 1976–2015 were included. The sensitivity of ICD-9 430-438/ICD-10 I60-I69 for any cerebrovascular disease was ≥ 82% in most [≥ 50%] studies, and specificity and NPV were both ≥ 95%. The PPV of these codes for any cerebrovascular disease was ≥ 81% in most studies, while the PPV specifically for acute stroke was ≤ 68%. In at least 50% of studies, PPVs were ≥ 93% for subarachnoid haemorrhage (ICD-9 430/ICD-10 I60), 89% for intracerebral haemorrhage (ICD-9 431/ICD-10 I61), and 82% for ischaemic stroke (ICD-9 434/ICD-10 I63 or ICD-9 434&436). For in-hospital deaths, sensitivity was 55%. For cerebrovascular disease or acute stroke as a cause-of-death on death certificates, sensitivity was ≤ 71% in most studies while PPV was ≥ 87%. Conclusions While most cases of prevalent cerebrovascular disease can be detected using 430-438/I60-I69 collectively, acute stroke must be defined using more specific codes. Most in-hospital deaths and death certificates with stroke as a cause-of-death correspond to true stroke deaths. Linking vital
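
    For reference, the validation statistics reported throughout such studies come from a 2 x 2 table of coded versus chart-confirmed cases; a minimal Python sketch with hypothetical counts:

        def validation_stats(tp, fp, fn, tn):
            """Standard 2 x 2 validation statistics for a diagnostic code."""
            return {
                "sensitivity": tp / (tp + fn),  # true cases that were coded
                "specificity": tn / (tn + fp),  # non-cases left uncoded
                "PPV": tp / (tp + fp),          # coded cases that are true cases
                "NPV": tn / (tn + fn),          # uncoded records truly non-cases
            }

        # Hypothetical chart-review counts for an acute stroke code:
        print(validation_stats(tp=180, fp=40, fn=20, tn=760))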

  20. Validity of ICD-9-CM codes for breast, lung and colorectal cancers in three Italian administrative healthcare databases: a diagnostic accuracy study protocol

    PubMed Central

    Abraha, Iosief; Serraino, Diego; Giovannini, Gianni; Stracci, Fabrizio; Casucci, Paola; Alessandrini, Giuliana; Bidoli, Ettore; Chiari, Rita; Cirocchi, Roberto; De Giorgi, Marcello; Franchini, David; Vitale, Maria Francesca; Fusco, Mario; Montedori, Alessandro

    2016-01-01

    Introduction Administrative healthcare databases are useful tools to study healthcare outcomes and to monitor the health status of a population. Patients with cancer can be identified through disease-specific codes, prescriptions and physician claims, but prior validation is required to achieve an accurate case definition. The objective of this protocol is to assess the accuracy of International Classification of Diseases Ninth Revision—Clinical Modification (ICD-9-CM) codes for breast, lung and colorectal cancers in identifying patients diagnosed with the relative disease in three Italian administrative databases. Methods and analysis Data from the administrative databases of Umbria Region (910 000 residents), Local Health Unit 3 of Napoli (1 170 000 residents) and Friuli-Venezia Giulia Region (1 227 000 residents) will be considered. In each administrative database, patients with the first occurrence of diagnosis of breast, lung or colorectal cancer between 2012 and 2014 will be identified using the following groups of ICD-9-CM codes in primary position: (1) 233.0 and (2) 174.x for breast cancer; (3) 162.x for lung cancer; (4) 153.x for colon cancer and (5) 154.0–154.1 and 154.8 for rectal cancer. Only incident cases will be considered, that is, excluding cases that have the same diagnosis in the 5 years (2007–2011) before the period of interest. A random sample of cases and non-cases will be selected from each administrative database and the corresponding medical charts will be assessed for validation by pairs of trained, independent reviewers. Case ascertainment within the medical charts will be based on (1) the presence of a primary nodular lesion in the breast, lung or colon–rectum, documented with imaging or endoscopy and (2) a cytological or histological documentation of cancer from a primary or metastatic site. Sensitivity and specificity with 95% CIs will be calculated. Dissemination Study results will be disseminated widely through

  1. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for training and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/. PMID:20385580
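
    To make the setup concrete, here is a toy Python sketch (synthetic data, not the authors' training set) of a small neural network trained on the two features the method uses, intergenic distance and a STRING-style functional score, then evaluated on held-out gene pairs:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        n = 400
        # Toy rule: operon pairs tend to be close and functionally linked.
        dist = np.where(rng.random(n) < 0.5,
                        rng.normal(20, 30, n), rng.normal(200, 80, n))
        score = np.where(dist < 80,
                         rng.uniform(0.5, 1.0, n), rng.uniform(0.0, 0.6, n))
        y = ((dist < 80) & (score > 0.5)).astype(int)  # 1 = same operon (toy label)

        X = np.column_stack([dist, score])
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X[:300], y[:300])                      # train on one subset...
        print("held-out accuracy:", clf.score(X[300:], y[300:]))  # ...test on the rest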

  2. International children's accelerometry database (ICAD): Design and methods

    PubMed Central

    2011-01-01

    Background Over the past decade, accelerometers have increased in popularity as an objective measure of physical activity in free-living individuals. Evidence suggests that objective measures, rather than subjective tools such as questionnaires, are more likely to detect associations between physical activity and health in children. To date, a number of studies of children and adolescents across diverse cultures around the globe have collected accelerometer measures of physical activity accompanied by a broad range of predictor variables and associated health outcomes. The International Children's Accelerometry Database (ICAD) project pooled and reduced raw accelerometer data using standardized methods to create comparable outcome variables across studies. Such data pooling has the potential to improve our knowledge regarding the strength of relationships between physical activity and health. This manuscript describes the contributing studies, outlines the standardized methods used to process the accelerometer data and provides the initial questions which will be addressed using this novel data repository. Methods Between September 2008 and May 2010, 46,131 raw Actigraph data files and accompanying anthropometric, demographic and health data collected on children (aged 3-18 years) were obtained from 20 studies worldwide, and the data were reduced using standardized analytical methods. Results When using ≥ 8, ≥ 10 and ≥ 12 hrs of wear per day as a criterion, 96%, 93.5% and 86.2% of the males, respectively, and 96.3%, 93.7% and 86% of the females, respectively, had at least one valid day of data. Conclusions Pooling raw accelerometer data and accompanying phenotypic data from a number of studies has the potential to: a) increase statistical power due to a large sample size, b) create a more heterogeneous and potentially more representative sample, c) standardize and optimize the analytical methods used in the generation of outcome variables, and d) provide a means to

  3. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and search performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
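
    As a schematic of the pointer-caching idea (a toy Python sketch, not the VIEWCACHE implementation), the following code caches record pointers collected by a cross-database search and moves actual data only when the pointers are dereferenced:

        class PointerCache:
            def __init__(self, databases):
                self.databases = databases  # site name -> {record id: record}
                self.cache = {}             # query name -> [(site, record id)]

            def search(self, query, predicate):
                if query not in self.cache:  # collect pointers, not records
                    self.cache[query] = [(site, rid)
                                         for site, recs in self.databases.items()
                                         for rid, rec in recs.items()
                                         if predicate(rec)]
                return self.cache[query]

            def dereference(self, pointers):
                return [self.databases[site][rid] for site, rid in pointers]

        dbs = {"site_a": {1: {"mag": 4.2}, 2: {"mag": 9.0}},
               "site_b": {7: {"mag": 8.5}}}
        vc = PointerCache(dbs)
        ptrs = vc.search("bright", lambda r: r["mag"] > 5)  # no bulk data movement
        print(ptrs, vc.dereference(ptrs))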

  4. A database method for binary atomic scattering angle calculation

    NASA Astrophysics Data System (ADS)

    Yuan, B.; Yu, P. C.; Tang, S. M.

    1993-11-01

    Calculation of the classical binary atomic scattering angle is a critical factor in computer simulations of ion beam interactions with matter. Different approaches intended for more accurate results with sufficient speed have been reported in the literature. This paper presents an approach using database evaluation. This approach has been tested and found to be extremely fast (18 times faster than the Biersack-Haggmark's Magic-Formula for scattering [Nucl. Instr. and Meth. 174 (1980) 257]), and its accuracy is better than 0.5%. This database takes only 216 kB of computer memory.
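
    The general pattern is to tabulate the expensive function once and answer queries by interpolation; a minimal Python sketch with a stand-in closed form in place of the actual binary-collision integral:

        import numpy as np

        def expensive_angle(reduced_energy):
            """Stand-in for the costly scattering-angle evaluation."""
            return np.pi / (1.0 + reduced_energy)     # toy closed form

        grid = np.logspace(-3, 3, 2048)               # build the database once
        table = expensive_angle(grid)

        def fast_angle(reduced_energy):
            """Answer each query by table interpolation, not re-integration."""
            return np.interp(reduced_energy, grid, table)

        print(fast_angle(0.37), expensive_angle(0.37))  # interpolated vs direct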

  5. National short line railroad database project, 1995-1996: A report to the Federal Railroad Administration

    SciTech Connect

    Benson, D.; Byberg, T.

    1996-06-30

    The objective of the project was to create a central database containing information representing the American short line railroad industry. In the report, processes involved with obtaining, developing, and maintaining the information in the database are discussed. Several data analysis procedures used to help ensure the integrity of the data are addressed. The second annual American Short Line Railroad Association Data Profile for the 1994 Calendar year is also presented in the paper. Further information extracted and comparisons made during the analysis process are described in detail. Discussions on the development of the paper survey and an electronic survey for the third annual data profile for the 1995 calendar year are also presented. The design and implementation of the electronic survey software package are reviewed in detail. The final process presented is the distribution and collection of the 1995 electronic and paper surveys.

  6. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental data base access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and search performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  7. A European Flood Database: facilitating comprehensive flood research beyond administrative boundaries

    NASA Astrophysics Data System (ADS)

    Hall, J.; Arheimer, B.; Aronica, G. T.; Bilibashi, A.; Boháč, M.; Bonacci, O.; Borga, M.; Burlando, P.; Castellarin, A.; Chirico, G. B.; Claps, P.; Fiala, K.; Gaál, L.; Gorbachova, L.; Gül, A.; Hannaford, J.; Kiss, A.; Kjeldsen, T.; Kohnová, S.; Koskela, J. J.; Macdonald, N.; Mavrova-Guirguinova, M.; Ledvinka, O.; Mediero, L.; Merz, B.; Merz, R.; Molnar, P.; Montanari, A.; Osuch, M.; Parajka, J.; Perdigão, R. A. P.; Radevski, I.; Renard, B.; Rogger, M.; Salinas, J. L.; Sauquet, E.; Šraj, M.; Szolgay, J.; Viglione, A.; Volpi, E.; Wilson, D.; Zaimi, K.; Blöschl, G.

    2015-06-01

    The current work addresses one of the key building blocks towards an improved understanding of flood processes and associated changes in flood characteristics and regimes in Europe: the development of a comprehensive, extensive European flood database. The presented work results from ongoing cross-border research collaborations initiated with data collection and joint interpretation in mind. A detailed account of the current state, characteristics and spatial and temporal coverage of the European Flood Database is presented. At this stage, the hydrological data collection is still growing and currently consists of annual maximum and daily mean discharge series from over 7000 hydrometric stations of various data series lengths. Moreover, the database currently comprises data from over 50 different data sources. The time series have been obtained from different national and regional data sources in a collaborative effort of a joint European flood research agreement based on the exchange of data, models and expertise, and from existing international data collections and open source websites. These ongoing efforts are contributing to advancing the understanding of regional flood processes beyond individual country boundaries and to more coherent flood research in Europe.

  8. Can Italian Healthcare Administrative Databases Be Used to Compare Regions with Respect to Compliance with Standards of Care for Chronic Diseases?

    PubMed Central

    Gini, Rosa; Schuemie, Martijn J.; Francesconi, Paolo; Lapi, Francesco; Cricelli, Iacopo; Pasqua, Alessandro; Gallina, Pietro; Donato, Daniele; Brugaletta, Salvatore; Donatini, Andrea; Marini, Alessandro; Cricelli, Claudio; Damiani, Gianfranco; Bellentani, Mariadonata; van der Lei, Johan; Sturkenboom, Miriam C. J. M.; Klazinga, Niek S.

    2014-01-01

    Background Italy has a population of 60 million and a universal coverage single-payer healthcare system, which mandates collection of healthcare administrative data in a uniform fashion throughout the country. On the other hand, organization of the health system takes place at the regional level, and local initiatives generate natural experiments. This is happening in particular in primary care, due to the need to face the growing burden of chronic diseases. Health services research can compare and evaluate local initiatives on the basis of the common healthcare administrative data. However, reliability of such data in this context needs to be assessed, especially when comparing different regions of the country. In this paper we investigated the validity of healthcare administrative databases to compute indicators of compliance with standards of care for diabetes, ischaemic heart disease (IHD) and heart failure (HF). Methods We compared indicators estimated from healthcare administrative data collected by Local Health Authorities in five Italian regions with corresponding estimates from clinical data collected by General Practitioners (GPs). Four indicators of diagnostic follow-up (two for diabetes, one for IHD and one for HF) and four indicators of appropriate therapy (two each for IHD and HF) were considered. Results Agreement between the two data sources was very good, except for indicators of laboratory diagnostic follow-up in one region and for the indicator of bioimaging diagnostic follow-up in all regions, where measurement with administrative data underestimated quality. Conclusion According to evidence presented in this study, estimating compliance with standards of care for diabetes, ischaemic heart disease and heart failure from healthcare databases is likely to produce reliable results, even though completeness of data on diagnostic procedures should be assessed first. Performing studies comparing regions using such indicators as outcomes is a promising

  9. A Straightforward Method for Advance Estimation of User Charges for Information in Numeric Databases.

    ERIC Educational Resources Information Center

    Jarvelin, Kalervo

    1986-01-01

    Describes a method for advance estimation of user charges for queries in relational data model-based numeric databases when charges are based on data retrieved. Use of this approach is demonstrated by sample queries to an imaginary marketing database. The principles and methods of this approach and its relevance are discussed. (MBR)
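
    The idea can be sketched in a few lines of Python: estimate the number of tuples a query would retrieve from table statistics and predicate selectivities, then price the query before running it (all names and numbers below are invented):

        PER_TUPLE_CHARGE = 0.002  # hypothetical price per retrieved tuple

        stats = {"customers": 120_000, "orders": 950_000}    # table row counts
        selectivity = {"customers.country = 'FI'": 0.04}     # known predicates

        def estimate_charge(table, predicates):
            rows = stats[table]
            for p in predicates:
                rows *= selectivity.get(p, 0.1)  # default guess when unknown
            return rows * PER_TUPLE_CHARGE

        # Advance estimate, shown to the user before the query executes:
        print(estimate_charge("customers", ["customers.country = 'FI'"]))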

  10. The Saccharomyces Genome Database: Advanced Searching Methods and Data Mining.

    PubMed

    Cherry, J Michael

    2015-12-01

    At the core of the Saccharomyces Genome Database (SGD) are chromosomal features that encode a product. These include protein-coding genes and major noncoding RNA genes, such as tRNA and rRNA genes. The basic entry point into SGD is a gene or open-reading frame name that leads directly to the locus summary information page. A keyword describing function, phenotype, selective condition, or text from abstracts will also provide a door into the SGD. A DNA or protein sequence can be used to identify a gene or a chromosomal region using BLAST. Protein and DNA sequence identifiers, PubMed and NCBI IDs, author names, and function terms are also valid entry points. The information in SGD has been gathered and is maintained by a group of scientific biocurators and software developers who are devoted to providing researchers with up-to-date information from the published literature, connections to all the major research resources, and tools that allow the data to be explored. All the collected information cannot be represented or summarized for every possible question; therefore, it is necessary to be able to search the structured data in the database. This protocol describes the YeastMine tool, which provides an advanced search capability via an interactive tool. The SGD also archives results from microarray expression experiments, and a strategy designed to explore these data using the SPELL (Serial Pattern of Expression Levels Locator) tool is provided. PMID:26631124

  11. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2006-08-08

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
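
    A minimal Python sketch of the core step (illustrative only; the table and field names are invented): a (row, column, measure) specification is compiled into an aggregating query, and each returned tuple is routed to one pane of the visual table.

        import sqlite3

        def pane_query(rows_field, cols_field, measure, table):
            """Compile a minimal pane specification into SQL: one aggregate
            tuple per (row, column) pane of the visual table."""
            return (f"SELECT {rows_field}, {cols_field}, SUM({measure}) "
                    f"FROM {table} GROUP BY {rows_field}, {cols_field}")

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
        con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                        [("east", 2005, 10.0), ("east", 2006, 12.5),
                         ("west", 2005, 7.0)])
        for tup in con.execute(pane_query("region", "year", "amount", "sales")):
            print(tup)  # each tuple feeds the pane at (region, year)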

  12. Computer systems and methods for the query and visualization of multidimensional database

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2010-05-11

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.

  13. An Efficient Method for the Retrieval of Objects by Topological Relations in Spatial Database Systems.

    ERIC Educational Resources Information Center

    Lin, P. L.; Tan, W. H.

    2003-01-01

    Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)

  14. A Bayesian Approach to Latent Class Modeling for Estimating the Prevalence of Schizophrenia Using Administrative Databases

    PubMed Central

    Laliberté, Vincent; Joseph, Lawrence; Gold, Ian

    2015-01-01

    Estimating the incidence and the prevalence of psychotic disorders in the province of Quebec has been the object of some interest in recent years as a contribution to the epidemiological study of the causes of psychotic disorders being carried out primarily in UK and Scandinavia. A number of studies have used administrative data from the Régie de l’assurance maladie du Québec (RAMQ) that includes nearly all Quebec citizens to obtain geographical and temporal prevalence estimates for the illness. However, there has been no investigation of the validity of RAMQ diagnoses for psychotic disorders, and without a measure of the sensitivity and the specificity of these diagnoses, it is impossible to be confident in the accuracy of the estimates obtained. This paper proposes the use of latent class analysis to ascertain the validity of a diagnosis of schizophrenia using RAMQ data. PMID:26217241

  15. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor); Livny, Miron (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors may be used which have limited memory capacity and conventional operating speed, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
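
    The clustering feature triple makes this concrete: CFs add component-wise when clusters merge, and centroid and radius follow from (N, LS, SS) without storing raw points. A minimal Python sketch of that bookkeeping (not the patented system):

        import numpy as np

        class ClusteringFeature:
            """CF = (N, LS, SS): count, linear sum, square sum of the points."""
            def __init__(self, point):
                p = np.asarray(point, float)
                self.n, self.ls, self.ss = 1, p.copy(), float(p @ p)

            def absorb(self, other):
                """CFs merge by simple addition; no raw points are kept."""
                self.n += other.n
                self.ls += other.ls
                self.ss += other.ss

            def centroid(self):
                return self.ls / self.n

            def radius(self):
                """RMS distance of cluster members from the centroid."""
                c = self.centroid()
                return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

        cf = ClusteringFeature([1.0, 2.0])
        for p in ([1.2, 2.1], [0.9, 1.8], [1.1, 2.2]):
            cf.absorb(ClusteringFeature(p))
        print(cf.n, cf.centroid(), cf.radius())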

  16. Cardiac Fatalities in Firefighters: An Analysis of the U.S. Fire Administration Database.

    PubMed

    Sen, Soman; Palmieri, Tina; Greenhalgh, David

    2016-01-01

    Cardiac fatalities are the leading cause of death among all firefighters. Increasing age has been linked to increased cardiac fatalities in firefighters; however, circumstances surrounding line-of-duty cardiac firefighter deaths can also increase the risk of a cardiac death. The authors hypothesize that cardiac fatalities among firefighters will be related to the type of duty and level of physical exertion. The authors analyzed the Firefighter Fatalities and Statistics data collected by the U.S. Fire Administration (http://apps.usfa.fema.gov/firefighter-fatalities/fatalityData/statistics) from January 2002 to December 2012. Data were analyzed for associations between age, firefighter classification, duty-type, and cause of fatal cardiac event. A total of 1153 firefighter fatalities occurred during the 10-year period reviewed. Of these, 47% were cardiac fatalities. Mean age was significantly higher in firefighters who suffered a cardiac fatality (52.0 ± 11.4 vs 40.8 ± 14.7 years; P < .05). Volunteer firefighters suffered a significantly higher proportion of cardiac fatalities (62%; P < .05), followed by career firefighters (32%). Additionally, cardiac fatalities were the leading cause of death for volunteer firefighters (54%; P < .05). The highest proportion of cardiac fatalities occurred on-the-scene (29%; P < .05), followed by after-duty fatalities (25%). Stress and overexertion accounted for 98% of the causes of cardiac fatalities. Adjusting for rank and firefighter classification, age (odds ratio, 1.06; 95% confidence interval, 1.05-1.08) and stress or overexertion (odds ratio, 11.9; 95% confidence interval, 1.7-83.4) were independent predictors of a firefighter cardiac fatality. Both career and volunteer firefighters are at significantly higher risk of a fatal cardiac event as they age. These fatalities occur in a significant proportion on-the-scene. National efforts should be aimed at these high-risk populations to improve cardiovascular health. PMID:25501775

  17. Surfactant administration in neonates: A review of delivery methods.

    PubMed

    Nouraeyan, Nina; Lambrinakos-Raymond, Alicia; Leone, Marisa; Sant'Anna, Guilherme

    2014-01-01

    Surfactant has revolutionized the treatment of respiratory distress syndrome and some other respiratory conditions that affect the fragile neonatal lung. Despite its widespread use, the optimal method of surfactant administration in preterm infants has yet to be clearly determined. The present article reviews several aspects of administration techniques that can influence surfactant delivery into the pulmonary airways: the bolus volume, injection rate, gravity and orientation, ventilation strategies, alveolar recruitment, and viscosity and surface tension of the fluid instilled. Based on the present review, knowledge gaps regarding the best way to administer surfactant to neonates remain. From the available evidence, however, the most effective way to optimize surfactant delivery and obtain a more homogeneous distribution of the drug is by using rapid bolus instillation in combination with appropriate alveolar recruitment techniques. PMID:26078618

  18. An Efficient Method for Rare Spectra Retrieval in Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Du, Changde; Luo, Ali; Yang, Haifeng; Hou, Wen; Guo, Yanxin

    2016-03-01

    One of the most important aims of astronomical data mining is to systematically search for specific rare objects in a massive spectral data set, given a small fraction of identified samples with the same type. Most existing methods are mainly based on binary classification, which usually suffers from incompleteness when there are too few known samples. Rank-based methods could provide good solutions for such cases. After investigating several algorithms, a method combining a bipartite ranking model with bootstrap aggregating techniques was developed in this paper. The method was applied while searching for carbon stars in the spectral data of Sloan Digital Sky Survey Data Release 10 and compared with several other popular methods used for data mining. Experimental results validate that the proposed method is not only the most effective but also the least time-consuming technique among its competitors when searching for rare spectra in a large but unlabeled data set.
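
    A rough Python sketch of the bootstrap-aggregated ranking idea (synthetic data; plain logistic regression stands in for the paper's bipartite ranking model): each bootstrap round treats the few known rare objects as positives and a random draw from the unlabeled pool as pseudo-negatives, and averaged scores rank the pool.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        X_known = rng.normal(2.0, 1.0, (30, 5))    # few identified rare objects
        X_pool = rng.normal(0.0, 1.0, (5000, 5))   # large unlabeled pool
        X_pool[:10] += 2.0                         # hidden rare objects to find

        scores = np.zeros(len(X_pool))
        for b in range(25):                        # bagging rounds
            pos = X_known[rng.integers(0, len(X_known), len(X_known))]
            neg = X_pool[rng.integers(0, len(X_pool), 300)]  # pseudo-negatives
            X = np.vstack([pos, neg])
            y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
            model = LogisticRegression(max_iter=1000).fit(X, y)
            scores += model.predict_proba(X_pool)[:, 1]

        ranked = np.argsort(-scores)               # highest average score first
        print("top candidates:", ranked[:10])      # should surface indices 0-9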

  19. Methods for Data-based Delineation of Spatial Regions

    SciTech Connect

    Wilson, John E.

    2012-10-01

    In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are “clumped” together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
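
    Method B can be sketched compactly: single-linkage grouping joins any points within a distance threshold, and each clump is then enclosed (here by its bounding box, as a stand-in for the delineated polygon). A toy Python version, not the VSP implementation:

        import numpy as np

        def clump(points, max_dist):
            """Union-find single linkage: nearby points join the same clump."""
            pts = np.asarray(points, float)
            parent = list(range(len(pts)))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]  # path halving
                    i = parent[i]
                return i

            for i in range(len(pts)):
                for j in range(i + 1, len(pts)):
                    if np.linalg.norm(pts[i] - pts[j]) <= max_dist:
                        parent[find(i)] = find(j)

            groups = {}
            for i in range(len(pts)):
                groups.setdefault(find(i), []).append(pts[i])
            return [(np.min(g, axis=0), np.max(g, axis=0)) for g in groups.values()]

        pts = [(0, 0), (1, 0.5), (0.5, 1), (10, 10), (10.5, 9.8)]
        for lo, hi in clump(pts, max_dist=2.0):
            print("clump bounding box:", lo, hi)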

  20. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  1. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  2. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  3. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  4. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  5. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  6. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  7. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  8. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  9. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  10. Methods of Imputation used in the USDA National Nutrient Database for Standard Reference

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Objective: To present the predominate methods of imputing used to estimate nutrient values for foods in the USDA National Nutrient Database for Standard Reference (SR20). Materials and Methods: The USDA Nutrient Data Laboratory developed standard methods for imputing nutrient values for foods wh...

  11. Examining cross-database global training to evaluate five different methods for ventricular beat classification.

    PubMed

    Chudácek, V; Georgoulas, G; Lhotská, L; Stylios, C; Petrík, M; Cepek, M

    2009-07-01

    The detection of ventricular beats in the holter recording is a task of great importance since it can direct clinicians toward the parts of the electrocardiogram record that might be crucial for determining the final diagnosis. Although there already exists a fair amount of research work dealing with ventricular beat detection in holter recordings, the vast majority uses a local training approach, which is highly disputable from the point of view of any practical, real-life application. In this paper, we compare five well-known methods: a classical decision tree approach and its variant with fuzzy rules, a self-organizing map clustering method with template matching for classification, a back-propagation neural network and a support vector machine classifier, all examined using the same global cross-database approach for training and testing. For this task two databases were used: the MIT-BIH database and the AHA database. Both databases are required for testing any newly developed algorithms for holter beat classification that are going to be deployed in the EU market. In cross-database global training, when the classifier is trained with the beats from the records of one database, the records from the other database are used for testing. The results of all the methods are compared and evaluated using the measures of sensitivity and specificity. The support vector machine classifier is the best of the five we tested, achieving an average sensitivity of 87.20% and an average specificity of 91.57%, which outperforms nearly all published algorithms applied in the context of a similar global training approach. PMID:19525571
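
    The cross-database protocol itself is simple to state in code. A schematic Python sketch (synthetic feature vectors stand in for the MIT-BIH and AHA beats; only the train-on-one, test-on-the-other structure is the point):

        import numpy as np
        from sklearn.svm import SVC

        def make_beats(rng, n, shift=0.0):
            """Synthetic stand-in for beat feature vectors from one database."""
            y = (rng.random(n) < 0.2).astype(int)  # 1 = ventricular beat
            X = rng.normal(0, 1, (n, 8)) + np.outer(y, np.full(8, 1.5)) + shift
            return X, y

        rng = np.random.default_rng(3)
        X_a, y_a = make_beats(rng, 2000)             # "MIT-BIH" stand-in
        X_b, y_b = make_beats(rng, 2000, shift=0.3)  # "AHA" stand-in, with drift

        clf = SVC(kernel="rbf").fit(X_a, y_a)  # global cross-database training
        pred = clf.predict(X_b)                # test on the other database
        tp = np.sum((pred == 1) & (y_b == 1)); fn = np.sum((pred == 0) & (y_b == 1))
        tn = np.sum((pred == 0) & (y_b == 0)); fp = np.sum((pred == 1) & (y_b == 0))
        print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))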

  12. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations.

    PubMed

    Kim, Seung Won; Kim, Bae-Hwan

    2016-07-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094

  13. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations

    PubMed Central

    Kim, Seung Won; Kim, Bae-Hwan

    2016-01-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094

  14. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
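
    A toy Python sketch of the store-or-skip decision (illustrative only; real CBIR feature vectors and thresholds would differ): cosine similarity against the most similar stored vector decides whether the incoming image adds information.

        import numpy as np

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        def maybe_store(incoming, stored, threshold=0.95):
            """Store the incoming feature vector only if it is not visually
            redundant with its most similar already-stored vector."""
            if stored and max(cosine(incoming, s) for s in stored) >= threshold:
                return False  # redundant: skip (or retain only short-term)
            stored.append(incoming)
            return True

        stored, rng = [], np.random.default_rng(4)
        for _ in range(5):
            v = rng.random(64)  # stand-in for an image feature vector
            print(maybe_store(v, stored))
        print(maybe_store(stored[0] + 1e-3, stored))  # near-duplicate rejected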

  15. New tools and methods for direct programmatic access to the dbSNP relational database

    PubMed Central

    Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260
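
    A minimal sketch of programmatic access to such a local mirror, using the mysql-connector-python client; note that the connection details and the table and column names below are illustrative assumptions, not the documented dbSNP schema.

        import mysql.connector  # pip install mysql-connector-python

        con = mysql.connector.connect(host="localhost", user="dbsnp_user",
                                      password="...", database="dbsnp_human")
        cur = con.cursor()
        # Hypothetical position table: find SNPs in a chromosomal window.
        cur.execute("SELECT snp_id, chr, chr_pos FROM snp_positions "
                    "WHERE chr = %s AND chr_pos BETWEEN %s AND %s",
                    ("11", 5200000, 5300000))
        for snp_id, chrom, pos in cur.fetchall():
            print(f"rs{snp_id}\tchr{chrom}:{pos}")
        cur.close()
        con.close()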

  16. Methods for 17β-oestradiol administration to rats.

    PubMed

    Isaksson, Ida-Maria; Theodorsson, Annette; Theodorsson, Elvar; Strom, Jakob O

    2011-11-01

    Several studies indicate that the beneficial or harmful effects of oestrogens in stroke are dose-dependent. Rats are amongst the most frequently used animals in these studies, which calls for thoroughly validated methods for administering 17β-oestradiol to rats. In an earlier study we characterised three different administration methods for 17β-oestradiol over 42 days. The present study assesses the concentrations in a short time perspective, with the addition of a novel peroral method. Female Sprague-Dawley rats were ovariectomised and administered 17β-oestradiol by subcutaneous injections, silastic capsules, pellets and orally (in the nut-cream Nutella(®)), respectively. One group received 17β-oestradiol by silastic capsules without previous washout time. Blood samples were obtained after 30 minutes, 1, 2, 4, 8, 12, 24, 48 and 168 hours and serum 17β-oestradiol (and oestrone sulphate in some samples) was subsequently analysed. For long-term characterisation, one group treated perorally was blood sampled after 2, 7, 14, 21, 28, 35 and 42 days. At sacrifice, uterine horns were weighed and subcutaneous tissue samples were taken for histological assessment. The pellets, silastic capsule and injection groups produced serum 17β-oestradiol concentrations that were initially several orders of magnitude higher than physiological levels, while the peroral groups had 17β-oestradiol levels that were within the physiological range during the entire experiment. The peroral method is a promising option for administering 17β-oestradiol if physiological levels or similarity to women's oral hormone therapy are desired. Uterine weights were found to be a very crude measure of oestrogen exposure. PMID:21834617

  17. Perioperative use of anti-rheumatic agents does not increase early postoperative infection risks: a Veteran Affairs' administrative database study.

    PubMed

    Abou Zahr, Zaki; Spiegelman, Andrew; Cantu, Maria; Ng, Bernard

    2015-02-01

    The aim of this study was to validate a novel technique that predicts stopping of disease-modifying anti-rheumatic drugs (DMARDs) and biologic agents (BA) from the Veterans Affairs (VA) database and compare infection risks of rheumatoid arthritis patients who stopped versus continued DMARDs/BA perioperatively. We identified 6,024 patients on 1 DMARD or BA in the perioperative period between 1999 and 2009. Time gap between medication stop date and the next start date predicted drug stoppage (X). Time gap between surgery date and stop date predicted whether stoppage was before surgery (Y). Chart review from Houston VA was used for validation. ROC analyses were performed on chart review data to obtain X and Y cutoffs. The primary endpoints were wound infections and other infections within 30 days. ROC analyses found X ≥ 33 (AUC = 0.954) and Y ≥ -11 (AUC = 0.846). Risk of postoperative infections was not different when stopping and continuing DMARDs/BA preoperatively. Stopping BA after surgery was associated with higher odds of postoperative wound (OR 14.15, 95 % CI 1.76-113.76) and general infection (OR 9.2, 95 % CI 1.99-42.60) compared to not stopping. Stopping DMARDs after surgery was associated with increased risk of postoperative general infection (OR 1.84, 95 % CI 1.07-3.16) compared with not stopping. There was positive association between stopping DMARDs after surgery and postoperative wound infection but failed to achieve statistical significance (OR 1.67, 95 % CI 0.96-2.91). There was no significant difference in postoperative infection risk when stopping or continuing DMARD/BA. Our new validated method can be utilized in the VA and other databases to predict drug stoppage. PMID:25187198

  18. Retrosigmoid Versus Translabyrinthine Approach for Acoustic Neuroma Resection: An Assessment of Complications and Payments in a Longitudinal Administrative Database

    PubMed Central

    Cole, Tyler; Veeravagu, Anand; Zhang, Michael; Swinney, Christian; Li, Gordon H; Ratliff, John K; Giannotta, Steven L

    2015-01-01

    Object Retrosigmoid (RS) and translabyrinthine (TL) surgery remain essential treatment approaches for symptomatic or enlarging acoustic neuromas (ANs). We compared nationwide complication rates and payments, independent of tumor characteristics, for these two strategies. Methods We identified 346 and 130 patients who underwent RS and TL approaches, respectively, for AN resection in the 2010-2012 MarketScan database, which characterizes primarily privately-insured patients from multiple institutions nationwide. Results Although we found no difference in 30-day general neurological or neurosurgical complication rates, in TL procedures there was a decreased risk for postoperative cranial nerve (CN) VII injury (20.2% vs 10.0%, CI 0.23–0.82), dysphagia (10.4% vs 3.1%, CI 0.10–0.78), and dysrhythmia (8.4% vs 2.3%, CI 0.08–0.86). Overall, there was no difference in surgical repair rates of CSF leak; however, intraoperative fat grafting was significantly higher in TL approaches (19.8% vs 60.2%, CI 3.95–9.43). In patients receiving grafts, there was a trend towards a higher repair rate after RS approach, while in those without grafts, there was a trend towards a higher repair rate after TL approach. Median total payments were $16,856 higher after RS approaches ($67,774 vs $50,918, p < 0.0001), without differences in physician or 90-day postoperative payments. Conclusions  Using a nationwide longitudinal database, we observed that the TL, compared to RS, approach for AN resection experienced lower risks of CN VII injury, dysphagia, and dysrhythmia. There was no significant difference in CSF leak repair rates. The payments for RS procedures exceed payments for TL procedures by approximately $17,000. Data from additional years and non-private sources will further clarify these trends. PMID:26623224

  19. Independent Identification Method applied to EDMOND and SonotaCo databases

    NASA Astrophysics Data System (ADS)

    Rudawska, R.; Matlovic, P.; Toth, J.; Kornos, L.; Hajdukova, M.

    2015-10-01

    In recent years, networks of low-light-level video cameras have contributed many new meteoroid orbits. As a result of cooperation and data sharing among national networks and International Meteor Organization Video Meteor Database (IMO VMDB), European Video Meteor Network Database (EDMOND; [2, 3]) has been created. Its current version contains 145 830 orbits collected from 2001 to 2014. Another productive camera network has been that of the Japanese SonotaCo consortium [5], which at present made available 168 030 meteoroid orbits collected from 2007 to 2013. In our survey we used EDMOND database with SonotaCo database together, in order to identify existing meteor showers in both databases (Figure 1 and 2). For this purpose we applied recently intoduced independed identification method [4]. In the first step of the survey we used criterion based on orbital parameters (e, q, i, !, and) to find groups around each meteor within the similarity threshold. Mean parameters of the groups were calculated usingWelch method [6], and compared using a new function based on geocentric parameters (#, #, #, and Vg). Similar groups were merged into final clusters (representing meteor showers), and compared with the IAU Meteor Data Center list of meteor showers [1]. This poster presents the results obtained by the proposed methodology.

  20. Quantitative Methods for Administrative Decision Making in Junior Colleges.

    ERIC Educational Resources Information Center

    Gold, Benjamin Knox

    With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…

  1. GMOMETHODS: the European Union database of reference methods for GMO analysis.

    PubMed

    Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre

    2012-01-01

    In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with an European Union legislative act. The web application provides search capabilities to retrieve primers and probes sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis. PMID:23451388

  2. Development of a Publicly Available, Comprehensive Database of Fiber and Health Outcomes: Rationale and Methods

    PubMed Central

    Livingston, Kara A.; Chung, Mei; Sawicki, Caleigh M.; Lyle, Barbara J.; Wang, Ding Ding; Roberts, Susan B.; McKeown, Nicola M.

    2016-01-01

    Background Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criteria that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Objectives (1) Develop a database listing studies testing fiber and physiological health outcomes identified by experts at the Ninth Vahouny Conference; (2) Use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers to evaluate evidence linking specific fibers with physiological health outcomes, and identify missing information. Methods To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications then were reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. Conclusions A fiber database was created. This resource will reduce the unnecessary replication of effort in conducting systematic reviews by serving as both a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which this data can be extracted and updated. PMID:27348733

  3. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    SciTech Connect

    Bennett, Joseph W.; Rabe, Karin M.

    2012-11-15

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb{sub 1/2}Mn{sub 1/2})O{sub 3} as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb{sub 2}O{sub 4}; and (3) ferroelectric semiconductors with formula M{sub 2}P{sub 2}(S,Se){sub 6}. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties. - Graphical abstract: Integration of first-principles methods with crystallographic database mining, for the discovery and design of novel ferroelectric materials, could potentially lead to new classes of multifunctional materials. Highlights: Black-Right-Pointing-Pointer Integration of first-principles methods and database mining. Black-Right-Pointing-Pointer Minor structural families with desirable functional properties. Black-Right-Pointing-Pointer Survey of polar entries in the Inorganic Crystal Structural Database.

  4. Development of a Cast Iron Fatigue Properties Database for use with Modern Design Methods

    SciTech Connect

    DeLa'O, James, D.; Gundlach, Richard, B.; Tartaglia, John, M.

    2003-09-18

    A reliable and comprehensive database of design properties for cast iron is key to full and efficient utilization of this versatile family of high production-volume engineering materials. A database of strain-life fatigue properties and supporting data for a wide range of structural cast irons representing industry standard quality was developed in this program. The database primarily covers ASTM/SAE standard structural grades of ADI, CGI, ductile iron and gray iron as well as an austempered gray iron. Twenty-two carefully chosen materials provided by commercial foundries were tested and fifteen additional datasets were contributed by private industry. The test materials are principally distinguished on the basis of grade designation; most grades were tested in a 25 mm section size and in a single material condition common for the particular grade. Selected grades were tested in multiple sections-sizes and/or material conditions to delineate the properties associated with a range of materials for the given grade. The cyclic properties are presented in terms of the conventional strain-life formalism (e.g., SAE J1099). Additionally, cyclic properties for gray iron and CGI are presented in terms of the Downing Model, which was specifically developed to treat the unique stress-strain response associated with gray iron (and to a lesser extent with CGI). The test materials were fully characterized in terms of alloy composition, microstructure and monotonic properties. The CDROM database presents the data in various levels of detail including property summaries for each material, detailed data analyses for each specimen and raw monotonic and cyclic stress-strain data. The CDROM database has been published by the American Foundry Society (AFS) as an AFS Research Publication entitled ''Development of a Cast Iron Fatigue Properties Database for Use in Modern Design Methods'' (ISDN 0-87433-267-2).

  5. Association of Opioids and Sedatives with Increased Risk of In-Hospital Cardiopulmonary Arrest from an Administrative Database

    PubMed Central

    Overdyk, Frank J.; Dowling, Oonagh; Marino, Joseph; Qiu, Jiejing; Chien, Hung-Lun; Erslon, Mary; Morrison, Neil; Harrison, Brooke; Dahan, Albert; Gan, Tong J.

    2016-01-01

    Background While opioid use confers a known risk for respiratory depression, the incremental risk of in-hospital cardiopulmonary arrest, respiratory arrest, or cardiopulmonary resuscitation (CPRA) has not been studied. Our aim was to investigate the prevalence, outcomes, and risk profile of in-hospital CPRA for patients receiving opioids and medications with central nervous system sedating side effects (sedatives). Methods A retrospective analysis of adult inpatient discharges from 2008–2012 reported in the Premier Database. Patients were grouped into four mutually exclusive categories: (1) opioids and sedatives, (2) opioids only, (3) sedatives only, and (4) neither opioids nor sedatives. Results Among 21,276,691 inpatient discharges, 53% received opioids with or without sedatives. A total of 96,554 patients suffered CPRA (0.92 per 1000 hospital bed-days). Patients who received opioids and sedatives had an adjusted odds ratio for CPRA of 3.47 (95% CI: 3.40–3.54; p<0.0001) compared with patients not receiving opioids or sedatives. Opioids alone and sedatives alone were associated with a 1.81-fold and a 1.82-fold (p<0.0001 for both) increase in the odds of CPRA, respectively. In opioid patients, locations of CPRA were intensive care (54%), general care floor (25%), and stepdown units (15%). Only 42% of patients survived CPRA and only 22% were discharged home. Opioid patients with CPRA had mean increased hospital lengths of stay of 7.57 days and mean increased total hospital costs of $27,569. Conclusions Opioids and sedatives are independent and additive risk factors for in-hospital CPRA. The impact of opioid sparing analgesia, reduced sedative use, and better monitoring on CPRA incidence deserves further study. PMID:26913753

  6. Prevalence and Costs of Multimorbidity by Deprivation Levels in the Basque Country: A Population Based Study Using Health Administrative Databases

    PubMed Central

    Orueta, Juan F.; García-Álvarez, Arturo; García-Goñi, Manuel; Paolucci, Francesco; Nuño-Solinís, Roberto

    2014-01-01

    Background Multimorbidity is a major challenge for healthcare systems. However, currently, its magnitude and impact in healthcare expenditures is still mostly unknown. Objective To present an overview of the prevalence and costs of multimorbidity by socioeconomic levels in the whole Basque population. Methods We develop a cross-sectional analysis that includes all the inhabitants of the Basque Country (N = 2,262,698). We utilize data from primary health care electronic medical records, hospital admissions, and outpatient care databases, corresponding to a 4 year period. Multimorbidity was defined as the presence of two or more chronic diseases out of a list of 52 of the most important and common chronic conditions given in the literature. We also use socioeconomic and demographic variables such as age, sex, individual healthcare cost, and deprivation level. Predicted adjusted costs were obtained by log-gamma regression models. Results Multimorbidity of chronic diseases was found among 23.61% of the total Basque population and among 66.13% of those older than 65 years. Multimorbid patients account for 63.55% of total healthcare expenditures. Prevalence of multimorbidity is higher in the most deprived areas for all age and sex groups. The annual cost of healthcare per patient generated for any chronic disease depends on the number of coexisting comorbidities, and varies from 637 € for the first pathology in average to 1,657 € for the ninth one. Conclusion Multimorbidity is very common for the Basque population and its prevalence rises in age, and unfavourable socioeconomic environment. The costs of care for chronic patients with several conditions cannot be described as the sum of their individual pathologies in average. They usually increase dramatically according to the number of comorbidities. Given the ageing population, multimorbidity and its consequences should be taken into account in healthcare policy, the organization of care and medical research

  7. Ecological Methods in the Study of Administrative Behavior.

    ERIC Educational Resources Information Center

    Scott, Myrtle; Eklund, Susan J.

    Qualitative/naturalistic inquiry intends to discover whatever naturally occurring order exists rather than to test various theories or conceptual frameworks held by the investigator. Naturalistic, ecological data are urgently needed concerning the behavior of educational administrators. Such data can considerably change the knowledge base of the…

  8. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must...

  9. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must...

  10. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must...

  11. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must...

  12. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must...

  13. Method using a Monte Carlo simulation database for optimizing x-ray fluoroscopic conditions

    NASA Astrophysics Data System (ADS)

    Ueki, Hironori; Okajima, Kenichi

    2001-06-01

    To improve image quality (IQ) and reduce dose in x-ray fluoroscopy, we have developed a new method for optimizing x-ray conditions such as x-ray tube voltage, tube current, and gain of the detector. This method uses a Monte Carlo (MC)-simulation database for analyzing the relations between IQ, x-ray dose, and x-ray conditions. The optimization consists of three steps. First, a permissible dose limit for each object thickness is preset. Then, the MC database is used to calculate the IQ of x-ray projections under all the available conditions that satisfy this presetting. Finally, the optimum conditions are determined as the ones that provide the highest IQ. The MC database contains projections of an estimation phantom simulated under emissions of single-energy photons with various energies. By composing these single-energy projections according to the bremsstrahlung energy distributions, the IQs under any x-ray conditions can be calculated in a very short time. These calculations show that the optimum conditions are determined by the relation between quantum noise and scattering. Moreover, the heat-capacity limit of the x-ray tube can also determine the optimum conditions. It is concluded that the developed optimization method can reduce the time and cost of designing x-ray fluoroscopic systems.

  14. Basti: Does the equipment and method of administration matter?

    PubMed Central

    Gundeti, Manohar S.; Raut, Ashwinikumar A.; Kamat, Nitin M.

    2013-01-01

    Basti is one of the five procedures of panchakarma in Ayurveda. Classically, it is advocated in the diseases of vata. It is mainly of two types viz. asthapana and anuvasana. According to the classical texts basti administration is done with the help of animal bladder (bastiputaka) and specially prepared metal/wooden nozzle/catheter (bastinetra), the whole assembly is called as bastiyantra. Nowadays, except in some of the Vaidya traditions in Kerala, basti administration is often done using enema-can or douche-set. In the aforesaid classical procedure active pressure is expected to be given on the bastiputaka whereas, in conventionally used enema-can only passive or gravitational force plays a role. This is important in the context of ‘basti danakala or pidanakala i.e. time for basti administration′. PMID:23741155

  15. The Institute of Public Administration's Document Center: From Paper to Electronic Records--A Full Image Government Documents Database.

    ERIC Educational Resources Information Center

    Al-Zahrani, Rashed S.

    Since its establishment in 1960, the Institute of Public Administration (IPA) in Riyadh, Saudi Arabia has had responsibility for documenting Saudi administrative literature, the official publications of Saudi Arabia, and the literature of regional and international organizations through establishment of the Document Center in 1961. This paper…

  16. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

    There are several geochemical Databases in INTERNET available now. There one of the main peculiarities of stored geochemical information is geographical coordinates of each samples in those Databases. As rule the software of this Database use spatial information only for users interface search procedures. In the other side, GIS-software (Geographical Information System software),for example ARC/INFO software which using for creation and analyzing special geological, geochemical and geophysical e-map, have been deeply involved with geographical coordinates for of samples. We join peculiarities GIS systems and relational geochemical Database from special software. Our geochemical information system created in Vernadsky Geological State Museum and institute of Geochemistry and Analytical Chemistry from Moscow. Now we tested system with data of geochemistry oceanic rock from Atlantic and Pacific oceans, about 10000 chemical analysis. GIS information content consist from e-map covers Wold Globes. Parts of these maps are Atlantic ocean covers gravica map (with grid 2''), oceanic bottom hot stream, altimeteric maps, seismic activity, tectonic map and geological map. Combination of this information content makes possible created new geochemical maps and combination of spatial analysis and numerical geochemical modeling of volcanic process in ocean segment. Now we tested information system on thick client technology. Interface between GIS system Arc/View and Database resides in special multiply SQL-queries sequence. The result of the above gueries were simple DBF-file with geographical coordinates. This file act at the instant of creation geochemical and other special e-map from oceanic region. We used more complex method for geophysical data. From ARC\\View we created grid cover for polygon spatial geophysical information.

  17. DOE/MSU composite material fatigue database: Test methods, materials, and analysis

    SciTech Connect

    Mandell, J.F.; Samborsky, D.D.

    1997-12-01

    This report presents a detailed analysis of the results from fatigue studies of wind turbine blade composite materials carried out at Montana State University (MSU) over the last seven years. It is intended to be used in conjunction with the DOE/MSU composite Materials Fatigue Database. The fatigue testing of composite materials requires the adaptation of standard test methods to the particular composite structure of concern. The stranded fabric E-glass reinforcement used by many blade manufacturers has required the development of several test modifications to obtain valid test data for materials with particular reinforcement details, over the required range of tensile and compressive loadings. Additionally, a novel testing approach to high frequency (100 Hz) testing for high cycle fatigue using minicoupons has been developed and validated. The database for standard coupon tests now includes over 4,100 data points for over 110 materials systems. The report analyzes the database for trends and transitions in static and fatigue behavior with various materials parameters. Parameters explored are reinforcement fabric architecture, fiber content, content of fibers oriented in the load direction, matrix material, and loading parameters (tension, compression, and reversed loading). Significant transitions from good fatigue resistance to poor fatigue resistance are evident in the range of materials currently used in many blades. A preliminary evaluation of knockdowns for selected structural details is also presented. The high frequency database provides a significant set of data for various loading conditions in the longitudinal and transverse directions of unidirectional composites out to 10{sup 8} cycles. The results are expressed in stress and strain based Goodman Diagrams suitable for design. A discussion is provided to guide the user of the database in its application to blade design.

  18. Persistence with weekly and monthly bisphosphonates among postmenopausal women: analysis of a US pharmacy claims administrative database

    PubMed Central

    Fan, Tao; Zhang, Qiaoyi; Sen, Shuvayu S

    2013-01-01

    Background Bisphosphonates are available in daily, weekly, and monthly dosing formulations to treat postmenopausal osteoporosis. Some researchers suggested that adherence to monthly bisphosphonate might be different from that with weekly or daily bisphosphonate because of different dosing regimens. However, the actual persistency rates in regular practice settings are unknown. Objectives To compare persistence rates with alendronate 70 mg once weekly (AOW), risedronate 35 mg once weekly (ROW), and ibandronate 150 mg once monthly (IOM) in a US pharmacy claims database. Methods In this retrospective cohort study, pharmacy claims data of patients with new bisphosphonate prescriptions were extracted for women aged ≥ 50 years who had an AOW, ROW, or IOM prescription (index prescription) between December 30, 2004 and May 31, 2005 (the index period) and did not have the index Rx during the previous 12 months. Patients’ records were reviewed for at least 5 months from their index date to November 2, 2005 (the follow-up period). Patients were considered persistent if they neither discontinued (failed to refill the index Rx within a 45-day period following the last supply day of the previous dispensing) nor switched (changed to another bisphosphonate) during the follow-up period. Medication-possession ratio was defined as days with index prescription supplies/total days of follow-up. Results Among 44,635 patients, 25,207 (56.5%) received prescriptions of AOW, 18,689 (41.9%) ROW, and 739 (1.7%) IOM as the index prescription. In all, 35.1% of AOW patients, 32.5% of ROW patients, and 30.4% of IOM patients (P < 0.0001 AOW vs ROW or IOM) had persisted with their initial therapy, whereas 64.0% of AOW, 66.4% of ROW, and 68.2% of IOM patients discontinued (P < 0.0001) during follow-up. The medication-possession ratio (days with index prescription supplies/total days of follow-up) was significantly higher for AOW (0.55) compared with ROW (0.52) and IOM (0.51, P < 0.05). By

  19. 45 CFR 235.50 - State plan requirements for methods of personnel administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false State plan requirements for methods of personnel administration. 235.50 Section 235.50 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATION...

  20. 45 CFR 235.50 - State plan requirements for methods of personnel administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 2 2011-10-01 2011-10-01 false State plan requirements for methods of personnel administration. 235.50 Section 235.50 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATION...

  1. Databases for solar energetic particle models: methodical uncertainties and technical errors

    NASA Astrophysics Data System (ADS)

    Mottl, D.; Nymmik, R.

    Quite a number of models have been developed recently to describe the solar energetic proton fluxes, which constitute a very important source of radiation hazard in the space. The models make use of not only different methods, but also different experimental databases. Therefore, the credibility of the models is essentially defined by not only the adequacy of the preferred methods, but also the reliability of the databases. The results are reported of the comparative analysis of the IMP-8, GOES-6, GOES-7, and METEOR-measured proton flux databases of cycle 22. The comparative analysis results are used together with the data on the compatibility of the measured energy spectra with the ground-based neutron monitor measurements. Significant methodological errors have been found in the particle fluxes measured with different instruments. The systematic technical errors in separate energy channels reach factor 5 (the 15-44 MeV channel of GOES-6). The reliability of the corrections introduced into the measured fluxes, as well as of the techniques for scaling the measured differential fluxes to the calculated integral fluxes, is discussed. The GOES -7 - measured (uncorrected) differential fluxes and the METEOR-measured integral fluxes are shown to be most reliable.

  2. Discovery of novel mesangial cell proliferation inhibitors using a three-dimensional database searching method.

    PubMed

    Kurogi, Y; Miyata, K; Okamura, T; Hashimoto, K; Tsutsumi, K; Nasu, M; Moriyasu, M

    2001-07-01

    A three-dimensional pharmacophore model of mesangial cell (MC) proliferation inhibitors was generated from a training set of 4-(diethoxyphosphoryl)methyl-N-(3-phenyl-[1,2,4]thiadiazol-5-yl)benzamide, 2, and its derivatives using the Catalyst/HIPHOP software program. On the basis of the in vitro MC proliferation inhibitory activity, a pharmacophore model was generated as seven features consisting of two hydrophobic regions, two hydrophobic aromatic regions, and three hydrogen bond acceptors. Using this model as a three-dimensional query to search the Maybridge database, structurally novel 41 compounds were identified. The evaluation of MC proliferation inhibitory activity using available samples from the 41 identified compounds exhibited over 50% inhibitory activity at the 100 nM range. Interestingly, the newly identified compounds by the 3D database searching method exhibited the reduced inhibition of normal proximal tubular epithelial cell proliferation compared to a training set of compounds. PMID:11428924

  3. Use of Fibrates Monotherapy in People with Diabetes and High Cardiovascular Risk in Primary Care: A French Nationwide Cohort Study Based on National Administrative Databases

    PubMed Central

    Roussel, Ronan; Chaignot, Christophe; Weill, Alain; Travert, Florence; Hansel, Boris; Marre, Michel; Ricordeau, Philippe; Alla, François; Allemand, Hubert

    2015-01-01

    Background and Aim According to guidelines, diabetic patients with high cardiovascular risk should receive a statin. Despite this consensus, fibrate monotherapy is commonly used in this population. We assessed the frequency and clinical consequences of the use of fibrates for primary prevention in patients with diabetes and high cardiovascular risk. Design Retrospective cohort study based on nationwide data from the medical and administrative databases of French national health insurance systems (07/01/08-12/31/09) with a follow-up of up to 30 months. Methods Lipid-lowering drug-naive diabetic patients initiating fibrate or statin monotherapy were identified. Patients at high cardiovascular risk were then selected: patients with a diagnosis of diabetes and hypertension, and >50 (men) or 60 (women), but with no history of cardiovascular events. The composite endpoint comprised myocardial infarction, stroke, amputation, or death. Results Of the 31,652 patients enrolled, 4,058 (12.8%) received a fibrate. Age- and gender-adjusted annual event rates were 2.42% (fibrates) and 2.21% (statins). The proportionality assumption required for the Cox model was not met for the fibrate/statin variable. A multivariate model including all predictors was therefore calculated by dividing data into two time periods, allowing Hazard Ratios to be calculated before (HR<540) and after 540 days (HR>540) of follow-up. Multivariate analyses showed that fibrates were associated with an increased risk for the endpoint after 540 days: HR<540 = 0.95 (95% CI: 0.78–1.16) and HR>540 = 1.73 (1.28–2.32). Conclusion Fibrate monotherapy is commonly prescribed in diabetic patients with high cardiovascular risk and is associated with poorer outcomes compared to statin therapy. PMID:26398765

  4. Development of Database Assisted Structure Identification (DASI) Methods for Nontargeted Metabolomics.

    PubMed

    Menikarachchi, Lochana C; Dubey, Ritvik; Hill, Dennis W; Brush, Daniel N; Grant, David F

    2016-01-01

    Metabolite structure identification remains a significant challenge in nontargeted metabolomics research. One commonly used strategy relies on searching biochemical databases using exact mass. However, this approach fails when the database does not contain the unknown metabolite (i.e., for unknown-unknowns). For these cases, constrained structure generation with combinatorial structure generators provides a potential option. Here we evaluated structure generation constraints based on the specification of: (1) substructures required (i.e., seed structures); (2) substructures not allowed; and (3) filters to remove incorrect structures. Our approach (database assisted structure identification, DASI) used predictive models in MolFind to find candidate structures with chemical and physical properties similar to the unknown. These candidates were then used for seed structure generation using eight different structure generation algorithms. One algorithm was able to generate correct seed structures for 21/39 test compounds. Eleven of these seed structures were large enough to constrain the combinatorial structure generator to fewer than 100,000 structures. In 35/39 cases, at least one algorithm was able to generate a correct seed structure. The DASI method has several limitations and will require further experimental validation and optimization. At present, it seems most useful for identifying the structure of unknown-unknowns with molecular weights <200 Da. PMID:27258318

  5. Development of Database Assisted Structure Identification (DASI) Methods for Nontargeted Metabolomics

    PubMed Central

    Menikarachchi, Lochana C.; Dubey, Ritvik; Hill, Dennis W.; Brush, Daniel N.; Grant, David F.

    2016-01-01

    Metabolite structure identification remains a significant challenge in nontargeted metabolomics research. One commonly used strategy relies on searching biochemical databases using exact mass. However, this approach fails when the database does not contain the unknown metabolite (i.e., for unknown-unknowns). For these cases, constrained structure generation with combinatorial structure generators provides a potential option. Here we evaluated structure generation constraints based on the specification of: (1) substructures required (i.e., seed structures); (2) substructures not allowed; and (3) filters to remove incorrect structures. Our approach (database assisted structure identification, DASI) used predictive models in MolFind to find candidate structures with chemical and physical properties similar to the unknown. These candidates were then used for seed structure generation using eight different structure generation algorithms. One algorithm was able to generate correct seed structures for 21/39 test compounds. Eleven of these seed structures were large enough to constrain the combinatorial structure generator to fewer than 100,000 structures. In 35/39 cases, at least one algorithm was able to generate a correct seed structure. The DASI method has several limitations and will require further experimental validation and optimization. At present, it seems most useful for identifying the structure of unknown-unknowns with molecular weights <200 Da. PMID:27258318

  6. Hospitalizations of Infants and Young Children with Down Syndrome: Evidence from Inpatient Person-Records from a Statewide Administrative Database

    ERIC Educational Resources Information Center

    So, S. A.; Urbano, R. C.; Hodapp, R. M.

    2007-01-01

    Background: Although individuals with Down syndrome are increasingly living into the adult years, infants and young children with the syndrome continue to be at increased risk for health problems. Using linked, statewide administrative hospital discharge records of all infants with Down syndrome born over a 3-year period, this study "follows…

  7. A Method to Calculate and Analyze Residents' Evaluations by Using a Microcomputer Data-Base Management System.

    ERIC Educational Resources Information Center

    Mills, Myron L.

    1988-01-01

    A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)

  8. New method for administration of hydrochloric acid in metabolic alkalosis.

    PubMed

    Knutsen, O H

    1983-04-30

    In a new method for peripheral intravenous infusion of hydrochloric acid the HCl is buffered in an aminoacid solution and infused with a fat emulsion. The aminoacids and the fat emulsions are stable in the presence of HCl, and the transfusion set is resistant to the chemical actin of 0.15 mol/l HCl. Two case-reports show that HCl can be administered safely through a peripheral vein. PMID:6132269

  9. Network II Database

    1994-11-07

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network II Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Rail Administration (FRA) rail database.

  10. Developing a heme iron database for meats according to meat type, cooking method and doneness level

    PubMed Central

    Cross, Amanda J.; Harnly, James M.; Ferrucci, Leah M.; Risch, Adam; Mayne, Susan T.; Sinha, Rashmi

    2012-01-01

    Background Animal studies have demonstrated that iron may be related to carcinogenesis, and human studies found that heme iron can increase the formation of N-nitroso compounds, which are known carcinogens. Objectives One of the postulated mechanisms linking red meat intake to cancer risk involves iron. Epidemiologic studies attempt to investigate the association between heme iron intake and cancer by applying a standard factor to total iron from meat. However, laboratory studies suggest that heme iron levels in meat vary according to cooking method and doneness level. We measured heme iron in meats cooked by different cooking methods to a range of doneness levels to use in conjunction with a food frequency questionnaire to estimate heme iron intake. Methods Composite meat samples were made to represent each meat type, cooking method and doneness level. Heme iron was measured using atomic absorption spectrometry and inductively coupled plasma-atomic emission spectrometry. Results Steak and hamburgers contained the highest levels of heme iron, pork and chicken thigh meat had slightly lower levels, and chicken breast meat had the lowest. Conclusions Although heme iron levels varied, there was no clear effect of cooking method or doneness level. We outline the methods used to create a heme iron database to be used in conjunction with food frequency questionnaires to estimate heme iron intake in relation to disease outcome. PMID:23459329

  11. Supervised method to build an atlas database for multi-atlas segmentation-propagation

    NASA Astrophysics Data System (ADS)

    Shen, Kaikai; Bourgeat, Pierrick; Fripp, Jurgen; Mériaudeau, Fabrice; Ames, David; Ellis, Kathryn A.; Masters, Colin L.; Villemagne, Victor L.; Rowe, Christopher C.; Salvado, Olivier

    2010-03-01

    Multiatlas based segmentation-propagation approaches have been shown to obtain accurate parcelation of brain structures. However, this approach requires a large number of manually delineated atlases, which are often not available. We propose a supervised method to build a population specific atlas database, using the publicly available Internet Brain Segmentation Repository (IBSR). The set of atlases grows iteratively as new atlases are added, so that its segmentation capability may be enhanced in the multiatlas based approach. Using a dataset of 210 MR images of elderly subjects (170 elderly control, 40 Alzheimer's disease) from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study, 40 MR images were segmented to build a population specific atlas database for the purpose of multiatlas segmentation-propagation. The population specific atlases were used to segment the elderly population of 210 MR images, and were evaluated in terms of the agreement among the propagated labels. The agreement was measured by using the entropy H of the probability image produced when fused by voting rule and the partial moment μ2 of the histogram. Compared with using IBSR atlases, the population specific atlases obtained a higher agreement when dealing with images of elderly subjects.

  12. A Privacy-Preserved Analytical Method for eHealth Database with Minimized Information Loss

    PubMed Central

    Chen, Ya-Ling; Cheng, Bo-Chao; Chen, Hsueh-Lin; Lin, Chia-I; Liao, Guo-Tan; Hou, Bo-Yu; Hsu, Shih-Chun

    2012-01-01

    Digitizing medical information is an emerging trend that employs information and communication technology (ICT) to manage health records, diagnostic reports, and other medical data more effectively, in order to improve the overall quality of medical services. However, medical information is highly confidential and involves private information, even legitimate access to data raises privacy concerns. Medical records provide health information on an as-needed basis for diagnosis and treatment, and the information is also important for medical research and other health management applications. Traditional privacy risk management systems have focused on reducing reidentification risk, and they do not consider information loss. In addition, such systems cannot identify and isolate data that carries high risk of privacy violations. This paper proposes the Hiatus Tailor (HT) system, which ensures low re-identification risk for medical records, while providing more authenticated information to database users and identifying high-risk data in the database for better system management. The experimental results demonstrate that the HT system achieves much lower information loss than traditional risk management methods, with the same risk of re-identification. PMID:22969273

  13. Database Manager

    ERIC Educational Resources Information Center

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  14. A New Method for Identification of Moving Groups in the HIPPARCOS Database

    NASA Astrophysics Data System (ADS)

    Aguilar, L. A.; Hoogerwerf, R.

    1998-09-01

    A new method to identify moving groups in proper motion catalogues is introduced. It requires parallax and proper motion information. No radial velocity data is needed, thus it is well suited for the Hipparcos database. This method uses all the available information to constrain the locations of stars in velocity space, and then searches for statistically significative groupings of the constrained star locations. As the search is done in velocity space, groups need not be constrained in position on the sky to be identified. Monte Carlo experiments are used to gauge the success of this method for synthetic groups at various distances and kinematics. The method is then used on the Hyades; good agreement in the member list and a difference of only 0.3 km/s in center of mass velocity, is found with the work of work of Perryman et al. (1998), which included radial velocity information besides Hipparcos proper motions. Bibliography: Perryman, M.A., Brown, A.G., Lebreton, Y., Gomez, A., Turon, C., Cayrel de Strobel, G., Mermilliod, J., Robichon, N., Kovalevsky, J., Crifo, F. (1998), A.&A., submitted.

  15. Assessment of methods for creating a national building statistics database for atmospheric dispersion modeling

    SciTech Connect

    Velugubantla, S. P.; Burian, S. J.; Brown, M. J.; McKinnon, A. T.; McPherson, T. N.; Han, W. S.

    2004-01-01

    Mesoscale meteorological codes and transport and dispersion models are increasingly being applied in urban areas. Representing urban terrain characteristics in these models is critical for accurate predictions of air flow, heating and cooling, and airborne contaminant concentrations in cities. A key component of urban terrain characterization is the description of building morphology (e.g., height, plan area, frontal area) and derived properties (e.g., roughness length). Methods to determine building morphological statistics range from manual field surveys to automated processing of digital building databases. In order to improve the quality and consistency of mesoscale meteorological and atmospheric dispersion modeling, a national dataset of building morphological statistics is needed. Currently, due to the expense and logistics of conducting detailed field surveys, building statistics have been derived for only small sections of a few cities. In most other cities, modeling projects rely on building statistics estimated using intuition and best guesses. There has been increasing emphasis in recent years to derive building statistics using digital building data or other data sources as a proxy for those data. Although there is a current expansion in public and private sector development of digital building data, at present there is insufficient data to derive a national building statistics database using automated analysis tools. Too many cities lack digital data on building footprints and heights and many of the cities having such data do so for only small areas. Due to the lack of sufficient digital building data, other datasets are used to estimate building statistics. Land use often serves as means to provide building statistics for a model domain, but the strength and consistency of the relationship between land use and building morphology is largely uncertain. In this paper, we investigate whether building statistics can be correlated to the underlying land

  16. Methods and apparatus for constructing and implementing a universal extension module for processing objects in a database

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)

    2004-01-01

    Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.

  17. Database Marketplace 2002: The Database Universe.

    ERIC Educational Resources Information Center

    Tenopir, Carol; Baker, Gayle; Robinson, William

    2002-01-01

    Reviews the database industry over the past year, including new companies and services, company closures, popular database formats, popular access methods, and changes in existing products and services. Lists 33 firms and their database services; 33 firms and their database products; and 61 company profiles. (LRW)

  18. First Toronto Conference on Database Users. User Performance, Research Methods, and Human Factors.

    ERIC Educational Resources Information Center

    Lee, Eric; And Others

    1987-01-01

    The first of three articles reports the effectiveness of a feature-matching approach to computer retrieval of graphic information. The second discusses the use of multivariate techniques to measure perceived complexity of database searching by users. The third explores the limitations of user consultation in database design. (Author/CLB)

  19. Global vs. Localized Search: A Comparison of Database Selection Methods in a Hierarchical Environment.

    ERIC Educational Resources Information Center

    Conrad, Jack G.; Claussen, Joanne Smestad; Yang, Changwen

    2002-01-01

    Compares standard global information retrieval searching with more localized techniques to address the database selection problem that users often have when searching for the most relevant database, based on experiences with the Westlaw Directory. Findings indicate that a browse plus search approach in a hierarchical environment produces the most…

  20. A Multi-Index Integrated Change Detection Method for Updating the National Land Cover Database

    NASA Astrophysics Data System (ADS)

    Jin, S.; Yang, L.; Xian, G. Z.; Danielson, P.; Homer, C.

    2010-12-01

    Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.

  1. A Multi-Index Integrated Change detection method for updating the National Land Cover Database

    USGS Publications Warehouse

    Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin

    2010-01-01

    Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.

  2. A method to add richness to the National Landslide Database of Great Britain

    NASA Astrophysics Data System (ADS)

    Taylor, Faith; Freeborough, Katy; Malamud, Bruce; Demeritt, David

    2014-05-01

    Landslides in Great Britain (GB) pose a risk to infrastructure, property and livelihoods. Our understanding of where landslide hazard and impact will be greatest is based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in GB by searching electronic archives of local and regional newspapers. In Great Britain, the British Geological Survey (BGS) are responsible for updating and maintaining records of GB landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of approximately 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. Here we aim to supplement the richness of the NLD by (i) identifying additional landslide events and (ii) adding more detail to existing database entries. This is done by systematically searching the LexisNexis digital archive of 568 local and regional newspapers published in the UK. The first step in the methodology was to construct Boolean search criteria that optimised the balance between minimising the number of irrelevant articles (e.g. "a landslide victory") and maximising those referring to landslide events. This keyword search was then applied to the LexisNexis archive of newspapers for all articles published between 1 January and 31 December 2012, resulting in 1,668 articles. These articles were assessed to determine whether they related to a landslide event. Of the 1,668 articles, approximately 30% (~700) referred to landslide events, with others referring to landslides more generally or themes unrelated to landslides. Examples of information obtained from newspaper articles included: date/time of landslide occurrence, spatial location, size, impact, landslide type and triggering mechanism, although the amount of detail and precision attainable from individual articles was variable. Of the 700 articles found for
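
    The Boolean filtering step can be sketched as below; the include and exclude terms are illustrative stand-ins for the study's actual LexisNexis search criteria, which the abstract does not enumerate.

      import re

      # Require a landslide term; exclude common figurative uses such as
      # election reporting ("a landslide victory"). Terms are illustrative.
      INCLUDE = re.compile(r'\b(landslide|landslip|mudslide|rockfall)\b', re.I)
      EXCLUDE = re.compile(r'\blandslide (victory|win|defeat|majority)\b', re.I)

      def is_candidate(article_text):
          return bool(INCLUDE.search(article_text)) and not EXCLUDE.search(article_text)

      articles = ["A landslide closed the A625 after heavy rain.",
                  "The MP won by a landslide victory on Thursday."]
      print([is_candidate(a) for a in articles])  # [True, False]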

  3. Efficient HPLC method development using structure-based database search, physico-chemical prediction and chromatographic simulation.

    PubMed

    Wang, Lin; Zheng, Jinjian; Gong, Xiaoyi; Hartman, Robert; Antonucci, Vincent

    2015-02-01

    Development of a robust HPLC method for pharmaceutical analysis can be very challenging and time-consuming. In our laboratory, we have developed a new workflow leveraging ACD/Labs software tools to improve the performance of HPLC method development. First, we established ACD-based analytical method databases that can be searched by chemical structure similarity. By taking advantage of the existing knowledge of HPLC methods archived in the databases, one can find a good starting point for HPLC method development, or even reuse an existing method as is for a new project. Second, we used the software to predict compound physicochemical properties before running actual experiments to help select appropriate method conditions for targeted screening experiments. Finally, after selecting stationary and mobile phases, we used modeling software to simulate chromatographic separations and optimize the temperature and gradient program. The optimized new method was then uploaded to internal databases as knowledge available to assist future method development efforts. Routine implementation of such standardized workflows has the potential to reduce the number of experiments required for method development and facilitate systematic and efficient development of faster, greener and more robust methods, leading to greater productivity. In this article, we used Loratadine method development as an example to demonstrate efficient method development using this new workflow. PMID:25481084

  4. Object "request" based clustering for method processing in object-oriented database system

    SciTech Connect

    Goel, S.; Bhargava, B.

    1996-12-31

    Static grouping (clustering) of component objects in a complex object at the server has been an active area of research in client/server based object oriented database systems. We present a client-driven object grouping approach. A client executing a method makes dynamic decisions and groups objects for a request to the server. The client requires run-time and statically analyzed information for the method to make its decisions. Complex object skeletons are used for navigating the complex object. We have conducted experimental studies to evaluate our approach. We have used a prototype object-oriented database system called O-Raid for our experiments.

  5. Clinical experimentation with aerosol antibiotics: current and future methods of administration

    PubMed Central

    Zarogoulidis, Paul; Kioumis, Ioannis; Porpodis, Konstantinos; Spyratos, Dionysios; Tsakiridis, Kosmas; Huang, Haidong; Li, Qiang; Turner, J Francis; Browning, Robert; Hohenforst-Schmidt, Wolfgang; Zarogoulidis, Konstantinos

    2013-01-01

    Currently almost all antibiotics are administered by the intravenous route. Since several systems and situations require more efficient methods of administration, investigation and experimentation in drug design have produced local treatment modalities. Administration of antibiotics in aerosol form is one of the treatment methods of increasing interest. As the field of drug nanotechnology grows, new molecules have been produced and combined with aerosol production systems. In the current review, we discuss the efficiency of aerosol antibiotic studies along with aerosol production systems. The different parts of the aerosol antibiotic methodology are presented. Additionally, information regarding the drug molecules used is presented and future applications of this method are discussed. PMID:24115836

  6. A comparison of the temporal expressiveness of three database query methods.

    PubMed Central

    Das, A. K.; Musen, M. A.

    1995-01-01

    Time is a multifaceted phenomenon that developers of clinical decision-support systems can model at various levels of complexity. An unresolved issue for the design of clinical databases is whether the underlying data model should support interval semantics. In this paper, we examine whether interval-based operations are required for querying protocol-based conditions. We report on an analysis of a set of 256 eligibility criteria that the T-HELPER system uses to screen patients for enrollment in eight clinical-trial protocols for HIV disease. We consider three data-manipulation methods for temporal querying: the consensus query representation Arden Syntax, the commercial standard query language SQL, and the temporal query language TimeLineSQL (TLSQL). We compare the ability of these three query methods to express the eligibility criteria. Seventy nine percent of the 256 criteria require operations on time stamps. These temporal conditions comprise four distinct patterns, two of which use interval-based data. Our analysis indicates that the Arden Syntax can query the two non-interval patterns, which represent 54% of the temporal conditions. Timepoint comparisons formulated in SQL can instantiate the two non-interval patterns and one interval pattern, which encompass 96% of the temporal conditions. TLSQL, which supports an interval-based model of time, can express all four types of temporal patterns. Our results demonstrate that the T-HELPER system requires simple temporal operations for most protocol-based queries. Of the three approaches tested, TLSQL is the only query method that is sufficiently expressive for the temporal conditions in this system. PMID:8563296
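
    The distinction between timepoint and interval patterns can be made concrete with a minimal sketch (plain Python predicates, not Arden Syntax, SQL, or TLSQL syntax): timepoint conditions need only an ordering on time stamps, while interval conditions need overlap tests.

      from datetime import date

      def before(t1, t2):
          # Timepoint comparison: expressible in Arden Syntax and plain SQL
          return t1 < t2

      def overlaps(a_start, a_end, b_start, b_end):
          # Interval comparison: needs interval semantics of the kind TLSQL supports
          return a_start <= b_end and b_start <= a_end

      # Eligibility-style example: did a drug-exposure interval overlap
      # the protocol screening window? (Dates are invented.)
      print(before(date(1995, 1, 10), date(1995, 2, 1)))          # True
      print(overlaps(date(1995, 1, 10), date(1995, 2, 1),
                     date(1995, 1, 20), date(1995, 3, 1)))        # True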

  7. Exploring the potential impact of rotavirus vaccination on work absenteeism among female administrative personnel of the City of Antwerp through a retrospective database analysis

    PubMed Central

    Standaert, Baudouin; Van de Mieroop, Els; Nelen, Vera

    2015-01-01

    Objectives Rotavirus vaccination has been reimbursed in Belgium since November 2006 with a high uptake (>85%). Economic analyses of the vaccine have been reported, including estimates of indirect cost gain related to the reduction in work absenteeism. The objective of this study was to evaluate the latter parameter using real-life data. Design and setting A simple model estimated the reduction in absent workdays per working mother with a firstborn baby after the introduction of the rotavirus vaccine. Next, data on work absences were retrospectively analysed (from 2003 to 2012) using a database of administrative employees (n=11 600 working women per year) in the City of Antwerp. Observed reductions in absenteeism after the introduction of the vaccine were compared with the results from the model. These reductions would most likely be observed during the epidemic periods of rotavirus (from January to the end of May) for short-duration absences of ≤5 days. We compared data from outside epidemic periods (from June to December), expecting no changes over time prevaccine and postvaccine introduction, as well as with a control group of women aged 30–35 years with no first child. Results Model estimates were 0.73 working days gained per working mother. In the database of the City of Antwerp, we identified a gain of 0.88 working days during the epidemic period, and an accumulated gain of 2.24 days over a 3-year follow-up period. In the control group, no decrease in absenteeism was measured. Giving vaccine access to working mothers resulted in an estimated accumulated net cost gain of €187 per mother. Conclusions Reduction in absenteeism among working mothers was observed during periods of the epidemic after the introduction of the rotavirus vaccine in Belgium. This reduction is in line with estimates of indirect cost gains used in economic evaluations of the rotavirus vaccine. Trial registration number HO-12-12768. PMID:26129633

  8. New sources for alternative methods on the Internet: the objectives of databases and web sites.

    PubMed

    Grune, Barbara; Dörendahl, Antje; Köhler-Hahn, Dorothea; Feuerstein, Céline; Box, Rainer; Wohlgemuth, Heinz; Spielmann, Horst

    2004-06-01

    One of the main requirements of the current animal welfare legislation in Europe is to prove the necessity of performing a given experiment with animals. Thus, a study using animals should not proceed, if another scientifically reliable method is available to obtain the desired results that either avoids animal experiments altogether, minimises pain and suffering of animals or reduces the number of animals needed. Scientists are legally required to search the literature and other relevant sources for alternatives prior to any experimental study with animals. Access to information has become much easier since the introduction of the Internet as a standard tool. Today, a variety of online sources is available, e.g. web-based bibliographic databases and specialised web sites providing details about alternatives to animal studies. However, scientists still need to determine the most appropriate searching strategies, depending on the objectives of the relevant web sites and their own line of research. A critical discussion of this issue takes into account the objectives of both the information providers and the information retrieval systems. PMID:23581139

  9. Similarity landscapes: An improved method for scientific visualization of information from protein and DNA database searches

    SciTech Connect

    Dogget, N.; Myers, G.; Wills, C.J.

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The authors have used computer simulations and examination of a variety of databases to address a wide range of evolutionary questions. The authors have found that there is a clear distinction in the evolution of HIV-1 and HIV-2, with the former and more virulent virus evolving more rapidly at a functional level. The authors have discovered highly non-random patterns in the evolution of HIV-1 that can be attributed to a variety of selective pressures. In the course of examination of microsatellite DNA (short repeat regions) in microorganisms, the authors have found clear differences between prokaryotes and eukaryotes in their distribution, differences that can be tied to different selective pressures. They have developed a new method (topiary pruning) for enhancing the phylogenetic information contained in DNA sequences. Most recently, the authors have discovered effects in complex rainforest ecosystems that indicate strong frequency-dependent interactions between host species and their parasites, leading to the maintenance of ecosystem variability.

  10. A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases

    PubMed Central

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-01-01

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services and location intelligence. Trajectory data has grown massively in volume and semantic complexity, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results show that HBSTR-tree outperforms TB*-tree in several aspects, including generation efficiency, query performance and query types. PMID:25051028
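
    The grouping step, inserting runs of consecutive trajectory points rather than individual points, can be sketched as follows; the time-gap and node-capacity thresholds are illustrative assumptions, not the paper's spatio-temporal semantics rules.

      def group_trajectory(points, max_gap_s=60.0, max_points=32):
          """Group consecutive (t, x, y) samples of one object into candidate
          leaf nodes, splitting on large time gaps or node capacity."""
          groups, current = [], [points[0]]
          for prev, cur in zip(points, points[1:]):
              if cur[0] - prev[0] > max_gap_s or len(current) >= max_points:
                  groups.append(current)
                  current = []
              current.append(cur)
          groups.append(current)
          return groups  # each group would become one spatio-temporal R-tree leaf

      pts = [(0.0, 0, 0), (10.0, 1, 1), (200.0, 5, 5)]
      print([len(g) for g in group_trajectory(pts)])  # [2, 1]: split at the 190 s gap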

  11. A hybrid spatio-temporal data indexing method for trajectory databases.

    PubMed

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-01-01

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services and location intelligence. Trajectory data has grown massively in volume and semantic complexity, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results show that HBSTR-tree outperforms TB*-tree in several aspects, including generation efficiency, query performance and query types. PMID:25051028

  12. Exploring the Ligand-Protein Networks in Traditional Chinese Medicine: Current Databases, Methods, and Applications

    PubMed Central

    Zhao, Mingzhu; Wei, Dong-Qing

    2013-01-01

    Traditional Chinese medicine (TCM), which has thousands of years of clinical application in China and other Asian countries, is a pioneer of the "multicomponent-multitarget" approach and of network pharmacology. Although there is little doubt about its efficacy, it is difficult to elucidate a convincing underlying mechanism of TCM due to its complex composition and unclear pharmacology. The use of ligand-protein networks has been gaining significant value in drug discovery, while its application in TCM is still at an early stage. This paper first surveys TCM databases for virtual screening, which have been greatly expanded in size and data diversity in recent years. On that basis, different screening methods and strategies for identifying active ingredients and targets of TCM are outlined based on the amount of network information available, on the sides of both ligand bioactivity and protein structure. Furthermore, applications of successful in silico target identification attempts are discussed in detail, along with experiments in exploring the ligand-protein networks of TCM. Finally, it is concluded that the prospective application of ligand-protein networks can be used not only to predict the protein targets of a small molecule, but also to explore the mode of action of TCM. PMID:23818932

  13. Method of content-based image retrieval for a spinal x-ray image database

    NASA Astrophysics Data System (ADS)

    Krainak, Daniel M.; Long, L. Rodney; Thoma, George R.

    2002-05-01

    The Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM) maintains a digital archive of 17,000 cervical and lumbar spine images collected in the second National Health and Nutrition Examination Survey (NHANES II) conducted by the National Center for Health Statistics (NCHS). Classification of the images for the osteoarthritis research community has been a long-standing goal of researchers at the NLM, collaborators at NCHS, and the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), and capability to retrieve images based on geometric characteristics of the vertebral bodies is of interest to the vertebral morphometry community. Automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. We implemented a prototype system for a database of 118 spine x-rays and health survey text data related to these x-rays. The system supports conventional text retrieval, as well as retrieval based on shape similarity to a user-supplied vertebral image or sketch.

  14. A METHOD FOR ESTIMATING GAS PRESSURE IN 3013 CONTAINERS USING AN ISP DATABASE QUERY

    SciTech Connect

    Friday, G.; Peppers, L. G.; Veirs, D. K.

    2008-07-31

    The U.S. Department of Energy's Integrated Surveillance Program (ISP) is responsible for the storage and surveillance of plutonium-bearing material. During storage, plutonium-bearing material has the potential to generate hydrogen gas from the radiolysis of adsorbed water. The generation of hydrogen gas is a safety concern, especially when a container is breached within a glove box during destructive evaluation. To address this issue, the DOE established a standard (DOE, 2004) that sets the criteria for the stabilization and packaging of material for up to 50 years. The DOE has now packaged most of its excess plutonium for long-term storage in compliance with this standard. As part of this process, it is desirable to know within reasonable certainty the total maximum pressure of hydrogen and other gases within the 3013 container if safety issues are to be addressed and compliance with the DOE standards attained. The principal goal of this investigation is to document the method and query used to estimate total (i.e. hydrogen and other gases) gas pressure within a 3013 container based on the material properties and estimated moisture content contained in the ISP database. Initial attempts to estimate hydrogen gas pressure in 3013 containers were based on G-values (hydrogen gas generation per energy input) derived from small-scale samples. These maximum G-values were used to calculate worst-case pressures based on container material weight, assay, wattage, moisture content, container age, and container volume. This paper documents a revised hydrogen pressure calculation that incorporates new surveillance results and includes a component for gases other than hydrogen. The calculation is produced by executing a query of the ISP database. An example of manual mathematical computation from the pressure equation is compared and evaluated against results from the query. Based on the destructive evaluation of 17 containers, the estimated mean absolute pressure was significantly higher
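
    The worst-case calculation pattern described above (a G-value times the absorbed decay energy, converted to moles of hydrogen and then to pressure via the ideal gas law) can be sketched as follows. All numbers are invented illustrations, not values from the ISP database, and in practice hydrogen generation is bounded by the available adsorbed moisture, so realistic estimates are far lower than this unbounded worst case.

      AVOGADRO = 6.022e23      # molecules/mol
      EV_PER_J = 6.242e18      # eV per joule
      R = 8.314                # J/(mol K)
      S_PER_YEAR = 3.156e7

      def h2_pressure_pa(g_value, watts, years, volume_m3, temp_k=298.0):
          # g_value: molecules of H2 generated per 100 eV of absorbed energy
          energy_ev = watts * years * S_PER_YEAR * EV_PER_J
          mols_h2 = (g_value * energy_ev / 100.0) / AVOGADRO
          return mols_h2 * R * temp_k / volume_m3   # ideal gas law, P = nRT/V

      # Hypothetical container: 5 W decay heat, 50-year life, 2.2 L free volume
      print(h2_pressure_pa(g_value=0.1, watts=5.0, years=50, volume_m3=2.2e-3))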

  15. Leveraging Administrative Data for Program Evaluations: A Method for Linking Data Sets Without Unique Identifiers.

    PubMed

    Lorden, Andrea L; Radcliff, Tiffany A; Jiang, Luohua; Horel, Scott A; Smith, Matthew L; Lorig, Kate; Howell, Benjamin L; Whitelaw, Nancy; Ory, Marcia

    2016-06-01

    In community-based wellness programs, Social Security Numbers (SSNs) are rarely collected to encourage participation and protect participant privacy. One measure of program effectiveness includes changes in health care utilization. For the 65 and over population, health care utilization is captured in Medicare administrative claims data. Therefore, methods as described in this article for linking participant information to administrative data are useful for program evaluations where unique identifiers such as SSN are not available. Following fuzzy matching methodologies, participant information from the National Study of the Chronic Disease Self-Management Program was linked to Medicare administrative data. Linking variables included participant name, date of birth, gender, address, and ZIP code. Seventy-eight percent of participants were linked to their Medicare claims data. Linking program participant information to Medicare administrative data where unique identifiers are not available provides researchers with the ability to leverage claims data to better understand program effects. PMID:25139849
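
    A minimal sketch of fuzzy matching over the linking variables named above (name, date of birth, gender, ZIP code); the normalization, the weights, and the 0.85 cutoff are assumptions made for illustration, not the published algorithm.

      import re
      from difflib import SequenceMatcher

      def normalize(name):
          # Lowercase, strip punctuation, sort tokens so "DOE, JANE" ~ "Jane Doe"
          return ' '.join(sorted(re.findall(r'[a-z]+', name.lower())))

      def match_score(participant, claim):
          score = 0.5 * SequenceMatcher(None, normalize(participant['name']),
                                        normalize(claim['name'])).ratio()
          score += 0.2 * (participant['dob'] == claim['dob'])
          score += 0.1 * (participant['sex'] == claim['sex'])
          score += 0.2 * (participant['zip'] == claim['zip'])
          return score

      p = {'name': 'Jane Q. Doe', 'dob': '1948-03-02', 'sex': 'F', 'zip': '77843'}
      c = {'name': 'DOE, JANE', 'dob': '1948-03-02', 'sex': 'F', 'zip': '77843'}
      print(round(match_score(p, c), 3))  # ~0.94, above an assumed 0.85 link cutoff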

  16. A practical tool for public health surveillance: Semi-automated coding of short injury narratives from large administrative databases using Naïve Bayes algorithms.

    PubMed

    Marucci-Wellman, Helen R; Lehto, Mark R; Corns, Helen L

    2015-11-01

    Public health surveillance programs in the U.S. are undergoing landmark changes with the availability of electronic health records and advancements in information technology. Injury narratives gathered from hospital records, workers compensation claims or national surveys can be very useful for identifying antecedents to injury or emerging risks. However, classifying narratives manually can become prohibitive for large datasets. The purpose of this study was to develop a human-machine system that could be relatively easily tailored to routinely and accurately classify injury narratives from large administrative databases such as workers compensation. We used a semi-automated approach based on two Naïve Bayesian algorithms to classify 15,000 workers compensation narratives into two-digit Bureau of Labor Statistics (BLS) event (leading to injury) codes. Narratives were filtered out for manual review if the algorithms disagreed or made weak predictions. This approach resulted in an overall accuracy of 87%, with consistently high positive predictive values across all two-digit BLS event categories including the very small categories (e.g., exposure to noise, needle sticks). The Naïve Bayes algorithms were able to identify and accurately machine code most narratives leaving only 32% (4853) for manual review. This strategy substantially reduces the need for resources compared with manual review alone. PMID:26412196
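
    The human-machine filtering strategy can be sketched with scikit-learn, assuming two Naive Bayes models trained on different text features: a narrative is machine-coded only when the models agree and are confident, and is otherwise routed for manual review. The training narratives, event codes, and confidence cutoff below are invented for illustration.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB

      narratives = ["slipped on wet floor and fell", "fell from ladder while painting",
                    "struck by falling box in warehouse", "loud machinery caused hearing loss"]
      codes = ["42", "43", "62", "51"]   # hypothetical two-digit event codes

      # Two Naive Bayes views of the text: single words vs. unigrams-plus-bigrams
      v1, v2 = CountVectorizer(), CountVectorizer(ngram_range=(1, 2))
      m1 = MultinomialNB().fit(v1.fit_transform(narratives), codes)
      m2 = MultinomialNB().fit(v2.fit_transform(narratives), codes)

      def classify(text, min_conf=0.5):
          # Auto-code only when both models agree with adequate confidence
          x1, x2 = v1.transform([text]), v2.transform([text])
          c1, c2 = m1.predict(x1)[0], m2.predict(x2)[0]
          conf = min(m1.predict_proba(x1).max(), m2.predict_proba(x2).max())
          if c1 == c2 and conf >= min_conf:
              return c1, 'machine coded'
          return None, 'manual review'

      print(classify("worker fell from a ladder"))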

  17. The Use of Quantitative Methods as an Aid to Decision Making in Educational Administration.

    ERIC Educational Resources Information Center

    Alkin, Marvin C.

    Three quantitative methods are outlined, with suggestions for application to particular problem areas of educational administration: (1) the Leontief input-output analysis, incorporating a "transaction table" for displaying relationships between economic outputs and inputs, mainly applicable to budget analysis and planning; (2) linear programming,…
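
    A worked micro-example of the first method: with a transaction-table-derived matrix A of input requirements per unit of output, the total output x needed to satisfy final demand d solves x = Ax + d, i.e. x = (I - A)^(-1) d. The three-sector numbers below are invented.

      import numpy as np

      # A[i][j]: input required from sector i per unit of output of sector j
      A = np.array([[0.10, 0.05, 0.00],
                    [0.20, 0.10, 0.10],
                    [0.05, 0.00, 0.15]])
      d = np.array([100.0, 50.0, 25.0])   # final demand per sector

      x = np.linalg.solve(np.eye(3) - A, d)   # solves (I - A) x = d
      print(x.round(2))   # gross output each sector must produce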

  18. Personality and Student Performance on Evaluation Methods Used in Business Administration Courses

    ERIC Educational Resources Information Center

    Lakhal, Sawsen; Sévigny, Serge; Frenette, Éric

    2015-01-01

    The objective of this study was to verify whether personality (Big Five model) influences performance on the evaluation methods used in business administration courses. A sample of 169 students enrolled in two compulsory undergraduate business courses responded to an online questionnaire. As it is difficult within the same course to assess…

  19. 78 FR 2280 - Federal Housing Administration (FHA) First Look Sales Method Under the Neighborhood Stabilization...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ... INFORMATION: I. Background On July 15, 2010, at 75 FR 41225, HUD published a Federal Register notice.... (See, Section II.F. of the July 15, 2010, notice, at 75 FR 41227). The ten percent discount has proved... URBAN DEVELOPMENT Federal Housing Administration (FHA) First Look Sales Method Under the...

  20. A comprehensive change detection method for updating the National Land Cover Database to circa 2011

    USGS Publications Warehouse

    Jin, Suming; Yang, Limin; Danielson, Patrick; Homer, Collin G.; Fry, Joyce; Xian, George

    2013-01-01

    The importance of characterizing, quantifying, and monitoring land cover, land use, and their changes has been widely recognized by global and environmental change studies. Since the early 1990s, three U.S. National Land Cover Database (NLCD) products (circa 1992, 2001, and 2006) have been released as free downloads for users. The NLCD 2006 also provides land cover change products between 2001 and 2006. To continue providing updated national land cover and change datasets, a new initiative in developing NLCD 2011 is currently underway. We present a new Comprehensive Change Detection Method (CCDM) designed as a key component for the development of NLCD 2011 and the research results from two exemplar studies. The CCDM integrates spectral-based change detection algorithms, including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extract change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (CV, RCVMAX, dNBR, and dNDVI) to obtain the changes that occurred between two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change, to combine the changes from MIICA and Zone. For NLCD 2011, the improved and enhanced change products obtained from the CCDM provide critical information on the location, magnitude, and direction of potential change areas and serve as a basis for further characterizing land cover changes for the nation. An accuracy assessment from the two study areas shows 100% agreement between the CCDM-mapped no-change class and the reference dataset, and 18% and 82% disagreement for the change class for WRS path/row p22r39 and p33r33, respectively. The strength of the CCDM is that the method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and

  1. Energy Flux Method for Die-to-Database Inspection of Critical Layer Reticles

    NASA Astrophysics Data System (ADS)

    Garcia, Hector I.; Volk, William W.; Xiong, Yalin; Watson, Sterling G.; Yu, Zongchang; Guo, Zhian; Wang, Lantian

    2002-12-01

    With the growing implementation of low-k1 lithography on DUV scanners for wafer production, detecting and analyzing photomask critical dimension (CD) errors and semitransparent defects is vital for qualifying photomasks to enable high IC wafer yields for the 130 nm and 100 nm nodes. Using the TeraStar pattern inspection system's image computer platform, a new die-to-database algorithm, TeraFlux, has been implemented and tested for the inspection of small "closed" features. The algorithm is run in die-to-database mode, comparing the energy flux difference between the reticle and the database reference for small closed features such as contacts, trenches, and cells on chrome and half-tone reticles. The algorithm is applicable to both clear- and dark-field reticles. Tests show the new algorithm provides CD error detection down to 6% energy flux variation with low false defect counts. We have characterized the sensitivity and false defect performance of the die-to-database energy flux algorithm with production masks and programmed defect test masks. A sampling of inspection results will be presented. Wafer printability results using the programmed defects on a programmed defect test reticle will be presented and compared to the inspection defect sensitivity results.
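
    The energy-flux comparison for a single closed feature can be sketched as below, assuming aligned grayscale renderings of the reticle image and the database reference; the arrays, the mask, and the wiring of the 6% tolerance are illustrative.

      import numpy as np

      def energy_flux_defect(die_img, db_img, feature_mask, tol=0.06):
          """Compare total transmitted intensity ("energy flux") of one closed
          feature on the reticle with its database rendering; flag a CD defect
          when the relative difference exceeds the tolerance."""
          die_flux = die_img[feature_mask].sum()
          db_flux = db_img[feature_mask].sum()
          rel = abs(die_flux - db_flux) / db_flux
          return rel > tol, rel

      die = np.full((8, 8), 0.9)               # undersized contact transmits less
      db = np.full((8, 8), 1.0)                # database reference rendering
      mask = np.zeros((8, 8), bool); mask[2:6, 2:6] = True
      print(energy_flux_defect(die, db, mask))  # (True, ~0.1): 10% flux loss flagged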

  2. Analysis of Institutionally Specific Retention Research: A Comparison between Survey and Institutional Database Methods

    ERIC Educational Resources Information Center

    Caison, Amy L.

    2007-01-01

    This study empirically explores the comparability of traditional survey-based retention research methodology with an alternative approach that relies on data commonly available in institutional student databases. Drawing on Tinto's [Tinto, V. (1993). "Leaving College: Rethinking the Causes and Cures of Student Attrition" (2nd Ed.), The University…

  3. An Interactive Iterative Method for Electronic Searching of Large Literature Databases

    ERIC Educational Resources Information Center

    Hernandez, Marco A.

    2013-01-01

    PubMed® is an on-line literature database hosted by the U.S. National Library of Medicine. Containing over 21 million citations for biomedical literature--both abstracts and full text--in the areas of the life sciences, behavioral studies, chemistry, and bioengineering, PubMed® represents an important tool for researchers. PubMed® searches return…

  4. Methods and management: NIH administrators, federal oversight, and the Framingham Heart Study.

    PubMed

    Patel, Sejal S

    2012-01-01

    This article explores the 1965 controversy over the Framingham Heart Study in the midst of growing oversight into the management of science at the National Institutes of Health (NIH). It describes how, beginning in the early 1960s, federal overseers demanded that NIH administrators adopt particular management styles in administering programs and how these growing pressures led administrators to favor investigative pursuits that allowed for easy prospective accounting of program payoffs, especially those based on experimental methods designed to examine discrete interventions or outcomes of interest. In light of this changing managerial culture within the NIH, the Framingham study and other population laboratories-with their bases in observation and in open-ended study designs-became harder for NIH administrators to justify and defend. PMID:22643985

  5. Methods and Management: NIH Administrators, Federal Oversight, and the Framingham Heart Study

    PubMed Central

    Patel, Sejal S.

    2012-01-01

    Summary This article explores the 1965 controversy over the Framingham Heart Study in the midst of growing oversight into the management of science at the National Institutes of Health (NIH). It describes how, beginning in the early 1960s, federal overseers demanded that NIH administrators adopt particular management styles in administering programs and how these growing pressures led administrators to favor investigative pursuits that allowed for easy prospective accounting of program payoffs, especially those based on experimental methods designed to examine discrete interventions or outcomes of interest. In light of this changing managerial culture within the NIH, the Framingham study and other population laboratories—with their bases in observation and in open-ended study designs—became harder for NIH administrators to justify and defend. PMID:22643985

  6. Similarity graphing and enzyme-reaction database: methods to detect sequence regions of importance for recognition of chemical structures.

    PubMed

    Sumi, K; Nishioka, T; Oda, J

    1991-04-01

    We developed a new method which searches for sequence segments responsible for the recognition of a given chemical structure. These segments are detected as those locally conserved between a sequence to be analyzed (the target sequence) and a set of sequences (the reference sequences). Reference sequences are the sequences of functionally related proteins whose ligands contain a common chemical substructure in their molecular structures. 'Similarity graphing' cuts the target sequence into segments, aligns them pairwise with the reference sequences, calculates the degree of similarity for each alignment, and plots cumulative similarity values along the target sequence. Locally conserved regions, short or long in length and weak or strong in similarity, are detected at their optimal conditions by adjusting three parameters. The 'enzyme-reaction database' contains chemical structures and their related enzymes. When a chemical substructure is input into the database, sequences of the enzymes related to the input substructure are systematically retrieved from the NBRF sequence database and output as reference sequences. Examples of analysis using similarity graphing in combination with the enzyme-reaction database showed great potential for the systematic analysis of the relationships between sequences and molecular recognition for protein engineering. PMID:1881867
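
    The similarity-graphing idea can be sketched as follows: slide a window along the target sequence, score each segment against every reference sequence, and accumulate the per-position similarity. Windowed identity stands in for the paper's alignment scoring, and the window and step parameters are illustrative.

      def identity(a, b):
          # Fraction of positions that match between two equal-length segments
          return sum(x == y for x, y in zip(a, b)) / max(len(a), 1)

      def similarity_graph(target, references, window=20, step=5):
          """Cumulative per-position similarity of `target` against reference
          sequences (each assumed at least one window long)."""
          scores = [0.0] * len(target)
          for ref in references:
              for i in range(0, len(target) - window + 1, step):
                  seg = target[i:i + window]
                  best = max(identity(seg, ref[j:j + window])
                             for j in range(0, len(ref) - window + 1, step))
                  for k in range(i, i + window):
                      scores[k] += best
          return scores

      refs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MKTAYIAKQRNISFVK"]
      profile = similarity_graph("XXXMKTAYIAKQRQXXXX", refs, window=6, step=3)
      print(max(profile))  # peaks over the conserved MKTAYIAKQR-like segment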

  7. Optimal Dose and Method of Administration of Intravenous Insulin in the Management of Emergency Hyperkalemia: A Systematic Review

    PubMed Central

    Harel, Ziv; Kamel, Kamel S.

    2016-01-01

    Background and Objectives Hyperkalemia is a common electrolyte disorder that can result in fatal cardiac arrhythmias. Despite the importance of insulin as a lifesaving intervention in the treatment of hyperkalemia in an emergency setting, there is no consensus on the dose or the method (bolus or infusion) of its administration. Our aim was to review data in the literature to determine the optimal dose and route of administration of insulin in the management of emergency hyperkalemia. Design, Setting, Participants, & Measurements We searched several databases from their date of inception through February 2015 for eligible articles published in any language. We included any study that reported on the use of insulin in the management of hyperkalemia. Results We identified eleven studies. In seven studies, 10 units of regular insulin was administered (bolus in five studies, infusion in two studies), in one study 12 units of regular insulin was infused over 30 minutes, and in three studies 20 units of regular insulin was infused over 60 minutes. The majority of included studies were biased. There was no statistically significant difference in mean decrease in serum potassium (K+) concentration at 60 minutes between studies in which insulin was administered as an infusion of 20 units over 60 minutes and studies in which 10 units of insulin was administered as a bolus (0.79±0.25 mmol/L versus 0.78±0.25 mmol/L, P = 0.98) or studies in which 10 units of insulin was administered as an infusion (0.79±0.25 mmol/L versus 0.39±0.09 mmol/L, P = 0.1). Almost one fifth of the study population experienced an episode of hypoglycemia. Conclusion The limited data available in the literature shows no statistically significant difference between the different regimens of insulin used to acutely lower serum K+ concentration. Accordingly, 10 units of short acting insulin given intravenously may be used in cases of hyperkalemia. Alternatively, 20 units of short acting insulin may be

  8. Application of the stochastic tunneling method to high throughput database screening

    NASA Astrophysics Data System (ADS)

    Merlitz, H.; Burghardt, B.; Wenzel, W.

    2003-03-01

    The stochastic tunneling technique is applied to screen a database of chemical compounds against the active site of dihydrofolate reductase for lead candidates in the receptor-ligand docking problem. Using an atomistic force field, we consider the ligand's internal rotational degrees of freedom. It is shown that the natural ligand (methotrexate) scores best among 10 000 randomly chosen compounds. We analyze the top-scoring compounds to identify hot spots of the receptor. We mutate the amino acids that are responsible for the hot spots of the receptor and verify that its specificity is lost upon modification.
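
    The core of the technique, a tunneling transformation that flattens the landscape above the best score found so far so the search can cross high barriers, can be sketched on a one-dimensional toy function. The real application samples ligand poses and internal rotations under the atomistic force field; all parameters below are illustrative.

      import math, random

      def stun(e, e_best, gamma=1.0):
          # Standard tunneling transform: compresses everything above the current best
          return 1.0 - math.exp(-gamma * (e - e_best))

      def stun_minimize(f, x0, steps=20000, step_size=0.1, beta=5.0):
          x, e = x0, f(x0)
          x_best, e_best = x, e
          for _ in range(steps):
              x_new = x + random.uniform(-step_size, step_size)
              e_new = f(x_new)
              if e_new < e_best:
                  x_best, e_best = x_new, e_new
              # Metropolis acceptance on the transformed surface
              d = stun(e_new, e_best) - stun(e, e_best)
              if d <= 0 or random.random() < math.exp(-beta * d):
                  x, e = x_new, e_new
          return x_best, e_best

      # Toy rugged score function with minima near x = +/-1
      print(stun_minimize(lambda x: (x * x - 1.0) ** 2 + 0.1 * math.sin(20 * x), 3.0))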

  9. [Method of traditional Chinese medicine formula design based on 3D-database pharmacophore search and patent retrieval].

    PubMed

    He, Yu-su; Sun, Zhi-yi; Zhang, Yan-ling

    2014-11-01

    Using the pharmacophore model of mineralocorticoid receptor antagonists as a starting point, this study examines a method of traditional Chinese medicine formula design for anti-hypertensives. Pharmacophore models were generated with the 3D-QSAR pharmacophore (HypoGen) program of DS 3.5, based on a training set composed of 33 mineralocorticoid receptor antagonists. The best pharmacophore model consisted of two hydrogen-bond acceptors, three hydrophobic features, and four excluded volumes. Its correlation coefficients for the training set and test set, N, and CAI values were 0.9534, 0.6748, 2.878, and 1.119, respectively. From the database screening, 1,700 active compounds from 86 source plants were obtained. Because traditional theory lacks an available anti-hypertensive medication strategy, this article takes advantage of patent retrieval in the world traditional medicine patent database in order to design drug formulae. Finally, two formulae were obtained for anti-hypertensive use. PMID:25850277

  10. Assessing potentially inappropriate prescribing (PIP) and predicting patient outcomes in Ontario’s older population: a population-based cohort study applying subsets of the STOPP/START and Beers’ criteria in large health administrative databases

    PubMed Central

    Bjerre, Lise M; Ramsay, Timothy; Cahir, Catriona; Ryan, Cristín; Halil, Roland; Farrell, Barbara; Thavorn, Kednapa; Catley, Christina; Hawken, Steven; Gillespie, Ulrika; Manuel, Douglas G

    2015-01-01

    Introduction Adverse drug events (ADEs) are common in older people and contribute significantly to emergency department (ED) visits, unplanned hospitalisations, healthcare costs, morbidity and mortality. Many ADEs are avoidable if attention is directed towards identifying and preventing inappropriate drug use and undesirable drug combinations. Tools exist to identify potentially inappropriate prescribing (PIP) in clinical settings, but they are underused. Applying PIP assessment tools to population-wide health administrative data could provide an opportunity to assess the impact of PIP on individual patients as well as on the healthcare system. This would open new possibilities for interventions to monitor and optimise medication management on a broader, population-level scale. Methods and analysis The aim of this study is to describe the occurrence of PIP in Ontario's older population (aged 65 years and older), and to assess the health outcomes and health system costs associated with PIP—more specifically, the association between PIP and the occurrence of ED visits, hospitalisations and death, and their related costs. This will be done within the framework of a population-based retrospective cohort study using Ontario's large health administrative and population databases. Eligible patients aged 66 years and older who were issued at least 1 prescription between 1 April 2003 and 31 March 2014 (approximately 2 million patients) will be included. Ethics and dissemination Ethical approval was obtained from the Ottawa Health Services Network Ethical Review Board and from the Bruyère Research Institute Ethics Review Board. Dissemination will occur via publication, presentation at national and international conferences, and ongoing exchanges with regional, provincial and national stakeholders, including the Ontario Drug Policy Research Network and the Ontario Ministry of Health and Long-Term Care. Trial registration number Registered with clinicaltrials

  11. Method of Administration of PROMIS Scales Did Not Significantly Impact Score Level, Reliability or Validity

    PubMed Central

    Bjorner, Jakob B.; Rose, Matthias; Gandek, Barbara; Stone, Arthur A.; Junghaenel, Doerte U.; Ware, John E.

    2014-01-01

    Objective To test the impact of method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). Study Design and Setting Two non-overlapping parallel forms each containing 8 items from each of three PROMIS item banks (Physical Function, Fatigue and Depression) were completed by 923 adults with COPD, depression, or rheumatoid arthritis. In a randomized cross-over design, subjects answered one form by interactive voice response (IVR) technology, paper questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICC), and convergent/discriminant validity. Results In difference score analyses, no significant mode differences were found and all confidence intervals were within the pre-specified MID of 0.2 SD. Parallel forms reliabilities were very high (ICC=0.85-0.93). Only one across mode ICC was significantly lower than the same mode ICC. Tests of validity showed no differential effect by MOA. Participants preferred screen interface over PQ and IVR. Conclusion We found no statistically or clinically significant differences in score levels or psychometric properties of IVR, PQ or PDA administration as compared to PC. PMID:24262772

  12. Methods for long-term 17β-estradiol administration to mice.

    PubMed

    Ingberg, E; Theodorsson, A; Theodorsson, E; Strom, J O

    2012-01-01

    Rodent models constitute a cornerstone in the elucidation of the effects and biological mechanisms of 17β-estradiol. However, a thorough assessment of the methods for long-term administration of 17β-estradiol to mice is lacking. The fact that 17β-estradiol has been demonstrated to exert different effects depending on dose emphasizes the need for validated administration regimens. Therefore, 169 female C57BL/6 mice were ovariectomized and administered 17β-estradiol using one of the two commonly used subcutaneous methods: slow-release pellets (0.18 mg, 60-day release pellets; 0.72 mg, 90-day release pellets) and silastic capsules (with/without convalescence period, silastic laboratory tubing, inner/outer diameter: 1.575/3.175 mm, filled with a 14 mm column of 36 μg 17β-estradiol/mL sesame oil), or a novel peroral method (56 μg 17β-estradiol/day/kg body weight in the hazelnut cream Nutella). Forty animals were used as ovariectomized and intact controls. Serum samples were obtained weekly for five weeks and 17β-estradiol concentrations were measured using radioimmunoassay. The peroral method resulted in steady concentrations within--except on one occasion--the physiological range and the silastic capsules produced predominantly physiological concentrations, although exceeding the range by maximum a factor three during the first three weeks. The 0.18 mg pellet yielded initial concentrations an order of magnitude higher than the physiological range, which then decreased drastically, and the 0.72 mg pellet produced between 18 and 40 times higher concentrations than the physiological range during the entire experiment. The peroral method and silastic capsules described in this article constitute reliable modes of administration of 17β-estradiol, superior to the widely used commercial pellets. PMID:22137913

  13. A comparison of two methods for indexing and retrieval from a full-text medical database.

    PubMed

    Hersh, W R; Hickam, D H

    1993-01-01

    The objective of this study was to compare how well medical professionals are able to retrieve relevant literature references using two computerized literature searching systems that provide automated (non-human) indexing of content. The first program was SAPHIRE, which features concept-based indexing, free-text input of queries, and ranking of retrieved references for relevance. The second program was SWORD, which provides single-word searching using Boolean operators (AND, OR). Sixteen fourth-year medical students participated in the study. The database for searching was six volumes from the 1989 Yearbook series. The queries were ten questions generated on teaching rounds. All subjects searched half the queries with each program. After the searching, each subject was given a questionnaire about prior experience and preferences about the two programs. Recall (proportion of relevant articles retrieved from the database) and precision (proportion of relevant articles in the retrieved set) were measured for each search done by each participant. Mean recall was 57.6% with SAPHIRE; it was 58.6% with SWORD. Precision was 48.1% with SAPHIRE vs 57.6% with SWORD. Each program was rated easier to use than the other by half of the searchers, and preferences were associated with better searching performance for that program. Both systems achieved recall and precision comparable to existing systems and may represent effective alternatives to MEDLINE and other retrieval systems based on human indexing for searching medical literature. PMID:8412551
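
    For reference, the two evaluation measures used above reduce to simple set ratios; a minimal sketch with invented article IDs:

      def recall_precision(retrieved, relevant):
          # recall: fraction of the relevant articles that were retrieved
          # precision: fraction of the retrieved articles that are relevant
          retrieved, relevant = set(retrieved), set(relevant)
          hits = retrieved & relevant
          return len(hits) / len(relevant), len(hits) / len(retrieved)

      print(recall_precision(retrieved={1, 2, 3, 4}, relevant={2, 3, 5}))  # approx. (0.667, 0.5)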

  14. Technical and economic evaluation of different methods of Newcastle disease vaccine administration.

    PubMed

    Degefa, T; Dadi, L; Yami, A; GMariam, K; Nassir, M

    2004-01-01

    Two types of locally produced live vaccines (HB1 and La Sota--lentogenic strains) and an inactivated oil adjuvant (IOAV) vaccine were used to compare the efficiency of three vaccination techniques, namely drinking water, ocular and spray, on broiler chicks. The ocular route of vaccination on 1-day-old chicks followed by a booster dose in the third week through the same route induced a significantly higher level of haemagglutination inhibition antibody titre (P < 0.0001). The highest mean antibody titre was log(2) 6.6 and 93.3% of the chicks were protected from the challenge. The spray technique induced a lower antibody titre (peak of log(2) 5.9) and only 53% of the chicks in this treatment survived the challenge. The results of this study show that the ocular route is superior to the drinking water route, which is superior to the spray technique. The economic analysis showed that the ocular HB1 and La Sota vaccine administration method on 1- and 21-day-old chicks gave the highest revenue, followed by the drinking water method. In terms of total cost, the injection method required the highest cost (0.21 birr/chick) followed by the ocular method (0.18 birr/chick). The marginal cost of vaccine administration is very small compared with the marginal revenues from the relative effectiveness of the methods. The internal rate of return for the ocular method was very high. The results of the sensitivity analysis on revenues from different vaccination methods indicate that a 25% reduction in broiler price reduces the marginal revenue from the ocular method by 12,487 birr, but this still does not prove that the ocular method is economically viable for small- and medium-scale poultry farms. PMID:15533121

  15. A Quality System Database

    NASA Technical Reports Server (NTRS)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  16. The intelligent database machine

    NASA Technical Reports Server (NTRS)

    Yancey, K. E.

    1985-01-01

    The IDM 500 database machine was compared with the Oracle database management system to determine whether the IDM would better serve the needs of the MSFC database management system than Oracle. The two were compared and the performance of the IDM was studied. The implementations that work best on each database are indicated; the choice is left to the database administrator.

  17. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE) at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
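
    The evaluation framework can be sketched with scikit-learn: mask a fraction of a complete trait matrix, fill the gaps with mean imputation, kNN imputation (KNNImputer), and MICE-style chained equations (IterativeImputer), then score each against the held-back truth. The synthetic matrix stands in for the five traits, and a plain column mean stands in for species-mean imputation because the toy data has no species grouping.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import KNNImputer, IterativeImputer

      rng = np.random.default_rng(0)
      traits = rng.normal(size=(200, 5))        # stand-in for the five traits
      mask = rng.random(traits.shape) < 0.30    # 30% missing at random
      incomplete = np.where(mask, np.nan, traits)

      mean_fill = np.where(np.isnan(incomplete),
                           np.nanmean(incomplete, axis=0), incomplete)
      knn_fill = KNNImputer(n_neighbors=5).fit_transform(incomplete)
      mice_fill = IterativeImputer(random_state=0).fit_transform(incomplete)

      for name, filled in [("mean", mean_fill), ("kNN", knn_fill), ("MICE", mice_fill)]:
          rmse = np.sqrt(np.mean((filled[mask] - traits[mask]) ** 2))
          print(name, round(float(rmse), 3))    # error on the held-back entries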

  18. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.

  19. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  20. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  1. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  2. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.

  3. A semantic data dictionary method for database schema integration in CIESIN

    NASA Astrophysics Data System (ADS)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear of this mission includes providing a link between the various global change data sets, in particular the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affects his/her work. For example, a medical researcher might ask: how does air-quality effect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existance or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a ``join'' on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system will be overcoming the heterogeneity which falls into two broad categories. ``Database system'' heterogeneity involves differences in data models and packages. ``Data semantic'' heterogeneity involves differences in terminology between disciplines which translates into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specially, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.

  4. Updating the 2001 National Land Cover Database Impervious Surface Products to 2006 using Landsat Imagery Change Detection Methods

    USGS Publications Warehouse

    Xian, G.; Homer, C.

    2010-01-01

    A prototype method was developed to update the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 to a nominal date of 2006. NLCD 2001 is widely used as a baseline for national land cover and impervious cover conditions. To enable the updating of this database in an optimal manner, methods are designed to be accomplished by individual Landsat scene. Using conservative change thresholds based on land cover classes, areas of change and no-change were segregated from change vectors calculated from normalized Landsat scenes from 2001 and 2006. By sampling from NLCD 2001 impervious surface in unchanged areas, impervious surface predictions were estimated for changed areas within an urban extent defined by a companion land cover classification. Methods were developed and tested for national application across six study sites containing a variety of urban impervious surface. Results show the vast majority of impervious surface change associated with urban development was captured, with overall RMSE from 6.86 to 13.12% for these areas. Changes of urban development density were also evaluated by characterizing the categories of change by percentile for impervious surface. This prototype method provides a relatively low cost, flexible approach to generate updated impervious surface using NLCD 2001 as the baseline. ?? 2010 Elsevier Inc.

  5. MapReduce Implementation of a Hybrid Spectral Library-Database Search Method for Large-Scale Peptide Identification

    SciTech Connect

    Kalyanaraman, Anantharaman; Cannon, William R.; Latt, Benjamin K.; Baxter, Douglas J.

    2011-11-01

    A MapReduce-based implementation called MR- MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours, for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs.

  6. The REF52 protein database. Methods of database construction and analysis using the QUEST system and characterizations of protein patterns from proliferating and quiescent REF52 cells.

    PubMed

    Garrels, J I; Franza, B R

    1989-03-25

    The construction and analysis of protein databases using the QUEST system is described, and the REF52 protein database is presented. A protein database provides the means to store and compare quantitative and descriptive data for up to 2000 proteins from many experiments that employ computer-analyzed two-dimensional gel electrophoresis. The QUEST system provides the tools to manage, analyze, and communicate these data. The REF52 database contains experiments with normal and transformed rat cell lines. In this report, many of the proteins on the REF52 map are identified by name, by subcellular localization, and by mode of post-translational modification. The quantitative experiments analyzed and compared here include 1) a study of the quantitative reproducibility of the analysis system, 2) a study of the clonal reproducibility of REF52 cells, 3) a study of growth-related changes in REF52 cells, and 4) a study of the effects of labeling cells for varying lengths of time. Of the proteins analyzed from REF52 cells, 10% are nuclear, 6% are phosphoproteins, and 4% are mannose-labeled glycoproteins. The mannose-labeled proteins are more prominent in patterns from quiescent cells, while the synthesis of cytoskeletal proteins is generally repressed at quiescence. A small set of proteins, selected for elevated rates of synthesis is generally repressed at quiescence. A small set of proteins, selected for elevated rates of synthesis in quiescent versus proliferating cells includes one of the tropomyosin isoforms, a myosin light chain isoform, and several prominent glycoproteins. These proteins are thought to be characteristic of the differentiated state of untransformed REF52 cells. Proteins induced early versus late after refeeding quiescent cells show very different patterns of growth regulation. These studies lay the foundations of the REF52 database and provide information needed to interpret the experiments with transformed REF52 cells, which are reported in the

  7. Global Database on Donation and Transplantation: goals, methods and critical issues (www.transplant-observatory.org).

    PubMed

    Mahillo, Beatriz; Carmona, Mar; Álvarez, Marina; Noel, Luc; Matesanz, Rafael

    2013-04-01

    The Global Database on Donation and Transplantation represents the most comprehensive source to date of worldwide data concerning activities in organ donation and transplantation derived from official sources, as well as information on legal and organizational aspects. The objectives are to collect, analyse and disseminate this kind of information of the WHO Member States and to facilitate a network of focal persons in the field of transplantation. They are responsible for providing the legislative and organizational aspects and the annual activity practices through a specific questionnaire. 104 out of the 194 WHO Member States that cover the 90% of the global population contribute to this project.Although we know the numerous limitations and biases as a result of the different interpretations of the questions, based on cultural factors and language, there is no other similar approach to collect information on donation and transplantation practices all over the world. The knowledge of demand for transplantation, availability of deceased and living donor organs and the access to transplantation is essential to monitor global trends in transplantation needs and donor organ availability. Information regarding the existence of regulatory oversight is fundamental to ensure the ethical practice of organ donation and transplantation. PMID:23477800

  8. A noninvasive method for evaluating portal circulation by administration of /sup 201/Tl per rectum

    SciTech Connect

    Tonami, N.; Nakajima, K.; Hisada, K.; Tanaka, N.; Kobayashi, K.

    1982-11-01

    A new method for evaluating portal systemic circulation by administration of /sup 201/Tl per rectum was performed in 13 control subjects and in 65 patients with various liver diseases. In normal controls, the liver was visualized on the 0--5-min image whereas the images of other organs such as the heart, spleen, and lungs were very poor. In patients with liver cirrhosis associated with portal-systemic shunt, and in many other patients with hepatocellular damage, the liver was not so clearly visualized, whereas radioactivity in other organs, especially the heart, became evident. The heart-to-liver uptake ratio at 20 min after administration (H/L ratio) was significantly higher in liver cirrhosis than in normals and patients with chronic hepatitis (p less than 0.001). The patients with esophageal varices showed a significantly higher H/L ratio compared with that in cirrhotic patients without esophageal varices (p less than 0.001). The H/L ratio also showed a significant difference (p less than 0.01) between Stage 1 and Stage 3 esophageal varices. Since there were many other patients with hepatocellular damage who had high H/L ratios similar to those in liver cirrhosis, the effect that hepatocellular damage has on the liver uptake of /sup 201/Tl is also considered. Our present data suggest that this noninvasive method seems to be useful in evaluating portal-to-systemic shunting.

  9. A noninvasive method for evaluating portal circulation by administration of Tl-201 per rectum

    SciTech Connect

    Tonami, N.; Nakajima, K.; Hisada, K.; Tanaka, N.; Kobayashi, K.

    1982-11-01

    A new method for evaluating portal systemic circulation by administration of Tl-201 per rectum was performed in 13 control subjects and in 65 patients with various liver diseases. In normal controls, the liver was visualized on the 0-5-min image whereas the images of other organs such as the heart, spleen, and lungs were very poor. In patients with liver cirrhosis associated with portal-systemic shunt, and in many other patients with hepatocellular damage, the liver was not so clearly visualized, whereas radioactivity in other organs, especially the heart, became evident. The heart-to-liver uptake ratio at 20 min after administration (H/L ratio) was significantly higher in liver cirrhosis than in normals and patients with chronic hepatitis (p<0.001). The patients with esophageal varices showed a significantly higher H/L ratio compared with that in cirrhotic patients without esophageal varices (p<0.001). The H/L ratio also showed a significant difference (p<0.01) between Stage 1 and Stage 3 esophageal varices. Since there were many other patients with hepatocellular damage who had high H/L ratios similar to those in liver cirrhosis, the effect that hepatocellular damage has on the liver uptake of T1-201 is also considered. Our present data suggest that this noninvasive method seems to be useful in evaluating portal-to-systemic shunting.

  10. A Comprehensive Software and Database Management System for Glomerular Filtration Rate Estimation by Radionuclide Plasma Sampling and Serum Creatinine Methods

    PubMed Central

    Jha, Ashish Kumar

    2015-01-01

    Glomerular filtration rate (GFR) estimation by plasma sampling method is considered as the gold standard. However, this method is not widely used because the complex technique and cumbersome calculations coupled with the lack of availability of user-friendly software. The routinely used Serum Creatinine method (SrCrM) of GFR estimation also requires the use of online calculators which cannot be used without internet access. We have developed user-friendly software “GFR estimation software” which gives the options to estimate GFR by plasma sampling method as well as SrCrM. We have used Microsoft Windows® as operating system and Visual Basic 6.0 as the front end and Microsoft Access® as database tool to develop this software. We have used Russell's formula for GFR calculation by plasma sampling method. GFR calculations using serum creatinine have been done using MIRD, Cockcroft-Gault method, Schwartz method, and Counahan-Barratt methods. The developed software is performing mathematical calculations correctly and is user-friendly. This software also enables storage and easy retrieval of the raw data, patient's information and calculated GFR for further processing and comparison. This is user-friendly software to calculate the GFR by various plasma sampling method and blood parameter. This software is also a good system for storing the raw and processed data for future analysis. PMID:26097422

  11. Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases

    PubMed Central

    Elgendi, Mohamed

    2013-01-01

    The current state-of-the-art in automatic QRS detection methods show high robustness and almost negligible error rates. In return, the methods are usually based on machine-learning approaches that require sufficient computational resources. However, simple-fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. A typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design. PMID:24066054

  12. Modelos para la Unificacion de Conceptos, Metodos y Procedimientos Administrativos (Guidelines for Uniform Administrative Concepts, Methods, and Procedures).

    ERIC Educational Resources Information Center

    Serrano, Jorge A., Ed.

    These documents, discussed and approved during the first meeting of the university administrators affiliated with the Federation of Private Universities of Central America and Panama (FUPAC), seek to establish uniform administrative concepts, methods, and procedures, particularly with respect to budgetary matters. The documents define relevant…

  13. La Administradora: A Mixed Methods Study of the Resilience of Mexican American Women Administrators at Hispanic Serving Institutions

    ERIC Educational Resources Information Center

    Sanchez-Zamora, Sabrina Suzanne

    2013-01-01

    This mixed methods study explored the resilience of Mexican American women administrators at Hispanic Serving Institutions (HSIs). The women administrators that were considered in this study included department chairs, deans, and vice presidents in a four-year public HSI. There is an underrepresentation of Mexican American women in higher…

  14. Critically Evaluated Database of Environmental Properties: The Importance of Thermodynamic Relationships, Chemical Family Trends, and Prediction Methods

    NASA Astrophysics Data System (ADS)

    Brockbank, Sarah A.; Russon, Jenna L.; Giles, Neil F.; Rowley, Richard L.; Wilding, W. Vincent

    2013-11-01

    A database containing Henry's law constants, infinite dilution activity coefficients, and solubility data of industrially important chemicals has been compiled for aqueous systems. These properties are important in predicting the fate and transport of chemicals in the environment. The structure of this database is compatible with the existing 801 database and DIADEM interface, and data are included for a subset of compounds found in the 801 database. Thermodynamic relationships, chemical family trends, and predicted values were carefully considered when designating recommended values.

  15. Comparing subjective image quality measurement methods for the creation of public databases

    NASA Astrophysics Data System (ADS)

    Redi, Judith; Liu, Hantao; Alers, Hani; Zunino, Rodolfo; Heynderickx, Ingrid

    2010-01-01

    The Single Stimulus (SS) method is often chosen to collect subjective data testing no-reference objective metrics, as it is straightforward to implement and well standardized. At the same time, it exhibits some drawbacks; spread between different assessors is relatively large, and the measured ratings depend on the quality range spanned by the test samples, hence the results from different experiments cannot easily be merged . The Quality Ruler (QR) method has been proposed to overcome these inconveniences. This paper compares the performance of the SS and QR method for pictures impaired by Gaussian blur. The research goal is, on one hand, to analyze the advantages and disadvantages of both methods for quality assessment and, on the other, to make quality data of blur impaired images publicly available. The obtained results show that the confidence intervals of the QR scores are narrower than those of the SS scores. This indicates that the QR method enhances consistency across assessors. Moreover, QR scores exhibit a higher linear correlation with the distortion applied. In summary, for the purpose of building datasets of subjective quality, the QR approach seems promising from the viewpoint of both consistency and repeatability.

  16. Rate of bleeding-related episodes in adult patients with primary immune thrombocytopenia: a retrospective cohort study using a large administrative medical claims database in the US

    PubMed Central

    Altomare, Ivy; Cetin, Karynsa; Wetten, Sally; Wasser, Jeffrey S

    2016-01-01

    Background Immune thrombocytopenia (ITP) is a rare disorder characterized by low platelet counts and an increased tendency to bleed. The goal of ITP therapy is to treat or prevent bleeding. Actual rates of bleeding are unknown. Clinical trial data may not reflect real-world bleeding rates because of the inclusion of highly refractory patients and more frequent use of rescue therapy. Methods We used administrative medical claims data in the US to examine the occurrence of bleeding-related episodes (BREs) – a composite end point including bleeding and/or rescue therapy use – in adults diagnosed with primary ITP (2008–2012). BRE rates were calculated overall and by ITP phase and splenectomy status. Patients were followed from ITP diagnosis until death, disenrollment from the health plan, or June 30, 2013, whichever came first. Results We identified 6,651 adults diagnosed with primary ITP over the study period (median age: 53 years; 59% female). During 13,064 patient-years of follow-up, 3,768 patients (57%) experienced ≥1 BRE (1.08 BREs per patient-year; 95% confidence interval: 1.06–1.10). The majority (58%) of BREs consisted of rescue therapy use only. Common bleeding types were gastrointestinal hemorrhage, hematuria, ecchymosis, and epistaxis. Intracranial hemorrhage was reported in 74 patients (1%). Just over 7% of patients underwent splenectomy. Newly diagnosed and splenectomized patients had elevated BRE rates. Conclusion We provide current real-world estimates of BRE rates in adults with primary ITP. The majority of ITP patients experienced ≥1 BRE, and over half were defined by rescue therapy use alone. This demonstrates the importance of examining both bleeding and rescue therapy use to fully assess disease burden. PMID:27382333

  17. Facilitators and Barriers to Safe Medication Administration to Hospital Inpatients: A Mixed Methods Study of Nurses’ Medication Administration Processes and Systems (the MAPS Study)

    PubMed Central

    McLeod, Monsey; Barber, Nicholas; Franklin, Bryony Dean

    2015-01-01

    Context Research has documented the problem of medication administration errors and their causes. However, little is known about how nurses administer medications safely or how existing systems facilitate or hinder medication administration; this represents a missed opportunity for implementation of practical, effective, and low-cost strategies to increase safety. Aim To identify system factors that facilitate and/or hinder successful medication administration focused on three inter-related areas: nurse practices and workarounds, workflow, and interruptions and distractions. Methods We used a mixed-methods ethnographic approach involving observational fieldwork, field notes, participant narratives, photographs, and spaghetti diagrams to identify system factors that facilitate and/or hinder successful medication administration in three inpatient wards, each from a different English NHS trust. We supplemented this with quantitative data on interruptions and distractions among other established medication safety measures. Findings Overall, 43 nurses on 56 drug rounds were observed. We identified a median of 5.5 interruptions and 9.6 distractions per hour. We identified three interlinked themes that facilitated successful medication administration in some situations but which also acted as barriers in others: (1) system configurations and features, (2) behaviour types among nurses, and (3) patient interactions. Some system configurations and features acted as a physical constraint for parts of the drug round, however some system effects were partly dependent on nurses’ inherent behaviour; we grouped these behaviours into ‘task focused’, and ‘patient-interaction focused’. The former contributed to a more streamlined workflow with fewer interruptions while the latter seemed to empower patients to act as a defence barrier against medication errors by being: (1) an active resource of information, (2) a passive information resource, and/or (3) a

  18. Identification of crystals deposited in brain and kidney after xylitol administration by biochemical, histochemical, and electron diffraction methods.

    PubMed

    Evans, G W; Phillips, G; Mukherjee, T M; Snow, M R; Lawrence, J R; Thomas, D W

    1973-01-01

    The positive identification of crystals of calcium oxalate occurring in brain and kidney after xylitol administration is described. Biochemical, histochemical, conventional light and electron microscopical methods, including selected area electron diffraction, were used to characterize the crystals. PMID:4693896

  19. Identification of crystals deposited in brain and kidney after xylitol administration by biochemical, histochemical, and electron diffraction methods

    PubMed Central

    Evans, G. W.; Phillips, Gael; Mukherjee, T. M.; Snow, M. R.; Lawrence, J. R.; Thomas, D. W.

    1973-01-01

    The positive identification of crystals of calcium oxalate occurring in brain and kidney after xylitol administration is described. Biochemical, histochemical, conventional light and electron microscopical methods, including selected area electron diffraction, were used to characterize the crystals. Images PMID:4693896

  20. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The database on demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle based database services. The Database on Demand (DBoD) empowers the user to perform certain actions that had been traditionally done by database administrators, DBA's, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently open community version of MySQL and single instance Oracle database server. This article describes a technology approach to face this challenge, a service level agreement, the SLA that the project provides, and an evolution of possible scenarios.

  1. A new database of source time functions (STFs) extracted from the SCARDEC method

    NASA Astrophysics Data System (ADS)

    Vallée, Martin; Douet, Vincent

    2016-08-01

    SCARDEC method (Vallée et al., 2011) offers a natural access to the earthquakes source time functions (STFs), together with the 1st order earthquake source parameters (seismic moment, depth and focal mechanism). This article first aims at presenting some new approaches and related implementations done in order to automatically provide broadband STFs with the SCARDEC method, both for moderate and very large earthquakes. The updated method has been applied to all earthquakes above magnitude 5.8 contained in the NEIC-PDE catalog since 1992, providing a new consistent catalog of source parameters associated with STFs. This represents today a large catalog (2782 events on 2014/12/31) that we plan to update on a regular basis. It is made available through a web interface whose functionalities are described here.

  2. Comparative study of multimodal intra-subject image registration methods on a publicly available database

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Ghayoor, Ali; Johnson, Hans J.; Sonka, Milan

    2016-03-01

    This work reports on a comparative study between five manual and automated methods for intra-subject pair-wise registration of images from different modalities. The study includes a variety of inter-modal image registrations (MR-CT, PET-CT, PET-MR) utilizing different methods including two manual point-based techniques using rigid and similarity transformations, one automated point-based approach based on Iterative Closest Point (ICP) algorithm, and two automated intensity-based methods using mutual information (MI) and normalized mutual information (NMI). These techniques were employed for inter-modal registration of brain images of 9 subjects from a publicly available dataset, and the results were evaluated qualitatively via checkerboard images and quantitatively using root mean square error and MI criteria. In addition, for each inter-modal registration, a paired t-test was performed on the quantitative results in order to find any significant difference between the results of the studied registration techniques.

  3. Open Rotor Tone Shielding Methods for System Noise Assessments Using Multiple Databases

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Thomas, Russell H.; Lopes, Leonard V.; Burley, Casey L.; Van Zante, Dale E.

    2014-01-01

    Advanced aircraft designs such as the hybrid wing body, in conjunction with open rotor engines, may allow for significant improvements in the environmental impact of aviation. System noise assessments allow for the prediction of the aircraft noise of such designs while they are still in the conceptual phase. Due to significant requirements of computational methods, these predictions still rely on experimental data to account for the interaction of the open rotor tones with the hybrid wing body airframe. Recently, multiple aircraft system noise assessments have been conducted for hybrid wing body designs with open rotor engines. These assessments utilized measured benchmark data from a Propulsion Airframe Aeroacoustic interaction effects test. The measured data demonstrated airframe shielding of open rotor tonal and broadband noise with legacy F7/A7 open rotor blades. Two methods are proposed for improving the use of these data on general open rotor designs in a system noise assessment. The first, direct difference, is a simple octave band subtraction which does not account for tone distribution within the rotor acoustic signal. The second, tone matching, is a higher-fidelity process incorporating additional physical aspects of the problem, where isolated rotor tones are matched by their directivity to determine tone-by-tone shielding. A case study is conducted with the two methods to assess how well each reproduces the measured data and identify the merits of each. Both methods perform similarly for system level results and successfully approach the experimental data for the case study. The tone matching method provides additional tools for assessing the quality of the match to the data set. Additionally, a potential path to improve the tone matching method is provided.

  4. Computer networks for financial activity management, control and statistics of databases of economic administration at the Joint Institute for Nuclear Research

    NASA Astrophysics Data System (ADS)

    Tyupikova, T. V.; Samoilov, V. N.

    2003-04-01

    Modern information technologies urge natural sciences to further development. But it comes together with evaluation of infrastructures, to spotlight favorable conditions for the development of science and financial base in order to prove and protect legally new research. Any scientific development entails accounting and legal protection. In the report, we consider a new direction in software, organization and control of common databases on the example of the electronic document handling, which functions in some departments of the Joint Institute for Nuclear Research.

  5. Description of two waterborne disease outbreaks in France: a comparative study with data from cohort studies and from health administrative databases.

    PubMed

    Mouly, D; Van Cauteren, D; Vincent, N; Vaissiere, E; Beaudeau, P; Ducrot, C; Gallay, A

    2016-02-01

    Waterborne disease outbreaks (WBDO) of acute gastrointestinal illness (AGI) are a public health concern in France. Their occurrence is probably underestimated due to the lack of a specific surveillance system. The French health insurance database provides an interesting opportunity to improve the detection of these events. A specific algorithm to identify AGI cases from drug payment reimbursement data in the health insurance database has been previously developed. The purpose of our comparative study was to retrospectively assess the ability of the health insurance data to describe WBDO. Data from the health insurance database was compared with the data from cohort studies conducted in two WBDO in 2010 and 2012. The temporal distribution of cases, the day of the peak and the duration of the epidemic, as measured using the health insurance data, were similar to the data from one of the two cohort studies. However, health insurance data accounted for 54 cases compared to the estimated 252 cases accounted for in the cohort study. The accuracy of using health insurance data to describe WBDO depends on the medical consultation rate in the impacted population. As this is never the case, data analysis underestimates the total number of AGI cases. However this data source can be considered for the development of a detection system of a WBDO in France, given its ability to describe an epidemic signal. PMID:26194500

  6. A Comparison of Bibliographic Instruction Methods on CD-ROM Databases.

    ERIC Educational Resources Information Center

    Davis, Dorothy F.

    1993-01-01

    Describes a study of four methods used to instruct college students on searching PsycLIT on CD-ROM: (1) lecture/demonstration; (2) lecture/demonstration using a liquid crystal display (LCD) panel; (3) video; and (4) a computer-based tutorial. Performance data are analyzed, and factors to consider when developing a CD-ROM bibliographic instruction…

  7. RAId_DbS: Method for Peptide ID using Database Search with Accurate Statistics

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey; Yu, Yi-Kuo

    2007-03-01

    The key to proteomics studies, essential in systems biology, is peptide identification. Under tandem mass spectrometry, each spectrum generated consists of a list of mass/charge peaks along with their intensities. Software analysis is then required to identify from the spectrum peptide candidates that best interpret the spectrum. The library search, which compares the spectral peaks against theoretical peaks generated by each peptide in a library, is among the most popular methods. This method, although robust, lacks good quantitative statistical underpinning. As we show, many library search algorithms suffer from statistical instability. The need for a better statistical basis prompted us to develop RAId_DbS. Taking into account the skewness in the peak intensity distribution while scoring peptides, RAId_DbS provides an accurate statistical significance assignment to each peptide candidate. RAId_DbS will be a valuable tool especially when one intends to identify proteins through peptide identifications.

  8. A new database of Source Time Functions (STFs) extracted from the SCARDEC method

    NASA Astrophysics Data System (ADS)

    Vallée, Martin; Douet, Vincent

    2016-04-01

    SCARDEC method (Vallée et al., 2011) offers a natural access to the earthquakes source time functions (STFs), together with the first order earthquake source parameters (seismic moment, depth and focal mechanism). We first present here some new approaches and related implementations done in order to automatically provide broadband STFs with the SCARDEC method, both for moderate (down to magnitude 5.8) and very large earthquakes. The updated method has been applied to all the earthquakes since 1992, providing a new consistent catalog of source parameters associated with STFs. Applications are expected to be various, as STFs offer quantitative information on the source process, helping fundamental research on earthquake mechanics or more applied studies related to seismic hazard. On the other hand, they can be also seen as a tool for Earth structure analyses, where the excitation of the medium at the source has to be known. The catalog now contains 2889 events (including earthquakes till 2014/12/31), and we plan to update it on a regular basis. It is made available through a web interface whose functionalities are described here.

  9. Evaluating current automatic de-identification methods with Veteran’s health administration clinical documents

    PubMed Central

    2012-01-01

    Background The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act “Safe Harbor” method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes; and when new methods are needed to improve performance. Methods We installed and evaluated five text de-identification systems “out-of-the-box” using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora and evaluated with our VHA corpus, and also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique ‘PHI’ category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F2-measure. Results Overall, systems based on rules and pattern matching achieved better recall, and precision was always better with systems based on machine learning approaches. The highest “out-of-the-box” F2-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross validation experiment allowed for an increase of the F2-measure to 79% with partial matches. Conclusions The

  10. Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database

    PubMed Central

    Turner, Erick H.; Knoepflmacher, Daniel; Shapley, Lee

    2012-01-01

    Background Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration (FDA), can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy. Methods and Findings FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection (risperidone LAI), and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four (17%) were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials (0.23, 95% confidence interval 0.07 to 0.39) was less than half that for the published trials (0.47, 95% confidence interval 0.40 to 0.54), a difference that was significant. Conclusions The magnitude of publication bias found for antipsychotics was less than that found

  11. A search for streams and associations in meteor databases. Method of Indices

    NASA Astrophysics Data System (ADS)

    Svoreň, J.; Neslušan, L.; Porubčan, V.

    2000-08-01

    A new method of searching for minor meteor streams and associations is presented and discussed. The procedure, based only on mathematical statistics, enables a parallel separation of major and minor streams or associations. The approach utilizes a division of the ranges of examined parameters into equidistant intervals. The method is tested on the IAU Meteor Data Center Lund catalogue of precise photographic orbits representing the most extensive set of photographic meteor orbits. Besides the five orbital elements incorporated in the Southworth-Hawkins D-criterion, we have also included in the procedure the coordinates of the radiant which belong to the most accurately known parameters and the geocentric velocity as a significant parameter characteristic for physically related orbits. The basic idea of the procedure is a division of the observed ranges of parameters into a number of equidistant intervals and assignment of indices to a meteor according to the intervals pertinent to its parameters. The meteors with equal indices are regarded as mutually related. Since various parameters listed in the catalogue contain various relative errors, it is necessary to use several intervals in the division of each parameter to obtain a good fit with the real orbital distribution. The relative ratios, approximated by small integers, corresponding to the reciprocal values of the relative errors, were applied as the basic numbers for the division of the parameters. To test the quality of this method, the first step presented in this paper is aimed at wider intervals providing a less detailed classification (a smaller branching). In this step all the major streams (except of the northern branch of δ-Aquarids) were identified, confirming the efficiency of the procedure. After combining the related groups, 16 streams were identified. The search program also identifies widely spread Taurids. There are separated orbits pertinent to some minor streams such as the o-Draconids, κ

  12. Methods and pitfalls in searching drug safety databases utilising the Medical Dictionary for Regulatory Activities (MedDRA).

    PubMed

    Brown, Elliot G

    2003-01-01

    The Medical Dictionary for Regulatory Activities (MedDRA) is a unified standard terminology for recording and reporting adverse drug event data. Its introduction is widely seen as a significant improvement on the previous situation, where a multitude of terminologies of widely varying scope and quality were in use. However, there are some complexities that may cause difficulties, and these will form the focus for this paper. Two methods of searching MedDRA-coded databases are described: searching based on term selection from all of MedDRA and searching based on terms in the safety database. There are several potential traps for the unwary in safety searches. There may be multiple locations of relevant terms within a system organ class (SOC) and lack of recognition of appropriate group terms; the user may think that group terms are more inclusive than is the case. MedDRA may distribute terms relevant to one medical condition across several primary SOCs. If the database supports the MedDRA model, it is possible to perform multiaxial searching: while this may help find terms that might have been missed, it is still necessary to consider the entire contents of the SOCs to find all relevant terms and there are many instances of incomplete secondary linkages. It is important to adjust for multiaxiality if data are presented using primary and secondary locations. Other sources for errors in searching are non-intuitive placement and the selection of terms as preferred terms (PTs) that may not be widely recognised. Some MedDRA rules could also result in errors in data retrieval if the individual is unaware of these: in particular, the lack of multiaxial linkages for the Investigations SOC, Social circumstances SOC and Surgical and medical procedures SOC and the requirement that a PT may only be present under one High Level Term (HLT) and one High Level Group Term (HLGT) within any single SOC. Special Search Categories (collections of PTs assembled from various SOCs by

  13. Djeen (Database for Joomla!’s Extensible Engine): a research information management system for flexible multi-technology project administration

    PubMed Central

    2013-01-01

    Background With the advance of post-genomic technologies, the need for tools to manage large scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly with several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. Findings We developed Djeen (Database for Joomla!’s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application, designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing of heterogeneous data with the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Conclusion Djeen allows managing project associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy and user permissions are finely-grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material. PMID:23742665

  14. Threshold detection for the generalized Pareto distribution: Review of representative methods and application to the NOAA NCDC daily rainfall database

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Mamalakis, Antonios; Puliga, Michelangelo; Deidda, Roberto

    2016-04-01

    In extreme excess modeling, one fits a generalized Pareto (GP) distribution to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as nonparametric methods that are intended to locate the changing point between extreme and nonextreme regions of the data, graphical methods where one studies the dependence of GP-related metrics on the threshold level u, and Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u that a GP distribution model is applicable. Here we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 overcentennial daily rainfall records from the NOAA-NCDC database. We find that nonparametric methods are generally not reliable, while methods that are based on GP asymptotic properties lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e., on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on preasymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results.

  15. Administrative Support and Alternatively Certified Teachers: A Mixed Methods Study on New Teacher Support and Retention

    ERIC Educational Resources Information Center

    Anderson, Erin M.

    2012-01-01

    A non-experimental study was conducted to examine the perceived administrative support needs of alternatively certified teachers and determine their impact on teacher retention. The study sought to identify the most valued administrative support needs of alternatively-certified teachers; to compare those needs by gender and tier teaching level;…

  16. Updating the 2001 National Land Cover Database land cover classification to 2006 by using Landsat imagery change detection methods

    USGS Publications Warehouse

    Xian, G.; Homer, C.; Fry, J.

    2009-01-01

    The recent release of the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001, which represents the nation's land cover status based on a nominal date of 2001, is widely used as a baseline for national land cover conditions. To enable the updating of this land cover information in a consistent and continuous manner, a prototype method was developed to update land cover by an individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season in 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, land cover classifications at the full NLCD resolution for 2006 areas of change were completed by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain several metropolitan areas including Seattle, Washington; San Diego, California; Sioux Falls, South Dakota; Jackson, Mississippi; and Manchester, New Hampshire. Results from the five study areas show that the vast majority of land cover change was captured and updated with overall land cover classification accuracies of 78.32%, 87.5%, 88.57%, 78.36%, and 83.33% for these areas. The method optimizes mapping efficiency and has the potential to provide users a flexible method to generate updated land cover at national and regional scales by using NLCD 2001 as the baseline. ?? 2009 Elsevier Inc.

  17. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  18. Generalized method for probability-based peptide and protein identification from tandem mass spectrometry data and sequence database searching.

    PubMed

    Ramos-Fernández, Antonio; Paradela, Alberto; Navajas, Rosana; Albar, Juan Pablo

    2008-09-01

    Tandem mass spectrometry-based proteomics is currently in great demand of computational methods that facilitate the elimination of likely false positives in peptide and protein identification. In the last few years, a number of new peptide identification programs have been described, but scores or other significance measures reported by these programs cannot always be directly translated into an easy to interpret error rate measurement such as the false discovery rate. In this work we used generalized lambda distributions to model frequency distributions of database search scores computed by MASCOT, X!TANDEM with k-score plug-in, OMSSA, and InsPecT. From these distributions, we could successfully estimate p values and false discovery rates with high accuracy. From the set of peptide assignments reported by any of these engines, we also defined a generic protein scoring scheme that enabled accurate estimation of protein-level p values by simulation of random score distributions that was also found to yield good estimates of protein-level false discovery rate. The performance of these methods was evaluated by searching four freely available data sets ranging from 40,000 to 285,000 MS/MS spectra. PMID:18515861

  19. Electronic Databases.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    1985-01-01

    Presents examples of bibliographic, full-text, and numeric databases. Also discusses how to access these databases online, aids to online retrieval, and several issues and trends (including copyright and downloading, transborder data flow, use of optical disc/videodisc technology, and changing roles in database generation and processing). (JN)

  20. Visualization of multidimensional database

    NASA Astrophysics Data System (ADS)

    Lee, Chung

    2008-01-01

    The concept of multidimensional databases has been extensively researched and wildly used in actual database application. It plays an important role in contemporary information technology, but due to the complexity of its inner structure, the database design is a complicated process and users are having a hard time fully understanding and using the database. An effective visualization tool for higher dimensional information system helps database designers and users alike. Most visualization techniques focus on displaying dimensional data using spreadsheets and charts. This may be sufficient for the databases having three or fewer dimensions but for higher dimensions, various combinations of projection operations are needed and a full grasp of total database architecture is very difficult. This study reviews existing visualization techniques for multidimensional database and then proposes an alternate approach to visualize a database of any dimension by adopting the tool proposed by Kiviat for software engineering processes. In this diagramming method, each dimension is represented by one branch of concentric spikes. This paper documents a C++ based visualization tool with extensive use of OpenGL graphics library and GUI functions. Detailed examples of actual databases demonstrate the feasibility and effectiveness in visualizing multidimensional databases.

  1. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database

    PubMed Central

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Background: Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens CO. (EBSCO), which has presented different search methods that can be used based on the needs of the users. Visual search and MeSH-controlled search methods are among the most common methods. The goal of this research was to compare the precision of the retrieved sources in the EBSCO Medline base using MeSH-controlled and visual search methods. Methods: This research was a semi-empirical study. By holding training workshops, 70 students of higher education in different educational departments of Kashan University of Medical Sciences were taught MeSH-Controlled and visual search methods in 2012. Then, the precision of 300 searches made by these students was calculated based on Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent sample T Test, and three precisions obtained with the three precision formulas were studied for the two search methods. Results: The mean precision of the visual method was greater than that of the MeSH-Controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and their mean precisions were significantly different (P <0.001). Sixty-five percent of the researchers indicated that, although the visual method was better than the controlled method, the control of keywords in the controlled method resulted in finding more proper keywords for the searches. Fifty-three percent of the participants in the research also mentioned that the use of the combination of the two methods produced better results. Conclusion: For users, it is more appropriate to use a natural, language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual

  2. Comparison of two formaldehyde administration methods of in ovo-injected eggs.

    PubMed

    Steinlage, Sara J Throne; Sander, Jean E; Wilson, Jeanna L

    2002-01-01

    Formaldehyde administration in the hatchery can be very useful in decreasing microbial numbers. However, its use is controversial because of the adverse effects that can occur to chicks and people. This study was designed to look at alternative methods of application of formaldehyde in the hatchery. In addition, the study compared the effects of these methods of application on in ovo-and non-in ovo-injected eggs. All in ovo-injected eggs were given diluent only with no vaccine or antibiotic added. In hatchers containing both in ovo-injected eggs and non-in ovo-injected eggs, formaldehyde was administered two ways, dose (DOSE) and constant rate infusion (CRI). In the DOSE hatcher, 12 ml of formaldehyde was administered at one time every 12 hr, whereas in the CRI hatcher, the same volume was administered at a rate of 1 ml/hr over a 12-hr period. A control (CONT) hatcher received 12 ml of distilled water at the same time that the DOSE hatcher was given formaldehyde. In the DOSE hatcher, a peak concentration of formaldehyde of 102 ppm was reached. The CRI was maintained at approximately 20 ppm of formaldehyde. At pipping, the aerosol bacterial load in the hatchers receiving formaldehyde (DOSE, 130 colony-forming units [CFU]/m3; CRI, 82.5 CFU/m3) was significantly less than in the CONT hatcher (235 CFU/m3). At hatch, the CRI (337.5 CFU/m3) was not able to control bacterial levels and only the DOSE hatcher (150 CFU/m3) had a significantly lower aerosol bacterial count. The CRI non-in ovo-injected eggs (93.39%) had a significantly higher percentage of hatch of fertile compared with non-in ovo-injected eggs exposed to water (84.27%). In ovo-injected eggs in CONT and DOSE treatment groups contained significantly higher percentages of visual contamination than non-in on-injected eggs in the same hatchers. This difference had numerical significance only in the treatment groups within the CRI hatcher. The chicks were then placed into replicate treatment groups and grown for 14

  3. Different profiles of quercetin metabolites in rat plasma: comparison of two administration methods.

    PubMed

    Kawai, Yoshichika; Saito, Satomi; Nishikawa, Tomomi; Ishisaka, Akari; Murota, Kaeko; Terao, Junji

    2009-03-23

    The bioavailability of polyphenols in human and rodents has been discussed regarding their biological activity. We found different metabolite profiles of quercetin in rat plasma between two administration procedures. A single intragastric administration (50 mg/kg) resulted in the appearance of a variety of metabolites in the plasma, whereas only a major fraction was detected by free access (1% quercetin). The methylated/non-methylated metabolites ratio was much higher in the free access group. Mass spectrometric analyses showed that the fraction from free access contained highly conjugated quercetin metabolites such as sulfo-glucuronides of quercetin and methylquercetin. The metabolite profile of human plasma after an intake of onion was similar to that with intragastric administration in rats. In vitro oxidation of human low-density lipoprotein showed that methylation of the catechol moiety of quercetin significantly attenuated the antioxidative activity. These results might provide information about the bioavailability of quercetin when conducting animal experiments. PMID:19270373

  4. Characterization of Listeria monocytogenes recovered from imported cheese contributed to the National PulseNet Database by the U.S. Food and Drug Administration from 2001 to 2008.

    PubMed

    Timbo, Babgaleh B; Keys, Christine; Klontz, Karl

    2010-08-01

    Imported foods must meet the same U.S. Food and Drug Administration (FDA) standards as domestic foods. The FDA determines whether an imported food is in compliance with the Federal Food, Drug, and Cosmetic Act. Pursuant to its regulatory activities, the FDA conducts compliance surveillance on imported foods offered for entry into the U.S. commerce. The National PulseNet Database is the molecular surveillance network for foodborne infections and is widely used to provide real-time subtyping support to epidemiologic investigations of foodborne diseases. FDA laboratories use pulsed-field gel electrophoresis to subtype foodborne pathogens recovered from imported foods and submit the molecular patterns to the National PulseNet Database at the Centers for Disease Control and Prevention. There were 60 isolates of Listeria monocytogenes in the FDA Field Accomplishment and Compliance Tracking System from 2001 to 2008 due to cheese imported from the following countries: Mexico (n=21 isolates), Italy (19), Israel (9), Portugal (5), Colombia (3), Greece (2), and Spain (1). We observed genetic diversity of L. monocytogenes isolates and genetic relatedness among strains recovered from imported cheese products coming to the United States from different countries. Consistent characterization of L. monocytogenes isolates recovered from imported cheeses, accompanied by epidemiologic investigations to ascertain human illness associated with these strains, could be helpful in the control of listeriosis acquired from imported cheeses. PMID:20819363

  5. Faculty and Administrator Perspectives of Merit Pay Compensation Systems in Private Higher Education: A Mixed Methods Analysis

    ERIC Educational Resources Information Center

    Power, Anne L.

    2013-01-01

    The purpose of this explanatory sequential mixed methods study is to explore faculty and administrator perspectives of faculty merit pay compensation systems in private, higher education institutions. The study focuses on 10 small, private, four-year institutions which are religiously affiliated. All institutions are located in Nebraska, Iowa, and…

  6. The Relative Value of Skills, Knowledge, and Teaching Methods in Explaining Master of Business Administration (MBA) Program Return on Investment

    ERIC Educational Resources Information Center

    van Auken, Stuart; Wells, Ludmilla Gricenko; Chrysler, Earl

    2005-01-01

    In this article, the authors provide insight into alumni perceptions of Master of Business Administration (MBA) program return on investment (ROI). They sought to assess the relative value of skills, knowledge, and teaching methods in explaining ROI. By developing insight into the drivers of ROI, the real utility of MBA program ingredients can be…

  7. Database in Artificial Intelligence.

    ERIC Educational Resources Information Center

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  8. Toward a New Vocational and Career Education in the Cleveland City Schools: A Context Statement for Use with the Data-based Course Assessment Method.

    ERIC Educational Resources Information Center

    Grossman, Gary M.; And Others

    The Data-based Course Assessment Method (DCAM) assists curriculum managers in making appropriate program-related decisions. To set the context for DCAM in the Cleveland (Ohio) Public Schools, a study was made of Cleveland's employer/business community attitudes. Five characteristics of Cleveland in the context of the 21st century were examined:…

  9. Statistical databases

    SciTech Connect

    Kogalovskii, M.R.

    1995-03-01

    This paper presents a review of problems related to statistical database systems, which are wide-spread in various fields of activity. Statistical databases (SDB) are referred to as databases that consist of data and are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether the present Database Management Systems (DBMS) satisfy the SDB requirements. Some actual research directions in SDB systems are considered.

  10. HPERD Administrators' Perspectives Concerning Importance and Practice of Selected Marketing Methods.

    ERIC Educational Resources Information Center

    Ballew, Jerry L.

    This paper reports on the critical role that marketing can have on the health, physical education, recreation, and dance professions (HPERD) and on a national survey of college administrators in the field and their attitudes and practices at the college level. The first half of the paper briefly traces the growing impact of marketing on service…

  11. Managing Vocational Training Systems: A Handbook for Senior Administrators. Quantitative Methods in Social Protection Series.

    ERIC Educational Resources Information Center

    Gasskov, Vladimir

    This book provides state-of-the-art materials relating to the management and organization of public VET (vocational education and training) systems and suggests a framework for developing the management competence of senior administrators. The book is organized by areas of management function and consists of 6 modules with 19 learning units.…

  12. Development of the method and U.S. normalization database for Life Cycle Impact Assessment and sustainability metrics.

    PubMed

    Bare, Jane; Gloria, Thomas; Norris, Gregory

    2006-08-15

    Normalization is an optional step within Life Cycle Impact Assessment (LCIA) that may be used to assist in the interpretation of life cycle inventory data as well as life cycle impact assessment results. Normalization transforms the magnitude of LCI and LCIA results into relative contribution by substance and life cycle impact category. Normalization thus can significantly influence LCA-based decisions when tradeoffs exist. The U. S. Environmental Protection Agency (EPA) has developed a normalization database based on the spatial scale of the 48 continental U.S. states, Hawaii, Alaska, the District of Columbia, and Puerto Rico with a one-year reference time frame. Data within the normalization database were compiled based on the impact methodologies and lists of stressors used in TRACI-the EPA's Tool for the Reduction and Assessment of Chemical and other environmental Impacts. The new normalization database published within this article may be used for LCIA case studies within the United States, and can be used to assist in the further development of a global normalization database. The underlying data analyzed for the development of this database are included to allow the development of normalization data consistent with other impact assessment methodologies as well. PMID:16955915

  13. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  14. Methods to examine molecular changes and behavioral effects of drug administration.

    PubMed

    Michna, Laura; Brenz-Verca, Maria; Dreyer, Jean-Luc; Wagner, George C

    2002-06-01

    Our laboratory has developed an integrative approach to study the molecular changes and behavioral effects of drug administration consisting of a combination of quantitative real-time reverse transcription polymerase chain reaction, RNA isolation and differential display, in situ hybridization, place preference conditioning and high-performance liquid chromatography. Although the techniques are not novel, this multi-systems approach allows for the examination of gene expression changes following the administration of drugs of abuse such as cocaine, and allows for an analysis of behavior and neurochemistry of gene knockout mice. As a result of this combination of techniques, we have been able to determine the expression, location and function of the CD81 protein. Specifically, CD81 was induced exclusively in the nucleus accumbens by cocaine treatment. Subsequent behavioral testing of CD81 knockout mice revealed these mice displayed altered sensitivity to cocaine. PMID:12113778

  15. [Animal models of drug dependence using the drug self-administration method].

    PubMed

    Yamamoto, T; Yabuuchi, K; Yamaguchi, T; Nakamichi, M

    2001-01-01

    This paper will review 1) experimental models of drug-seeking behavior and 2) mechanisms underlying the behavior, focusing on cocaine self-administration. After the acquisition of self-administration, vigorous lever-pressing is generally observable after the drug was replaced by saline. This lever-pressing behavior under saline infusion can be considered "drug-seeking behavior". Drug-seeking behavior is reinstated by non-contingent injection of the drug, stress exposure and presentation of drug-associated stimuli even after extinction. This is called a relapse/reinstatement model. Electrophysiological studies showed that the majority of accumbal neurons is tonically inhibited during cocaine self-administration and exhibited phasic increases in firing time-locked to cocaine self-infusion, which might represent the craving state or drive animals to drug-seeking behavior. Voltammetry and microdialysis studies indicated that the timing of drug-seeking responses can be predicted from fluctuations in accumbal extracellular dopamine concentration. Whereas dopamine D2-like agonists reinstated extinguished cocaine-seeking behavior, D1-like agonists prevented the relapse in cocaine-seeking behavior induced by cocaine itself. Given that an AMPA receptor antagonist, but not dopamine antagonist, prevented cocaine-seeking behavior induced by cocaine, glutamate transmission in the nucleus accumbens is thought to be important for expression of craving or drug-seeking behavior. PMID:11233296

  16. Comparison of two methods of administration of amphetamine on the dynamics of dopaminergic neurons in the rat.

    PubMed

    Leccese, A P; Smith, D G; Geis, L S; Lyness, W H

    1986-09-01

    Dopamine (DA) and its metabolite dihydroxyphenylacetic acid (DOPAC) in brain were examined in the striatum and nucleus accumbens septi after the administration of amphetamine by two different methods. A computer-controlled device was constructed to deliver intravenous injections of amphetamine in patterns mimicking those of animals in a self-administration paradigm, i.e. a total of 65 injections of 0.125 mg/kg/injection over 8 hr [total; 8.13 mg/kg (22.05 mumoles/kg)]. The second method was the intraperitoneal injection of 8.13 mg/kg as a single bolus. Control animals were intravenously or intraperitoneally administered saline. The effects of the two injection methods on the concentrations of DA and DOPAC were quite distinct at early times. This may in part be due to differences in the peak concentrations of amphetamine in brain achieved by the two regimens. Differences still persisted 48 hr after injection, particularly in the striatum. Increased levels of DA and DOPAC were observed at this time after the computer-controlled injections, while significantly decreased DA in the striatum is found after intraperitoneal bolus injections. These data strongly suggest that the method of administration of amphetamine can substantially alter the effects and possible toxicity of the drug on dopaminergic systems. PMID:3774131

  17. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  18. A Novel Forensic Tool for the Characterization and Comparison of Printing Ink Evidence: Development and Evaluation of a Searchable Database Using Data Fusion of Spectrochemical Methods.

    PubMed

    Trejos, Tatiana; Torrione, Peter; Corzo, Ruthmara; Raeva, Ana; Subedi, Kiran; Williamson, Rhett; Yoo, Jong; Almirall, Jose

    2016-05-01

    A searchable printing ink database was designed and validated as a tool to improve the chemical information gathered from the analysis of ink evidence. The database contains 319 samples from printing sources that represent some of the global diversity in toner, inkjet, offset, and intaglio inks. Five analytical methods were used to generate data to populate the searchable database including FTIR, SEM-EDS, LA-ICP-MS, DART-MS, and Py-GC-MS. The search algorithm based on partial least-squares discriminant analysis generates a similarity "score" used for the association between similar samples. The performance of a particular analytical method to associate similar inks was found to be dependent on the ink type with LA-ICP-MS performing best, followed by SEM-EDS and DART-MS methods, while FTIR and Py-GC-MS were less useful in association but were still useful for classification purposes. Data fusion of data collected from two complementary methods (i.e., LA-ICP-MS and DART-MS) improves the classification and association of similar inks. PMID:27122411

  19. BIOMARKERS DATABASE

    EPA Science Inventory

    This database was developed by assembling and evaluating the literature relevant to human biomarkers. It catalogues and evaluates the usefulness of biomarkers of exposure, susceptibility and effect which may be relevant for a longitudinal cohort study. In addition to describing ...

  20. Validation of the multivariable In-hospital Mortality for PulmonAry embolism using Claims daTa (IMPACT) prediction rule within an all-payer inpatient administrative claims database

    PubMed Central

    Coleman, Craig I; Kohn, Christine G; Crivera, Concetta; Schein, Jeffrey R; Peacock, W Frank

    2015-01-01

    Objective To validate the In-hospital Mortality for PulmonAry embolism using Claims daTa (IMPACT) prediction rule, in a database consisting only of inpatient claims. Design Retrospective claims database analysis. Setting The 2012 Healthcare Cost and Utilization Project National Inpatient Sample. Participants Pulmonary embolism (PE) admissions were identified by an International Classification of Diseases, ninth edition (ICD-9) code either in the primary position or secondary position when accompanied by a primary code for a PE complication. The multivariable IMPACT rule, which includes age and 11 comorbidities, was used to estimate patients’ probability of in-hospital mortality and classify them as low or higher risk (≤1.5% deemed low risk). Primary and secondary outcome measures The rule's sensitivity, specificity, positive and negative predictive values (PPV and NPV) and area under the receiver operating characteristic curve statistic for predicting in-hospital mortality with accompanying 95% CIs. Results A total of 34 108 admissions for PE were included, with a 3.4% in-hospital case-fatality rate. IMPACT classified 11 025 (32.3%) patients as low risk, and low risk patients had lower in-hospital mortality (OR, 0.17, 95% CI 0.13 to 0.21), shorter length of stay (−1.2 days, p<0.001) and lower total treatment costs (−$3074, p<0.001) than patients classified as higher risk. IMPACT had a sensitivity of 92.4%, 95% CI 90.7 to 93.8 and specificity of 33.2%, 95% CI 32.7 to 33.7 for classifying mortality risk. It had a high NPV (>99%), low PPV (4.6%) and an AUC of 0.74, 95% CI 0.73 to 0.76. Conclusions The IMPACT rule appeared valid when used in this all payer, inpatient only administrative claims database. Its high sensitivity and NPV suggest the probability of in-hospital death in those classified as low risk by IMPACT was minimal. PMID:26510731

  1. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.

  2. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    SciTech Connect

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.

  3. Networked Administration Streamlines Operations.

    ERIC Educational Resources Information Center

    School Planning and Management, 1996

    1996-01-01

    An Iowa school district has retooled its computer systems for more standardized administration. In addition to administration, the district is doing inhouse databasing of financial accounting, and doing inhouse scheduling and grade reporting. A partnership with the Chamber of Commerce contributed $500,000 for the network system. (MLF)

  4. Pediatric immunization-related safety incidents in primary care: A mixed methods analysis of a national database

    PubMed Central

    Rees, Philippa; Edwards, Adrian; Powell, Colin; Evans, Huw Prosser; Carter, Ben; Hibbert, Peter; Makeham, Meredith; Sheikh, Aziz; Donaldson, Liam; Carson-Stevens, Andrew

    2015-01-01

    Background Children are scheduled to receive 18–20 immunizations before their 18th birthday in England and Wales; this approximates to 13 million vaccines administered per annum. Each immunization represents a potential opportunity for immunization-related error and effective immunization is imperative to maintain the public health benefit from immunization. Using data from a national reporting system, this study aimed to characterize pediatric immunization-related safety incident reports from primary care in England and Wales between 2002 and 2013. Methods A cross-sectional mixed methods study was undertaken. This comprised reading the free-text of incident reports and applying codes to describe incident type, potential contributory factors, harm severity, and incident outcomes. A subsequent thematic analysis was undertaken to interpret the most commonly occurring codes, such as those describing the incident, events leading up to it and reported contributory factors, within the contexts they were described. Results We identified 1745 reports and most (n = 1077, 61.7%) described harm outcomes including three deaths, 67 reports of moderate harm and 1007 reports of low harm. Failure of timely vaccination was the potential cause of three child deaths from meningitis and pneumonia, and described in a further 113 reports. Vaccine administration incidents included the wrong number of doses (n = 476, 27.3%), wrong timing (n = 294, 16.8%), and wrong vaccine (n = 249, 14.3%). Documentation failures were frequently implicated. Socially and medically vulnerable children were commonly described. Conclusion This is the largest examination of reported contributory factors for immunization-related patient safety incidents in children. Our findings suggest investments in IT infrastructure to support data linkage and identification of risk predictors, development of consultation models that promote the role of parents in mitigating safety incidents, and improvement

  5. 76 FR 74804 - Federal Housing Administration (FHA) First Look Sales Method Under the Neighborhood Stabilization...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... Look Sales Method (see 75 FR 41226, first column). Through this notice, HUD announces that it has... availability of a universal NAID to aid eligible purchasers under the First Look Sales method. DATES: The dates... Relay Service at (800) 877-8339. SUPPLEMENTARY INFORMATION: On July 15, 2010, at 75 FR 41255,...

  6. [DATABASE FOR DEPOSITARY DEPARTMENT OF MICROORGANISMS].

    PubMed

    Brovarnyk, V; Golovach, T M

    2015-01-01

    The database on microorganism culture depositary is designed with using MS Access 2010. Three major modules, namely general description, administration, storage, compound database kernel. Description of information in these modules is given. Web page of the depositary is developed on the database. PMID:26638488

  7. Development of an Aerodynamic Analysis Method and Database for the SLS Service Module Panel Jettison Event Utilizing Inviscid CFD and MATLAB

    NASA Technical Reports Server (NTRS)

    Applebaum, Michael P.; Hall, Leslie, H.; Eppard, William M.; Purinton, David C.; Campbell, John R.; Blevins, John A.

    2015-01-01

    This paper describes the development, testing, and utilization of an aerodynamic force and moment database for the Space Launch System (SLS) Service Module (SM) panel jettison event. The database is a combination of inviscid Computational Fluid Dynamic (CFD) data and MATLAB code written to query the data at input values of vehicle/SM panel parameters and return the aerodynamic force and moment coefficients of the panels as they are jettisoned from the vehicle. The database encompasses over 5000 CFD simulations with the panels either in the initial stages of separation where they are hinged to the vehicle, in close proximity to the vehicle, or far enough from the vehicle that body interference effects are neglected. A series of viscous CFD check cases were performed to assess the accuracy of the Euler solutions for this class of problem and good agreement was obtained. The ultimate goal of the panel jettison database was to create a tool that could be coupled with any 6-Degree-Of-Freedom (DOF) dynamics model to rapidly predict SM panel separation from the SLS vehicle in a quasi-unsteady manner. Results are presented for panel jettison simulations that utilize the database at various SLS flight conditions. These results compare favorably to an approach that directly couples a 6-DOF model with the Cart3D Euler flow solver and obtains solutions for the panels at exact locations. This paper demonstrates a method of using inviscid CFD simulations coupled with a 6-DOF model that provides adequate fidelity to capture the physics of this complex multiple moving-body panel separation event.

  8. A General Method for Evaluating Deep Brain Stimulation Effects on Intravenous Methamphetamine Self-Administration

    PubMed Central

    Batra, Vinita; Guerin, Glenn F.; Goeders, Nicholas E.; Wilden, Jessica A.

    2016-01-01

    Substance use disorders, particularly to methamphetamine, are devastating, relapsing diseases that disproportionally affect young people. There is a need for novel, effective and practical treatment strategies that are validated in animal models. Neuromodulation, including deep brain stimulation (DBS) therapy, refers to the use of electricity to influence pathological neuronal activity and has shown promise for psychiatric disorders, including drug dependence. DBS in clinical practice involves the continuous delivery of stimulation into brain structures using an implantable pacemaker-like system that is programmed externally by a physician to alleviate symptoms. This treatment will be limited in methamphetamine users due to challenging psychosocial situations. Electrical treatments that can be delivered intermittently, non-invasively and remotely from the drug-use setting will be more realistic. This article describes the delivery of intracranial electrical stimulation that is temporally and spatially separate from the drug-use environment for the treatment of IV methamphetamine dependence. Methamphetamine dependence is rapidly developed in rodents using an operant paradigm of intravenous (IV) self-administration that incorporates a period of extended access to drug and demonstrates both escalation of use and high motivation to obtain drug. PMID:26863392

  9. Experiment Databases

    NASA Astrophysics Data System (ADS)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  10. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is smart : it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user s boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.

  11. Using a diary to record near misses and minor injuries—which method of administration is best?

    PubMed Central

    Marsh, P.; Kendrick, D.

    1999-01-01

    Objectives—To determine the effect of different methods of administering a diary to collect information from parents on near miss and minor injuries on responses, completeness and accuracy, the number of incidents reported, the effect of a financial incentive on response, and the cost of administering each method. Setting—The study was set within the context of a cluster randomised controlled trial of injury prevention in 36 practices in Nottingham. Methods—The study population comprised the 1594 parents who responded to the baseline questionnaire. Parents were allocated systematically to one of four groups: postal administration, with and without financial incentive, telephone administration, with and without financial incentive (102 in each group). A clinic visit method with and without financial incentive (50 in each group) was also used. Results—A significant trend was found, with decreasing response rates with increasing degree of contact with the parent, such that administering the diary in the clinic had the lowest response (χ2 for trend = 5.54, 1 df, p = 0.02). Offering a financial incentive increased responses from 47% to 59% (χ2=5.78, 1 df, p = 0.016). The most complete recording was found in the diaries handed out at clinic visits. Importantly, parents were accurate in their recording of near miss and minor injuries, suggesting they understood the differences between the two types of incident. Postal methods were the least expensive method of administering the diary in terms of average cost per returned diary. Using a financial incentive resulted in a lower cost per returned diary for telephone and clinic visit methods. Conclusions—Parents can accurately and reliably complete diaries recording near miss and minor injuries occurring to their preschool children. More work is needed to investigate methods of increasing response. Postal diaries achieve the highest response but have the least complete recording of data. Diaries administered

  12. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes (Click here for List) of the International Union for Pure and Applied Chemistry(IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  13. Simplified methods of topical fluoride administration: effects in individuals with hyposalivation.

    PubMed

    Gabre, Pia; Moberg Sköld, Ulla; Birkhed, Dowen

    2013-01-01

    The aim was to compare fluoride (F) levels in individuals with normal salivary secretion and hyposalivation in connection with their use of F solutions and toothpaste. Seven individuals with normal salivation and nine with hyposalivation rinsed with 0.2% NaF solution for 1 minute. In addition, individuals with hyposalivation performed the following: (i) 0.2% NaF rinsing for 20 seconds, (ii) rubbing oral mucosa with a swab soaked with 0.2% NaF solution, and (iii) brushing with 5,000 ppm F (1.1% NaF) toothpaste. Subjects characterized by hyposalivation reached approximately five times higher peak values of F concentrations in saliva after 1 minute rinsing with the F solution and higher area under the curve (AUC) values. The simplified methods exhibited the same AUC values as did 1 minute of rinsing. Brushing with 5,000 ppm F toothpaste resulted in higher AUC values than did the simplified methods. The F concentrations reached higher levels in individuals with hyposalivation compared to those with normal salivation. The simplified methods tested showed similar effects as conventional methods. PMID:23600981

  14. Perception of Teachers and Administrators on the Teaching Methods That Influence the Acquisition of Generic Skills

    ERIC Educational Resources Information Center

    Audu, R.; Bin Kamin, Yusri; Bin Musta'amal, Aede Hatib; Bin Saud, Muhammad Sukri; Hamid, Mohd. Zolkifli Abd.

    2014-01-01

    This study is designed to identify the most significant teaching methods that influence the acquisition of generic skills of mechanical engineering trades students at technical college level. Descriptive survey research design was utilized in carrying out the study. One hundred and ninety (190) respondents comprised of mechanical engineering…

  15. A Case Study of Qualitative Research: Methods and Administrative Impact. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schoen, Jane; Warner, Sean

    A case study in program evaluation that demonstrates the effectiveness of qualitative research methods is presented. Over a 5-year period, the Union for Experimenting Colleges and Universities in Ohio offered a baccalaureate program (University Without Walls) to local employees of a national manufacturing firm. The institutional research office…

  16. Advanced Methods in Distance Education: Applications and Practices for Educators, Administrators and Learners

    ERIC Educational Resources Information Center

    Dooley, Kim E.; Lindner, James R.; Dooley, Larry M.

    2005-01-01

    Courses and programs being delivered at a distance require a unique set of professional competencies. Whether using synchronous or asynchronous methods of instruction, systematic instructional design can help stimulate motivation, increase interaction and social presence, and authenticate learning outcomes. Principles of adult learning, including…

  17. Novel Method of Storing and Reconstructing Events at Fermilab E-906/SeaQuest Using a MySQL Database

    NASA Astrophysics Data System (ADS)

    Hague, Tyler

    2010-11-01

    Fermilab E-906/SeaQuest is a fixed target experiment at Fermi National Accelerator Laboratory. We are investigating the antiquark asymmetry in the nucleon sea. By examining the ratio of the Drell- Yan cross sections of proton-proton and proton-deuterium collisions we can determine the asymmetry ratio. An essential feature in the development of the analysis software is to update the event reconstruction to modern software tools. We are doing this in a unique way by doing a majority of the calculations within an SQL database. Using a MySQL database allows us to take advantage of off-the-shelf software without sacrificing ROOT compatibility and avoid network bottlenecks with server-side data selection. Using our raw data we create stubs, or partial tracks, at each station which are pieced together to create full tracks. Our reconstruction process uses dynamically created SQL statements to analyze the data. These SQL statements create tables that contain the final reconstructed tracks as well as intermediate values. This poster will explain the reconstruction process and how it is being implemented.

  18. A Method to Compare ICF and SNOMED CT for Coverage of U.S. Social Security Administration's Disability Listing Criteria.

    PubMed

    Tu, Samson W; Nyulas, Csongor I; Tudorache, Tania; Musen, Mark A

    2015-01-01

    We developed a method to evaluate the extent to which the International Classification of Function, Disability, and Health (ICF) and SNOMED CT cover concepts used in the disability listing criteria of the U.S. Social Security Administration's "Blue Book." First we decomposed the criteria into their constituent concepts and relationships. We defined different types of mappings and manually mapped the recognized concepts and relationships to either ICF or SNOMED CT. We defined various metrics for measuring the coverage of each terminology, taking into account the effects of inexact matches and frequency of occurrence. We validated our method by mapping the terms in the disability criteria of Adult Listings, Chapter 12 (Mental Disorders). SNOMED CT dominates ICF in almost all the metrics that we have computed. The method is applicable for determining any terminology's coverage of eligibility criteria. PMID:26958262

  19. Databases as an information service

    NASA Technical Reports Server (NTRS)

    Vincent, D. A.

    1983-01-01

    The relationship of databases to information services, and the range of information services users and their needs for information is explored and discussed. It is argued that for database information to be valuable to a broad range of users, it is essential that access methods be provided that are relatively unstructured and natural to information services users who are interested in the information contained in databases, but who are not willing to learn and use traditional structured query languages. Unless this ease of use of databases is considered in the design and application process, the potential benefits from using database systems may not be realized.

  20. A global database of lake surface temperatures collected by in situ and satellite methods from 1985–2009

    USGS Publications Warehouse

    Sharma, Sapna; Gray, Derek; Read, Jordan S.; O'Reilly, Catherine; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie; Hook, Simon; Lenters, John; Livingstone, David M.; McIntyre, Peter B.; Adrian, Rita; Allan, Mathew; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John E.; Baron, Jill S.; Brookes, Justin D; Chen, Yuwei; Daly, Robert; Ewing, Kye; de Eyto, Elvira; Dokulil, Martin; Hamilton, David B.; Havens, Karl; Haydon, Shane; Hetzenaeur, Harald; Heneberry, Jocelyn; Hetherington, Amy; Higgins, Scott; Hixson, Eric; Izmest'eva, Lyubov; Jones, Benjamin M.; Kangur, Kulli; Kasprzak, Peter; Kraemer, Benjamin; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Dörthe Müller-Navarra; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Pius Niederhauser; North, Ryan P.; Andrew Paterson; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Lars Rudstam; Rusak, James A.; Salmaso, Nico; Samal, Nihar R.; Daniel E. Schindler; Geoffrey Schladow; Schmidt, Silke R.; Tracey Schultz; Silow, Eugene A.; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A.; Craig E. Williamson; Kara H. Woo

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985–2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues.

  1. A global database of lake surface temperatures collected by in situ and satellite methods from 1985-2009.

    PubMed

    Sharma, Sapna; Gray, Derek K; Read, Jordan S; O'Reilly, Catherine M; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie E; Hook, Simon; Lenters, John D; Livingstone, David M; McIntyre, Peter B; Adrian, Rita; Allan, Mathew G; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John; Baron, Jill S; Brookes, Justin; Chen, Yuwei; Daly, Robert; Dokulil, Martin; Dong, Bo; Ewing, Kye; de Eyto, Elvira; Hamilton, David; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyne; Hetherington, Amy L; Higgins, Scott N; Hixson, Eric; Izmest'eva, Lyubov R; Jones, Benjamin M; Kangur, Külli; Kasprzak, Peter; Köster, Olivier; Kraemer, Benjamin M; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P; Paterson, Andrew M; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A; Salmaso, Nico; Samal, Nihar R; Schindler, Daniel E; Schladow, Geoffrey; Schmidt, Silke R; Schultz, Tracey; Silow, Eugene A; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A; Williamson, Craig E; Woo, Kara H

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985-2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues. PMID:25977814

  2. A global database of lake surface temperatures collected by in situ and satellite methods from 1985–2009

    PubMed Central

    Sharma, Sapna; Gray, Derek K; Read, Jordan S; O’Reilly, Catherine M; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie E; Hook, Simon; Lenters, John D; Livingstone, David M; McIntyre, Peter B; Adrian, Rita; Allan, Mathew G; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John; Baron, Jill S; Brookes, Justin; Chen, Yuwei; Daly, Robert; Dokulil, Martin; Dong, Bo; Ewing, Kye; de Eyto, Elvira; Hamilton, David; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyne; Hetherington, Amy L; Higgins, Scott N; Hixson, Eric; Izmest’eva, Lyubov R; Jones, Benjamin M; Kangur, Külli; Kasprzak, Peter; Köster, Olivier; Kraemer, Benjamin M; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P; Paterson, Andrew M; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A; Salmaso, Nico; Samal, Nihar R; Schindler, Daniel E; Schladow, Geoffrey; Schmidt, Silke R; Schultz, Tracey; Silow, Eugene A; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A; Williamson, Craig E; Woo, Kara H

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985–2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues. PMID:25977814

  3. Challenges of the Administrative Consultation Wiki Research Project as a Learning and Competences Development Method for MPA Students

    ERIC Educational Resources Information Center

    Kovac, Polonca; Stare, Janez

    2015-01-01

    Administrative Consultation Wiki (ACW) is a project run under the auspices of the Faculty of Administration and the Ministry of Public Administration in Slovenia since 2009. A crucial component thereof is the involvement of students of Master of Public Administration (MPA) degree programs to offer them an opportunity to develop competences in…

  4. Directory of clinical databases: improving and promoting their use

    PubMed Central

    Black, N; Payne, M

    2003-01-01

    Background: The controversy surrounding the actual and potential use of clinical databases partly reflects the huge variation in their content and quality. In addition, use of existing clinical databases is severely limited by a lack of knowledge of their availability. Objectives: To develop and test a standardised method for assessing the quality (completeness and accuracy) of clinical databases and to establish a web based directory of databases in the UK. Methods: An expert group was set up (1) to establish the criteria for inclusion of databases; (2) to develop a quality assessment instrument with high content validity, based on epidemiological theory; (3) to test empirically, modify, and retest the acceptability to database custodians, face validity and floor/ceiling effects; and (4) to design a website. Results: Criteria for inclusion of databases were the provision of individual level data; inclusion in the database defined by a common circumstance (e.g. condition, treatment), an administrative arrangement, or an adverse outcome; and inclusion of data from more than one provider. A quality assessment instrument consisting of 10 items (four on coverage, six on reliability and validity) was developed and shown to have good face and content validity, no floor/ceiling effects, and to be acceptable to database custodians. A website (www.docdat.org) was developed. Indications over the first 18 months (number of visitors to the site) are that it is increasingly popular. By November 2002 there were around 3500 hits a month. Conclusions: A website now exists where visitors can identify clinical databases in the UK that may be suitable to meet their aims. It is planned both to develop a local version for use within a hospital and to encourage similar national systems in other countries. PMID:14532366

  5. Unconventional Methods and Materials for Preparing Educational Administrators. ERIC/CEM-UCEA Series on Administrator Preparation. ERIC/CEM State-of-the-Knowledge Series, Number Fifteen. UCEA Monograph Series, Number Two.

    ERIC Educational Resources Information Center

    Wynn, Richard

    In this monograph, the author describes the variety of new and innovative instructional methods and materials being used to prepare educational administrators. Because the subject is new and the nomenclature surrounding it imprecise, the author defines his terms. An outline of the history of unconventional instructional methods and the rationale…

  6. A novel method for local administration of strontium from implant surfaces.

    PubMed

    Forsgren, Johan; Engqvist, Håkan

    2010-05-01

    This study proves that a film of Strontianite (SrCO(3)) successfully can be formed on a bioactive surface of sodium titanate when exposed to a strontium acetate solution. This Strontianite film is believed to enable local release of strontium ions from implant surfaces and thus stimulate bone formation in vivo. Depending on the method, different types of films were achieved with different release rates of strontium ions, and the results points at the possibility to tailor the rate and amount of strontium that is to be released from the surface. Strontium has earlier been shown to be highly involved in the formation of new bone as it stimulates the replication of osteoblasts and decreases the activity of osteoclasts. The benefit of strontium has for example been proved in studies where the number of vertebral compression fractures in osteoporotic persons was drastically reduced in patients receiving therapeutical doses of strontium. Therefore, it is here suggested that the bone healing process around an implant may be improved if strontium is administered locally at the site of the implant. The films described in this paper were produced by a simple immersion process where alkali treated titanium was exposed to an aqueous solution containing strontium acetate. By heating the samples at different times during the process, different release rates of strontium ions were achieved when the samples were exposed to simulated body fluid. The strontium containing films also promoted precipitation of bone like apatite when exposed to a simulated body fluid. PMID:20162327

  7. Drinking Water Treatability Database (Database)

    EPA Science Inventory

    The drinking Water Treatability Database (TDB) will provide data taken from the literature on the control of contaminants in drinking water, and will be housed on an interactive, publicly-available USEPA web site. It can be used for identifying effective treatment processes, rec...

  8. A novel method for detecting inpatient pediatric asthma encounters using administrative data.

    PubMed

    Knighton, Andrew J; Flood, Andrew; Harmon, Brian; Smith, Patti; Crosby, Carrie; Payne, Nathaniel R

    2014-08-01

    Multiple methods for detecting asthma encounters are used today in public surveillance, quality reporting, and clinical research. Failure to detect asthma encounters can make it difficult to measure the scope and effectiveness of hospital or community-based interventions important in comparative effectiveness research and accountable care. Given the pairing of asthma with certain respiratory conditions, the objective of this study was to develop and test an asthma detection algorithm with specificity and sensitivity using 2 criteria: (1) principal discharge diagnosis and (2) asthma diagnosis code position. A medical record review was conducted (n=191) as the gold standard for identifying asthma encounters given objective criteria. The study team observed that for certain principal respiratory diagnoses (n=110), the observed odds ratio that encounters were for asthma when asthma was coded in the second or third code position was not significantly different than when asthma was coded as the principal diagnosis, 0.36 (P=0.42) and 0.18 (P=0.14), respectively. In contrast, the observed odds ratio was significantly different when asthma was coded in the fourth or fifth positions (P<.001). This difference remained after adjusting for covariates. Including encounters with asthma in 1 of the 3 first positions increased the detection sensitivity to 0.84 [95% confidence interval (CI): 0.76-0.92] while increasing the false positive rate to 0.19 [95% CI: 0.07-0.31]. Use of the proposed algorithm significantly improved the reporting accuracy [0.83 95%CI:0.76-0.90] over use of (1) the principal diagnosis alone [0.55 95% CI:0.46-0.64] or (2) all encounters with asthma 0.66 [95% CI:0.57-0.75]. Bed days resulting from asthma encounters increased 64% over use of the principal diagnosis alone. Given these findings, an algorithm using certain respiratory principal diagnoses and asthma diagnosis code position can reliably improve asthma encounter detection for population-based health

  9. A Mixed Method Study Measuring the Perceptions of Administrators, Classroom Teachers and Professional Staff on the Use of iPads in a Midwest School District

    ERIC Educational Resources Information Center

    Beckerle, Andrea Laux

    2013-01-01

    The purpose of this mixed methods study was to assess the perceptions of classroom teachers, administrators and professional support staff in one Midwest school district regarding the usefulness and effectiveness of the iPad device as an instructional and support tool within the classroom. The need to address classroom teacher, administrator and…

  10. Administrative Uses of the Microcomputer.

    ERIC Educational Resources Information Center

    Spuck, Dennis W.; Atkinson, Gene

    1983-01-01

    An outline of microcomputer applications for administrative computing in education is followed by discussions of aspects of office automation, database management systems, management information systems, administrative computer systems, and software. Several potential problems relating to administrative computing in education are identified.…

  11. An open-label, randomized bioavailability study with alternative methods of administration of crushed ticagrelor tablets in healthy volunteers

    PubMed Central

    Teng, Renli; Carlson, Glenn; Hsia, Judith

    2015-01-01

    Objective: To compare the bioavailability and safety profile of crushed ticagrelor tablets suspended in water and administered orally or via nasogastric tube, with that of whole tablets administered orally. Methods: In this single-center, open-label, randomized, three-treatment crossover study, 36 healthy volunteers were randomized to receive a single 90-mg dose of ticagrelor administered orally as a whole tablet or as crushed tablets suspended in water and given orally or via a nasogastric tube into the stomach, with a minimum 7-day wash-out between treatments. Plasma concentrations of ticagrelor and AR-C124910XX were assessed at 0, 0.5, 1, 2, 3, 4, 6, 8, 10, 12, 16, 24, 36, and 48 hours post-ticagrelor dose for pharmacokinetic analyses. Safety and tolerability was assessed throughout the study. Results: At 0.5 hours postdose, plasma concentrations of ticagrelor and AR-C124910XX were higher with crushed tablets administered orally (148.6 ng/mL and 13.0 ng/mL, respectively) or via nasogastric tube (264.6 ng/mL and 28.6 ng/mL, respectively) compared with whole-tablet administration (33.3 ng/mL and 5.2 ng/mL, respectively). A similar trend was observed at 1 hour postdose. Ticagrelor tmax was shorter following crushed vs. whole-tablet administration (1 vs. 2 hours, respectively). Geometric mean ratios between treatments for AUC and Cmax were contained within the bioequivalence limits of 80 – 125% for ticagrelor and AR-C124910XX. All treatments were generally well tolerated. Conclusions: Ticagrelor administered as a crushed tablet is bioequivalent to whole-tablet administration, independent of mode of administration (oral or via nasogastric tube), and resulted in increased plasma concentrations of ticagrelor and AR-C124910XX at early timepoints. PMID:25500486

  12. Database Access Integration Services (DAIS)

    SciTech Connect

    Mitchell, P. . Sensor and System Development Center); Nordell, D. )

    1992-12-01

    The Database Access Integration Services (DAIS) is a collection of services that facilitate access to data among diverse data systems in an electric utility communications network. DAIS provides access to data in distributed, heterogeneous data systems that include relational database management systems, other database management systems, control systems, file systems, and application systems. It also provides a common method for describing data, common data access operations and essential support services including a data dictionary, a data directory and distributed data access management capabilities. The DAIS project has developed specifications intended for vendor and third-party implementation. The software developed is only to implement a data access integration demonstration. These specifications can serve as a basis for influencing industry standards development. One important consequence of this strategy is that most actual software development will be performed by vendors, not utilities. DAIS is a tool to support data access. It is policy neutral regarding issues such as local or central administration of data or standardization of information model contents (e.g., EPRI Plant Information Network). As a tool, it can be used to help realize such policies. The DAIS does not provide data storage facilities, schema integration, distributed query processing, distributed applications or cooperative processing. Rather, DAIS is complementary to these functions and can be used with other software that does provide these functions. This project documented the requirements for the DAIS. These requirements are the basis for design of the DAIS specifications. The key requirements for a DAIS are: Uniform access to heterogeneous utility data systems, remote update; coexistence with local data systems; local autonomy ; Security and access restriction enforcement; OSI compatibility; open architecture and extensibility; and operating platform independence.

  13. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  14. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  15. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  16. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  17. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  18. Water pollution in the pulp and paper industry: Treatment methods. (Latest citations from the NTIS database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning waste treatment methods for the pulp and paper industry. Some of these methods are: sedimentation, flotation, filtration, coagulation, adsorption, and general concentration processes. (Contains a minimum of 142 citations and includes a subject term index and title list.)

  19. Database machines for large statistical databases. Final report, January 16, 1983-January 15, 1986

    SciTech Connect

    DeWitt, D.J.

    1986-01-01

    Research activities have been directed towards the design and implementation of a high performance database machine for accessing, manipulating, and analyzing very large statistical data sets. Topics of discussion include statistical databases, parallel algorithms and secondary storage methods, methodology for database system performance evaluation, implementation techniques for main memory database systems, intelligent buffer management systems, Gamma-a high performance dataflow database machine, and extensible database management systems. 1 fig. (DWL)

  20. Mapping the literature of nursing administration

    PubMed Central

    Galganski, Carol J.

    2006-01-01

    Objectives: As part of Phase I of a project to map the literature of nursing, sponsored by the Nursing and Allied Health Resources Section of the Medical Library Association, this study identifies the core literature cited in nursing administration and the indexing services that provide access to the core journals. The results of this study will assist librarians and end users searching for information related to this nursing discipline, as well as database producers who might consider adding specific titles to their indexing services. Methods: Using the common methodology described in the overview article, five source journals for nursing administration were identified and selected for citation analysis over a three-year period, 1996 to 1998, to identify the most frequently cited titles according to Bradford's Law of Scattering. From this core of most productive journal titles, the bibliographic databases that provide the best access to these titles were identified. Results: Results reveal that nursing administration literature relies most heavily on journal articles and on those titles identified as core nursing administrative titles. When the indexing coverage of nine services is compared, PubMed/MEDLINE and CINAHL provide the most comprehensive coverage of this nursing discipline. Conclusions: No one indexing service adequately covers this nursing discipline. Researchers needing comprehensive coverage in this area must search more than one database to effectively research their projects. While PubMed/MEDLINE and CINAHL provide more coverage for this discipline than the other indexing services, none is sufficiently broad in scope to provide indexing of nursing, health care management, and medical literature in a single file. Nurse administrators using the literature to research current work issues need to review not only the nursing titles covered by CINAHL but should also include the major weekly medical titles, core titles in health care administration, and

  1. Databases and data mining

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  2. Quality assurance of specialised treatment of eating disorders using large-scale Internet-based collection systems: methods, results and lessons learned from designing the Stepwise database.

    PubMed

    Birgegård, Andreas; Björck, Caroline; Clinton, David

    2010-01-01

    Computer-based quality assurance of specialist eating disorder (ED) care is a possible way of meeting demands for evaluating the real-life effectiveness of treatment, in a large-scale, cost-effective and highly structured way. The Internet-based Stepwise system combines clinical utility for patients and practitioners, and provides research-quality naturalistic data. Stepwise was designed to capture relevant variables concerning EDs and general psychiatric status, and the database can be used for both clinical and research purposes. The system comprises semi-structured diagnostic interviews, clinical ratings and self-ratings, automated follow-up schedules, as well as administrative functions to facilitate registration compliance. As of June 2009, the system is in use at 20 treatment units and comprises 2776 patients. Diagnostic distribution (including subcategories of eating disorder not otherwise specified) and clinical characteristics are presented, as well as data on registration compliance. Obstacles and keys to successful implementation of the Stepwise system are discussed, including possible gains and on-going challenges inherent in large-scale, Internet-based quality assurance. PMID:20589767

  3. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage retrieval and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data [e.g., Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  4. The Intego database: background, methods and basic results of a Flemish general practice-based continuous morbidity registration project

    PubMed Central

    2014-01-01

    Background Intego is the only operational computerized morbidity registration network in Belgium based on general practice data. Intego collects data from over 90 general practitioners. All the information is routinely collected in the electronic health record during daily practice. Methods In this article we describe the design and methods used within the Intego network together with some of its basic results. The collected data, the quality control procedures, the ethical-legal aspects and the statistical procedures are discussed. Results Intego contains longitudinal information on 285 357 different patients, corresponding to over 2.3% of the Flemish population representative in terms of age and sex. More than 3 million diagnoses, 12 million drug prescriptions and 29 million laboratory tests have been recorded. Conclusions Intego enables us to present and compare data on health parameters, incidence and prevalence rates, laboratory results, and prescribed drugs for all relevant subgroups on a routine basis and is unique in Belgium. PMID:24906941

  5. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Background Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand where HTA has been used officially and Japan where HTA is going to be officially introduced. Method Existing healthcare databases in Thailand and Japan were compiled and reviewed. Databases’ characteristics e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables were described. Databases were assessed for its potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. Request route for each database was also provided. Results Forty databases– 20 from Thailand and 20 from Japan—were included. These comprised of national censuses, surveys, registries, administrative data, and claimed databases. All databases were potentially used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available on public sources. Conclusion Our findings have shown that existing databases provided valuable information for HTA research with limitation on accessibility. Mutual dialogue on healthcare database development and usage for HTA among Asia-Pacific region is needed. PMID:26560127

  6. Performance evaluation of an automatic segmentation method of cerebral arteries in MRA images by use of a large image database

    NASA Astrophysics Data System (ADS)

    Uchiyama, Yoshikazu; Asano, Tatsunori; Hara, Takeshi; Fujita, Hiroshi; Kinosada, Yasutomi; Asano, Takahiko; Kato, Hiroki; Kanematsu, Masayuki; Hoshi, Hiroaki; Iwama, Toru

    2009-02-01

    The detection of cerebrovascular diseases such as unruptured aneurysm, stenosis, and occlusion is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed in order to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, which is an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray level transformation to calibrate voxel values. To adjust for variations in the positioning of patients, registration was subsequently employed to maximize the overlapping of the vessel regions in the target image and reference image. The vessel regions were then segmented from the background using gray-level thresholding and region growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. The segmentation of cerebral arteries in 97.1%(829/854) of the MRA studies was attained as an acceptable result. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.

  7. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission s lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.

  8. Intensity of regionally applied tastes in relation to administration method: an investigation based on the "taste strips" test.

    PubMed

    Manzi, Brian; Hummel, Thomas

    2014-02-01

    To compare various methods to apply regional taste stimuli to the tongue. "Taste strips" are a clinical tool to determine gustatory function. How a patient perceives the chemical environment in the mouth is a result of many factors such as taste bud distribution and interactions between the cranial nerves. To date, there have been few studies describing the different approaches to administer taste strips to maximize taste identification accuracy and intensity. This is a normative value acquisition pilot and single-center study. The investigation involved 30 participants reporting a normal sense of smell and taste (18 women, 12 men, mean age 33 years). The taste test was based on spoon-shaped filter paper strips impregnated with four taste qualities (sweet, sour, salty, and bitter) at concentrations shown to be easily detectable by young healthy subjects. The strips were administered in three methods (held stationary on the tip of the tongue, applied across the tongue, held in the mouth), resulting in a total of 12 trials per participant. Subjects identified the taste from a list of four descriptors, (sweet, sour, salty, bitter) and ranked the intensity on a scale from 0 to 10. Statistical analyses were performed on the accuracy of taste identification and rated intensities. The participants perceived in order of most to least intense: salt, sour, bitter, sweet. Of the four tastes, sour consistently was least accurately identified. Presenting the taste strip inside the closed mouth of the participants produced the least accurate taste identification, whereas moving the taste strip across the tongue led to a significant increase in intensity for the sweet taste. In this study of 30 subjects at the second concentration, optimized accuracy and intensity of taste identification was observed through administration of taste strips laterally across the anterior third of the extended tongue. Further studies are required on more subjects and the additional concentrations

  9. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  10. Determining the Long-term Effect of Antibiotic Administration on the Human Normal Intestinal Microbiota Using Culture and Pyrosequencing Methods.

    PubMed

    Rashid, Mamun-Ur; Zaura, Egijia; Buijs, Mark J; Keijser, Bart J F; Crielaard, Wim; Nord, Carl Erik; Weintraub, Andrej

    2015-05-15

    The purpose of the study was to assess the effect of ciprofloxacin (500 mg twice daily for 10 days) or clindamycin (150 mg 4 times daily for 10 days) on the fecal microbiota of healthy humans for a period of 1 year as compared to placebo. Two different methods, culture and microbiome analysis, were used. Fecal samples were collected for analyses at 6 time-points. The interval needed for the normal microbiota to be normalized after ciprofloxacin or clindamycin treatment differed for various bacterial species. It took 1-12 months to normalize the human microbiota after antibiotic administration, with the most pronounced effect on day 11. Exposure to ciprofloxacin or clindamycin had a strong effect on the diversity of the microbiome, and changes in microbial composition were observed until the 12th month, with the most pronounced microbial shift at month 1. No Clostridium difficile colonization or C. difficile infections were reported. Based on the pyrosequencing results, it appears that clindamycin has more impact than ciprofloxacin on the intestinal microbiota. PMID:25922405

  11. Information engineering: Sandia's Computer Integrated Manufacturing (CIM) database

    SciTech Connect

    Sharp, J.K.

    1990-01-01

    The activities involved in establishing a Computer Integrated Manufacturing (CIM) database at Sandia National Laboratories (SNL) are part of a common effort to implement a proactive data administration function across administrative and technical databases. Data administration activities include the establishment of corporate data dictionary, a corporate information model, and a library of important objects and their relationships with other objects. Processes requiring information will be identified and supported with future information systems that share administrative and technical data. The process to create databases is being established based upon accepted engineering design practices. This paper discusses the CIM database, presents the selected information modeling technique and describes the information engineering process. 9 refs.

  12. 78 FR 8684 - Fifteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint with EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... Federal Aviation Administration Fifteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  13. 78 FR 25134 - Sixteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... Federal Aviation Administration Sixteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  14. 78 FR 51809 - Seventeenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-21

    ... Federal Aviation Administration Seventeenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  15. 78 FR 66418 - Eighteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-05

    ... Federal Aviation Administration Eighteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  16. Method for the Compound Annotation of Conjugates in Nontargeted Metabolomics Using Accurate Mass Spectrometry, Multistage Product Ion Spectra and Compound Database Searching

    PubMed Central

    Ogura, Tairo; Bamba, Takeshi; Tai, Akihiro; Fukusaki, Eiichiro

    2015-01-01

    Owing to biotransformation, xenobiotics are often found in conjugated form in biological samples such as urine and plasma. Liquid chromatography coupled with accurate mass spectrometry with multistage collision-induced dissociation provides spectral information concerning these metabolites in complex materials. Unfortunately, compound databases typically do not contain a sufficient number of records for such conjugates. We report here on the development of a novel protocol, referred to as ChemProphet, to annotate compounds, including conjugates, using compound databases such as PubChem and ChemSpider. The annotation of conjugates involves three steps: 1. Recognition of the type and number of conjugates in the sample; 2. Compound search and annotation of the deconjugated form; and 3. In silico evaluation of the candidate conjugate. ChemProphet assigns a spectrum to each candidate by automatically exploring the substructures corresponding to the observed product ion spectrum. When finished, it annotates the candidates assigning a rank for each candidate based on the calculated score that ranks its relative likelihood. We assessed our protocol by annotating a benchmark dataset by including the product ion spectra for 102 compounds, annotating the commercially available standard for quercetin 3-glucuronide, and by conducting a model experiment using urine from mice that had been administered a green tea extract. The results show that by using the ChemProphet approach, it is possible to annotate not only the deconjugated molecules but also the conjugated molecules using an automatic interpretation method based on deconjugation that involves multistage collision-induced dissociation and in silico calculated conjugation. PMID:26819907

  17. The Exoplanet Orbit Database

    NASA Astrophysics Data System (ADS)

    Wright, J. T.; Fakhouri, O.; Marcy, G. W.; Han, E.; Feng, Y.; Johnson, John Asher; Howard, A. W.; Fischer, D. A.; Valenti, J. A.; Anderson, J.; Piskunov, N.

    2011-04-01

    We present a database of well-determined orbital parameters of exoplanets, and their host stars' properties. This database comprises spectroscopic orbital elements measured for 427 planets orbiting 363 stars from radial velocity and transit measurements as reported in the literature. We have also compiled fundamental transit parameters, stellar parameters, and the method used for the planets discovery. This Exoplanet Orbit Database includes all planets with robust, well measured orbital parameters reported in peer-reviewed articles. The database is available in a searchable, filterable, and sortable form online through the Exoplanets Data Explorer table, and the data can be plotted and explored through the Exoplanet Data Explorer plotter. We use the Data Explorer to generate publication-ready plots, giving three examples of the signatures of exoplanet migration and dynamical evolution: We illustrate the character of the apparent correlation between mass and period in exoplanet orbits, the different selection biases between radial velocity and transit surveys, and that the multiplanet systems show a distinct semimajor-axis distribution from apparently singleton systems.

  18. Using concept maps on the World-Wide Web to access a curriculum database for problem-based learning.

    PubMed

    Patrick, T B; Worth, E R; Hardin, L E

    1996-01-01

    Development of medical school curriculum databases continues to be challenging. Representation of the instructional unit is becoming increasingly difficult due to characteristics of the problem-based learning (PBL) curricula. Curriculum databases may be used to store materials for the PBL curricula, and also to provide a delivery mechanism for those materials. However, in order to take advantage of the curriculum database as a tool for PBL, methods for accessing the curriculum database that are better suited to the information needs of students, faculty, and administrators must be developed. Concept maps are directed graph representations of conceptual relationships, and may be used to represent the content of a curriculum database. In this paper, we describe a Web application that uses Java-based concept maps was the user interface to a curriculum database. PMID:8947622

  19. Staradmin -- Starlink User Database Maintainer

    NASA Astrophysics Data System (ADS)

    Fish, Adrian

    The subject of this SSN is a utility called STARADMIN. This utility allows the system administrator to build and maintain a Starlink User Database (UDB). The principal source of information for each user is a text file, named after their username. The content of each file is a list consisting of one keyword followed by the relevant user data per line. These user database files reside in a single directory. The STARADMIN program is used to manipulate these user data files and automatically generate user summary lists.

  20. Transferring a National Information System from the Public Sector to the Private Sector--How the Administration on Aging Did It.

    ERIC Educational Resources Information Center

    Halpin, Peter

    1985-01-01

    Describes the methods and results of the transfer of the SCAN bibliographic database, sponsored by the Administration on Aging (AoA) to the private sector American Association of Retired Persons (AARP) when Congress repealed authority for AoA. Steps involved in the establishment of a bibliographic database are outlined. Four sources are given.…

  1. A data-based exploration of the adverse outcome pathway for skin sensitization points to the necessary requirements for its prediction with alternative methods.

    PubMed

    Benigni, Romualdo; Bossa, Cecilia; Tcheremenskaia, Olga

    2016-07-01

    This paper presents new data-based analyses on the ability of alternative methods to predict the skin sensitization potential of chemicals. It appears that skin sensitization, as shown in humans and rodents, can be predicted with good accuracy both with in vitro assays and QSAR approaches. The accuracy is about the same: 85-90%. Given that every biological measure has inherent uncertainty, this performance is quite remarkable. Overall, there is a good correlation between human data and experimental in vivo systems, except for sensitizers of intermediate potency. This uncertainty/variability is probably the reason why alternative methods are quite efficient in predicting both strong and non-sensitizers, but not the intermediate potency sensitizers. A detailed analysis of the predictivity of the individual approaches shows that the biological in vitro assays have limited added value in respect to the in chemico/QSAR ones, and suggests that the primary interaction with proteins is the rate-limiting step of the entire process. This confirms evidence from other fields (e.g., carcinogenicity, QSAR) indicating that successful predictive models are based on the parameterization of a few mechanistic features/events, whereas the consideration of all events supposedly involved in a toxicity pathway contributes to increase the uncertainty of the predictions. PMID:27090483

  2. Overlap in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Hood, William W.; Wilson, Concepcion S.

    2003-01-01

    Examines the topic of Fuzzy Set Theory to determine the overlap of coverage in bibliographic databases. Highlights include examples of comparisons of database coverage; frequency distribution of the degree of overlap; records with maximum overlap; records unique to one database; intra-database duplicates; and overlap in the top ten databases.…

  3. PMAG: Relational Database Definition

    NASA Astrophysics Data System (ADS)

    Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.

    2002-12-01

    The Scripps center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses around the general workflow that results in the determination of typical paleo-magnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data becomes available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the

  4. Ontology building by dictionary database mining

    NASA Astrophysics Data System (ADS)

    Deliyska, B.; Rozeva, A.; Malamov, D.

    2012-11-01

    The paper examines the problem of building ontologies in automatic and semi-automatic way by means of mining a dictionary database. An overview of data mining tools and methods is presented. On this basis an extended and improved approach is proposed which involves operations for pre-processing the dictionary database, clustering and associating database entries for extracting hierarchical and nonhierarchical relations. The approach is applied on sample dictionary database in the environment of the Rapid Miner mining tool. As a result the dictionary database is complemented to thesaurus database which can be further on easily converted to reusable formal ontology.

  5. Applicability of large databases in outcomes research.

    PubMed

    Malay, Sunitha; Shauver, Melissa J; Chung, Kevin C

    2012-07-01

    Outcomes research serves as a mechanism to assess the quality of care, cost effectiveness of treatment, and other aspects of health care. The use of administrative databases in outcomes research is increasing in all medical specialties, including hand surgery. However, the real value of databases can be maximized with a thorough understanding of their contents, advantages, and limitations. We performed a literature review pertaining to databases in medical, surgical, and epidemiologic research, with special emphasis on orthopedic and hand surgery. This article provides an overview of the available database resources for outcomes research, their potential value to hand surgeons, and suggestions to improve their effective use. PMID:22522104

  6. The EXOSAT database system. Available databases.

    NASA Astrophysics Data System (ADS)

    Barron, C.

    1991-02-01

    This User's Guide describes the databases that are currently available by remote login to the EXOSAT/ESTEC site of the EXOSAT database system. This includes where ever possible the following: brief descriptions of each observatory, telescope and instrument references to more complete observatory descriptions a list of the contents of each database and how it was generated, parameter descriptions.

  7. Detection and elimination profile of cathinone in equine after norephedrine (Propalin®) administration using a validated liquid chromatography-tandem mass spectrometry method.

    PubMed

    Yi, Rong; Zhao, Sarah; Lam, Geoffrey; Sandhu, Jasmeet; Loganathan, Devan; Morrissey, Barbara

    2013-12-01

    Cathinone is the principal psychostimulant present in the leaves of khat shrub, which are widely used in East Africa and the Arab peninsula as an amphetamine-like stimulant. Cathinone readily undergoes metabolism in vivo to form less potent cathine and norephedrine as the metabolites. However, the presence of cathine and norephedrine in biological fluids cannot be used as an indicator of cathinone administration. The metabolism of pseudoephedrine and ephedrine, commonly used in cold and allergy medications, also produces cathine and norephedrine, respectively, as the metabolites. Besides, cathine and norephedrine may also originate from the ingestion of nutritional supplemental products containing extracts of Ephedra species. In Canada, ephedrine and norephedrine are available for veterinary use, whereas cathinone is not approved for human or veterinary use. In this article, the detection of cathinone in equine after administration of norephedrine is reported. To the best of our knowledge, this is the first such report in any species where administration of norephedrine or ephedrine generates cathinone as the metabolite. This observation is quite significant, because in equine detection of cathinone in biological fluids could be due to administration of the potent stimulant cathinone or the nonpotent stimulant norephedrine. A single oral dose of 450 mg norephedrine was administered to four Standardbred mares. Plasma and urine samples were collected up to 120 h after administration. The amount of cathinone and norephedrine detected in post administration samples was quantified using a highly sensitive, specific, and validated liquid chromatography-tandem mass spectrometry method. Using these results, we constructed elimination profiles for cathinone and norephedrine in equine plasma and urine. A mechanism that generates a geminal diol as an intermediate is postulated for this in vivo conversion of norephedrine to cathinone. Cathinone was also detected in samples

  8. Asbestos Exposure Assessment Database

    NASA Technical Reports Server (NTRS)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Samples Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC s work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  9. Exploring Methods to Measure the Prevalence of Ménière's Disease in the US Clinformatics™ Database, 2010-2012.

    PubMed

    Ricchetti-Masterson, Kristen; Aldridge, Molly; Logie, John; Suppapanya, Nittaya; Cook, Suzanne F

    2016-01-01

    Recent studies on the epidemiology of the inner-ear disorder Ménière's disease (MD) use disparate methods for sample selection, case identification and length of observation. Prevalence estimates vary geographically from 17 to 513 cases per 100,000 people. We explored the impact of case detection strategies and observation periods in estimating the prevalence of MD in the USA, using data from a large insurance claims database. Using case detection strategies of ≥1, ≥2 and ≥3 ICD-9 claim codes for MD within a 1-year period, the 2012 prevalence estimates were 66, 27 and 14 cases per 100,000 people, respectively. For ≥1, ≥2 and ≥3 insurance claims within a 3-year observation period, the prevalence estimates were 200, 104 and 66 cases per 100,000 people, respectively. Estimates based on a single claim are likely to overestimate prevalence; this conclusion is aligned with the American Academy of Otolaryngology-Head and Neck Foundation criteria requiring ≥2 definitive episodes for a definite diagnosis, and it has implications for future epidemiologic research. We believe estimates for ≥2 claims may be a more conservative estimate of the prevalence of MD, and multiyear estimates may be needed to allow for adequate follow-up time. PMID:27245601

  10. A universal support vector machines based method for automatic event location in waveforms and video-movies: Applications to massive nuclear fusion databases

    NASA Astrophysics Data System (ADS)

    Vega, J.; Murari, A.; González, S.; Jet-Efda Contributors

    2010-02-01

    Big physics experiments can collect terabytes (even petabytes) of data under continuous or long pulse basis. The measurement systems that follow the temporal evolution of physical quantities translate their observations into very large time-series data and video-movies. This article describes a universal and automatic technique to recognize and locate inside waveforms and video-films both signal segments with data of potential interest for specific investigations and singular events. The method is based on regression estimations of the signals using support vector machines. A reduced number of the samples is shown as outliers in the regression process and these samples allow the identification of both special signatures and singular points. Results are given with the database of the JET fusion device: location of sawteeth in soft x-ray signals to automate the plasma incremental diffusivity computation, identification of plasma disruptive behaviors with its automatic time instant determination, and, finally, recognition of potential interesting plasma events from infrared video-movies.

  11. "Educational Administration Quarterly", 1979-2003: An Analysis of Types of Work, Methods of Investigation, and Influences

    ERIC Educational Resources Information Center

    Murphy, Joseph; Vriesenga, Michael; Storey, Valerie

    2007-01-01

    Purpose: The objective of this article is to provide an analysis of articles in "Education Administration Quarterly (EAQ)" over the 25-year period 1979-2003. Approach: The approach is document analysis. Findings: Information is presented on four key themes: (a) types of articles published; (b) methodologies employed; (c) topic areas emphasized;…

  12. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of...

  13. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of...

  14. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of...

  15. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  16. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  17. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  18. Software Application for Supporting the Education of Database Systems

    ERIC Educational Resources Information Center

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  19. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  20. MCS Systems Administration Toolkit

    2001-09-30

    This package contains a number of systems administration utilities to assist a team of system administrators in managing a computer environment by automating routine tasks and centralizing information. Included are utilities to help install software on a network of computers and programs to make an image of a disk drive, to manage and distribute configuration files for a number of systems, and to run self-testss on systems, as well as an example of using amore » database to manage host information and various utilities.« less

  1. Fast decision tree-based method to index large DNA-protein sequence databases using hybrid distributed-shared memory programming model.

    PubMed

    Jaber, Khalid Mohammad; Abdullah, Rosni; Rashid, Nur'Aini Abdul

    2014-01-01

    In recent times, the size of biological databases has increased significantly, with the continuous growth in the number of users and rate of queries; such that some databases have reached the terabyte size. There is therefore, the increasing need to access databases at the fastest rates possible. In this paper, the decision tree indexing model (PDTIM) was parallelised, using a hybrid of distributed and shared memory on resident database; with horizontal and vertical growth through Message Passing Interface (MPI) and POSIX Thread (PThread), to accelerate the index building time. The PDTIM was implemented using 1, 2, 4 and 5 processors on 1, 2, 3 and 4 threads respectively. The results show that the hybrid technique improved the speedup, compared to a sequential version. It could be concluded from results that the proposed PDTIM is appropriate for large data sets, in terms of index building time. PMID:24794073

  2. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single particle analysis, 2-D crystallography, helical reconstruction and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  3. Pharmacokinetic Comparative Study of Gastrodin and Rhynchophylline after Oral Administration of Different Prescriptions of Yizhi Tablets in Rats by an HPLC-ESI/MS Method

    PubMed Central

    Ge, Zhaohui; Liang, Qionglin; Wang, Yiming; Luo, Guoan

    2014-01-01

    Pharmacokinetic characters of rhynchophylline (RIN), gastrodin (GAS), and gastrodigenin (p-hydroxybenzyl alcohol, HBA) were investigated after oral administration of different prescriptions of Yizhi: Yizhi tablets or effective parts of tianma (total saponins from Gastrodiae, EPT) and gouteng (rhynchophylla alkaloids, EPG). At different predetermined time points after administration, the concentrations of GAS, HBA, and RIN in rat plasma were determined by an HPLC-ESI/MS method, and the main pharmacokinetic parameters were investigated. The results showed that the pharmacokinetic parameters Cmax and AUC0–∞ (P < 0.05) were dramatically different after oral administration of different prescriptions of Yizhi. The data indicated that the pharmacokinetic processes of GAS, HBA, and RIN in rats would interact with each other or be affected by other components in Yizhi. The rationality of the compatibility of Uncaria and Gastrodia elata as a classic “herb pair” has been verified from the pharmacokinetic viewpoint. PMID:25610474

  4. Pharmacokinetic Comparative Study of Gastrodin and Rhynchophylline after Oral Administration of Different Prescriptions of Yizhi Tablets in Rats by an HPLC-ESI/MS Method.

    PubMed

    Ge, Zhaohui; Xie, Yuanyuan; Liang, Qionglin; Wang, Yiming; Luo, Guoan

    2014-01-01

    Pharmacokinetic characters of rhynchophylline (RIN), gastrodin (GAS), and gastrodigenin (p-hydroxybenzyl alcohol, HBA) were investigated after oral administration of different prescriptions of Yizhi: Yizhi tablets or effective parts of tianma (total saponins from Gastrodiae, EPT) and gouteng (rhynchophylla alkaloids, EPG). At different predetermined time points after administration, the concentrations of GAS, HBA, and RIN in rat plasma were determined by an HPLC-ESI/MS method, and the main pharmacokinetic parameters were investigated. The results showed that the pharmacokinetic parameters C max and AUC0-∞ (P < 0.05) were dramatically different after oral administration of different prescriptions of Yizhi. The data indicated that the pharmacokinetic processes of GAS, HBA, and RIN in rats would interact with each other or be affected by other components in Yizhi. The rationality of the compatibility of Uncaria and Gastrodia elata as a classic "herb pair" has been verified from the pharmacokinetic viewpoint. PMID:25610474

  5. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  6. Certifiable database generation for SVS

    NASA Astrophysics Data System (ADS)

    Schiefele, Jens; Damjanovic, Dejan; Kubbat, Wolfgang

    2000-06-01

    In future aircraft cockpits, SVS will be used to display 3D physical and virtual information to pilots. Prototype and production Synthetic Vision Displays (SVD) from Euro Telematic, UPS Advanced Technologies, Universal Avionics, VDO-Luftfahrtgerätewerk, and NASA are reviewed. Terrain, obstacle, navigation, and airport data are needed as data sources; Jeppesen-Sanderson, Inc. and Darmstadt Univ. of Technology are currently developing certifiable methods for the acquisition, validation, and processing of terrain, obstacle, and airport databases. The acquired data will be integrated into a High-Quality Database (HQ-DB). This database is the master repository; it contains all information relevant to all types of aviation applications. From the HQ-DB, SVS-relevant data is retrieved, converted, decimated, and adapted into an SVS Real-Time Onboard Database (RTO-DB). The process of data acquisition, verification, and data processing will be defined in a way that allows certification within DO-200a and new RTCA/EUROCAE standards for airport and terrain data. The open formats proposed will be established and evaluated for industrial usability. Finally, a NASA-industry cooperation to develop industrial SVS products under the umbrella of the NASA Aviation Safety Program (ASP) is introduced. A key element of the SVS NASA-ASP work is the Jeppesen-led task to develop methods for worldwide database generation and certification. Jeppesen will build three airport databases that will be used in flight trials with NASA aircraft.

  7. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This paper offers an elementary description of database characteristics and then provides a survey of databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  8. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  9. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon those results. Here "database" means any text information that can be stored on a computer: medical or police records, patents, journals, papers, and so on. Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities, it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships, and associations. The database tomography process would also be a powerful component in competitive intelligence, national security intelligence, and patent analysis. The importance of user interest and involvement cannot be overemphasized.
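
    The two starting points named above, word frequency and word proximity, reduce to simple counting. A minimal sketch in Python, assuming a plain whitespace tokenizer and an illustrative co-occurrence window of five words (neither assumption comes from the original Database Tomography work):

        # Word frequency and windowed co-occurrence counting, the two starting
        # points of the algorithm; tokenization and window size are assumptions.
        from collections import Counter

        def frequency_and_proximity(text, window=5):
            words = [w.lower().strip(".,;:()") for w in text.split()]
            words = [w for w in words if w]
            freq = Counter(words)                  # word frequency analysis
            prox = Counter()                       # word proximity analysis
            for i, w in enumerate(words):
                for u in words[i + 1:i + window]:  # neighbors within the window
                    if u != w:
                        prox[tuple(sorted((w, u)))] += 1
            return freq, prox

        freq, prox = frequency_and_proximity(
            "database tomography extracts themes from text; "
            "themes and their relationships emerge from word proximity")
        print(freq.most_common(3))
        print(prox.most_common(3))

    Frequent words suggest themes; frequently co-occurring pairs suggest relationships between themes.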

  10. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool to aid not only in studying the mitochondrion but also in studying the associated diseases.

  11. Alternative method of oral administration by peanut butter pellet formulation results in target engagement of BACE1 and attenuation of gavage-induced stress responses in mice.

    PubMed

    Gonzales, C; Zaleska, M M; Riddell, D R; Atchison, K P; Robshaw, A; Zhou, H; Sukoff Rizzo, S J

    2014-11-01

    Development of novel therapeutic agents aimed at treating neurodegenerative disorders such as Alzheimer's and Parkinson's diseases requires chronic, and preferably oral, dosing in appropriate preclinical rodent models. Since many of these disease models involve transgenic mice that are frequently aged and fragile, the commonly used oro-gastric gavage method of drug administration often confounds measured outcomes due to repeated stress and high attrition rates caused by esophageal complications. We employed a novel drug formulation in a peanut butter (PB) pellet readily consumed by mice and compared the stress response, as measured by plasma corticosterone levels, relative to oral administration via traditional gavage. Acute gavage produced significant elevations in plasma corticosterone comparable to those observed in mice subjected to stress-induced hyperthermia. In contrast, corticosterone levels following consumption of PB pellets were similar to levels in naive mice and significantly lower than in mice subjected to traditional gavage. Following sub-chronic administration, corticosterone levels remained significantly higher in mice subjected to gavage relative to mice administered PB pellets or naive controls. Furthermore, chronic 30-day dosing of a BACE inhibitor administered via PB pellets to PSAPP mice resulted in the expected plasma drug exposure and Aβ40 lowering, demonstrating target engagement. Taken together, this alternative method of oral administration of drug formulated in PB pellets yields the expected pharmacokinetics and pharmacodynamics with attenuated stress levels, and is devoid of the detrimental effects of repetitive oral gavage. PMID:25242810

  12. [Changes in features of diabetes care in Hungary in the period of years 2001-2014. Aims and methods of the database analysis of the National Health Insurance Fund].

    PubMed

    Jermendy, György; Kempler, Péter; Abonyi-Tóth, Zsolt; Rokszin, György; Wittmann, István

    2016-08-01

    In the last couple of years, database analyses have become increasingly popular in clinical-epidemiological research. In Hungary, the National Health Insurance Fund serves as the central database of all medical attendance in state institutions and of drug prescriptions purchased in pharmacies. Data from in- and outpatient departments as well as from pharmacies are regularly collected in this database, which is public and accessible on request. The aim of this retrospective study was to analyze the database of the National Health Insurance Fund with respect to diabetes-associated morbidity and mortality in the period 2001-2014. Moreover, data on therapeutic costs, features of hospitalizations, and practice of antidiabetic treatment were examined. The authors report here on the methods of the database analysis. It is hoped that the upcoming results of this investigation will add new data to current knowledge about diabetes care in Hungary. Orv. Hetil., 2016, 157(32), 1259-1265. PMID:27499284

  13. Administrative Synergy

    ERIC Educational Resources Information Center

    Hewitt, Kimberly Kappler; Weckstein, Daniel K.

    2012-01-01

    One of the biggest obstacles to overcome in creating and sustaining an administrative professional learning community (PLC) is time. Administrators are constantly deluged by the tyranny of the urgent. It is a Herculean task to carve out time for PLCs, but it is imperative to do so. In this article, the authors describe how an administrative PLC…

  14. Data mining in forensic image databases

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien

    2002-07-01

    Forensic image databases appear in a wide variety. The oldest computer databases hold fingerprints; other examples contain shoeprints, handwriting, cartridge cases, toolmarks, drug tablets, and faces. In these databases, searches are conducted on shape, color, and other forensic features. A wide variety of methods exists for searching the images in these databases; the result is a list of candidates that must be compared manually. The challenge in forensic science is to combine the information acquired: the combination of the shape of a partial shoe print with information on a cartridge case can result in stronger evidence. It is expected that by searching combinations of these databases together with other databases (e.g., network traffic information), more crimes will be solved. Searching in image databases is still difficult, as databases of faces show. Due to lighting conditions and alteration of the face by aging, it is nearly impossible for an image-searching method to rank the right face from a database of one million faces in top position without using other information. The methods for data mining images in databases (e.g., the MPEG-7 framework) are discussed, and expectations for future developments are presented in this study.

  15. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... section, the term “Administrator” shall refer to each of the TRS Numbering administrator, the administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... entity that is impartial and not an affiliate of any Internet-based TRS provider. (2) Neither...

  16. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... section, the term “Administrator” shall refer to each of the TRS Numbering administrator, the administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... entity that is impartial and not an affiliate of any Internet-based TRS provider. (2) Neither...

  17. OSSI-PET: Open-Access Database of Simulated [(11)C]Raclopride Scans for the Inveon Preclinical PET Scanner: Application to the Optimization of Reconstruction Methods for Dynamic Studies.

    PubMed

    Garcia, Marie-Paule; Charil, Arnaud; Callaghan, Paul; Wimberley, Catriona; Busso, Florian; Gregoire, Marie-Claude; Bardies, Manuel; Reilhac, Anthonin

    2016-07-01

    A wide range of medical imaging applications benefits from the availability of realistic ground truth data. In the case of positron emission tomography (PET), ground truth data is crucial for validating processing algorithms and assessing their performance. The design of such ground truth data often relies on Monte-Carlo simulation techniques. Since the creation of a large dataset is not trivial in terms of both computing time and realism, we propose the OSSI-PET database, containing 350 simulated [(11)C]Raclopride dynamic rat scans created specifically for the Inveon preclinical PET scanner. The originality of this database lies in the availability of several groups of scans with controlled biological variations in the striata. Besides, each group consists of a large number of realizations (i.e., noise replicates). We present the construction methodology of this database using rat pharmacokinetic and anatomical models. A first application of the OSSI-PET database is presented. Several commonly used reconstruction techniques were compared in terms of image quality, accuracy and variability of the activity estimates, and of the computed kinetic parameters. The results showed that the OP-OSEM3D iterative reconstruction method outperformed the other tested methods. Analytical methods such as FBP2D and 3DRP also produced satisfactory results; however, FORE followed by OSEM2D reconstruction should be avoided. Beyond illustrating the potential of the database, this application will help scientists understand the different sources of noise and bias that can occur at the different steps of processing and will be very useful for choosing appropriate reconstruction methods and parameters. PMID:26863655

  18. A Curated Database of Rodent Uterotrophic Bioactivity

    PubMed Central

    Kleinstreuer, Nicole C.; Ceger, Patricia C.; Allen, David G.; Strickland, Judy; Chang, Xiaoqing; Hamm, Jonathan T.; Casey, Warren M.

    2015-01-01

    Background: Novel in vitro methods are being developed to identify chemicals that may interfere with estrogen receptor (ER) signaling, but the results are difficult to put into biological context because of reliance on reference chemicals established using results from other in vitro assays and because of the lack of high-quality in vivo reference data. The Organisation for Economic Co-operation and Development (OECD)-validated rodent uterotrophic bioassay is considered the “gold standard” for identifying potential ER agonists. Objectives: We performed a comprehensive literature review to identify and evaluate data from uterotrophic studies and to analyze study variability. Methods: We reviewed 670 articles with results from 2,615 uterotrophic bioassays using 235 unique chemicals. Study descriptors, such as species/strain, route of administration, dosing regimen, lowest effect level, and test outcome, were captured in a database of uterotrophic results. Studies were assessed for adherence to six criteria that were based on uterotrophic regulatory test guidelines. Studies meeting all six criteria (458 bioassays on 118 unique chemicals) were considered guideline-like (GL) and were subsequently analyzed. Results: The immature rat model was used for 76% of the GL studies. Active outcomes were more prevalent across rat models (74% active) than across mouse models (36% active). Of the 70 chemicals with at least two GL studies, 18 (26%) had discordant outcomes and were classified as both active and inactive. Many discordant results were attributable to differences in study design (e.g., injection vs. oral dosing). Conclusions: This uterotrophic database provides a valuable resource for understanding in vivo outcome variability and for evaluating the performance of in vitro assays that measure estrogenic activity. Citation: Kleinstreuer NC, Ceger PC, Allen DG, Strickland J, Chang X, Hamm JT, Casey WM. 2016. A curated database of rodent uterotrophic bioactivity. Environ

  19. Difference in method of administration did not significantly impact item response: an IRT-based analysis from the Patient-Reported Outcomes Measurement Information System (PROMIS) initiative

    PubMed Central

    Rose, Matthias; Gandek, Barbara; Stone, Arthur A.; Junghaenel, Doerte U.; Ware, John E.

    2013-01-01

    Purpose To test the impact of method of administration (MOA) on the measurement characteristics of items developed in the Patient-Reported Outcomes Measurement Information System (PROMIS). Methods Two non-overlapping parallel 8-item forms from each of three PROMIS domains (physical function, fatigue, and depression) were completed by 923 adults (age 18–89) with chronic obstructive pulmonary disease, depression, or rheumatoid arthritis. In a randomized crossover design, subjects answered one form by interactive voice response (IVR) technology, paper questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) on the Internet, and a second form by PC, in the same administration. Structural invariance, equivalence of item responses, and measurement precision were evaluated using confirmatory factor analysis and item response theory methods. Results Multigroup confirmatory factor analysis supported equivalence of factor structure across MOA. Analyses by item response theory found no differences in item location parameters and strongly supported the equivalence of scores across MOA. Conclusions We found no statistically or clinically significant differences in score levels in IVR, PQ, or PDA administration as compared to PC. Availability of large item response theory-calibrated PROMIS item banks allowed for innovations in study design and analysis. PMID:23877585
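
    For readers unfamiliar with the item response theory referenced above, a minimal sketch of a two-parameter logistic (2PL) item response function follows; PROMIS items are actually calibrated with polytomous graded-response models, so the dichotomous form and all parameter values here are simplifying assumptions. Equivalence of "item location parameters" across modes means the per-mode b values coincide within error.

        import math

        def p_endorse(theta, a, b):
            # 2PL: probability of endorsing an item with discrimination a and
            # location (difficulty) b at latent trait level theta.
            return 1.0 / (1.0 + math.exp(-a * (theta - b)))

        # Hypothetical location parameters for one item under two modes of
        # administration; near-identical b's mean mode did not shift the item.
        for mode, b in [("PC", 0.42), ("IVR", 0.45)]:
            print(mode, round(p_endorse(theta=0.0, a=1.8, b=b), 3))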

  20. Peroral administration of 5-bromo-2-deoxyuridine in drinking water is not a reliable method for labeling proliferating S-phase cells in rats.

    PubMed

    Ševc, Juraj; Matiašová, Anna; Smoleková, Ivana; Jendželovský, Rastislav; Mikeš, Jaromír; Tomášová, Lenka; Kútna, Viera; Daxnerová, Zuzana; Fedoročko, Peter

    2015-01-01

    In rodents, peroral (p.o.) administration of 5-bromo-2'-deoxyuridine (BrdU) dissolved in drinking water is a widely used method for labeling newly formed cells over a prolonged time period. Despite the broad applicability of this method, the pharmacokinetics of BrdU in rats or mice after p.o. administration remains unknown. Moreover, the p.o. route of administration may be limited by the relatively low amount of BrdU consumed over 24 h and the characteristic drinking pattern of rats, with water intake observed predominantly during the dark phase. Therefore, we investigated the reliability of labeling proliferating S-phase cells with BrdU after p.o. administration (1 mg/ml) to rats under both in vitro and in vivo conditions. Flow cytometric analysis of tumor cells co-cultivated with sera from experimental animals exposed to BrdU dissolved in drinking water or 25% orange juice revealed that the concentration of BrdU in the blood sera of rats throughout the day was below the detection limits of our assay. Ingested BrdU was only sufficient to label approximately 4.2±0.3% (water) or 4.2±0.3% (25% juice) of all S-phase cells. Analysis of data from in vivo conditions indicates that only 7.6±3.3% or 15.5±2.3% of all S-phase cells in the dentate gyrus of the hippocampus were labeled in animals administered drinking water containing BrdU during the light or dark phase of the day, respectively. In addition, the intensity of BrdU-positive nuclei in animals receiving p.o. administration of BrdU was significantly lower than in control animals intraperitoneally injected with BrdU. Our data indicate that the conventional approach of p.o. administration of BrdU in drinking water to rats provides substantially inaccurate information about the number of proliferating cells in target tissues. Other administration routes, such as osmotic minipumps, should therefore be considered for labeling proliferating cells over a prolonged time period. PMID:26045061

  1. Leadership Styles of Nursing Home Administrators and Their Association with Staff Turnover

    ERIC Educational Resources Information Center

    Donoghue, Christopher; Castle, Nicholas G.

    2009-01-01

    Purpose: The purpose of this study was to examine the associations between nursing home administrator (NHA) leadership style and staff turnover. Design and Methods: We analyzed primary data from a survey of 2,900 NHAs conducted in 2005. The Online Survey Certification and Reporting database and the Area Resource File were utilized to extract…

  2. Conditions database system of the COMPASS experiment

    NASA Astrophysics Data System (ADS)

    Toeda, T.; Lamanna, M.; Duic, V.; Manara, A.

    2003-05-01

    The CERN SPS experiment COMPASS has integrated a Conditions Database System into its off-line software. The system is used to manage time-dependent information (detector conditions, calibration, and geometrical alignment data) using a package provided by CERN IT/DB. The integrated system consists of administration tools, a data-handling library, and software for transferring data from the detector control system to the Conditions Database. In this paper, the status of the Conditions Database project is described, and the results of performance tests on the COMPASS computing farm are given.

  3. [DICOM data conversion technology research for database].

    PubMed

    Wang, Shiyu; Lin, Hao

    2010-12-01

    A comprehensive medical image platform built for networked access to medical images, for measurements, and for virtual surgery navigation needs the support of medical image databases. The medical image database we built contains two-dimensional images and three-dimensional models. Common databases based on storing raw DICOM files do not meet these requirements. We use DICOM conversion technology to convert DICOM files into BMP images and the indispensable data elements, and then use the BMP images and data elements to reconstruct the three-dimensional model. The reliability of the DICOM data conversion is verified, and on this basis a human hip joint medical image database is built. Experimental results show that this method of building the medical image database not only meets the requirements of database applications but also greatly reduces the amount of database storage. PMID:21374999

  4. Modernizing Administration.

    ERIC Educational Resources Information Center

    Lombardi, Vincent L.; Hildebrand, Verna

    1981-01-01

    Suggests assignment of research duties and rotation of teaching and management roles for college administrators, to increase their effectiveness and diminish the negative effects of declining enrollments. (JD)

  5. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  6. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high-quality models needed to estimate population...

  7. Household Products Database: Pesticides

    MedlinePlus

  8. MPlus Database system

    SciTech Connect

    Not Available

    1989-01-20

    The MPlus Database program was developed to keep track of mail received. The system was developed by TRESP for the Department of Energy/Oak Ridge Operations. The MPlus Database program is a PC application, written in dBASE III+ and compiled with Clipper into an executable file. The files needed to run the MPlus Database program can be installed on a Bernoulli drive or a hard drive. This paper discusses the use of this database.

  9. Aviation Safety Issues Database

    NASA Technical Reports Server (NTRS)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders, such as the Commercial Aviation Safety Team (CAST), have already used the database. This broader interest was the genesis of making the database publicly accessible and writing this report.

  10. Mission and Assets Database

    NASA Technical Reports Server (NTRS)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  11. Plant and Crop Databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Databases have become an integral part of all aspects of biological research, including basic and applied plant biology. The importance of databases continues to increase as the volume of data from direct and indirect genomics approaches expands. What is not always obvious to users of databases is t...

  12. EMU Lessons Learned Database

    NASA Technical Reports Server (NTRS)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of a failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined in, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility across Johnson Space Center (JSC), which includes converting entries from Excel to HTML format. FIARs related to the EMU have been completed in the Excel version, and focus has now shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  13. Administrative Support.

    ERIC Educational Resources Information Center

    Doran, Dorothy; And Others

    This guide is intended to assist business education teachers in administrative support courses. The materials presented are based on the Arizona validated occupational competencies and tasks for the occupations of receptionist, secretary, and administrative assistant. Word processing skills have been infused into each of the three sections. The…

  14. Administrative Ecology

    ERIC Educational Resources Information Center

    McGarity, Augustus C., III; Maulding, Wanda

    2007-01-01

    This article discusses how all four facets of administrative ecology help dispel the claims about the "impossibility" of the superintendency. These are personal ecology, professional ecology, organizational ecology, and community ecology. Using today's superintendency as an administrative platform, current literature describes a preponderance of…

  15. Comparison of arterial pressure and plasma ANG II responses to three methods of subcutaneous ANG II administration

    PubMed Central

    Kuroki, Marcos T.; Fink, Gregory D.

    2014-01-01

    Angiotensin II (ANG II)-induced hypertension is a commonly studied model of experimental hypertension, particularly in rodents, and is often generated by subcutaneous delivery of ANG II using Alzet osmotic minipumps chronically implanted under the skin. We have observed that, in a subset of animals subjected to this protocol, mean arterial pressure (MAP) begins to decline gradually starting the second week of ANG II infusion, resulting in a blunting of the slow pressor response and reduced final MAP. We hypothesized that this variability in the slow pressor response to ANG II was mainly due to factors unique to Alzet pumps. To test this, we compared the pressure profile and changes in plasma ANG II levels during subcutaneous ANG II administration (150 ng·kg−1·min−1) using either Alzet minipumps, iPrecio implantable pumps, or a Harvard external infusion pump. At the end of 14 days of ANG II, MAP was highest in the iPrecio group (156 ± 3 mmHg) followed by Harvard (140 ± 3 mmHg) and Alzet (122 ± 3 mmHg) groups. The rate of the slow pressor response, measured as daily increases in pressure averaged over days 2–14 of ANG II, was similar between iPrecio and Harvard groups (2.7 ± 0.4 and 2.2 ± 0.4 mmHg/day) but was significantly blunted in the Alzet group (0.4 ± 0.4 mmHg/day) due to a gradual decline in MAP in a subset of rats. We also found differences in the temporal profile of plasma ANG II between infusion groups. We conclude that the gradual decline in MAP observed in a subset of rats during ANG II infusion using Alzet pumps is mainly due to pump-dependent factors when applied in this particular context. PMID:24993045

  16. The Education of Librarians for Data Administration.

    ERIC Educational Resources Information Center

    Koenig, Michael E. D.; Kochoff, Stephen T.

    1983-01-01

    Argues that the increasing importance of database management systems (DBMS) and recognition of the information dependency of business planning are creating new job opportunities for librarians/information technicians. Highlights include development and functions of DBMSs, data and database administration, potential for librarians, and implications…

  17. ORNL RAIL & BARGE DB. Network Database

    SciTech Connect

    Johnson, P.

    1991-07-01

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Rail Administration (FRA) rail database. The database consists of 96 subnetworks. Each subnetwork represents an individual railroad, a waterway system, or a composite group of small railroads. Two subnetworks represent waterways: one is barge/intercoastal, and the other is coastal merchant marine with access through the Great Lakes/Saint Lawrence Seaway, the Atlantic and Gulf Coasts, the Panama Canal, and the Pacific Coast. Two other subnetworks represent small shortline railroads and terminal railroad operations. One subnetwork is maintained for the representation of Amtrak operations. The remaining 91 subnetworks represent individual or corporate groups of railroads. Coordinate locations are included as part of the database. The rail portion of the database is similar to the original FRA rail network; the waterway coordinates are greatly enhanced in the current release. Inland waterway representation was extracted from the 1:2,000,000 United States Geological Survey data. An important aspect of the database is the transfer file, which identifies where two railroads interline traffic between their systems. Also included are locations where rail/waterway intermodal transfers could occur. Other files in the database include a translation table from Association of American Railroads (AAR) codes to the 96 subnetworks, a list of names of the 96 subnetworks, and a file of names for a large proportion of the nodes in the network.

  19. Effectiveness of different corticosterone administration methods to elevate corticosterone serum levels, induce depressive-like behavior, and affect neurogenesis levels in female rats.

    PubMed

    Kott, J M; Mooney-Leber, S M; Shoubah, F A; Brummelte, S

    2016-01-15

    High levels of chronic stress or stress hormones are associated with depressive-like behavior in animal models. However, slight elevations in corticosterone (CORT), the major stress hormone in rodents, have also been associated with improved performance, albeit in a sex-dependent manner. Some of the discrepancies in the literature regarding the effects of high CORT levels may be due to different administration methods. The current study compares the effects of ∼40 mg/kg CORT given either via subcutaneous injection, through an implanted pellet, or in the drinking water, for ∼21 days, on CORT serum levels, depressive-like behavior in the forced swim test (FST), and neurogenesis levels in the dentate gyrus (DG) of adult female rats. We found that animals exposed to daily injections showed elevated CORT levels throughout the administration period, while the pellet animals showed only a transient increase and the drinking-water animals showed no elevation of serum CORT. In addition, only the injection group exhibited higher levels of immobility in the FST. Interestingly, animals receiving CORT via injection or drinking water had lower numbers of doublecortin-positive cells in the ventral DG one week after the last CORT administration compared to animals implanted with a CORT pellet. These results will contribute to the growing literature on the effects of chronic CORT exposure and may help to clarify some of the discrepancies among previous studies, particularly in females. PMID:26556064

  20. The Hidden Dimensions of Databases.

    ERIC Educational Resources Information Center

    Jacso, Peter

    1994-01-01

    Discusses methods of evaluating commercial online databases and provides examples that illustrate their hidden dimensions. Topics addressed include size, including the number of records or the number of titles; the number of years covered; and the frequency of updates. Comparisons of Readers' Guide Abstracts and Magazine Article Summaries are…

  1. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  3. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    PubMed Central

    Armato, Samuel G.; McLennan, Geoffrey; Bidaut, Luc; McNitt-Gray, Michael F.; Meyer, Charles R.; Reeves, Anthony P.; Zhao, Binsheng; Aberle, Denise R.; Henschke, Claudia I.; Hoffman, Eric A.; Kazerooni, Ella A.; MacMahon, Heber; van Beek, Edwin J. R.; Yankelevitz, David; Biancardi, Alberto M.; Bland, Peyton H.; Brown, Matthew S.; Engelmann, Roger M.; Laderach, Gary E.; Max, Daniel; Pais, Richard C.; Qing, David P.-Y.; Roberts, Rachael Y.; Smith, Amanda R.; Starkey, Adam; Batra, Poonam; Caligiuri, Philip; Farooqi, Ali; Gladish, Gregory W.; Jude, C. Matilda; Munden, Reginald F.; Petkovska, Iva; Quint, Leslie E.; Schwartz, Lawrence H.; Sundaram, Baskaran; Dodd, Lori E.; Fenimore, Charles; Gur, David; Petrick, Nicholas; Freymann, John; Kirby, Justin; Hughes, Brian; Vande Casteele, Alessi; Gupte, Sangeeta; Sallam, Maha; Heath, Michael D.; Kuhn, Michael H.; Dharaiya, Ekta; Burns, Richard; Fryd, David S.; Salganicoff, Marcos; Anand, Vikram; Shreter, Uri; Vastagh, Stephen; Croft, Barbara Y.; Clarke, Laurence P.

    2011-01-01

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC∕IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories (“nodule≥3 mm,” “nodule<3 mm,” and “non-nodule≥3 mm”). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked “nodule” by at least one radiologist. 2669 of these lesions were marked “nodule≥3 mm” by at least one radiologist, of which 928 (34.7%) received such

  4. Data exploration systems for databases

    NASA Technical Reports Server (NTRS)

    Greene, Richard J.; Hield, Christopher

    1992-01-01

    Data exploration systems apply machine learning techniques, multivariate statistical methods, information theory, and database theory to databases to identify significant relationships among the data and summarize information. The result of applying a data exploration system should be a better understanding of the structure of the data and a perspective that enables an analyst to form hypotheses for interpreting it. This paper argues that data exploration systems need a minimum amount of domain knowledge to guide both the statistical strategy and the interpretation of the resulting patterns discovered by these systems.

  5. A Web-based database for pathology faculty effort reporting.

    PubMed

    Dee, Fred R; Haugen, Thomas H; Wynn, Philip A; Leaven, Timothy C; Kemp, John D; Cohen, Michael B

    2008-04-01

    To ensure appropriate mission-based budgeting and equitable distribution of funds for faculty salaries, our compensation committee developed a pathology-specific effort reporting database. Principles included the following: (1) measurement should be done through web-based databases; (2) most data entry should be done by departmental administration or be relational to other databases; (3) data entry categories should be aligned with funding streams; and (4) units of effort should be equal across categories of effort (service, teaching, research). MySQL was used for all data transactions (http://dev.mysql.com/downloads), and scripts were constructed using PERL (http://www.perl.org). Data are accessed with forms that correspond to fields in the database. The committee's work resulted in a novel database using pathology value units (PVUs) as a standard quantitative measure of effort for activities in an academic pathology department. The most common calculation was to estimate the number of hours required for a specific task, divide by 2080 hours (a Medicare year), and then multiply by 100. Other methods included assigning a baseline PVU for a program, laboratory, or course directorship, with an increment for each student or staff member in that unit. With these methods, a faculty member should acquire approximately 100 PVUs. Some outcomes include (1) plotting PVUs versus salary to identify outliers for salary correction, (2) quantifying effort in activities outside the department, (3) documenting salary expenditure for unfunded research, (4) evaluating salary equity by plotting PVUs versus salary by sex, and (5) aggregating data by category of effort for mission-based budgeting and long-term planning. PMID:18342660
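
    The PVU arithmetic described above translates directly into code: hours of effort are scaled against the 2080-hour Medicare year and multiplied by 100, and directorships receive a baseline plus a per-person increment. A minimal sketch; the baseline and increment values are illustrative placeholders, not the committee's actual figures:

        # PVU arithmetic as described in the abstract; directorship numbers
        # below are invented placeholders, not the committee's values.
        MEDICARE_YEAR_HOURS = 2080

        def pvu_from_hours(hours_per_year):
            # Hours for a task scaled so that a full effort year is ~100 PVUs.
            return hours_per_year / MEDICARE_YEAR_HOURS * 100

        def pvu_directorship(baseline_pvu, per_person_pvu, n_people):
            # Baseline PVU for a directorship plus an increment per person.
            return baseline_pvu + per_person_pvu * n_people

        total = pvu_from_hours(600) + pvu_directorship(baseline_pvu=5,
                                                       per_person_pvu=0.5,
                                                       n_people=12)
        print(round(pvu_from_hours(600), 1))  # 28.8 PVUs for 600 hours of service
        print(round(total, 1))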

  6. [Evaluation of the Association of Hand-Foot Syndrome with Anticancer Drugs Using the US Food and Drug Administration Adverse Event Reporting System (FAERS) and Japanese Adverse Drug Event Report (JADER) Databases].

    PubMed

    Sasaoka, Sayaka; Matsui, Toshinobu; Abe, Junko; Umetsu, Ryogo; Kato, Yamato; Ueda, Natsumi; Hane, Yuuki; Motooka, Yumi; Hatahira, Haruna; Kinosada, Yasutomi; Nakamura, Mitsuhiro

    2016-01-01

    The Japanese Ministry of Health, Labor, and Welfare lists hand-foot syndrome as a serious adverse drug event. We therefore evaluated its association with anticancer drug therapy using case reports in the Japanese Adverse Drug Event Report (JADER) database and the US Food and Drug Administration Adverse Event Reporting System (FAERS). In addition, we calculated the reporting odds ratio (ROR) for anticancer drugs potentially associated with hand-foot syndrome and applied the Weibull shape parameter to time-to-event data from JADER. JADER contained 338,224 reports from April 2004 to November 2014, while FAERS contained 5,821,354 reports from January 2004 to June 2014. In JADER, the RORs [95% confidence interval (CI)] of hand-foot syndrome for capecitabine, tegafur-gimeracil-oteracil, fluorouracil, sorafenib, and regorafenib were 63.60 (95% CI, 56.19-71.99), 1.30 (95% CI, 0.89-1.89), 0.48 (95% CI, 0.30-0.77), 26.10 (95% CI, 22.86-29.80), and 133.27 (95% CI, 112.85-157.39), respectively. Adverse event symptoms of hand-foot syndrome were observed with most anticancer drugs that carry warnings of a propensity to cause these effects in their drug information literature. The time-to-event analysis using the Weibull shape parameter revealed differences in the time dependency of the adverse events of each drug. Anticancer drugs should therefore be used carefully in clinical practice, and patients may require careful monitoring for symptoms of hand-foot syndrome. PMID:26935094
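
    A reporting odds ratio of the kind quoted above is computed from a 2x2 contingency table of spontaneous reports, with an approximate 95% CI obtained on the log scale. A minimal sketch using the standard disproportionality formula; the counts are invented for illustration and are not taken from JADER or FAERS:

        import math

        def ror_with_ci(a, b, c, d, z=1.96):
            # a: target drug & target event   b: target drug, other events
            # c: other drugs, target event    d: other drugs, other events
            ror = (a / b) / (c / d)
            se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(ROR)
            lo = math.exp(math.log(ror) - z * se)
            hi = math.exp(math.log(ror) + z * se)
            return ror, lo, hi

        # Invented counts for one drug/event pair:
        print(tuple(round(x, 2) for x in ror_with_ci(120, 880, 1500, 335000)))

    An ROR well above 1 with a CI excluding 1 flags a disproportionally reported drug-event pair, as for capecitabine and regorafenib above.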

  7. WDDD: Worm Developmental Dynamics Database.

    PubMed

    Kyoda, Koji; Adachi, Eru; Masuda, Eriko; Nagai, Yoko; Suzuki, Yoko; Oguro, Taeko; Urai, Mitsuru; Arai, Ryoko; Furukawa, Mari; Shimada, Kumiko; Kuramochi, Junko; Nagai, Eriko; Onami, Shuichi

    2013-01-01

    During animal development, cells undergo dynamic changes in position and gene expression. A collection of quantitative information about morphological dynamics under a wide variety of gene perturbations would provide a rich resource for understanding the molecular mechanisms of development. Here, we created a database, the Worm Developmental Dynamics Database (http://so.qbic.riken.jp/wddd/), which stores a collection of quantitative information about cell division dynamics in early Caenorhabditis elegans embryos with single genes silenced by RNA-mediated interference. The information contains the three-dimensional coordinate values of the outlines of nuclear regions and the dynamics of the outlines over time. The database provides free access to 50 sets of quantitative data for wild-type embryos and 136 sets of quantitative data for RNA-mediated interference embryos corresponding to 72 of the 97 essential embryonic genes on chromosome III. The database also provides sets of four-dimensional differential interference contrast microscopy images on which the quantitative data were based. The database will provide a novel opportunity for the development of computational methods to obtain fresh insights into the mechanisms of development. The quantitative information and microscopy images can be synchronously viewed through a web browser, which is designed for easy access by experimental biologists. PMID:23172286

  8. Developing a DNA variant database.

    PubMed

    Fung, David C Y

    2008-01-01

    Disease- and locus-specific variant databases have been a valuable resource to clinical and research geneticists. With the recent rapid developments in technologies, the number of DNA variants detected in a typical molecular genetics laboratory easily exceeds 1,000. To keep track of the growing inventory of DNA variants, many laboratories employ information technology to store the data as well as to distribute the data and its associated information to clinicians and researchers via the Web. While it is a valuable resource, hosting a web-accessible database requires collaboration between bioinformaticians and biologists and careful planning to ensure its usability and availability. In this chapter, a series of tutorials on building a local DNA variant database from a sample dataset is provided. The tutorials do not include programming details on building a web interface or on constructing the web application necessary for web hosting; instead, an introduction to the two commonly used methods for hosting web-accessible variant databases is given. Apart from the tutorials, this chapter also considers the resources and planning required for making a variant database project successful. PMID:18453092

  9. Implementing a Microcomputer Database Management System.

    ERIC Educational Resources Information Center

    Manock, John J.; Crater, K. Lynne

    1985-01-01

    Current issues in selecting, structuring, and implementing microcomputer database management systems in research administration offices are discussed, and their capabilities are illustrated with the system used by the University of North Carolina at Wilmington. Trends in microcomputer technology and their likely impact on research administration…

  10. Creative Classroom Assignment Through Database Management.

    ERIC Educational Resources Information Center

    Shah, Vivek; Bryant, Milton

    1987-01-01

    The Faculty Scheduling System (FSS), a database management system designed to give administrators the ability to schedule faculty in a fast and efficient manner is described. The FSS, developed using dBASE III, requires an IBM compatible microcomputer with a minimum of 256K memory. (MLW)

  11. Observational database for studies of nearby universe

    NASA Astrophysics Data System (ADS)

    Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.

    2012-01-01

    We present a description of a database of galaxies of the Local Volume (LVG), located within 10 Mpc of the Milky Way. It contains more than 800 objects. Based on an analysis of functional capabilities, we chose the PostgreSQL DBMS as the management system for our LVG database. Applying semantic modelling methods, we developed a physical ER-model of the database. We describe the architecture of the database table structure and the implemented web access, available at http://www.sao.ru/lv/lvgdb.

  12. HPLC method for comparative study on tissue distribution in rat after oral administration of salvianolic acid B and phenolic acids from Salvia miltiorrhiza.

    PubMed

    Xu, Man; Fu, Gang; Qiao, Xue; Wu, Wan-Ying; Guo, Hui; Liu, Ai-Hua; Sun, Jiang-Hao; Guo, De-An

    2007-10-01

    A sensitive and selective high-performance liquid chromatography method was developed and validated to determine the prototype salvianolic acid B and the phenolic acid metabolites (protocatechuic acid, vanillic acid, and ferulic acid) in rat tissues after oral administration of total phenolic acids and of salvianolic acid B extracted from the roots of Salvia miltiorrhiza, respectively. The tissue samples were treated with a simple liquid-liquid extraction prior to HPLC. Analysis of the extract was performed on a reverse-phase C(18) column with a mobile phase consisting of acetonitrile and 0.05% trifluoroacetic acid. The calibration curves for the four phenolic acids were linear in the given concentration ranges. The intra-day and inter-day relative standard deviations in the measurement of quality control samples were less than 10%, and the accuracies were in the range of 88-115%. The average recoveries in all tissues ranged from 78.0 to 111.8%. This method was successfully applied to evaluate the distribution of the four phenolic acids in rat tissues after oral administration of total phenolic acids of Salvia miltiorrhiza or salvianolic acid B, and a possible metabolic pathway was illustrated. PMID:17549679
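
    The calibration-curve step implied above amounts to fitting a line to the detector responses of standards and inverting it for unknowns. A minimal sketch, with illustrative concentrations and peak areas rather than data from the study:

        # Linear calibration fit and back-calculation of an unknown; the
        # standards and areas below are invented for illustration.
        import numpy as np

        conc_std = np.array([0.1, 0.5, 1.0, 5.0, 10.0])           # ug/mL standards
        area_std = np.array([12.0, 58.0, 119.0, 600.0, 1190.0])   # detector response

        slope, intercept = np.polyfit(conc_std, area_std, 1)      # least-squares line

        def back_calculate(area):
            # Invert the calibration line to get concentration from peak area.
            return (area - intercept) / slope

        print(round(back_calculate(300.0), 2))  # ug/mL in an unknown extract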

  13. Physics-Based GOES Satellite Product for Use in NREL's National Solar Radiation Database: Preprint

    SciTech Connect

    Sengupta, M.; Habte, A.; Gotseff, P.; Weekley, A.; Lopez, A.; Molling, C.; Heidinger, A.

    2014-07-01

    The National Renewable Energy Laboratory (NREL), the University of Wisconsin, and the National Oceanic and Atmospheric Administration are collaborating to investigate the integration of the Satellite Algorithm for Shortwave Radiation Budget (SASRAB) products into future versions of NREL's 4-km by 4-km gridded National Solar Radiation Database (NSRDB). This paper describes a method for selecting an improved clear-sky model that could replace the current SASRAB global horizontal and direct normal irradiances reported during clear-sky conditions.

  14. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with the database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation for, and later upgrades of, remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience of running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several mission-critical use cases for database replication between the online and offline databases of the LHC experiments.

  15. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Ensuring that the data collection process does not itself introduce inaccuracies helps to assure the overall quality of subsequent analyses. Data management involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting the data with high security levels. A properly designed database provides access to up-to-date, accurate information. Database design is an important component of application design: taking the time to design databases properly yields a solid application foundation on which the rest of the application can be built. PMID:27197511

  16. An Introduction to Database Management Systems.

    ERIC Educational Resources Information Center

    Warden, William H., III; Warden, Bette M.

    1984-01-01

    Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…

  17. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C., Jr.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and a Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used in designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
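
    As a rough illustration of the transaction-plus-log pattern described above, the following Python/sqlite3 sketch inserts a karst feature and its data-log entry atomically, so the log cannot fall out of step with the data. The table layout and column names are hypothetical, chosen for the sketch only, not the actual KFD schema.

      # Minimal sketch of transactional inserts with a data log, loosely
      # modeled on the practices described above. Schema is hypothetical.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE karst_feature (
              feature_id   INTEGER PRIMARY KEY,
              feature_type TEXT NOT NULL,   -- e.g. 'sinkhole', 'spring'
              county       TEXT,
              utm_east     REAL,
              utm_north    REAL
          );
          CREATE TABLE data_log (
              log_id     INTEGER PRIMARY KEY,
              feature_id INTEGER REFERENCES karst_feature(feature_id),
              action     TEXT,
              logged_at  TEXT DEFAULT CURRENT_TIMESTAMP
          );
      """)

      def add_feature(feature_type, county, east, north):
          """Insert a feature and its log entry in one transaction."""
          with conn:  # commits on success, rolls back on any exception
              cur = conn.execute(
                  "INSERT INTO karst_feature "
                  "(feature_type, county, utm_east, utm_north) "
                  "VALUES (?, ?, ?, ?)",
                  (feature_type, county, east, north))
              conn.execute(
                  "INSERT INTO data_log (feature_id, action) "
                  "VALUES (?, 'insert')", (cur.lastrowid,))

      add_feature("sinkhole", "Winona", 601234.0, 4867890.0)
      print(conn.execute("SELECT COUNT(*) FROM data_log").fetchone()[0])  # 1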

  18. Validation of chronic obstructive pulmonary disease (COPD) diagnoses in healthcare databases: a systematic review protocol

    PubMed Central

    Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro

    2016-01-01

    Introduction Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. Methods and analysis MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system; the Read codes system or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics and dissemination Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating COPD diagnoses in healthcare databases.

  19. 2010 Worldwide Gasification Database

    DOE Data Explorer

    The 2010 Worldwide Gasification Database describes the current world gasification industry and identifies near-term planned capacity additions. The database lists gasification projects and includes information (e.g., plant location, number and type of gasifiers, syngas capacity, feedstock, and products). The database reveals that the worldwide gasification capacity has continued to grow for the past several decades and is now at 70,817 megawatts thermal (MWth) of syngas output at 144 operating plants with a total of 412 gasifiers.

  20. ITS-90 Thermocouple Database

    National Institute of Standards and Technology Data Gateway

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  1. Opening CEM vendor databases

    SciTech Connect

    Long, A.; Patel, D.

    1995-12-31

    CEM database performance requirements (i.e., voluminous data storage, rapid response times) often conflict with the concept of an open, accessible database. Utilities would like to use their CEM data for more purposes than simply submitting environmental reports. But in most cases, other uses are inhibited because today's sophisticated CEM systems incorporate databases that have forsaken openness and accessibility in favor of performance. Several options are available for CEM vendors wishing to move in the direction of open, accessible CEM databases.

  2. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  3. Backing up DMF Databases

    NASA Technical Reports Server (NTRS)

    Cardo, Nicholas P.; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    A complete backup of the Cray Data Migration Facility (DMF) databases should include the data migration databases, all media-specific process (MSP) databases, and the journal file. It should be possible to accomplish the backup without impacting users or stopping DMF. The High Speed Processors group at the Numerical Aerodynamics Simulation (NAS) Facility at NASA Ames Research Center undertook the task of finding an effective and efficient way to back up all DMF databases. This has been accomplished by taking advantage of new features introduced in DMF 2.0 and adding a minor modification to the dmdaemon. This paper discusses the investigation and the changes necessary to implement these enhancements.

  4. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it, as sketched below. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
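
    A minimal Python sketch of this linkage follows: a unique metadata ID ties each Basic Event to its data source, a stressing factor is applied on top of the generic rate, and a conjugate gamma-Poisson update folds in observed experience. Every identifier and number here is illustrative, not taken from any real model.

      # Hypothetical basic-event and data-source tables keyed by metadata IDs,
      # with a stressing factor and a gamma-Poisson Bayesian update.

      SOURCES = {  # data-source table: ID -> generic failure rate (per hour)
          "SRC-0001": {"rate": 1.0e-6, "reference": "generic valve data"},
          "SRC-0002": {"rate": 5.0e-7, "reference": "generic sensor data"},
      }

      BASIC_EVENTS = {  # model table: basic event -> source ID + manipulations
          "BE-VALVE-FTO":   {"source": "SRC-0001", "stress_factor": 2.0},
          "BE-SENSOR-FAIL": {"source": "SRC-0002", "stress_factor": 1.0},
      }

      def event_rate(event_id):
          """Trace a Basic Event to its source and apply its stressing factor."""
          ev = BASIC_EVENTS[event_id]
          return SOURCES[ev["source"]]["rate"] * ev["stress_factor"]

      def bayes_update(prior_rate, prior_strength_hours, failures, hours):
          """Gamma-Poisson update; prior_rate*strength acts as pseudo-failures."""
          alpha = prior_rate * prior_strength_hours + failures
          beta = prior_strength_hours + hours
          return alpha / beta  # posterior mean failure rate

      r = event_rate("BE-VALVE-FTO")
      print(r)                                 # 2.0e-06, stressed generic rate
      print(bayes_update(r, 1.0e6, 0, 5.0e5))  # updated: 0 failures in 5e5 h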

  5. Automated tools for cross-referencing large databases. Final report

    SciTech Connect

    Clapp, N E; Green, P L; Bell, D

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It has been concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. The ability to provide these improvements to the local office staff would require the provision of additional computers, software, and electronic data links within each of the offices and the establishment of approved methods of accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain if existing laws would allow such data transfers, and if not, what changed or new laws would be required. The benefits, in both cost and efficiency, to the state of Tennessee of having electronically enhanced welfare system administration and control are expected to result in a rapid return on investment.

  6. Creating a VAPEPS database: A VAPEPS tutorial

    NASA Technical Reports Server (NTRS)

    Graves, George

    1989-01-01

    A procedural method is outlined for creating a Vibroacoustic Payload Environment Prediction System (VAPEPS) Database. The method of presentation employs flowcharts of sequential VAPEPS Commands used to create a VAPEPS Database. The commands are accompanied by explanatory text to the right of the command in order to minimize the need for repetitive reference to the VAPEPS user's manual. The method is demonstrated by examples of varying complexity. It is assumed that the reader has acquired a basic knowledge of the VAPEPS software program.

  7. 77 FR 71089 - Pilot Loading of Aeronautical Database Updates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-29

    ... FAA issued regulations (61 FR 19498) categorizing pilot-performed updates of navigation databases as... regulations in the NPRM (76 FR 64859, October 19, 2011), by removing the task of updating databases used in... Federal Aviation Administration 14 CFR Part 43 RIN 2120-AJ91 Pilot Loading of Aeronautical...

  8. Video Databases: An Emerging Tool in Business Education

    ERIC Educational Resources Information Center

    MacKinnon, Gregory; Vibert, Conor

    2014-01-01

    A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…

  9. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

    Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods:  We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved

  10. Comparative pharmacokinetics of active alkaloids after oral administration of Rhizoma Coptidis extract and Wuji Wan formulas in rat using a UPLC-MS/MS method.

    PubMed

    Chen, Ying; Li, Yuejie; Wang, Yajie; Yang, Qing; Dong, Yu; Weng, Xiaogang; Zhu, Xiaoxin; Wang, Yiwei; Gong, Zipeng; Zhang, Ruijie

    2015-03-01

    Wuji Wan (WJW), containing Rhizoma Coptidis (Huanglian in Chinese, HL), Fructus Evodiae Rutaecarpae (Wuzhuyu, WZY) and Radix Paeoniae Alba (Baishao, BS), is a classical traditional Chinese medical formula employed in treating intestinal disorders. Berberine (BBR) and palmatine (PMT) are the major active alkaloids in HL and have analgesic and anti-microbial effects. A sensitive, specific and validated ultra-performance liquid chromatography-tandem mass spectrometric method was developed to investigate the pharmacokinetic profiles of BBR and PMT in rat plasma and in situ intestinal perfusion solution. Compared with the pharmacokinetic parameters of BBR and PMT (t1/2, Cmax, Tmax, AUC, CL and MRT) after intragastric (i.g.) administration of HL extract alone, the parameters changed remarkably after i.g. administration of WJW formulas 1 and 2 (herb proportions of 12:2:3 and 12:1:12, respectively). In particular, the oral bioavailability of PMT in WJW formula 1 was significantly increased. In rat intestinal perfusion experiments, the apparent permeability coefficient of PMT was (1.45 ± 0.72) × 10⁻⁵ cm/s when perfusion was performed with HL alone, and the value was significantly increased to (3.92 ± 0.52) × 10⁻⁵ cm/s on perfusion with WJW formula 1. These results indicate that the pharmacokinetic parameters and absorption of BBR and PMT are affected by the other herbs or ingredients in the WJW formulas. PMID:24577954
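
    For context on the permeability figures quoted above, the sketch below evaluates the standard single-pass perfusion expression Papp = -Q * ln(C_out/C_in) / (2*pi*r*L); the abstract does not state which expression the authors used, and the flow rate and segment geometry are illustrative values only, not data from the study.

      # Apparent permeability from a single-pass intestinal perfusion,
      # using the standard parallel-tube expression. Values are illustrative.
      import math

      def papp(q_ml_per_s, c_out_over_c_in, radius_cm, length_cm):
          """Apparent permeability (cm/s) from the outlet/inlet ratio."""
          area = 2.0 * math.pi * radius_cm * length_cm  # segment surface
          return -q_ml_per_s * math.log(c_out_over_c_in) / area

      # e.g. 0.2 mL/min through a 10 cm segment of 0.18 cm radius, with 8%
      # of the alkaloid absorbed (C_out/C_in = 0.92):
      q = 0.2 / 60.0  # mL/s
      print(f"{papp(q, 0.92, 0.18, 10.0):.2e} cm/s")  # ~2.5e-05 cm/s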

  11. Paleoepidemiologic investigation of Legionnaires disease at Wadsworth Veterans Administration Hospital by using three typing methods for comparison of legionellae from clinical and environmental sources.

    PubMed Central

    Edelstein, P H; Nakahama, C; Tobin, J O; Calarco, K; Beer, K B; Joly, J R; Selander, R K

    1986-01-01

    Multilocus enzyme electrophoresis, monoclonal antibody typing for Legionella pneumophila serogroup 1, and plasmid analysis were used to type 89 L. pneumophila strains isolated from nosocomial cases of Legionnaires disease at the Veterans Administration Wadsworth Medical Center (VAWMC) and from the hospital environment. Twelve L. pneumophila clinical isolates, obtained from patients at non-VAWMC hospitals, were also typed by the same methods to determine typing specificity. Seventy-nine percent of 33 VAWMC L. pneumophila serogroup 1 clinical isolates and 70% of 23 environmental isolates were found in only one of the five monoclonal subgroups. Similar clustering was found for the other two typing methods, with excellent correlation between all methods. Enzyme electrophoretic typing divided the isolates into the greatest number of distinct groups, resulting in the identification of 10 different L. pneumophila types and 5 types not belonging to L. pneumophila, which probably constitute an undescribed Legionella species; 7 clinical and 34 environmental VAWMC isolates and 2 non-VAWMC clinical isolates were found to be members of the new species. Twelve different plasmid patterns were found; 95% of VAWMC clinical isolates contained plasmids. Major VAWMC epidemic-bacterial types were common in the hospital potable-water distribution system and cooling towers. Strains of L. pneumophila which persisted after disinfection of contaminated environmental sites were of a different type from the prechlorination strains. All three typing methods were useful in the epidemiologic analysis of the VAWMC outbreak. PMID:3711303

  12. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  13. Ionic Liquids Database (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 Ionic Liquids Database (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from publications on experimental investigations of the thermodynamic and transport properties of ionic liquids, as well as binary and ternary mixtures containing ionic liquids.

  14. Database Searching by Managers.

    ERIC Educational Resources Information Center

    Arnold, Stephen E.

    Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…

  15. Morchella MLST database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Welcome to the Morchella MLST database. This dedicated database was set up at the CBS-KNAW Biodiversity Center by Vincent Robert in February 2012, using BioloMICS software (Robert et al., 2011), to facilitate DNA sequence-based identifications of Morchella species via the Internet. The current datab...

  16. First Look: TRADEMARKSCAN Database.

    ERIC Educational Resources Information Center

    Fernald, Anne Conway; Davidson, Alan B.

    1984-01-01

    Describes database produced by Thomson and Thomson and available on Dialog which contains over 700,000 records representing all active federal trademark registrations and applications for registrations filed in United States Patent and Trademark Office. A typical record, special features, database applications, learning to use TRADEMARKSCAN, and…

  17. HIV Structural Database

    National Institute of Standards and Technology Data Gateway

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  18. Biological Macromolecule Crystallization Database

    National Institute of Standards and Technology Data Gateway

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  19. Assignment to database industry

    NASA Astrophysics Data System (ADS)

    Abe, Kohichiroh

    Various kinds of databases are considered to be an essential part of future large-scale systems. Information provision by databases alone is also expected to grow as the market matures. This paper discusses how these circumstances have arisen and how they are likely to develop from now on.

  20. Dictionary as Database.

    ERIC Educational Resources Information Center

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  1. BioImaging Database

    2006-10-25

    The BioImaging Database (BID) is a relational database developed to store the data and meta-data for the 3D gene expression in early Drosophila embryo development on a cellular level. The schema was written to be used with the MySQL DBMS but with minor modifications can be used on any SQL-compliant relational DBMS.

  2. Build Your Own Database.

    ERIC Educational Resources Information Center

    Jacso, Peter; Lancaster, F. W.

    This book is intended to help librarians and others to produce databases of better value and quality, especially if they have had little previous experience in database construction. Drawing upon almost 40 years of experience in the field of information retrieval, this book emphasizes basic principles and approaches rather than in-depth and…

  3. Database Reviews: Legal Information.

    ERIC Educational Resources Information Center

    Seiser, Virginia

    Detailed reviews of two legal information databases--"Laborlaw I" and "Legal Resource Index"--are presented in this paper. Each database review begins with a bibliographic entry listing the title; producer; vendor; cost per hour contact time; offline print cost per citation; time period covered; frequency of updates; and size of file. A detailed…

  4. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  5. National Vulnerability Database (NVD)

    National Institute of Standards and Technology Data Gateway

    National Vulnerability Database (NVD) (Web, free access)   NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.

  6. Knowledge Discovery in Databases.

    ERIC Educational Resources Information Center

    Norton, M. Jay

    1999-01-01

    Knowledge discovery in databases (KDD) revolves around the investigation and creation of knowledge, processes, algorithms, and mechanisms for retrieving knowledge from data collections. The article is an introductory overview of KDD. The rationale and environment of its development and applications are discussed. Issues related to database design…

  7. Online Database Searching Workbook.

    ERIC Educational Resources Information Center

    Littlejohn, Alice C.; Parker, Joan M.

    Designed primarily for use by first-time searchers, this workbook provides an overview of online searching. Following a brief introduction which defines online searching, databases, and database producers, five steps in carrying out a successful search are described: (1) identifying the main concepts of the search statement; (2) selecting a…

  8. CPDB: Carcinogenic Potency Database.

    PubMed

    Fitzpatrick, Roberta Bronson

    2008-01-01

    The Carcinogenic Potency Database reports analyses of animal cancer tests on 1,547 chemicals. These tests are used in support of cancer risk assessments for humans. Results are searchable and are made available via the National Library of Medicine's (NLM) TOXNET system. This column will provide background information on the database, as well as present search basics. PMID:19042710

  9. Separation of input function for rapid measurement of quantitative CMRO2 and CBF in a single PET scan with a dual tracer administration method.

    PubMed

    Kudomi, Nobuyuki; Watabe, Hiroshi; Hayashi, Takuya; Iida, Hidehiro

    2007-04-01

    Cerebral metabolic rate of oxygen (CMRO2), oxygen extraction fraction (OEF) and cerebral blood flow (CBF) images can be quantified using positron emission tomography (PET) by administering ¹⁵O-labelled water (H₂¹⁵O) and oxygen (¹⁵O₂). Conventionally, these images are measured with separate scans for three tracers, C¹⁵O for CBV, H₂¹⁵O for CBF and ¹⁵O₂ for CMRO2, with additional waiting times between the scans to minimize the influence of radioactivity from the previous tracer, which results in a relatively long study period. We have proposed a dual tracer autoradiographic (DARG) approach (Kudomi et al 2005), which enabled us to measure CBF, OEF and CMRO2 rapidly by sequentially administering H₂¹⁵O and ¹⁵O₂ within a short time. Because quantitative CBF and CMRO2 values are sensitive to the arterial input function, it is necessary to obtain an accurate input function, and a drawback of this approach is that it requires separation of the measured arterial blood time-activity curve (TAC) into pure water and oxygen input functions in the presence of residual radioactivity from the first injected tracer. For this separation, frequent manual sampling was required. The present paper describes two calculation methods, namely a linear and a model-based method, to separate the measured arterial TAC into its water and oxygen components. In order to validate these methods, we first generated a blood TAC for the DARG approach by combining the water and oxygen input functions obtained in a series of PET studies on normal human subjects. The combined data were then separated into water and oxygen components by the present methods, and CBF and CMRO2 were calculated using the separated input functions and the tissue TAC. The quantitative accuracy of the CBF and CMRO2 values obtained by the DARG approach did not exceed the acceptable range, i.e., errors in those values were within 5%, when the area under the curve in the input function of the

  10. Administrative IT

    ERIC Educational Resources Information Center

    Grayson, Katherine, Ed.

    2006-01-01

    When it comes to Administrative IT solutions and processes, best practices range across the spectrum. Enterprise resource planning (ERP), student information systems (SIS), and tech support are prominent and continuing areas of focus. But widespread change can also be accomplished via the implementation of campuswide document imaging and sharing,…

  11. Engineering Administration.

    ERIC Educational Resources Information Center

    Naval Personnel Program Support Activity, Washington, DC.

    This book is intended to acquaint naval engineering officers with their duties in the engineering department. Standard shipboard organizations are analyzed in connection with personnel assignments, division operations, and watch systems. Detailed descriptions are included for the administration of directives, ship's bills, damage control, training…

  12. ADMINISTRATIVE CLIMATE.

    ERIC Educational Resources Information Center

    BRUCE, ROBERT L.; CARTER, G.L., JR.

    In the Cooperative Extension Service, styles of leadership profoundly affect the quality of the service rendered. Accordingly, major influences on administrative climate and employee productivity are examined in essays on (1) sources of job satisfaction and dissatisfaction, (2) motivational theories based on job-related satisfactions and needs,…

  13. Cascadia Tsunami Deposit Database

    USGS Publications Warehouse

    Peters, Robert; Jaffe, Bruce; Gelfenbaum, Guy; Peterson, Curt

    2003-01-01

    The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have been compiled from 52 studies, documenting 59 sites from northern California to Vancouver Island, British Columbia that contain known or potential tsunami deposits. Bibliographical references are provided for all sites included in the database. Cascadia tsunami deposits are usually seen as anomalous sand layers in coastal marsh or lake sediments. The studies cited in the database use numerous criteria based on sedimentary characteristics to distinguish tsunami deposits from sand layers deposited by other processes, such as river flooding and storm surges. Several studies cited in the database contain evidence for more than one tsunami at a site. Data categories include age, thickness, layering, grainsize, and other sedimentological characteristics of Cascadia tsunami deposits. The database documents the variability observed in tsunami deposits found along the Cascadia margin.

  14. Protein sequence databases.

    PubMed

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H

    2004-02-01

    A variety of protein sequence databases exist, ranging from simple sequence repositories, which store data with little or no manual intervention in the creation of the records, to expertly curated universal databases that cover all species and in which the original sequence data are enhanced by the manual addition of further information in each sequence record. As the focus of researchers moves from the genome to the proteins encoded by it, these databases will play an even more important role as central comprehensive resources of protein information. Several of the leading protein sequence databases are discussed here, with special emphasis on the databases now provided by the Universal Protein Knowledgebase (UniProt) consortium. PMID:15036160

  15. Time Dependent Antinociceptive Effects of Morphine and Tramadol in the Hot Plate Test: Using Different Methods of Drug Administration in Female Rats

    PubMed Central

    Gholami, Morteza; Saboory, Ehsan; Mehraban, Sogol; Niakani, Afsaneh; Banihabib, Nafiseh; Azad, Mohamad-Reza; Fereidoni, Javid

    2015-01-01

    Morphine and tramadol, which have analgesic effects, can be administered acutely or chronically. This study investigated the effects of these drugs at various times using different methods of administration (intraperitoneal, oral, acute and chronic). Sixty adult female rats were divided into six groups. They received saline, morphine or tramadol (20 to 125 mg/kg) daily for 15 days. A hot plate test was performed on the rats at the 1st, 8th and 15th days. After drug withdrawal, the hot plate test was repeated at the 17th, 19th, and 22nd days. There was a significant effect of day, drug, group, and their interaction (P<0.001). At the 1st day (d1), both morphine and tramadol caused an increase in the hot plate time compared to the saline groups (P<0.001), while there was no difference between the administration methods for morphine and/or tramadol. At the 8th day (d8), morphine and tramadol produced their most powerful analgesic effect of all the experimental days (P<0.001). At the 15th day (d15), their effects were diminished compared to d8. After drug withdrawal, the analgesic effects of morphine and tramadol disappeared. It can be concluded that the analgesic effect of morphine and tramadol increases with repeated use. Thereafter, it may gradually decrease and return to a level comparable to d1. The present data also indicate that although the analgesic effect of morphine and tramadol is dose- and time-dependent, chronic exposure to them may not lead to altered nociceptive responses later in life. PMID:25561936

  16. Knowledge Abstraction in Chinese Chess Endgame Databases

    NASA Astrophysics Data System (ADS)

    Chen, Bo-Nian; Liu, Pangfeng; Hsu, Shun-Chin; Hsu, Tsan-Sheng

    Retrograde analysis is a well-known approach to constructing endgame databases. However, endgame databases are too large to be loaded into the main memory of a computer during tournaments. In this paper, a novel knowledge abstraction strategy is proposed to compress endgame databases. The goal is to obtain succinct knowledge for practical endgames. A specialized goal-oriented search method is described and applied to the important endgame KRKNMM. The method of combining a search algorithm with a small body of knowledge is used to handle endgame positions up to a limited depth, but with a high degree of correctness.
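
    Retrograde analysis itself is easy to sketch: terminal positions are labeled first, and win/loss labels are then propagated backwards through predecessor positions. The toy Python example below runs on an abstract five-position game graph, not on Chinese chess, and is meant only to show the backward propagation; depths count plies to the end under optimal play.

      # Toy retrograde analysis over an abstract game graph. A position with
      # no moves is a loss for the side to move; a position is WIN if some
      # successor is LOSS, and LOSS once all successors are known WINs.
      from collections import deque

      moves = {  # hypothetical positions -> legal successor positions
          "a": ["b", "c"], "b": ["d"], "c": ["d", "e"],
          "d": [],         "e": ["d"],
      }

      preds = {p: [] for p in moves}
      for p, succs in moves.items():
          for s in succs:
              preds[s].append(p)

      label, depth = {}, {}
      remaining = {p: len(s) for p, s in moves.items()}
      queue = deque()
      for p, succs in moves.items():
          if not succs:                  # terminal: side to move has lost
              label[p], depth[p] = "LOSS", 0
              queue.append(p)

      while queue:
          p = queue.popleft()
          for q in preds[p]:
              if q in label:
                  continue
              if label[p] == "LOSS":     # q can move into a lost position
                  label[q], depth[q] = "WIN", depth[p] + 1
                  queue.append(q)
              else:
                  remaining[q] -= 1
                  if remaining[q] == 0:  # every move of q reaches a WIN
                      label[q] = "LOSS"
                      depth[q] = 1 + max(depth[s] for s in moves[q])
                      queue.append(q)

      print({p: (label[p], depth[p]) for p in sorted(label)})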

  17. Method and system for normalizing biometric variations to authenticate users from a public database and that ensures individual biometric data privacy

    DOEpatents

    Strait, Robert S.; Pearson, Peter K.; Sengupta, Sailes K.

    2000-01-01

    A password system comprises a set of codewords spaced apart from one another by a Hamming distance (HD) that exceeds twice the variability that can be projected for a series of biometric measurements for a particular individual and that is less than the HD that can be encountered between two individuals. To enroll an individual, a biometric measurement is taken and exclusive-ORed with a random codeword to produce a "reference value." To verify the individual later, a biometric measurement is taken and exclusive-ORed with the reference value to reproduce the original random codeword or its approximation. If the reproduced value is not a codeword, the nearest codeword to it is found, and the bits that were corrected to produce the codeword are also toggled in the biometric measurement taken and the codeword generated during enrollment. The correction scheme can be implemented by any conventional error correction code such as a Reed-Muller code R(m,n). In the implementation using a hand geometry device, an R(2,5) code has been used in this invention. Such a codeword and biometric measurement can then be used to see if the individual is an authorized user. Conventional Diffie-Hellman public key encryption schemes and hashing procedures can then be used to secure the communications lines carrying the biometric information and to secure the database of authorized users.
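
    A minimal sketch of the enroll/verify cycle is given below. It substitutes a 3x repetition code for the Reed-Muller R(2,5) code named in the patent, and it stores a hash of the codeword rather than the codeword itself (our simplification, in the spirit of the hashing procedures mentioned), so the raw codeword is not stored directly; with a toy 4-bit message this is illustrative only, not secure. The bit lengths and the toy "biometric" strings are assumptions of the sketch.

      # Fuzzy-commitment-style enroll/verify with a toy repetition code.
      import hashlib
      import secrets

      K = 4          # message bits
      N = 3 * K      # codeword bits (each message bit repeated three times)

      def encode(msg_bits):
          return [b for b in msg_bits for _ in range(3)]

      def decode(bits):
          """Majority vote per 3-bit group = nearest repetition codeword."""
          return encode([int(sum(bits[i:i + 3]) >= 2) for i in range(0, N, 3)])

      def xor(a, b):
          return [x ^ y for x, y in zip(a, b)]

      def enroll(biometric_bits):
          codeword = encode([secrets.randbelow(2) for _ in range(K)])
          reference = xor(biometric_bits, codeword)  # stored "reference value"
          check = hashlib.sha256(bytes(codeword)).hexdigest()
          return reference, check

      def verify(biometric_bits, reference, check):
          candidate = decode(xor(biometric_bits, reference))
          return hashlib.sha256(bytes(candidate)).hexdigest() == check

      enrolled = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1]  # toy 12-bit biometric
      ref, chk = enroll(enrolled)
      noisy = enrolled[:]
      noisy[5] ^= 1                     # one bit of measurement drift
      print(verify(noisy, ref, chk))    # True: within the code's tolerance
      print(verify([0] * N, ref, chk))  # False: decodes to a different codeword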

  18. Hazard Analysis Database Report

    SciTech Connect

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: data from the results of the hazard evaluations, and (2) Hazard Topography Database: data from the system familiarization and hazard identification.

  19. PADB : Published Association Database

    PubMed Central

    Rhee, Hwanseok; Lee, Jin-Sung

    2007-01-01

    Background Although molecular pathway information and the International HapMap Project data can help biomedical researchers to investigate the aetiology of complex diseases more effectively, such information is missing or insufficient in current genetic association databases. In addition, only a few of the environmental risk factors are included as gene-environment interactions, and the risk measures of associations are not indexed in any association databases. Description We have developed a published association database (PADB) that includes both the genetic associations and the environmental risk factors available in the PubMed database. Each genetic risk factor is linked to a molecular pathway database and the HapMap database through human gene symbols identified in the abstracts. The risk measures, such as odds ratios or hazard ratios, are extracted automatically from the abstracts when available. Thus, users can review the association data sorted by the risk measures, and genetic associations can be grouped by human genes or molecular pathways. The search results can also be saved to tab-delimited text files for further sorting or analysis. Currently, PADB indexes more than 1,500,000 PubMed abstracts that include 3442 human genes, 461 molecular pathways and about 190,000 risk measures ranging from 0.00001 to 4878.9. Conclusion PADB is a unique online database of published associations that will serve as a novel and powerful resource for reviewing and interpreting huge association data of complex human diseases. PMID:17877839

  20. ResPlan Database

    NASA Technical Reports Server (NTRS)

    Zellers, Michael L.

    2003-01-01

    The main project I was involved in was new application development for the existing CIS0 Database (ResPlan). This database application was developed in Microsoft Access. Initial meetings with Greg Follen, Linda McMillen, Griselle LaFontaine and others identified a few key weaknesses with the existing database. The weaknesses centered on the fact that while the database correctly modeled the structure of Programs, Projects and Tasks, once the data was entered, the database did not capture any dynamic status information, and as such was of limited usefulness. After the initial meetings, my goals were identified as follows: enhance the ResPlan database to include qualitative and quantitative status information about the Programs, Projects and Tasks; train staff members on the ResPlan database from both the user perspective and the developer perspective; and give consideration to a Web interface for reporting. Initially, the thought was that there would not be adequate time to actually develop the Web interface, but Greg wanted it understood that this was an eventual goal and as such should be a consideration throughout the development process.

  1. Hazard Analysis Database Report

    SciTech Connect

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  2. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  3. A relaxometric method for the assessment of intestinal permeability based on the oral administration of gadolinium-based MRI contrast agents.

    PubMed

    Gianolio, Eliana; Boffa, Cinzia; Orecchia, Valeria; Bardini, Paola; Catanzaro, Valeria; Poli, Valeria; Aime, Silvio

    2016-04-01

    Herein, a new relaxometric method for the assessment of intestinal permeability based on the oral administration of clinically approved gadolinium (Gd)-based MRI contrast agents (CAs) is proposed. The fast, easily performed and cheap measurement of the longitudinal water proton relaxation rate (R1) in urine reports the amount of paramagnetic probe that has escaped the gastrointestinal tract. The proposed method appears to be a compelling alternative to the available methods for the assessment of intestinal permeability. The method was tested on the murine model of dextran sulfate sodium (DSS)-induced colitis in comparison with healthy mice. Three CAs were tested, namely ProHance®, MultiHance® and Magnevist®. Urine was collected for 24 h after the oral ingestion of the Gd-containing CA at day 3-4 (severe damage stage) and day 8-9 (recovery stage) after treatment with DSS. The Gd content in urine measured by ¹H relaxometry was confirmed by inductively coupled plasma-mass spectrometry (ICP-MS). The extent of urinary excretion was given as a percentage of excreted Gd over the total ingested dose. The method was validated by comparing the results obtained with the established methodology based on the lactulose/mannitol and sucralose tests. For ProHance and Magnevist, the excreted amounts in the severe stage of damage were 2.5-3 times higher than in control mice. At the recovery stage, no significant differences were observed with respect to healthy mice. Overall, a very good correlation with the lactulose/mannitol and sucralose results was obtained. In the case of MultiHance, the percentage of excreted Gd complex was not significantly different from that of control mice in either the severe or recovery stages. The difference from ProHance and Magnevist was explained on the basis of the (known) partial biliary excretion of MultiHance in mice. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26866929
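
    The relaxometric readout rests on the standard linear relation R1 = R1,dia + r1[Gd], so the Gd concentration in urine, and from it the excreted fraction of the dose, follows directly from the measured rates. The short Python sketch below applies this relation; the relaxivity, urine volume, and dose are assumed illustrative values, not those of the study.

      # Excreted Gd fraction from urine R1, assuming R1 = R1_dia + r1*[Gd].
      def percent_excreted(r1_obs, r1_dia, relaxivity, urine_ml, dose_umol):
          """Percentage of the ingested Gd dose recovered in the urine."""
          gd_mM = (r1_obs - r1_dia) / relaxivity  # mmol/L, i.e. umol/mL
          return 100.0 * gd_mM * urine_ml / dose_umol

      # e.g. R1 rises from 0.40 to 0.61 s^-1 in 1.5 mL of urine, with an
      # assumed relaxivity of 4.0 mM^-1 s^-1 and a 10 umol ingested dose:
      print(f"{percent_excreted(0.61, 0.40, 4.0, 1.5, 10.0):.2f} %")  # 0.79 %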

  4. Database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1991-01-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be facilitated considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database construction can only stimulate the growth of propagation research if it is available to all the researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that the researchers need not be confined only to the contents of the database. Another way in which the database may help the researchers is by the fact that they will not have to document the software and hardware tools used in their research, since the propagation research community will know the database already. The following sections show a possible database construction, as well as properties of the database for propagation research.

  5. The Gaia Parameter Database

    NASA Astrophysics Data System (ADS)

    de Bruijne, J. H. J.; Lammers, U.; Perryman, M. A. C.

    2005-01-01

    The parallel development of many aspects of a complex mission like Gaia, which includes numerous participants in ESA, industrial companies, and a large and active scientific collaboration throughout Europe, makes keeping track of the many design changes, instrument and operational complexities, and numerical values for the data analysis a very challenging problem. A comprehensive, easily-accessible, up-to-date, and definitive compilation of a large range of numerical quantities is required, and the Gaia parameter database has been established to satisfy these needs. The database is a centralised repository containing, besides mathematical, physical, and astronomical constants, many satellite and subsystem design parameters. At the end of 2004, more than 1600 parameters had been included. Version control has been implemented, providing, next to a 'live' version with the most recent parameters, well-defined reference versions of the full database contents. The database can be queried or browsed using a regular Web browser (http://www.rssd.esa.int/Gaia/paramdb). Query results are formatted by default in HTML. Data can also be retrieved as Fortran-77, Fortran-90, Java, ANSI C, C++, or XML structures for direct inclusion into software codes in these languages. The idea is that all collaborating scientists can use the database parameters and values, once retrieved, by linking them directly into computational routines. An off-line access mode is also available, enabling users to automatically download the contents of the database. The database will be maintained actively, and significant extensions of the contents are planned. Consistent use in the future of the database by the Gaia community at large, including all industrial teams, will ensure correct numerical values throughout the complex software systems being built up as details of the Gaia design develop. The database is already being used for the telemetry simulation chain in ESTEC, and in the data simulations for GDAAS2.

  6. Human variation databases

    PubMed Central

    Küntzer, Jan; Eggle, Daniela; Klostermann, Stefan; Burtscher, Helmut

    2010-01-01

    More than 100 000 human genetic variations have been described in various genes that are associated with a wide variety of diseases. Such data provides invaluable information for both clinical medicine and basic science. A number of locus-specific databases have been developed to exploit this huge amount of data. However, the scope, format and content of these databases differ strongly and as no standard for variation databases has yet been adopted, the way data is presented varies enormously. This review aims to give an overview of current resources for human variation data in public and commercial resources. PMID:20639550

  7. International Comparisons Database

    National Institute of Standards and Technology Data Gateway

    International Comparisons Database (Web, free access)   The International Comparisons Database (ICDB) serves the U.S. and the Inter-American System of Metrology (SIM) with information based on Appendices B (International Comparisons), C (Calibration and Measurement Capabilities) and D (List of Participating Countries) of the Comité International des Poids et Mesures (CIPM) Mutual Recognition Arrangement (MRA). The official source of the data is the BIPM key comparison database. The ICDB provides access to results of comparisons of measurements and standards organized by the consultative committees of the CIPM and the Regional Metrology Organizations.

  8. Hybrid Terrain Database

    NASA Technical Reports Server (NTRS)

    Arthur, Trey

    2006-01-01

    A prototype hybrid terrain database is being developed in conjunction with other databases and with hardware and software that constitute subsystems of aerospace cockpit display systems (known in the art as synthetic vision systems) that generate images to increase pilots' situation awareness and eliminate poor visibility as a cause of aviation accidents. The basic idea is to provide a clear view of the world around an aircraft by displaying computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information.

  9. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  10. JICST Factual Database(2)

    NASA Astrophysics Data System (ADS)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from chemical nomenclature has been developed. Chemical substances are input together with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data are drawn from the laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  11. Databases for materials selection

    SciTech Connect

    1996-06-01

    The Cambridge Materials Selector (CMS2.0) materials database was developed by the Engineering Dept. at Cambridge University in the United Kingdom. This database makes it possible to select a material for a specific application from essentially all classes of materials. Genera, Predict, and Socrates software programs from CLI International, Houston, Texas, automate materials selection and corrosion problem-solving tasks. They are said to significantly reduce the time necessary to select a suitable material and/or to assess a corrosion problem and reach cost-effective solutions. This article describes both databases and tells how to use them.

  12. Information Release Administration Database (IRAD), Software Design Description (SDD)

    SciTech Connect

    CAREY, D.S.

    2000-05-10

    The IRAD system is a client-server system written in Paradox for DOS. It will be replaced with a Visual Basic and SQL Server system in order to update the technology and eliminate obsolete functions, as well as to automate the manual interfaces.

  13. Signal detection in FDA AERS database using Dirichlet process.

    PubMed

    Hu, Na; Huang, Lan; Tiwari, Ram C

    2015-08-30

    In the past two decades, data mining methods for signal detection have been developed for drug safety surveillance, using large post-market safety databases. Several of these methods assume that the number of reports for each drug-adverse event combination is a Poisson random variable with mean proportional to the unknown reporting rate of the drug-adverse event pair. Here, a Bayesian method based on the Poisson-Dirichlet process (DP) model is proposed for signal detection from large databases, such as the Food and Drug Administration's Adverse Event Reporting System (AERS) database. Instead of using a parametric distribution as a common prior for the reporting rates, as is the case with existing Bayesian or empirical Bayesian methods, a nonparametric prior, namely the DP, is used. The precision parameter and the baseline distribution of the DP, which characterize the process, are modeled hierarchically. The performance of the Poisson-DP model is compared with some other models through an intensive simulation study using Bayesian model selection and frequentist performance characteristics such as type-I error, false discovery rate, sensitivity, and power. For illustration, the proposed model and its extension to address a large amount of zero counts are used to analyze statin drugs for signals using the 2006-2011 AERS data. PMID:25924820
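
    The generative side of the model can be sketched directly: each drug-adverse event pair draws its reporting-rate ratio from a Dirichlet process (here truncated and built by stick-breaking), and the observed report count is Poisson with mean proportional to that rate. The hyperparameters below are illustrative, not values fitted to the AERS data.

      # Generative sketch of the Poisson-DP model with a truncated,
      # stick-breaking Dirichlet process prior on reporting-rate ratios.
      import math
      import random

      random.seed(0)

      def truncated_dp(alpha, base_draw, k=50):
          """Stick-breaking construction: returns (atoms, weights)."""
          atoms, weights, stick = [], [], 1.0
          for _ in range(k):
              b = random.betavariate(1.0, alpha)
              atoms.append(base_draw())
              weights.append(stick * b)
              stick *= 1.0 - b
          return atoms, weights

      def poisson(mean):
          """Poisson draw by Knuth's method (fine for modest means)."""
          threshold, count, p = math.exp(-mean), 0, 1.0
          while True:
              p *= random.random()
              if p < threshold:
                  return count
              count += 1

      # Baseline distribution G0 for the rate ratio: Gamma(2, 0.5), mean 1.
      atoms, weights = truncated_dp(
          alpha=1.0, base_draw=lambda: random.gammavariate(2.0, 0.5))

      def simulate_pair(expected_count):
          """One drug-event pair: rate from the DP, count ~ Poisson(E*rate)."""
          rate = random.choices(atoms, weights)[0]
          return rate, poisson(expected_count * rate)

      print(simulate_pair(expected_count=8.0))  # (latent ratio, report count)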

  14. Bioanalytical method validation of rapamycin in ocular matrix by QTRAP LC–MS/MS: Application to rabbit anterior tissue distribution by topical administration of rapamycin nanomicellar formulation

    PubMed Central

    Earla, Ravinder; Cholkar, Kishore; Gundaaa, Sriram; Earla, Rajya Lakshmi; Mitra, Ashim K.

    2012-01-01

    A novel, fast and sensitive 3200 QTRAP LC–MS/MS method was validated for rapamycin analysis in the rabbit eye following administration of a 0.2% nanomicellar eye drop formulation. The LC–MS/MS technique was developed with electrospray ionization (ESI) in positive mode. Rapamycin was extracted from individual eye tissues and fluids by a simple protein precipitation method. Samples were reconstituted in 200 μL of 80% acetonitrile in water containing 0.05% formic acid. Twenty microliters of the sample were injected on the LC–MS/MS. Chromatographic separation was achieved on a reversed-phase C8 XTerra column, 50 mm × 4.6 mm, 5 μm. Multiple reaction monitoring (MRM) transitions of m/z 936.6/409.3 for rapamycin and 734.4/576.5 for erythromycin, employed as internal standard, were monitored. The calibration curves were linear (r² > 0.9998) over the concentration range from 2.3 ng/mL to 1000.0 ng/mL. Rapamycin was found to be stable in ocular tissue homogenates for 6 weeks at refrigerated −80 °C and −20 °C temperatures. Rapamycin concentration was found to be 2260.7 ± 507.1 (mean ± S.D.) ng/g tissue and 585.5 ± 80.1 (mean ± S.D.) ng/g tissue in the cornea and iris-ciliary muscle, respectively. This method has two advantages. First, a volatile base was used in the extraction procedure, which is easy to evaporate and generates consistent results. Second, the sodium adduct employed was stable in the non-ammoniated mobile phase. The method demonstrates that topical application of the 0.2% rapamycin nanomicellar formulation generates therapeutically effective concentrations in the anterior segment of the eye. PMID:23122404

  15. NCCDPHP PUBLICATION DATABASE

    EPA Science Inventory

    This database provides bibliographic citations and abstracts of publications produced by the CDC's National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) including journal articles, monographs, book chapters, reports, policy documents, and fact sheets. Full...

  16. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase-out of chemical compounds of environmental concern.

  17. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  18. Hawaii bibliographic database

    USGS Publications Warehouse

    Wright, T.L.; Takahashi, T.J.

    1998-01-01

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available for download from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  19. Chemical Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.

  20. Requirements Management Database

    2009-08-13

    This application is a simplified and customized version of the RBA and CTS databases, used to capture federal, site, and facility requirements and to link them to the actions that must be performed to maintain compliance with contractual and other requirements.

  1. Navigating Public Microarray Databases

    PubMed Central

    Bähler, Jürg

    2004-01-01

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute, contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources. PMID:18629145

  2. Nuclear Science References Database

    SciTech Connect

    Pritychenko, B.; Běták, E.; Singh, B.; Totans, J.

    2014-06-15

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  3. Navigating public microarray databases.

    PubMed

    Penkett, Christopher J; Bähler, Jürg

    2004-01-01

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute, contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources. PMID:18629145

  4. Household Products Database

    MedlinePlus

    What's under your kitchen sink, in your garage, in your bathroom, and ...

  5. TREATABILITY DATABASE DESCRIPTION

    EPA Science Inventory

    The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities, first responders to spills or emergencies, treatment process designers, research organizations, academics, regulato...

  6. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1995-06-01

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase-out of chemical compounds of environmental concern.

  7. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1995-02-01

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase-out of chemical compounds of environmental concern.

  8. Querying genomic databases

    SciTech Connect

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.

  9. Steam Properties Database

    National Institute of Standards and Technology Data Gateway

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  10. The ribosomal database project.

    PubMed Central

    Larsen, N; Olsen, G J; Maidak, B L; McCaughey, M J; Overbeek, R; Macke, T J; Marsh, T L; Woese, C R

    1993-01-01

    The Ribosomal Database Project (RDP) is a curated database that offers ribosome data along with related programs and services. The offerings include phylogenetically ordered alignments of ribosomal RNA (rRNA) sequences, derived phylogenetic trees, rRNA secondary structure diagrams and various software packages for handling, analyzing and displaying alignments and trees. The data are available via ftp and electronic mail. Certain analytic services are also provided by the electronic mail server. PMID:8332524

  11. Database computing in HEP

    SciTech Connect

    Day, C.T.; Loken, S.; MacFarlane, J.F. ); May, E.; Lifka, D.; Lusk, E.; Price, L.E. ); Baden, A. . Dept. of Physics); Grossman, R.; Qin, X. . Dept. of Mathematics, Statistics and Computer Science); Cormell, L.; Leibold, P.; Liu, D

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  12. Database computing in HEP

    NASA Technical Reports Server (NTRS)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  13. Fluid administration and morbidity in transhiatal esophagectomy

    PubMed Central

    Eng, Oliver S.; Arlow, Renee L.; Moore, Dirk; Chen, Chunxia; Langenfeld, John E.; August, David A.; Carpizo, Darren R.

    2016-01-01

    Background Esophagectomy is associated with significant morbidity. Optimizing perioperative fluid administration is one potential strategy to mitigate morbidity. We sought to investigate the relationship of intraoperative fluid (IOF) administration to outcomes in patients undergoing transhiatal esophagectomy, with particular attention to malnourished patients, who may be more susceptible to the effects of fluid overload. Material and methods Patients who underwent transhiatal esophagectomy from 2000–2013 were identified from a retrospective database. IOF rates (mL/kg/hr) were determined and their relationship to outcomes compared. To examine the impact of malnutrition, we stratified patients based on median preoperative serum albumin and compared outcomes. Results and discussion The cohort comprised 211 patients, 74% of whom underwent esophagectomy for esophageal adenocarcinoma. Linear regression analyses were performed comparing independent perioperative variables to four outcome variables: length of stay, complications per patient, major complications, and Clavien-Dindo classification. IOF rate was significantly associated with three of the four outcomes on univariate analysis. Patients with a preoperative albumin level ≤3.7 g/dL who received more than the median IOF rate experienced significantly more severe complications. Conclusions Increased intraoperative fluid administration is associated with perioperative morbidity in patients undergoing transhiatal esophagectomy. Patients with lower preoperative albumin levels may be particularly sensitive to the effects of volume overload. PMID:26319974
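    A minimal sketch of the exposure variable and the stratification the study describes, assuming pandas; the patient records below are wholly hypothetical.

        import pandas as pd

        # Hypothetical records: weight (kg), intraoperative fluid volume (mL),
        # operative time (hr), and preoperative serum albumin (g/dL).
        df = pd.DataFrame({
            "weight_kg":   [70, 82, 65, 90],
            "iof_ml":      [4200, 6100, 3000, 7500],
            "op_time_hr":  [5.0, 6.5, 4.0, 7.0],
            "albumin_gdl": [4.1, 3.5, 3.9, 3.2],
        })

        # IOF rate in mL/kg/hr, as defined in the abstract.
        df["iof_rate"] = df["iof_ml"] / df["weight_kg"] / df["op_time_hr"]

        # Stratify: low-albumin patients (<= 3.7 g/dL) who received more than
        # the median IOF rate -- the subgroup associated with worse outcomes.
        median_rate = df["iof_rate"].median()
        high_risk = df[(df["albumin_gdl"] <= 3.7) & (df["iof_rate"] > median_rate)]
        print(high_risk)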

  14. Databases: Peter's Picks and Pans.

    ERIC Educational Resources Information Center

    Jacso, Peter

    1995-01-01

    Reviews the best and worst in databases on disk, CD-ROM, and online, and offers judgments and observations on database characteristics. Two databases are praised and three are criticized. (Author/JMV)

  15. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful when applying for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  16. Crude Oil Analysis Database

    DOE Data Explorer

    Shay, Johanna Y.

    The composition and physical properties of crude oil vary widely from one reservoir to another within an oil field, as well as from one field or region to another. Although all oils consist of hydrocarbons and their derivatives, the proportions of various types of compounds differ greatly. This makes some oils more suitable than others for specific refining processes and uses. To take advantage of this diversity, one needs access to information in a large database of crude oil analyses. The Crude Oil Analysis Database (COADB) currently satisfies this need by offering 9,056 crude oil analyses. Of these, 8,500 are United States domestic oils. The database contains results of analysis of the general properties and chemical composition, as well as the field, formation, and geographic location of the crude oil sample. [Taken from the Introduction to COAMDATA_DESC.pdf, part of the zipped software and database file at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the zipped file to your PC. When opened, it will contain PDF documents and a large Excel spreadsheet. It will also contain the database in Microsoft Access 2002.

  17. The comprehensive peptaibiotics database.

    PubMed

    Stoppacher, Norbert; Neumann, Nora K N; Burgstaller, Lukas; Zeilinger, Susanne; Degenkolb, Thomas; Brückner, Hans; Schuhmacher, Rainer

    2013-05-01

    Peptaibiotics are nonribosomally biosynthesized peptides, which - according to definition - contain the marker amino acid α-aminoisobutyric acid (Aib) and possess antibiotic properties. Peptaibiotics have been known since 1958, and a constantly increasing number have been described and investigated, with particular emphasis on those from hypocrealean fungi. Starting from the existing online 'Peptaibol Database', first published in 1997, an exhaustive literature survey of all known peptaibiotics was carried out and resulted in a list of 1043 peptaibiotics. The gathered information was compiled and used to create the new 'The Comprehensive Peptaibiotics Database', which is presented here. The database was devised as a software tool based on Microsoft (MS) Access. It is freely available from the internet at http://peptaibiotics-database.boku.ac.at and can easily be installed and operated on any computer offering a Windows XP/7 environment. It provides useful information on characteristic properties of the peptaibiotics included, such as peptide category, group name of the microheterogeneous mixture to which the peptide belongs, amino acid sequence, sequence length, producing fungus, peptide subfamily, molecular formula, and monoisotopic mass. All these characteristics can be used and combined for automated search within the database, which makes The Comprehensive Peptaibiotics Database a versatile tool for the retrieval of valuable information about peptaibiotics. Sequence data have been considered as of December 14, 2012. PMID:23681723

  18. Drinking Water Database

    NASA Technical Reports Server (NTRS)

    Murray, ShaTerea R.

    2004-01-01

    This summer I had the opportunity to work in the Environmental Management Office (EMO) under the Chemical Sampling and Analysis Team, or CS&AT. This team's mission is to support Glenn Research Center (GRC) and EMO by providing chemical sampling and analysis services and expert consulting. Services include sampling and chemical analysis of water, soil, fuels, oils, paint, insulation materials, etc. One of this team's major projects is the Drinking Water Project. This project covers Glenn's water coolers and ten percent of its sinks every two years. For the past two summers an intern had been putting together a database for this team to record the tests they had performed. She had successfully created a database but hadn't worked out all the quirks. So this summer William Wilder (an intern from Cleveland State University) and I worked together to perfect her database. We began by finding out exactly what every member of the team thought about the database and what, if anything, they would change. After collecting this data we both took some courses in Microsoft Access in order to fix the problems. Next we looked at exactly how the database worked from the outside inward. Then we began trying to change the database, but we quickly found out that this would be virtually impossible.

  19. The Transporter Classification Database

    PubMed Central

    Saier, Milton H.; Reddy, Vamsee S.; Tamang, Dorjee G.; Västermark, Åke

    2014-01-01

    The Transporter Classification Database (TCDB; http://www.tcdb.org) serves as a common reference point for transport protein research. The database contains more than 10 000 non-redundant proteins that represent all currently recognized families of transmembrane molecular transport systems. Proteins in TCDB are organized in a five-level hierarchical system, where the first two levels are the class and subclass, the second two are the family and subfamily, and the last one is the transport system. Superfamilies that contain multiple families are included as hyperlinks to the five-tier TC hierarchy. TCDB includes proteins from all types of living organisms and is the only transporter classification system that is both universal and recognized by the International Union of Biochemistry and Molecular Biology. It has been expanded by manual curation, contains extensive text descriptions providing structural, functional, mechanistic and evolutionary information, is supported by unique software and is interconnected to many other relevant databases. TCDB is of increasing usefulness to the international scientific community and can serve as a model for the expansion of database technologies. This manuscript describes an update of the database descriptions previously featured in NAR database issues. PMID:24225317
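    The five-level TC numbering (class.subclass.family.subfamily.system) is straightforward to handle programmatically; a small, hypothetical Python helper (not TCDB's own software) might look like this:

        from typing import NamedTuple

        class TCNumber(NamedTuple):
            # The five TCDB levels: class, subclass, family, subfamily, system.
            tc_class: str
            subclass: str
            family: str
            subfamily: str
            system: str

        def parse_tc(tc: str) -> TCNumber:
            # Split a TC identifier such as '2.A.1.1.1' into its five levels.
            parts = tc.split(".")
            if len(parts) != 5:
                raise ValueError(f"expected five dot-separated levels, got {tc!r}")
            return TCNumber(*parts)

        print(parse_tc("2.A.1.1.1"))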

  20. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarity with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful when applying for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  1. Federal Register document image database

    NASA Astrophysics Data System (ADS)

    Garris, Michael D.; Janet, Stanley A.; Klein, William W.

    1999-01-01

    A new, fully-automated process has been developed at NIST to derive ground truth for document images. The method involves matching optical character recognition (OCR) results from a page with typesetting files for an entire book. Public domain software used to derive the ground truth is provided in the form of Perl scripts and C source code, and includes new, more efficient string alignment technology and a word-level scoring package. With this ground-truthing technology, it is now feasible to produce much larger data sets, at much lower cost, than was ever possible with previous labor-intensive, manual data collection projects. Using this method, NIST has produced a new document image database for evaluating Document Analysis and Recognition technologies and Information Retrieval systems. The database produced contains scanned images, SGML-tagged ground truth text, commercial OCR results, and image quality assessment results for pages published in the 1994 Federal Register. These data files are useful in a wide variety of experiments and research. There were roughly 250 issues, comprising nearly 69,000 pages, published in the Federal Register in 1994. This volume of the database contains the pages of 20 books published in January of that year. In all, there are 4711 page images provided, with 4519 of them having corresponding ground truth. This volume is distributed on two ISO-9660 CD-ROMs. Future volumes may be released, depending on the level of interest.
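    The core of such ground-truthing is aligning OCR tokens against the typesetting text; a toy sketch using Python's standard difflib (not NIST's more efficient alignment code), with invented text fragments:

        import difflib

        # Hypothetical fragments: OCR output for one line and the corresponding
        # typesetting (ground-truth) text for the same passage.
        ocr   = "Fedcral Register / Vol. 59 , No. l".split()
        truth = "Federal Register / Vol. 59, No. 1".split()

        matcher = difflib.SequenceMatcher(a=truth, b=ocr, autojunk=False)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            # 'equal' spans are words the OCR got right; 'replace' spans pair a
            # ground-truth word with the OCR's (possibly garbled) reading of it.
            print(f"{tag:8} truth={truth[i1:i2]!r:28} ocr={ocr[j1:j2]!r}")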

  2. QIS: A Framework for Biomedical Database Federation

    PubMed Central

    Marenco, Luis; Wang, Tzuu-Yi; Shepherd, Gordon; Miller, Perry L.; Nadkarni, Prakash

    2004-01-01

    Query Integrator System (QIS) is a database mediator framework intended to address robust data integration from continuously changing heterogeneous data sources in the biosciences. Currently in the advanced prototype stage, it is being used on a production basis to integrate data from neuroscience databases developed for the SenseLab project at Yale University with external neuroscience and genomics databases. The QIS framework uses standard technologies and is intended to be deployable by administrators with a moderate level of technological expertise: It comes with various tools, such as interfaces for the design of distributed queries. The QIS architecture is based on a set of distributed network-based servers, data source servers, integration servers, and ontology servers, that exchange metadata as well as mappings of both metadata and data elements to elements in an ontology. Metadata version difference determination coupled with decomposition of stored queries is used as the basis for partial query recovery when the schema of data sources alters. PMID:15298995

  3. MAC, material accounting database user guide

    SciTech Connect

    Russell, V.K.

    1994-09-22

    The K Basins Material Accounting (MAC) database system user guide describes the user features and functions, and the document is structured like the database menus. This document presents the MAC database system user instructions, which explain how to record the movements and configuration of canisters and materials within the K Basins on the computer, the mechanics of handling encapsulation tracking, and administrative functions associated with the system. This document includes the user instructions, which also serve as the software requirements specification for the system implemented on the microcomputer. This includes suggested user keystrokes, examples of screens displayed by the system, and reports generated by the system. It shows how the system is organized via menus and screens. It does not explain system design nor provide programmer instructions.

  4. Development and validation of HPLC method with fluorometric detection for quantification of bisnaphthalimidopropyldiaminooctane in animal tissues following administration in polymeric nanoparticles.

    PubMed

    Segundo, Marcela A; Abreu, Vera L R G; Osório, Marcelo V; Nogueira, Sonia; Lin, Paul Kong Thoo; Cordeiro-da-Silva, Anabela; Lima, Sofia A C

    2016-02-20

    A simple, sensitive and specific high-performance liquid chromatography method for the quantification of bisnaphthalimidopropyldiaminooctane (BNIPDaoct), a potent anti-Leishmania compound, incorporated into poly(d,l-lactide-co-glycolic acid) (PLGA) nanoparticles was developed and validated for bioanalytical application. Biological tissue extracts were injected into a reversed-phase monolithic column coupled to a fluorimetric detector (λexc = 234 nm, λem = 394 nm), using isocratic elution with aqueous buffer (acetic acid/acetate 0.10 M, pH 4.5, 0.010 M octanesulfonic acid) and acetonitrile, 60:40 (v/v), at a flow rate of 1.5 mL/min. The run time was 6 min, with a BNIPDaoct retention time of 3.3 min. Calibration curves were linear for BNIPDaoct concentrations ranging from 0.002 to 0.100 μM. Matrix effects were observed and calibration curves were performed using the different organ (spleen, liver, kidney, heart and lung) extracts. The method was found to be specific, accurate (97.3-106.8% of nominal values) and precise for intra-day (RSD < 1.9%) and inter-day assays (RSD < 7.2%) in all matrices. Stability studies showed that BNIPDaoct was stable in all matrices after standing for 24 h at room temperature (20 °C) or in the autosampler, and after three freeze-thaw cycles. Mean recoveries of BNIPDaoct spiked in mice organs were > 88.4%. The LOD and LOQ for biological matrices were ≤0.8 and ≤1.8 nM, respectively, corresponding to values ≤4 and ≤9 nmol/g in mice organs. The method developed was successfully applied to biodistribution assessment following intravenous administration of BNIPDaoct in solution or incorporated in PLGA nanoparticles. PMID:26765266
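    Validation metrics like the reported precision (RSD) and recovery reduce to simple formulas; a small sketch with hypothetical replicate values, assuming numpy:

        import numpy as np

        def rsd_percent(x):
            # Relative standard deviation (precision), as a % of the mean.
            x = np.asarray(x, dtype=float)
            return 100.0 * x.std(ddof=1) / x.mean()

        def recovery_percent(measured, spiked):
            # Mean extraction recovery: measured vs. nominal spiked concentration.
            return 100.0 * np.mean(np.asarray(measured) / spiked)

        # Hypothetical replicate back-calculated concentrations (nM) for one
        # quality-control level measured within a single day.
        intra_day = [49.1, 50.2, 49.8, 50.5, 49.4]
        print(f"intra-day RSD: {rsd_percent(intra_day):.2f}%")
        print(f"recovery:      {recovery_percent(intra_day, 50.0):.1f}%")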

  5. Simultaneous quantification of flavonoids in blood plasma by a high-performance liquid chromatography method after oral administration of Blumea balsamifera leaf extracts in rats.

    PubMed

    Nessa, Fazilatun; Ismail, Zhari; Mohamed, Nornisah; Karupiah, Sundram

    2013-03-01

    The leaves of Blumea balsamifera are used as a folk medicine in kidney stone diseases in South-East Asia. Phytochemical investigation revealed the leaves contained a number of flavonoids. In view of these, the present work aimed at the quantification and preliminary pharmacokinetic investigation of five flavonoids, viz. dihydroquercetin-7,4′-dimethyl ether (I), dihydroquercetin-4′-methyl ether (II), 5,7,3′,5′-tetrahydroxyflavanone (III), blumeatin (IV) and quercetin (V), in rat plasma following oral administration (0.5 g/kg) of B. balsamifera leaf extract. Quantification was achieved by using a validated, reproducible high-performance liquid chromatographic method. The mean recoveries of I, II, III, IV and V were 90.6, 93.4, 93.5, 91.2 and 90.3% respectively. The limit of quantification was 25 ng/mL for I and IV, 10 ng/mL for II and III and 100 ng/mL for V respectively. The within-day and day-to-day precision for all the compounds were < 10%. The validated HPLC method herein was applied for pharmacokinetic studies and the main pharmacokinetic parameters were: t1/2 (hr) 5.8, 4.3, 2.9, 5.7 and 7.3; Cmax (ng/mL) 594.9, 1542.9, 1659.9, 208.9 and 3040.4; Tmax (hr) 4.7, 1.0, 1.0, 3.5 and 2.3; AUC0–∞ (ng hr/mL) 5040, 5893, 9260, 1064 and 27233 for I, II, III, IV and V respectively. The developed method was suitable for pharmacokinetic studies and this preliminary study also revealed significant absorption after oral dosing in rats. PMID:23455210
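    The reported parameters (Cmax, Tmax, t1/2, AUC0–∞) are standard non-compartmental quantities; a sketch of how they are typically computed from a concentration-time profile (the profile below is invented), assuming numpy:

        import numpy as np

        # Hypothetical plasma concentration-time profile after an oral dose.
        t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24])           # hr
        c = np.array([0.0, 120, 310, 450, 390, 210, 110, 30])  # ng/mL

        cmax, tmax = c.max(), t[np.argmax(c)]

        # Terminal elimination rate (lambda_z) from a log-linear fit of the last points.
        slope, _ = np.polyfit(t[-3:], np.log(c[-3:]), 1)
        lam_z = -slope
        t_half = np.log(2) / lam_z

        # AUC by the trapezoidal rule, extrapolated to infinity with C_last / lambda_z.
        auc_0_inf = np.trapz(c, t) + c[-1] / lam_z

        print(f"Cmax = {cmax:.0f} ng/mL at Tmax = {tmax} hr; "
              f"t1/2 = {t_half:.1f} hr; AUC0-inf = {auc_0_inf:.0f} ng*hr/mL")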

  6. The CARLSBAD database: a confederated database of chemical bioactivities.

    PubMed

    Mathias, Stephen L; Hines-Kay, Jarrett; Yang, Jeremy J; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G; Ursu, Oleg; Oprea, Tudor I

    2013-01-01

    Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because of subsetting different data in a variety of formats; use of different bioactivity metrics; use of different identifiers for chemicals and proteins; and having to access different query interfaces, respectively. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical-protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical-protein target pair. Two types of scaffold perception methods have been implemented and are available for datamining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439,985 unique chemical structures, mapped onto 1,420,889 unique bioactivities, and annotated with 277,140 HierS scaffolds and 54,135 MCES chemical patterns, respectively. Of the 890,323 unique structure-target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure-target values: 94,975 are aggregated from two bioactivities, 14,544 from three, 7,930 from four and 2,214 have five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1435 unique chemical structures of active pharmaceutical ingredients (i.e. 'drugs'). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. The CARLSBAD database
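    The pair-level normalization described above (one activity value of a given type per unique chemical-target pair) is, in outline, a grouped aggregation; a hypothetical pandas sketch, not the CARLSBAD pipeline itself:

        import pandas as pd

        # Hypothetical rows pooled from several source databases: repeated
        # measurements of the same activity type for the same chemical-target pair.
        rows = pd.DataFrame({
            "chemical": ["C1", "C1", "C1", "C2", "C2"],
            "target":   ["T9", "T9", "T9", "T9", "T3"],
            "type":     ["IC50", "IC50", "IC50", "Ki", "IC50"],
            "value_nM": [120.0, 95.0, 150.0, 40.0, 8.0],
        })

        # Collapse to a single normalized activity value of a given type per
        # unique chemical-target pair (here: the median), keeping the multiplicity.
        agg = (rows.groupby(["chemical", "target", "type"])["value_nM"]
                   .agg(value_nM="median", n_sources="size")
                   .reset_index())
        print(agg)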

  7. The CARLSBAD Database: A Confederated Database of Chemical Bioactivities

    PubMed Central

    Mathias, Stephen L.; Hines-Kay, Jarrett; Yang, Jeremy J.; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G.; Ursu, Oleg; Oprea, Tudor I.

    2013-01-01

    Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because of subsetting different data in a variety of formats; use of different bioactivity metrics; use of different identifiers for chemicals and proteins; and having to access different query interfaces, respectively. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical–protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical–protein target pair. Two types of scaffold perception methods have been implemented and are available for datamining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439,985 unique chemical structures, mapped onto 1,420,889 unique bioactivities, and annotated with 277,140 HierS scaffolds and 54,135 MCES chemical patterns, respectively. Of the 890,323 unique structure–target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure–target values: 94,975 are aggregated from two bioactivities, 14,544 from three, 7,930 from four and 2,214 have five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1435 unique chemical structures of active pharmaceutical ingredients (i.e. ‘drugs’). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. The CARLSBAD

  8. New tools for discovery from old databases

    SciTech Connect

    Brown, J.P. )

    1990-05-01

    Very large quantities of information have been accumulated as a result of petroleum exploration and the practice of petroleum geology. New and more powerful methods to build and analyze databases have been developed. The new tools must be tested, and, as quickly as possible, combined with traditional methods to the full advantage of currently limited funds in the search for new and extended hydrocarbon reserves. A recommended combined sequence is (1) database validating, (2) category separating, (3) machine learning, (4) graphic modeling, (5) database filtering, and (6) regression for predicting. To illustrate this procedure, a database from the Railroad Commission of Texas has been analyzed. Clusters of information have been identified to prevent apples and oranges problems from obscuring the conclusions. Artificial intelligence has checked the database for potentially invalid entries and has identified rules governing the relationship between factors, which can be numeric or nonnumeric (words), or both. Graphic 3-Dimensional modeling has clarified relationships. Database filtering has physically separated the integral parts of the database, which can then be run through the sequence again, increasing the precision. Finally, regressions have been run on separated clusters giving equations, which can be used with confidence in making predictions. Advances in computer systems encourage the learning of much more from past records, and reduce the danger of prejudiced decisions. Soon there will be giant strides beyond current capabilities to the advantage of those who are ready for them.

  9. Information Management Tools for Classrooms: Exploring Database Management Systems. Technical Report No. 28.

    ERIC Educational Resources Information Center

    Freeman, Carla; And Others

    In order to understand how the database software or online database functioned in the overall curricula, the use of database management systems (DBMSs) was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…

  10. 75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... Federal Aviation Administration Passenger Facility Charge Database System for Air Carrier Reporting AGENCY... interested parties of the availability of the Passenger Facility Charge (PFC) database system to report PFC... public agency. The FAA has developed a national PFC database system in order to more easily track the...

  11. Great Basin paleontological database

    USGS Publications Warehouse

    Zhang, N.; Blodgett, R.B.; Hofstra, A.H.

    2008-01-01

    The U.S. Geological Survey has constructed a paleontological database for the Great Basin physiographic province that can be served over the World Wide Web for data entry, queries, displays, and retrievals. It is similar to the web-database solution that we constructed for Alaskan paleontological data (www.alaskafossil.org). The first phase of this effort, compiling a paleontological bibliography for Nevada and portions of adjacent states in the Great Basin, has recently been completed. In addition, we are also compiling paleontological reports (known as E&R reports) of the U.S. Geological Survey, which are another extensive source of legacy data for this region. Initial population of the database benefited from a recently published conodont data set and is otherwise focused on Devonian and Mississippian localities, because strata of this age host important sedimentary exhalative (sedex) Au, Zn, and barite resources and enormous Carlin-type Au deposits. In addition, these strata are the most important petroleum source rocks in the region, and record the transition from extension to contraction associated with the Antler orogeny, the Alamo meteorite impact, and biotic crises associated with global oceanic anoxic events. The finished product will provide an invaluable tool for future geologic mapping, paleontological research, and mineral resource investigations in the Great Basin, making paleontological data acquired over nearly the past 150 yr readily available over the World Wide Web. A description of the structure of the database and the web interface developed for this effort are provided herein. This database is being used as a model for a National Paleontological Database (which we are currently developing for the U.S. Geological Survey) as well as for other paleontological databases now being developed in other parts of the globe. © 2008 Geological Society of America.

  12. Chemical Explosion Database

    NASA Astrophysics Data System (ADS)

    Johansson, Peder; Brachet, Nicolas

    2010-05-01

    A database containing information on chemical explosions, recorded and located by the International Data Center (IDC) of the CTBTO, should be established in the IDC prior to entry into force of the CTBT. Nearly all of the large chemical explosions occur in connection with mining activity. As a first step towards the establishment of this database, a survey of presumed mining areas where sufficiently large explosions are conducted has been done. This is dominated by the large coal mining areas like the Powder River (U.S.), Kuznetsk (Russia), Bowen (Australia) and Ekibastuz (Kazakhstan) basins. There are also several other smaller mining areas, in e.g. Scandinavia, Poland, Kazakhstan and Australia, with large enough explosions for detection. Events in the Reviewed Event Bulletin (REB) of the IDC that are located in or close to these mining areas, and which therefore are candidates for inclusion in the database, have been investigated. Comparison with a database of infrasound events has been done as many mining blasts generate strong infrasound signals and therefore also are included in the infrasound database. Currently there are 66 such REB events in 18 mining areas in the infrasound database. On a yearly basis several hundreds of events in mining areas have been recorded and included in the REB. Establishment of the database of chemical explosions requires confirmation and ground truth information from the States Parties regarding these events. For an explosion reported in the REB, the appropriate authority in whose country the explosion occurred is encouraged, on a voluntary basis, to seek out information on the explosion and communicate this information to the IDC.

  13. Solutions for medical databases optimal exploitation

    PubMed Central

    Branescu, I; Purcarea, VL; Dobrescu, R

    2014-01-01

    The paper discusses the methods to apply OLAP techniques for multidimensional databases that leverage the existing, performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as a logistic support to data warehousing techniques. The transformations have practically low computational complexity and they may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user, and proposes a flexible, “multimodel” federated system for extending OLAP querying to external object databases. PMID:24653769
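    Practical pre-aggregation amounts to materializing summaries at a low level of a dimension hierarchy and answering coarser roll-ups from them; a toy pandas sketch with invented data (the paper's setting is relational OLAP, not pandas):

        import pandas as pd

        # Hypothetical fact table: one row per hospital admission.
        facts = pd.DataFrame({
            "region":   ["N", "N", "N", "S", "S"],
            "hospital": ["H1", "H1", "H2", "H3", "H3"],
            "cost":     [100.0, 150.0, 80.0, 200.0, 120.0],
        })

        # Materialize the lowest summary level once...
        by_hospital = (facts.groupby(["region", "hospital"], as_index=False)
                            .agg(cost=("cost", "sum"), n=("cost", "size")))

        # ...then answer coarser roll-up queries from the summary, never
        # rescanning detail rows (sums and counts are distributive measures).
        by_region = by_hospital.groupby("region", as_index=False)[["cost", "n"]].sum()
        print(by_region)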

  14. GOTTCHA Database, Version 1

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- the most computationally intensive phase, in which an all-vs-all comparison is made -- the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection is screened for. If such an overlap exists, the region which

  15. GOTTCHA Database, Version 1

    SciTech Connect

    Freitas, Tracey; Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- the most computationally intensive phase, in which an all-vs-all comparison is made -- the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection is screened for. If such an overlap exists, the region which intersects is
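    The three phases map naturally onto set operations over k-mers; a toy Python sketch with invented sequences (the real tool uses bit vectors, masking, and far larger genomes):

        # Hypothetical toy genomes; GOTTCHA itself works on masked genomic 24-mers.
        K = 24
        genomes = {
            "taxonA": "ACGT" * 20,  # pure low-complexity repeat
            "taxonB": "ACGT" * 10 + "TTAGGCATCGATTTAGGCATCGAT" * 3,
        }

        def kmers(seq, k=K):
            # Phase I, decomposition: the set of overlapping k-mers in a sequence.
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        sets = {name: kmers(seq) for name, seq in genomes.items()}

        # Phases II-III, intersection then subtraction: k-mers shared with any
        # other taxon are removed, leaving each taxon's unique signature k-mers.
        signatures = {}
        for name, s in sets.items():
            shared = set().union(*(o for n, o in sets.items() if n != name))
            signatures[name] = s - shared

        for name, sig in signatures.items():
            # taxonA's repetitive sequence yields no unique signatures here.
            print(name, len(sig), "signature 24-mers")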

  16. Barriers and Facilitators to Career Advancement by Top-Level, Entry-Level and Non-Administrative Women in Public School Districts: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Ahmed, Eman Ibrahim El-Desouki

    2011-01-01

    The purpose of this study was to investigate the barriers and facilitators to career advancement among women administrators occupying top-level positions, those occupying entry-level positions and those in non-administrative positions in both rural and urban public school districts in central Pennsylvania. The need to increase the awareness of the…

  17. Administrative Methods for Reducing Crime in Primary and Secondary Schools: A Regression Analysis of the U.S. Department of Education School Survey of Crime and Safety

    ERIC Educational Resources Information Center

    Noonan, James H.

    2011-01-01

    Since the 1999 Columbine High School shooting school administrators have been tasked with creating positive education environments while also maximizing the safety of the students and staff. However, limited resources require school administrators to only employ safety policies which are actually effective in reducing crime. In order to help…

  18. Synthesized Population Databases: A US Geospatial Database for Agent-Based Models

    PubMed Central

    Wheaton, William D.; Cajka, James C.; Chasteen, Bernadette M.; Wagener, Diane K.; Cooley, Philip C.; Ganapathi, Laxminarayana; Roberts, Douglas J.; Allpress, Justine L.

    2010-01-01

    Agent-based models simulate large-scale social systems. They assign behaviors and activities to “agents” (individuals) within the population being modeled and then allow the agents to interact with the environment and each other in complex simulations. Agent-based models are frequently used to simulate infectious disease outbreaks, among other uses. RTI used and extended an iterative proportional fitting method to generate a synthesized, geospatially explicit, human agent database that represents the US population in the 50 states and the District of Columbia in the year 2000. Each agent is assigned to a household; other agents make up the household occupants. For this database, RTI developed the methods for (1) generating synthesized households and persons, (2) assigning agents to schools and workplaces so that complex interactions among agents as they go about their daily activities can be taken into account, and (3) generating synthesized human agents who occupy group quarters (military bases, college dormitories, prisons, nursing homes). In this report, we describe both the methods used to generate the synthesized population database and the final data structure and data content of the database. This information will provide researchers with the information they need to use the database in developing agent-based models. Portions of the synthesized agent database are available to any user upon request. RTI will extract a portion (a county, region, or state) of the database for users who wish to use this database in their own agent-based models. PMID:20505787
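    The iterative proportional fitting step mentioned above can be sketched compactly; the seed table and census-style margins below are invented, and numpy is assumed.

        import numpy as np

        def ipf(seed, row_targets, col_targets, tol=1e-10, max_iter=1000):
            # Iterative proportional fitting: rescale a seed table until its
            # margins match the target row and column totals.
            table = seed.astype(float).copy()
            for _ in range(max_iter):
                table *= (row_targets / table.sum(axis=1))[:, None]  # fit rows
                table *= col_targets / table.sum(axis=0)             # fit columns
                if np.allclose(table.sum(axis=1), row_targets, atol=tol):
                    break
            return table

        # Hypothetical example: households cross-classified by size (rows) and
        # income band (columns); the seed is a sample, the targets census totals.
        seed = np.array([[10, 20], [30, 40]])
        fitted = ipf(seed, row_targets=np.array([120.0, 180.0]),
                           col_targets=np.array([150.0, 150.0]))
        print(fitted, fitted.sum())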

  19. Study on the pharmacokinetics profiles of polyphyllin I and its bioavailability enhancement through co-administration with P-glycoprotein inhibitors by LC-MS/MS method.

    PubMed

    Zhu, He; Zhu, Si-Can; Shakya, Shailendra; Mao, Qian; Ding, Chuan-Hua; Long, Min-Hui; Li, Song-Lin

    2015-03-25

    Polyphyllin I (PPI), one of the steroidal saponins in Paris polyphylla, is a promising natural anticancer candidate. Although the anticancer activity of PPI has been well demonstrated, information regarding its pharmacokinetics and bioavailability is limited. In this study, a series of reliable and rapid liquid chromatography-tandem mass spectrometry methods were developed and successfully applied to determine PPI in rat plasma, cell incubation media and cell homogenate. The pharmacokinetics of PPI in rats was then studied, and the results revealed that PPI was slowly eliminated with low oral bioavailability (about 0.62%) at a dose of 50 mg/kg; when co-administered with verapamil (VPL) and cyclosporine A (CYA), the oral bioavailability of PPI increased from 0.62% to 3.52% and 3.79% respectively. In addition, in vitro studies showed that with the presence of VPL and CYA in Caco-2 cells, the efflux ratio of PPI decreased from 12.5 to 2.96 and 2.22, and the intracellular concentrations increased 5.8- and 5.0-fold respectively. These results demonstrated that PPI, with poor oral bioavailability, is greatly impeded by P-gp efflux, and inhibition of P-gp can enhance its bioavailability. PMID:25590941
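    The two quantities at the heart of the study reduce to simple ratios: the Caco-2 efflux ratio and absolute oral bioavailability. A sketch with hypothetical permeabilities and AUCs (the AUC values are invented, chosen only so that F comes out near the reported 0.62%):

        # Hypothetical apparent permeabilities (cm/s) from a Caco-2 transwell assay.
        papp_a_to_b = 0.8e-6   # apical -> basolateral (absorptive direction)
        papp_b_to_a = 10.0e-6  # basolateral -> apical (secretory direction)

        # An efflux ratio well above ~2 suggests active efflux (e.g., by P-gp);
        # the abstract reports it falling from 12.5 toward ~2-3 with inhibitors.
        efflux_ratio = papp_b_to_a / papp_a_to_b
        print(f"efflux ratio: {efflux_ratio:.1f}")

        # Absolute oral bioavailability from dose-normalized AUCs.
        auc_oral, dose_oral = 18.0, 50.0   # ug*hr/L, mg/kg (hypothetical)
        auc_iv,   dose_iv   = 580.0, 10.0  # ug*hr/L, mg/kg (hypothetical)
        F = (auc_oral / dose_oral) / (auc_iv / dose_iv)
        print(f"F = {100 * F:.2f}%")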

  20. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.