Science.gov

Sample records for administrative databases methods

  1. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  2. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  3. Redis database administration tool

    SciTech Connect

    Martinez, J. J.

    2013-02-13

    MyRedis is a product of the Lorenz subproject under the ASC Scientific Data Management effort. MyRedis is a web-based utility designed to allow easy administration of instances of Redis databases. It can be used to view and manipulate data as well as run commands directly against a variety of different Redis hosts.

  4. Methods for systematic reviews of administrative database studies capturing health outcomes of interest.

    PubMed

    McPheeters, Melissa L; Sathe, Nila A; Jerome, Rebecca N; Carnahan, Ryan M

    2013-12-30

    This report provides an overview of methods used to conduct systematic reviews for the US Food and Drug Administration (FDA) Mini-Sentinel project, which is designed to inform the development of safety monitoring tools for FDA-regulated products including vaccines. The objective of these reviews was to summarize the literature describing algorithms (e.g., diagnosis or procedure codes) to identify health outcomes in administrative and claims data. A particular focus was the validity of the algorithms when compared to reference standards such as diagnoses in medical records. The overarching goal was to identify algorithms that can accurately identify the health outcomes for safety surveillance. We searched the MEDLINE database via PubMed and required dual review of full-text articles and of data extracted from studies. We also extracted data on each study's methods for case validation. We reviewed over 5600 abstracts/full-text studies across 15 health outcomes of interest. Nearly 260 studies met our initial criteria (conducted in the US or Canada, used an administrative database, reported a case-finding algorithm). Few studies (N=45), however, reported validation of case-finding algorithms (sensitivity, specificity, positive or negative predictive value). Among these, the most common approach to validation was to calculate positive predictive values, based on a review of medical records as the reference standard. Among the studies reporting validation, the ease with which a given clinical condition could be identified in administrative records varied substantially, both by the clinical condition and by other factors such as the clinical setting, which relates to disease prevalence.
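
    The validation measures named above all come from a 2x2 comparison of algorithm-flagged cases against a reference standard such as medical-record review. A minimal sketch in Python, using invented counts, of how sensitivity, specificity, and predictive values are computed:

        # Hypothetical 2x2 counts comparing an administrative case-finding
        # algorithm against a medical-record reference standard.
        tp, fp = 180, 40   # algorithm-positive: true cases / non-cases
        fn, tn = 20, 760   # algorithm-negative: missed cases / true negatives

        sensitivity = tp / (tp + fn)  # share of true cases the algorithm flags
        specificity = tn / (tn + fp)  # share of non-cases it leaves unflagged
        ppv = tp / (tp + fp)          # flagged records that are true cases
        npv = tn / (tn + fn)          # unflagged records that are true non-cases

        print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
              f"PPV={ppv:.2f} NPV={npv:.2f}")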

  5. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All...

  6. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  7. TWRS information locator database system administrator's manual

    SciTech Connect

    Knutson, B.J., Westinghouse Hanford

    1996-09-13

    This document is a guide for use by the Tank Waste Remediation System (TWRS) Information Locator Database (ILD) System Administrator. The TWRS ILD System is an inventory of information used in the TWRS Systems Engineering process to represent the TWRS Technical Baseline. The inventory is maintained in the form of a relational database developed in Paradox 4.5.

  8. A Database System for Course Administration.

    ERIC Educational Resources Information Center

    Benbasat, Izak; And Others

    1982-01-01

    Describes a computer-assisted testing system which produces multiple-choice examinations for a college course in business administration. The system uses SPIRES (Stanford Public Information REtrieval System) to manage a database of questions and related data, mark-sense cards for machine grading tests, and ACL (6) (Audit Command Language) to…

  9. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer a TV bands database. Each database administrator shall: (a) Maintain a database...

  10. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  11. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  12. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  13. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  14. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database...

  15. Administrators Say Funding Inhibits Use of Databases.

    ERIC Educational Resources Information Center

    Gerhard, Michael E.

    1990-01-01

    Surveys journalism and mass communication department heads to address questions related to the use of online databases in journalism higher education, database policy, resources used in providing online services, and satisfaction with database service. Reports that electronic information retrieval is just beginning to penetrate journalism at the…

  16. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... carriers shall have equal and open access to the regional databases. (c) The NANC shall select a local number portability administrator(s) (LNPA(s)) to administer the regional databases within seven months of the initial meeting of the NANC. (d) The NANC shall determine whether one or multiple...

  17. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... carriers shall have equal and open access to the regional databases. (c) The NANC shall select a local number portability administrator(s) (LNPA(s)) to administer the regional databases within seven months of the initial meeting of the NANC. (d) The NANC shall determine whether one or multiple...

  18. VIEWCACHE: An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nick; Sellis, Timoleon

    1991-01-01

    The objective is to illustrate the concept of incremental access to distributed databases. An experimental database management system, ADMS, which has been developed at the University of Maryland, in College Park, uses VIEWCACHE, a database access method based on incremental search. VIEWCACHE is a pointer-based access method that provides a uniform interface for accessing distributed databases and catalogues. The compactness of the pointer structures formed during database browsing and the incremental access method allow the user to search and do inter-database cross-referencing with no actual data movement between database sites. Once the search is complete, the set of collected pointers to the desired data is dereferenced.
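
    The abstract gives no implementation detail, but the core idea, accumulating compact (site, record-id) pointers during cross-database search and moving data only at the final dereference step, can be sketched as follows. The site layout, record fields, and search helper are invented for illustration; this is not the actual ADMS interface:

        # Toy illustration of pointer-based incremental access: browsing
        # collects (site, record_id) pointers; no row data moves between
        # sites until the final dereference step.
        SITES = {
            "siteA": {1: {"name": "galaxy", "z": 0.02}},
            "siteB": {7: {"name": "galaxy", "flux": 3.1}},
        }

        def search(site, predicate):
            """Return pointers (site, record_id), not the records themselves."""
            return [(site, rid) for rid, rec in SITES[site].items() if predicate(rec)]

        pointers = (search("siteA", lambda r: r["name"] == "galaxy")
                    + search("siteB", lambda r: r["name"] == "galaxy"))

        # Only now is actual data fetched ("dereferenced") from each site.
        results = [SITES[site][rid] for site, rid in pointers]
        print(results)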

  19. Capturing and classifying functional status information in administrative databases.

    PubMed

    Iezzoni, Lisa I; Greenberg, Marjorie S

    2003-01-01

    The health care delivery system aims to improve the functioning of Americans, but little information exists to judge progress toward meeting this goal. Administrative data generated through running and overseeing health care delivery offer considerable information about diagnoses and procedures in coded formats comparable across settings of care. This article explores the issues raised when considering adding coded information about functional status to administrative databases throughout the health care system. The National Committee on Vital and Health Statistics (NCVHS) identified the International Classification of Functioning, Disability and Health (ICF) as the only viable code set for consistently reporting functional status.

  20. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    PubMed Central

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but

  21. Veterans Health Administration multiple sclerosis surveillance registry: The problem of case-finding from administrative databases.

    PubMed

    Culpepper, William J; Ehrmantraut, Mary; Wallin, Mitchell T; Flannery, Kathleen; Bradham, Douglas D

    2006-01-01

    Establishment of a national multiple sclerosis (MS) surveillance registry (MSSR) is a primary goal of the Department of Veterans Affairs (VA) MS Center of Excellence. The initial query of Veterans Health Administration (VHA) databases identified 25,712 patients (labeled "VHA MS User Cohort") from fiscal years 1998 to 2002 based on International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code; service-connection for MS; and/or disease-modifying agent (DMA) use. Because of ICD-9-CM limitations, the initial query was overinclusive and resulted in many non-MS cases. Thus, we needed a more rigorous case-finding method. Our gold standard was chart review of the Computerized Patient Record System for the mid-Atlantic VA medical centers. After chart review, we classified patients as not having MS or having MS/possible MS. We also applied a statistical algorithm to classify cases based on service-connection for MS, DMA use, and/or at least one healthcare encounter a year with MS coded as the primary diagnosis. We completed two analyses with kappa coefficient and sensitivity analysis. The first analysis (efficacy) was limited to cases with a definitive classification based on chart review (n = 600). The kappa coefficient was 0.85, sensitivity was 0.93, and specificity was 0.92. The second analysis (effectiveness) included unknown cases that were classified as MS/possible MS (N = 682). The kappa coefficient was 0.82, sensitivity was 0.93, and specificity was 0.90. These findings suggest that the database algorithm reliably eliminated non-MS cases from the initial MSSR population and is a reasonable case-finding method at this intermediate stage of MSSR development.
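
    The kappa coefficient reported above measures agreement between the database algorithm and chart review beyond what chance alone would produce. A minimal sketch with invented counts (chosen to land near the reported kappa of 0.85, not the study's actual table):

        # Cohen's kappa from a hypothetical 2x2 agreement table between the
        # database algorithm and chart review (counts are invented).
        a, b = 250, 20   # a: both say MS;      b: algorithm MS, chart non-MS
        c, d = 25, 305   # c: algorithm non-MS, chart MS;  d: both say non-MS
        n = a + b + c + d

        p_observed = (a + d) / n
        # Chance agreement from each rater's marginal proportions.
        p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
        kappa = (p_observed - p_chance) / (1 - p_chance)
        print(f"kappa = {kappa:.2f}")   # ~0.85 with these counts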

  22. Creating a computerized database from administrative claims data.

    PubMed

    Piecoro, L T; Wang, L S; Dixon, W S; Crovo, R J

    1999-07-01

    The creation of a computerized database from Medicaid administrative claims data for research purposes is described. Researchers should consult with computer experts at their institution before selecting software for data manipulation and conversion. It is essential to have an accurate layout of the file record before attempting to convert raw claims data into data sets or other data formats. The location of data elements within the claim will vary depending on whether the record comes from a provider, an institution, or a pharmacy. Each claim contains a common header, a variable header, and a claim detail section. The difficulty in analyzing data elements within a claim detail lies in locating the starting point of the claim detail section. So that data elements not in character or numeric formats can be converted, the file record layout must describe the exact format of each data element and its COBOL notation. A data element dictionary is necessary for translating data element coding into usable data. Data elements not necessary for any planned analysis must be eliminated. The data are then "cleaned" to remove any denied or reversed claims and claims that contain incomplete or erroneous data. Regardless of the format data are obtained in, an accurate file record layout and a data element dictionary are essential to the conversion of administrative claims data into a computerized database for data analysis and research purposes.
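
    The file record layout the authors emphasize maps directly to code: each data element needs a position and a format converter before raw claims can become analyzable records. A hypothetical sketch (field names, offsets, and the cleaning rule are invented, not the actual Medicaid layout):

        # Hypothetical fixed-width claim layout: (start, end, converter) per
        # field, standing in for the file record layout described above.
        LAYOUT = {
            "recipient_id": (0, 9, str),
            "claim_type":   (9, 11, str),                      # provider/institution/pharmacy
            "service_date": (11, 19, str),                     # YYYYMMDD
            "paid_amount":  (19, 27, lambda s: int(s) / 100),  # implied decimal point
        }

        def parse_claim(line):
            return {name: conv(line[i:j].strip())
                    for name, (i, j, conv) in LAYOUT.items()}

        raw = "A12345678MD2019070100001250"
        claim = parse_claim(raw)
        # "Cleaning": keep only claims with a positive paid amount
        # (a stand-in for dropping denied or reversed claims).
        if claim["paid_amount"] > 0:
            print(claim)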

  23. Stroke surveillance in Manitoba, Canada: estimates from administrative databases.

    PubMed

    Moore, D F; Lix, L M; Yogendran, M S; Martens, P; Tamayo, A

    2008-01-01

    This study investigated the use of population-based administrative databases for stroke surveillance. First, a meta-analysis was conducted of four studies, identified via a PubMed search, which estimated the sensitivity and specificity of hospital data for ascertaining cases of stroke when clinical registries or medical charts were the gold standard. Subsequently, case-ascertainment algorithms based on hospital, physician and prescription drug records were developed and applied to Manitoba's administrative data, and prevalence estimates were obtained for fiscal years 1995/96 to 2003/04 by age group, sex, region of residence and income quintile. The meta-analysis results revealed some over-ascertainment of stroke cases from hospital data when the algorithm was based on diagnosis codes for any type of cerebrovascular disease (Mantel-Haenszel odds ratio [OR] = 1.70 [95% confidence interval (CI): 1.53 to 1.88]). Analyses of Manitoba administrative data revealed that while the total number of stroke cases varied substantially across the algorithms, the trend in prevalence was stable regardless of the algorithm adopted.
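
    The pooled Mantel-Haenszel odds ratio quoted above combines the 2x2 tables of the individual studies. A minimal sketch with invented per-study counts (four tables, mirroring the four studies, but not their data):

        # Mantel-Haenszel pooled odds ratio across hypothetical 2x2 tables;
        # each table is (a, b, c, d) = (exposed cases, exposed non-cases,
        # unexposed cases, unexposed non-cases).
        tables = [(40, 10, 20, 30), (55, 15, 25, 45),
                  (30, 12, 18, 40), (60, 20, 35, 50)]

        num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
        den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
        print(f"Mantel-Haenszel OR = {num / den:.2f}")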

  24. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
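
    Query feedback, as described above, replaces static off-line statistics with estimates refined from observed query results. A toy sketch of the regression idea, fitting selectivity as a function of predicate value by least squares (the feedback points and the linear model are invented; the authors also mention splines):

        # Toy query-feedback loop: each executed query contributes an
        # observed (predicate value, selectivity) point; a least-squares
        # line refit on these points estimates future selectivities.
        feedback = [(10, 0.05), (20, 0.11), (30, 0.14), (40, 0.22)]

        n = len(feedback)
        sx = sum(x for x, _ in feedback)
        sy = sum(y for _, y in feedback)
        sxx = sum(x * x for x, _ in feedback)
        sxy = sum(x * y for x, y in feedback)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        intercept = (sy - slope * sx) / n

        def estimate_selectivity(value):
            # Clamp to [0, 1]: a selectivity is a fraction of rows.
            return max(0.0, min(1.0, intercept + slope * value))

        print(f"estimated selectivity at 25: {estimate_selectivity(25):.3f}")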

  25. Classified Computer Configuration Control System (C⁴S), Revision 3, Database Administrator's Guide

    SciTech Connect

    O'Callaghan, P.B.; Nelson, R.A.; Grambihler, A.J.

    1994-04-01

    This document provides a guide for database administration and specific information for the Classified Computer Configuration Control System (C⁴S). As a guide, this document discusses required database administration functions for the set up of database tables and for users of the system. It is assumed that general and user information has been obtained from the Classified Computer Configuration Control System (C⁴S), Revision 3, User's Information (WHC 1994).

  26. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information to support harmonization and standardization of GMO analysis methods at the global level is needed. Results The GMO Detection Method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integration, which will facilitate the design of PCR primers and probes. The database also includes information on endogenous genes, certified reference materials, reference molecules, and the validation status of the developed methods. Furthermore, registered users can submit new detection methods and sequences to this database, and the newly submitted information will be released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier. PMID:18522755

  27. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... functions of a TV bands database, such as a data repository, registration, and query services, to be divided... inaccuracies in the database to its attention. This requirement applies only to information that the...

  28. Validity of chronic obstructive pulmonary disease diagnoses in a large administrative database

    PubMed Central

    Lacasse, Yves; Daigle, Jean-Marc; Martin, Sylvie; Maltais, François

    2012-01-01

    BACKGROUND: Administrative databases are often used for research purposes, with minimal attention devoted to the validity of the included diagnoses. AIMS: To determine whether the principal diagnoses of chronic obstructive pulmonary disease (COPD) made in hospitalized patients and recorded in a large administrative database are valid. METHODS: The medical charts of 1221 patients hospitalized in 40 acute care centres in Quebec and discharged between April 1, 2003 and March 31, 2004, with a principal discharge diagnosis of COPD (International Classification of Diseases, Ninth Revision codes 491, 492 or 496) were reviewed. The diagnosis of COPD was independently adjudicated by two pulmonologists using clinical history (including smoking status) and spirometry. The primary outcome measure was the positive predictive value (PPV) of the database for the diagnosis of COPD (ie, the proportion of patients with an accurate diagnosis of COPD corroborated by clinical history and spirometry). RESULTS: The diagnosis of COPD was validated in 616 patients (PPV 50.4% [95% CI 47.7% to 53.3%]), with 372 patients (30.5%) classified as ‘indeterminate’. Older age and female sex were associated with a lower probability of an accurate diagnosis of COPD. Hospitalization in a teaching institution was associated with a twofold increase in the probability of a correct diagnosis. CONCLUSIONS: The results support the routine ascertainment of the validity of diagnoses before using administrative databases in clinical and health services research. PMID:22536584
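
    The PPV above is a simple proportion: confirmed diagnoses over charts reviewed. A sketch reproducing that calculation with the counts from the abstract (616 of 1221); the Wilson method for the 95% CI is an assumption, since the study does not state which interval it used:

        import math

        # PPV as a proportion with a Wilson 95% CI; counts from the abstract,
        # CI method assumed.
        confirmed, sampled = 616, 1221
        ppv = confirmed / sampled

        z = 1.96
        centre = (ppv + z**2 / (2 * sampled)) / (1 + z**2 / sampled)
        half = (z / (1 + z**2 / sampled)) * math.sqrt(
            ppv * (1 - ppv) / sampled + z**2 / (4 * sampled**2))
        print(f"PPV = {ppv:.3f} (95% CI {centre - half:.3f} to {centre + half:.3f})")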

  29. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) based on their geographic location and provide accurate lists of available channels to fixed and Mode II... functions of a TV bands database, such as a data repository, registration, and query services, to be divided... responsible for coordination of the overall functioning of a database and providing services to TVBDs....

  30. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the initial meeting of the NANC. (d) The NANC shall determine whether one or multiple administrator(s... user interface between telecommunications carriers and the LNPA(s), the network interface between...

  31. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the initial meeting of the NANC. (d) The NANC shall determine whether one or multiple administrator(s... user interface between telecommunications carriers and the LNPA(s), the network interface between...

  32. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... functions of a TV bands database, such as a data repository, registration, and query services, to be divided... its attention. This requirement applies only to information that the Commission requires to be...

  33. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... functions of a TV bands database, such as a data repository, registration, and query services, to be divided... its attention. This requirement applies only to information that the Commission requires to be...

  34. Good agreement between questionnaire and administrative databases for health care use and costs in patients with osteoarthritis

    PubMed Central

    2011-01-01

    Background Estimating costs is essential to the economic analysis of health care programs. Health care costs are often captured from administrative databases or by patient report. Administrative records only provide a partial representation of health care costs and have additional limitations. Patient-completed questionnaires may allow a broader representation of health care costs; however, the validity and feasibility of such methods have not been firmly established. This study was conducted to assess the validity and feasibility of using a patient-completed questionnaire to capture health care use and costs for patients with osteoarthritis, and to compare the research costs of the data-capture methods. Methods We designed a patient questionnaire and applied it in a clinical trial. We captured equivalent data from four administrative databases. We evaluated aspects of the questionnaire's validity using sensitivity and specificity, Lin's concordance correlation coefficient (ρc), and Bland-Altman comparisons. Results The questionnaire's response rate was 89%. Acceptable sensitivity and specificity levels were found for all types of health care use. The numbers of visits and the majority of medications reported by patients were in agreement with the database-derived estimates (ρc > 0.40). Total cost estimates from the questionnaire agreed with those from the databases. Patient-reported co-payments agreed with administrative records with respect to GP office transactions, but not pharmaceutical co-payments. Research costs for the questionnaire-based method were less than one-third of the costs for the database method. Conclusion A patient-completed questionnaire is feasible for capturing health care use and costs for patients with osteoarthritis, and data collected using it mostly agree with administrative databases. Caution should be exercised when applying unit costs and collecting co-payment data. PMID:21489280
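
    Lin's concordance correlation coefficient, used above, penalises both poor correlation and systematic disagreement between the two sources. A minimal sketch on invented paired visit counts:

        import statistics

        # Lin's concordance correlation coefficient on hypothetical paired
        # visit counts (questionnaire vs. administrative database).
        q  = [2, 5, 0, 3, 7, 1, 4, 6]   # questionnaire-reported visits
        db = [2, 4, 1, 3, 8, 1, 3, 6]   # database-derived visits

        mq, mdb = statistics.fmean(q), statistics.fmean(db)
        vq, vdb = statistics.pvariance(q), statistics.pvariance(db)
        cov = statistics.fmean((a - mq) * (b - mdb) for a, b in zip(q, db))

        # ccc = 2*cov / (var_q + var_db + squared difference of means)
        ccc = 2 * cov / (vq + vdb + (mq - mdb) ** 2)
        print(f"Lin's concordance = {ccc:.2f}")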

  35. Comparison of scientific and administrative database management systems

    NASA Technical Reports Server (NTRS)

    Stoltzfus, J. C.

    1983-01-01

    Some characteristics found to be different for scientific and administrative data bases are identified and some of the corresponding generic requirements for data base management systems (DBMS) are discussed. The requirements discussed are especially stringent for either the scientific or administrative data bases. For some, no commercial DBMS is fully satisfactory, and the data base designer must invent a suitable approach. For others, commercial systems are available with elegant solutions, and a wrong choice would mean an expensive work-around to provide the missing features. It is concluded that selection of a DBMS must be based on the requirements for the information system. There is no unique distinction between scientific and administrative data bases or DBMS. The distinction comes from the logical structure of the data, and understanding the data and their relationships is the key to defining the requirements and selecting an appropriate DBMS for a given set of applications.

  36. Use of an administrative database to determine clinical management and outcomes in congenital heart disease.

    PubMed

    Gutgesell, Howard P; Hillman, Diane G; McHugh, Kimberly E; Dean, Peter; Matherne, G Paul

    2011-10-01

    We review our 16-year experience using the large, multi-institutional database of the University HealthSystem Consortium to study management and outcomes in congenital heart surgery for hypoplastic left heart syndrome, transposition of the great arteries, and neonatal coarctation. The advantages, limitations, and use of administrative databases by others to study congenital heart surgery are reviewed.

  37. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods. A public accommodation shall not, directly or through contractual or other arrangements,...

  38. Planning the future of JPL's management and administrative support systems around an integrated database

    NASA Technical Reports Server (NTRS)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal and without consistency in design approach over the past twenty years. These systems are now proving to be inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential for providing the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, revolving around the development of an integrated management and administrative database, are discussed.

  39. Geospatial Database for Strata Objects Based on Land Administration Domain Model (LADM)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in our country, the construction of buildings has become more complex, and a strata objects database is becoming more important for registering the real world, as people now own and use multiple levels of space. Strata titles are increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. The paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, develops a strata objects database using the LADM standard data model, and analyzes the developed database against that model. The current cadastre system in Malaysia, including strata titles, is also discussed, and the problems of the 2D geospatial database are listed. The design of the strata objects database proceeds through conceptual, logical, and physical database design. The resulting database holds both non-spatial and spatial strata title information, thus showing the location of each strata unit, and may help in handling strata titles and their information.
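
    The conceptual-logical-physical progression described above ends in tables holding both the legal (non-spatial) and spatial sides of each strata unit. A hypothetical minimal sketch of such a physical schema; the table and column names are invented for illustration and are not the paper's LADM schema:

        import sqlite3

        # Hypothetical physical design: one table for the legal side of a
        # strata title, one for the spatial unit, linked by unit_id.
        con = sqlite3.connect(":memory:")
        con.executescript("""
        CREATE TABLE strata_title (      -- non-spatial (legal) information
            unit_id    INTEGER PRIMARY KEY,
            owner_name TEXT,
            share      REAL              -- share in the common property
        );
        CREATE TABLE strata_unit (       -- spatial information per unit
            unit_id    INTEGER REFERENCES strata_title(unit_id),
            floor      INTEGER,          -- level within the building
            footprint  TEXT              -- geometry, e.g. a WKT polygon
        );
        """)
        con.execute("INSERT INTO strata_title VALUES (1, 'A. Tenant', 0.05)")
        con.execute("INSERT INTO strata_unit VALUES (1, 12, 'POLYGON((0 0, ...))')")
        row = con.execute("""SELECT owner_name, floor FROM strata_title
                             JOIN strata_unit USING (unit_id)""").fetchone()
        print(row)   # ('A. Tenant', 12)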

  40. Validity of peptic ulcer disease and upper gastrointestinal bleeding diagnoses in administrative databases: a systematic review protocol

    PubMed Central

    Montedori, Alessandro; Abraha, Iosief; Chiatti, Carlos; Cozzolino, Francesco; Orso, Massimiliano; Luchetta, Maria Laura; Rimland, Joseph M; Ambrosio, Giuseppe

    2016-01-01

    Introduction Administrative healthcare databases are useful to investigate the epidemiology, health outcomes, quality indicators and healthcare utilisation concerning peptic ulcers and gastrointestinal bleeding, but the databases need to be validated in order to be a reliable source for research. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases, 9th Revision and 10th version (ICD-9 and ICD-10) codes for peptic ulcer and upper gastrointestinal bleeding diagnoses. Methods and analysis MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched, using appropriate search strategies. We will include validation studies that used administrative data to identify peptic ulcer disease and upper gastrointestinal bleeding diagnoses or studies that evaluated the validity of peptic ulcer and upper gastrointestinal bleeding codes in administrative data. The following inclusion criteria will be used: (a) the presence of a reference standard case definition for the diseases of interest; (b) the presence of at least one test measure (eg, sensitivity, etc) and (c) the use of an administrative database as a source of data. Pairs of reviewers will independently abstract data using standardised forms and will evaluate quality using the checklist of the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocol (PRISMA-P) 2015 statement. Ethics and dissemination Ethics approval is not required given that this is a protocol for a systematic review. We will submit results of this study to a peer-reviewed journal for publication. The results will serve as a guide for researchers validating administrative healthcare databases to determine appropriate case definitions for peptic ulcer disease and upper gastrointestinal

  41. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers and can inspire students to…

  42. Development of an Ada programming support environment database SEAD (Software Engineering and Ada Database) administration manual

    NASA Technical Reports Server (NTRS)

    Liaw, Morris; Evesson, Donna

    1988-01-01

    Software Engineering and Ada Database (SEAD) was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities which are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. SEAD data is organized into five major areas: information regarding education and training resources which are relevant to the life cycle of Ada-based software engineering projects such as those in the Space Station program; research publications relevant to NASA projects such as the Space Station Program and conferences relating to Ada technology; the latest progress reports on Ada projects completed or in progress both within NASA and throughout the free world; Ada compilers and other commercial products that support Ada software development; and reusable Ada components generated both within NASA and from elsewhere in the free world. This classified listing of reusable components shall include descriptions of tools, libraries, and other components of interest to NASA. Sources for the data include technical newsletters and periodicals, conference proceedings, the Ada Information Clearinghouse, product vendors, and project sponsors and contractors.

  43. Methods to Secure Databases Against Vulnerabilities

    DTIC Science & Technology

    2015-12-01

    Open source means that any organization may use the software free of charge and modify the software to fit the needs of the organization. This thesis examines database vulnerabilities involving misconfigured databases, HTTP interfaces, encryption, and authentication and authorization, and also examines three open source database systems.

  44. Connecting the Library's Patron Database to Campus Administrative Software: Simplifying the Library's Accounts Receivable Process

    ERIC Educational Resources Information Center

    Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth

    2010-01-01

    This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…

  45. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false TRS User Registration Database and administrator. 64.615 Section 64.615 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Telecommunications...

  46. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false TRS User Registration Database and administrator. 64.615 Section 64.615 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) MISCELLANEOUS RULES RELATING TO COMMON CARRIERS Telecommunications...

  47. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey

    PubMed Central

    2013-01-01

    Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in

  48. Inaccurate Ascertainment of Morbidity and Mortality due to Influenza in Administrative Databases: A Population-Based Record Linkage Study

    PubMed Central

    Muscatello, David J.; Amin, Janaki; MacIntyre, C. Raina; Newall, Anthony T.; Rawlinson, William D.; Sintchenko, Vitali; Gilmour, Robin; Thackway, Sarah

    2014-01-01

    Background Historically, counting influenza recorded in administrative health outcome databases has been considered insufficient to estimate influenza attributable morbidity and mortality in populations. We used database record linkage to evaluate whether modern databases have similar limitations. Methods Person-level records were linked across databases of laboratory notified influenza, emergency department (ED) presentations, hospital admissions and death registrations, from the population (∼6.9 million) of New South Wales (NSW), Australia, 2005 to 2008. Results There were 2568 virologically diagnosed influenza infections notified. Among those, 25% of 40 who died, 49% of 1451 with a hospital admission and 7% of 1742 with an ED presentation had influenza recorded on the respective database record. Compared with persons aged ≥65 years and residents of regional and remote areas, respectively, children and residents of major cities were more likely to have influenza coded on their admission record. Compared with older persons and admitted patients, respectively, working age persons and non-admitted persons were more likely to have influenza coded on their ED record. On both ED and admission records, persons with influenza type A infection were more likely than those with type B infection to have influenza coded. Among death registrations, hospital admissions and ED presentations with influenza recorded as a cause of illness, 15%, 28% and 1.4%, respectively, also had laboratory notified influenza. Time trends in counts of influenza recorded on the ED, admission and death databases reflected the trend in counts of virologically diagnosed influenza. Conclusions A minority of the death, hospital admission and ED records for persons with a virologically diagnosed influenza infection identified influenza as a cause of illness. Few database records with influenza recorded as a cause had laboratory confirmation. The databases have limited value for estimating incidence
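
    The linkage analysis described reduces to intersecting person-level identifiers across databases, then checking how the linked records are coded. A toy sketch with invented identifiers and a simplified influenza code check (J10/J11):

        # Toy person-level linkage: among laboratory-notified influenza
        # cases, what fraction of linked hospital admissions carry an
        # influenza diagnosis code?
        lab_notified = {"p1", "p2", "p3", "p4"}       # notification database
        admissions = {                                 # admissions database
            "p1": ["J10.1"],                           # influenza coded
            "p2": ["J18.9"],                           # pneumonia only
            "p4": ["J10.8", "E11"],
        }

        linked = lab_notified & admissions.keys()      # persons in both
        coded = [p for p in linked
                 if any(code.startswith(("J10", "J11"))
                        for code in admissions[p])]
        print(f"{len(coded)}/{len(linked)} linked admissions have influenza coded")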

  49. Nursing leadership succession planning in Veterans Health Administration: creating a useful database.

    PubMed

    Weiss, Lizabeth M; Drake, Audrey

    2007-01-01

    An electronic database was developed for succession planning and placement of nursing leaders who are interested in, and ready, willing, and able to accept, an assignment in a nursing leadership position. The tool is a 1-page form used to identify candidates for nursing leadership assignments. This tool has been deployed nationally, with access to the database restricted to nurse executives at every Veterans Health Administration facility for the purpose of entering the names of developed nurse leaders ready for a leadership assignment. The tool is easily accessed through the Veterans Health Administration Office of Nursing Service, and limiting access to the nurse executive group ensures that the candidates identified are qualified. Information included on the survey tool covers the candidate's demographics and other certifications/credentials. This completed information form is entered into a database from which a report can be generated, resulting in a listing of potential candidates to contact to supplement a local or Veterans Integrated Service Network wide position announcement. The data forms can be sorted by positions, areas of clinical or functional experience, training programs completed, and geographic preference. The forms can be edited, updated, added, or deleted in the system as the need is identified. This tool allows facilities with limited internal candidates to have a resource of Department of Veterans Affairs prepared staff from which to seek additional candidates. It also provides a way for interested candidates to be considered for positions outside of their local geographic area.

  50. System, method and apparatus for generating phrases from a database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term or a sequence of terms or multiple individual terms or multiple sequences of terms or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
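
    The patent abstract stays at a high level; here is a toy sketch of the central step, assembling query-related term sequences from a contextual-relation model of the text. Simple bigram adjacency counts stand in for the relational model, and the corpus is invented:

        from collections import defaultdict

        # Toy relational model of a text database: bigram adjacency counts
        # stand in for the contextual relations described in the abstract.
        corpus = "engine failure during takeoff engine fire during climb".split()
        follows = defaultdict(lambda: defaultdict(int))
        for a, b in zip(corpus, corpus[1:]):
            follows[a][b] += 1

        def generate(query_term, length=3):
            """Grow a phrase from the query term by taking frequent successors."""
            phrase = [query_term]
            while len(phrase) < length and follows[phrase[-1]]:
                nxt = max(follows[phrase[-1]], key=follows[phrase[-1]].get)
                phrase.append(nxt)
            return " ".join(phrase)

        print(generate("engine"))   # -> "engine failure during"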

  51. Protocol for validating cardiovascular and cerebrovascular ICD-9-CM codes in healthcare administrative databases: the Umbria Data Value Project

    PubMed Central

    Cozzolino, Francesco; Orso, Massimiliano; Mengoni, Anna; Cerasa, Maria Francesca; Eusebi, Paolo; Ambrosio, Giuseppe; Montedori, Alessandro

    2017-01-01

    Introduction Administrative healthcare databases can provide a comprehensive assessment of the burden of diseases in terms of major outcomes, such as mortality, hospital readmissions and use of healthcare resources, thus providing answers to a wide spectrum of research questions. However, a crucial issue is the reliability of information gathered. Aim of this protocol is to validate International Classification of Diseases, 9th Revision—Clinical Modification (ICD-9-CM) codes for major cardiovascular diseases, including acute myocardial infarction (AMI), heart failure (HF), atrial fibrillation (AF) and stroke. Methods and analysis Data from the centralised administrative database of the entire Umbria Region (910 000 residents, located in Central Italy) will be considered. Patients with a first hospital discharge for AMI, HF, AF or stroke, between 2012 and 2014, will be identified in the administrative database using the following groups of ICD-9-CM codes located in primary position: (1) 410.x for AMI; (2) 427.31 for AF; (3) 428 for HF; (4) 433.x1, 434 (excluding 434.x0), 436 for ischaemic stroke, 430 and 431 for haemorrhagic stroke (subarachnoid haemorrhage and intracerebral haemorrhage). A random sample of cases, and of non-cases, will be selected, and the corresponding medical charts retrieved and reviewed for validation by pairs of trained, independent reviewers. For each condition considered, case adjudication of disease will be based on symptoms, laboratory and diagnostic tests, as available in medical charts. Divergences will be resolved by consensus. Sensitivity and specificity with 95% CIs will be calculated. Ethics and dissemination Research protocol has been granted approval by the Regional Ethics Committee. Study results will be disseminated widely through peer-reviewed publications and presentations at national and international conferences. PMID:28360241

  52. Database design using NIAM (Nijssen Information Analysis Method) modeling

    SciTech Connect

    Stevens, N.H.

    1989-01-01

    The Nijssen Information Analysis Method (NIAM) is an information modeling technique based on semantics and founded in set theory. A NIAM information model is a graphical representation of the information requirements for some universe of discourse. Information models facilitate data integration and communication within an organization about data semantics. An information model is sometimes referred to as the semantic model or the conceptual schema. It helps in the logical and physical design and implementation of databases. NIAM information modeling is used at Sandia National Laboratories to design and implement relational databases containing engineering information which meet the users' information requirements. The paper focuses on the design of one database which satisfied the data needs of four disjoint but closely related applications. The applications as they existed before did not talk to each other even though they stored much of the same data redundantly. NIAM was used to determine the information requirements and design the integrated database. 6 refs., 7 figs.

  53. Quantifying limitations in chemotherapy data in administrative health databases: implications for measuring the quality of colorectal cancer care.

    PubMed

    Urquhart, Robin; Rayson, Daniel; Porter, Geoffrey A; Grunfeld, Eva

    2011-08-01

    Reliable chemotherapy data are critical to evaluate the quality of care for patients with colorectal cancer who are treated with curative intent. In Canada, limitations in the availability and completeness of chemotherapy data exist in many administrative health databases. In this paper, we discuss these limitations and present findings from a chart review in Nova Scotia that quantifies the completeness of chemotherapy capture in existing databases. The results demonstrate that even basic information on cancer treatment in administrative databases can be insufficient to perform the types of analyses that most decision-makers require for quality-of-care measurement.

  54. Construction of crystal structure prototype database: methods and applications

    NASA Astrophysics Data System (ADS)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
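
    The similarity assessment described, comparing structures by their interatomic distances, can be caricatured in a few lines: fingerprint each structure by its sorted distance list, then treat structures whose fingerprints agree within a tolerance as the same prototype. A toy sketch; the actual CSPD method and its hierarchical clustering are more elaborate:

        import itertools, math

        # Toy distance-based structure comparison: fingerprint = sorted list
        # of interatomic distances; same fingerprint (within tolerance) is
        # taken to mean same prototype.
        def fingerprint(coords):
            return sorted(math.dist(p, q)
                          for p, q in itertools.combinations(coords, 2))

        def similar(s1, s2, tol=0.1):
            f1, f2 = fingerprint(s1), fingerprint(s2)
            return len(f1) == len(f2) and all(
                abs(a - b) < tol for a, b in zip(f1, f2))

        tetra     = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        shifted   = [(5, 5, 5), (6, 5, 5), (5, 6, 5), (5, 5, 6)]  # same shape, translated
        stretched = [(0, 0, 0), (2, 0, 0), (0, 1, 0), (0, 0, 1)]  # different shape

        print(similar(tetra, shifted))    # True  -> same prototype
        print(similar(tetra, stretched))  # False -> different prototype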

  55. Construction of crystal structure prototype database: methods and applications.

    PubMed

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insight for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.

  56. Administrator Evaluation: Concepts, Methods, Cases in Higher Education.

    ERIC Educational Resources Information Center

    Farmer, Charles H.

    Designed for faculty and administration in higher education, the book describes concepts, methods, and case studies in the field of administrative assessment. The first section explores issues and perspectives in three chapters authored by Charles H. Farmer: "Why Evaluate Administrators?", "How Can Administrators be…

  57. Incidence of Appendicitis over Time: A Comparative Analysis of an Administrative Healthcare Database and a Pathology-Proven Appendicitis Registry

    PubMed Central

    Clement, Fiona; Zimmer, Scott; Dixon, Elijah; Ball, Chad G.; Heitman, Steven J.; Swain, Mark; Ghosh, Subrata

    2016-01-01

    Importance At the turn of the 21st century, studies evaluating the change in incidence of appendicitis over time have reported inconsistent findings. Objectives We compared the differences in the incidence of appendicitis derived from a pathology registry versus an administrative database in order to validate coding in administrative databases and establish temporal trends in the incidence of appendicitis. Design We conducted a population-based comparative cohort study to identify all individuals with appendicitis from 2000 to 2008. Setting & Participants Two population-based data sources were used to identify cases of appendicitis: 1) a pathology registry (n = 8,822); and 2) a hospital discharge abstract database (n = 10,453). Intervention & Main Outcome The administrative database was compared to the pathology registry for the following a priori analyses: 1) to calculate the positive predictive value (PPV) of administrative codes; 2) to compare the annual incidence of appendicitis; and 3) to assess differences in temporal trends. Temporal trends were assessed using a generalized linear model that assumed a Poisson distribution and reported as an annual percent change (APC) with 95% confidence intervals (CI). Analyses were stratified by perforated and non-perforated appendicitis. Results The administrative database (PPV = 83.0%) overestimated the incidence of appendicitis (100.3 per 100,000) when compared to the pathology registry (84.2 per 100,000). Codes for perforated appendicitis were not reliable (PPV = 52.4%) leading to overestimation in the incidence of perforated appendicitis in the administrative database (34.8 per 100,000) as compared to the pathology registry (19.4 per 100,000). The incidence of appendicitis significantly increased over time in both the administrative database (APC = 2.1%; 95% CI: 1.3, 2.8) and pathology registry (APC = 4.1; 95% CI: 3.1, 5.0). Conclusion & Relevance The administrative database overestimated the incidence of appendicitis
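
    The annual percent change above comes from a Poisson model with calendar year as a covariate: APC = (e^b - 1) x 100, where b is the fitted year coefficient. A minimal sketch with invented yearly rates, using a log-linear least-squares fit as a simple stand-in for the full Poisson regression:

        import math

        # APC from a log-linear trend: fit log(rate) = a + b*year and report
        # APC = (exp(b) - 1) * 100. Rates are invented; a least-squares fit
        # on log rates stands in for the study's Poisson regression.
        years = [2000, 2002, 2004, 2006, 2008]
        rates = [84.0, 88.1, 92.4, 96.9, 101.6]   # cases per 100,000

        x = [y - years[0] for y in years]
        ly = [math.log(r) for r in rates]
        n = len(x)
        b = (n * sum(u * v for u, v in zip(x, ly)) - sum(x) * sum(ly)) / \
            (n * sum(u * u for u in x) - sum(x) ** 2)
        print(f"APC = {(math.exp(b) - 1) * 100:.1f}% per year")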

  18. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS STATE ORGANIZATION AND GENERAL ADMINISTRATION Single State Agency § 431.15 Methods of administration. A State...

  19. Using linked electronic data to validate algorithms for health outcomes in administrative databases.

    PubMed

    Lee, Wan-Ju; Lee, Todd A; Pickard, Alan Simon; Shoaibi, Azadeh; Schumock, Glen T

    2015-08-01

    The validity of algorithms used to identify health outcomes in claims-based and administrative data is critical to the reliability of findings from observational studies. The traditional approach to algorithm validation, using medical charts, is expensive and time-consuming. An alternative method is to link the claims data to an external, electronic data source that contains information allowing confirmation of the event of interest. In this paper, we describe this external linkage validation method and delineate important considerations to assess the feasibility and appropriateness of validating health outcomes using this approach. This framework can help investigators decide whether to pursue an external linkage validation method for identifying health outcomes in administrative/claims data.
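
    In practical terms, the linkage validation described here is a join between claims-identified cases and the external electronic source, with the external record serving as the reference standard for confirming events. A minimal pandas sketch (identifiers and column names are hypothetical):

        import pandas as pd

        # Hypothetical claims-identified cases and linked external records.
        claims = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                               "algorithm_positive": [True, True, True, True]})
        external = pd.DataFrame({"patient_id": [1, 2, 3, 4],
                                 "event_confirmed": [True, True, False, True]})

        linked = claims.merge(external, on="patient_id", how="inner")
        ppv = linked.loc[linked["algorithm_positive"], "event_confirmed"].mean()
        print(f"PPV of the claims algorithm against the external source: {ppv:.2f}")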

  20. A Proposed Framework of Test Administration Methods

    ERIC Educational Resources Information Center

    Thompson, Nathan A.

    2008-01-01

    The widespread application of personal computers to educational and psychological testing has substantially increased the number of test administration methodologies available to testing programs. Many of these methods are referred to by their acronyms, such as CAT, CBT, CCT, and LOFT. The similarities between the acronyms and the methods…

  1. Present epidemiology of chronic subdural hematoma in Japan: analysis of 63,358 cases recorded in a national administrative database.

    PubMed

    Toi, Hiroyuki; Kinoshita, Keita; Hirai, Satoshi; Takai, Hiroki; Hara, Keijiro; Matsushita, Nobuhisa; Matsubara, Shunji; Otani, Makoto; Muramatsu, Keiji; Matsuda, Shinya; Fushimi, Kiyohide; Uno, Masaaki

    2017-02-03

    OBJECTIVE Aging of the population may lead to epidemiological changes with respect to chronic subdural hematoma (CSDH). The objectives of this study were to elucidate the current epidemiology and changing trends of CSDH in Japan. The authors analyzed patient information based on reports using a Japanese administrative database associated with the diagnosis procedure combination (DPC) system. METHODS This study included patients with newly diagnosed CSDH who were treated in hospitals participating in the DPC system. The authors collected data from the administrative database on the following clinical and demographic characteristics: patient age, sex, and level of consciousness on admission; treatment procedure; and outcome at discharge. RESULTS A total of 63,358 patients with newly diagnosed CSDH treated in 1750 DPC-participating hospitals were included in this study. Analysis according to patient age showed that the most common age range for these patients was the 9th decade of life (in their 80s). More than half of patients 70 years old or older presented with some kind of disturbance of consciousness. Functional outcomes at discharge were good in 71.6% (modified Rankin Scale [mRS] score 0-2) of cases and poor in 28.4% (mRS score 3-6). The percentage of poor outcomes tended to be higher in elderly patients. Approximately 40% of patients 90 years old or older could not be discharged to home. The overall recurrence rate for CSDH was 13.1%. CONCLUSIONS This study shows a chronological change in the age distribution of CSDH among Japanese patients, which may be affecting the prognosis of this condition. In the aging population of contemporary Japan, patients in their 80s were affected more often than patients in other age categories, and approximately 30% of patients with CSDH required some help at discharge. CSDH thus may no longer have as good a prognosis as had been thought.

  2. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC ASSISTANCE PROGRAMS §...

  3. Benchmarks for measurement of duplicate detection methods in nucleotide databases.

    PubMed

    Chen, Qingyu; Zobel, Justin; Verspoor, Karin

    2017-01-08

    Duplication of information in databases is a major data quality challenge. The presence of duplicates, implying either redundancy or inconsistency, can have a range of impacts on the quality of analyses that use the data. To provide a sound basis for research on this issue in databases of nucleotide sequences, we have developed new, large-scale validated collections of duplicates, which can be used to test the effectiveness of duplicate detection methods. Previous collections were either designed primarily to test efficiency, or contained only a limited number of duplicates of limited kinds. To date, duplicate detection methods have been evaluated on separate, inconsistent benchmarks, leading to results that cannot be compared and, due to limitations of the benchmarks, of questionable generality. In this study, we present three nucleotide sequence database benchmarks, based on information drawn from a range of resources, including information derived from mapping to two data sections within the UniProt Knowledgebase (UniProtKB), UniProtKB/Swiss-Prot and UniProtKB/TrEMBL. Each benchmark has distinct characteristics. We quantify these characteristics and argue for their complementary value in evaluation. The benchmarks collectively contain a vast number of validated biological duplicates; the largest has nearly half a billion duplicate pairs (although this is probably only a tiny fraction of the total that is present). They are also the first benchmarks targeting the primary nucleotide databases. The records include the 21 most heavily studied organisms in molecular biology research. Our quantitative analysis shows that duplicates in the different benchmarks, and in different organisms, have different characteristics. It is thus unreliable to evaluate duplicate detection methods against any single benchmark. For example, the benchmark derived from UniProtKB/Swiss-Prot mappings identifies more diverse types of duplicates, showing the importance of expert curation, but

  4. Enhancing Clinical Content and Race/Ethnicity Data in Statewide Hospital Administrative Databases: Obstacles Encountered, Strategies Adopted, and Lessons Learned

    PubMed Central

    Pine, Michael; Kowlessar, Niranjana M; Salemi, Jason L; Miyamura, Jill; Zingmond, David S; Katz, Nicole E; Schindler, Joe

    2015-01-01

    Objectives Eight grant teams used Agency for Healthcare Research and Quality infrastructure development research grants to enhance the clinical content of and improve race/ethnicity identifiers in statewide all-payer hospital administrative databases. Principal Findings Grantees faced common challenges, including recruiting data partners and ensuring their continued effective participation, acquiring and validating the accuracy and utility of new data elements, and linking data from multiple sources to create internally consistent enhanced administrative databases. Successful strategies to overcome these challenges included aggressively engaging with providers of critical sources of data, emphasizing potential benefits to participants, revising requirements to lessen burdens associated with participation, maintaining continuous communication with participants, being flexible when responding to participants’ difficulties in meeting program requirements, and paying scrupulous attention to preparing data specifications and creating and implementing protocols for data auditing, validation, cleaning, editing, and linking. In addition to common challenges, grantees also had to contend with unique challenges from local environmental factors that shaped the strategies they adopted. Conclusions The creation of enhanced administrative databases to support comparative effectiveness research is difficult, particularly in the face of numerous challenges with recruiting data partners such as competing demands on information technology resources. Excellent communication, flexibility, and attention to detail are essential ingredients in accomplishing this task. Additional research is needed to develop strategies for maintaining these databases when initial funding is exhausted. PMID:26119470

  5. Distance correlation methods for discovering associations in large astrophysical databases

    SciTech Connect

    Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P. E-mail: mrichards@astro.psu.edu

    2014-01-20

    High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
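
    For readers who want the estimator itself: the sample distance correlation double-centers the pairwise distance matrices of the two variables and correlates the results. The numpy sketch below is illustrative, not the authors' code, and demonstrates the property the abstract emphasizes: it detects a nonlinear association that the Pearson coefficient misses.

        import numpy as np

        def distance_correlation(x, y):
            """Sample distance correlation of two variables (1-D or n-D)."""
            x = np.asarray(x, float).reshape(len(x), -1)
            y = np.asarray(y, float).reshape(len(y), -1)
            a = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
            b = np.linalg.norm(y[:, None] - y[None, :], axis=-1)
            # Double-center each distance matrix.
            A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
            B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
            dcov2 = (A * B).mean()
            return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

        rng = np.random.default_rng(0)
        t = rng.uniform(-1, 1, 500)
        x, y = t, t**2  # nonlinear dependence, near-zero linear correlation
        print(f"Pearson r  = {np.corrcoef(x, y)[0, 1]:+.3f}")   # close to 0
        print(f"distance r = {distance_correlation(x, y):.3f}")  # clearly > 0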

  6. Workshop on laboratory protocol standards for the Molecular Methods Database.

    PubMed

    Klingström, Tomas; Soldatova, Larissa; Stevens, Robert; Roos, T Erik; Swertz, Morris A; Müller, Kristian M; Kalaš, Matúš; Lambrix, Patrick; Taussig, Michael J; Litton, Jan-Eric; Landegren, Ulf; Bongcam-Rudloff, Erik

    2013-01-25

    Management of data to produce scientific knowledge is a key challenge for biological research in the 21st century. Emerging high-throughput technologies allow life science researchers to produce big data at speeds and in amounts that were unthinkable just a few years ago. This places high demands on all aspects of the workflow: from data capture (including the experimental constraints of the experiment), analysis and preservation, to peer-reviewed publication of results. Failure to recognise the issues at each level can lead to serious conflicts and mistakes; research may then be compromised as a result of the publication of non-coherent protocols, or the misinterpretation of published data. In this report, we present the results from a workshop that was organised to create an ontological data-modelling framework for Laboratory Protocol Standards for the Molecular Methods Database (MolMeth). The workshop provided a set of short- and long-term goals for the MolMeth database, the most important being the decision to use the established EXACT description of biomedical ontologies as a starting point.

  7. System, Method and Apparatus for Discovering Phrases in a Database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase discovery is a method of identifying sequences of terms in a database. First, a selection of one or more relevant sequences of terms, such as relevant text, is provided. Next, several shorter sequences of terms, such as phrases, are extracted from the provided relevant sequences of terms. The extracted sequences of terms are then reduced through a culling process. A gathering process then emphasizes the more relevant of the extracted and culled sequences of terms and de-emphasizes the more generic of them. The gathering process can also include iteratively retrieving additional selections of relevant sequences (e.g., text), extracting and culling additional sequences of terms (e.g., phrases), emphasizing and de-emphasizing extracted and culled sequences of terms, and accumulating all gathered sequences of terms. The resulting gathered sequences of terms are then output.
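
    The extract-cull-gather loop translates naturally into an n-gram sketch. The culling rule below (keep phrases appearing in more than one relevant narrative) and the de-emphasis rule (drop phrases common to every narrative) are simplified stand-ins for the patented process, and the narratives are invented:

        from collections import Counter

        def extract_phrases(text, n=2):
            """Extract contiguous n-term sequences (phrases) from text."""
            terms = text.lower().split()
            return [" ".join(terms[i:i + n]) for i in range(len(terms) - n + 1)]

        relevant = [
            "engine fire during climb out",
            "engine fire warning during climb",
            "smoke in cabin during climb",
        ]

        # Extract, then cull: keep phrases occurring in more than one narrative.
        df = Counter(p for doc in relevant for p in set(extract_phrases(doc)))
        culled = {p: c for p, c in df.items() if c > 1}

        # Gather: de-emphasize generic phrases that appear in every narrative.
        gathered = {p: c for p, c in culled.items() if c < len(relevant)}
        print(sorted(gathered.items(), key=lambda kv: -kv[1]))  # [('engine fire', 2)]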

  8. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Bertino, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.

  9. A Dynamic Integration Method for Borderland Database using OSM data

    NASA Astrophysics Data System (ADS)

    Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.

    2013-11-01

    Spatial data are fundamental to borderland analyses of geography, natural resources, demography, politics, economy, and culture. Because the spatial region studied in borderland research usually covers the border regions of several neighboring countries, the data are difficult for any single research institution or government to acquire. Volunteered geographic information (VGI) has proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, so VGI is a reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource. However, the OSM data model differs greatly from traditional authoritative geographic information, so OSM data must be converted to the researcher's customized data model. Because the real world changes quickly, the converted data must also be kept up to date. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file is presented, and a change-only information file in the designed format is produced automatically. Based on the rules and algorithms described above, we implemented automatic (or semiautomatic) integration and updating of the borderland database. The developed system was intensively tested.
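
    The change-selection step, pulling only the objects inside the borderland research area out of a daily osmChange diff, might look like the sketch below. The bounding box is invented and the embedded XML is a minimal stand-in for a real diff file from planet.openstreetmap.org:

        import xml.etree.ElementTree as ET

        OSC = """<osmChange version="0.6">
          <modify><node id="101" lat="22.5" lon="104.0"/></modify>
          <create><node id="102" lat="50.0" lon="8.0"/></create>
          <delete><node id="103" lat="23.1" lon="105.7"/></delete>
        </osmChange>"""

        # Hypothetical research-area bounding box (lon/lat).
        MIN_LON, MIN_LAT, MAX_LON, MAX_LAT = 102.0, 21.0, 108.0, 24.0

        def changed_nodes_in_area(root):
            """Yield (action, node id) for node changes inside the study area."""
            for action in root:  # <create>, <modify>, <delete> blocks
                for node in action.findall("node"):
                    lon, lat = node.get("lon"), node.get("lat")
                    if lon is None or lat is None:
                        continue  # some entries omit coordinates
                    if MIN_LON <= float(lon) <= MAX_LON and MIN_LAT <= float(lat) <= MAX_LAT:
                        yield action.tag, node.get("id")

        for tag, node_id in changed_nodes_in_area(ET.fromstring(OSC)):
            print(tag, node_id)  # -> modify 101, delete 103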

  10. Validity of ICD-9-CM codes for breast, lung and colorectal cancers in three Italian administrative healthcare databases: a diagnostic accuracy study protocol

    PubMed Central

    Abraha, Iosief; Serraino, Diego; Giovannini, Gianni; Stracci, Fabrizio; Casucci, Paola; Alessandrini, Giuliana; Bidoli, Ettore; Chiari, Rita; Cirocchi, Roberto; De Giorgi, Marcello; Franchini, David; Vitale, Maria Francesca; Fusco, Mario; Montedori, Alessandro

    2016-01-01

    Introduction Administrative healthcare databases are useful tools to study healthcare outcomes and to monitor the health status of a population. Patients with cancer can be identified through disease-specific codes, prescriptions and physician claims, but prior validation is required to achieve an accurate case definition. The objective of this protocol is to assess the accuracy of International Classification of Diseases Ninth Revision—Clinical Modification (ICD-9-CM) codes for breast, lung and colorectal cancers in identifying patients diagnosed with the relative disease in three Italian administrative databases. Methods and analysis Data from the administrative databases of Umbria Region (910 000 residents), Local Health Unit 3 of Napoli (1 170 000 residents) and Friuli-Venezia Giulia Region (1 227 000 residents) will be considered. In each administrative database, patients with the first occurrence of diagnosis of breast, lung or colorectal cancer between 2012 and 2014 will be identified using the following groups of ICD-9-CM codes in primary position: (1) 233.0 and (2) 174.x for breast cancer; (3) 162.x for lung cancer; (4) 153.x for colon cancer and (5) 154.0–154.1 and 154.8 for rectal cancer. Only incident cases will be considered, that is, excluding cases that have the same diagnosis in the 5 years (2007–2011) before the period of interest. A random sample of cases and non-cases will be selected from each administrative database and the corresponding medical charts will be assessed for validation by pairs of trained, independent reviewers. Case ascertainment within the medical charts will be based on (1) the presence of a primary nodular lesion in the breast, lung or colon–rectum, documented with imaging or endoscopy and (2) a cytological or histological documentation of cancer from a primary or metastatic site. Sensitivity and specificity with 95% CIs will be calculated. Dissemination Study results will be disseminated widely through
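
    Operationally, the incident-case rule is a filter over a claims table: assuming the extract spans 2007-2014, taking each patient's first-ever qualifying code and requiring it to fall in 2012-2014 enforces both the study window and the five-year washout at once. A pandas sketch with hypothetical columns, for the breast cancer codes only:

        import pandas as pd

        claims = pd.DataFrame({
            "patient_id": [1, 1, 2, 3],
            "icd9": ["174.9", "174.9", "162.2", "174.5"],
            "date": pd.to_datetime(["2009-05-01", "2013-02-10",
                                    "2012-07-20", "2014-03-15"]),
        })

        # Breast cancer definition: ICD-9-CM 233.0 or 174.x in primary position.
        breast = claims[claims["icd9"].str.match(r"233\.0|174\.")]
        first = breast.sort_values("date").groupby("patient_id").first()

        # Incident cases: the first-ever code falls inside the 2012-2014 window,
        # which excludes patient 1 (already coded during the 2007-2011 washout).
        incident = first[first["date"].between("2012-01-01", "2014-12-31")]
        print(incident)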

  11. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    PubMed

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as those available for Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging, or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best-performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) for the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB(single-word) = NB(bi-gram) = SVM (accepting a machine code only when all three agree) had very high performance (0.93 overall sensitivity/positive predictive value) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as we
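
    The prediction-strength filtering strategy can be sketched in a few lines of scikit-learn: accept a machine-assigned code only when the classifier's top predicted probability clears a cutoff chosen so that the weakest 30% of narratives go to human coders. The narratives and labels below are toy stand-ins for the workers compensation data, and the filter is applied in-sample purely for illustration:

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        narratives = ["slipped on wet floor and fell", "cut finger on sharp blade",
                      "fell from ladder while painting", "struck by falling box",
                      "knife slipped and cut hand", "fell down stairs carrying load"]
        events = ["fall", "cut", "fall", "struck", "cut", "fall"]

        X = TfidfVectorizer().fit_transform(narratives)
        clf = LogisticRegression(max_iter=1000).fit(X, events)

        # Machine-assign codes only where prediction strength is high; route
        # the weakest 30% of narratives to manual review.
        strength = clf.predict_proba(X).max(axis=1)
        cutoff = np.quantile(strength, 0.30)
        for text, p, label in zip(narratives, strength, clf.predict(X)):
            route = "MANUAL REVIEW" if p <= cutoff else f"machine code: {label}"
            print(f"{p:.2f}  {route:22s} <- {text!r}")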

  12. Comparing the performance of propensity score methods in healthcare database studies with rare outcomes.

    PubMed

    Franklin, Jessica M; Eddings, Wesley; Austin, Peter C; Stuart, Elizabeth A; Schneeweiss, Sebastian

    2017-02-16

    Nonrandomized studies of treatments from electronic healthcare databases are critical for producing the evidence necessary to make informed treatment decisions, but often rely on comparing rates of events observed in a small number of patients. In addition, studies constructed from electronic healthcare databases, for example, administrative claims data, often adjust for many, possibly hundreds, of potential confounders. Despite the importance of maximizing efficiency when there are many confounders and few observed outcome events, there has been relatively little research on the relative performance of different propensity score methods in this context. In this paper, we compare a wide variety of propensity-based estimators of the marginal relative risk. In contrast to prior research that has focused on specific statistical methods in isolation from other analytic choices, we instead consider a method to be defined by the complete multistep process from propensity score modeling to final treatment effect estimation. Propensity score model estimation methods considered include ordinary logistic regression, Bayesian logistic regression, lasso, and boosted regression trees. Methods for utilizing the propensity score include pair matching, full matching, decile strata, fine strata, regression adjustment using one or two nonlinear splines, inverse propensity weighting, and matching weights. We evaluate methods via a 'plasmode' simulation study, which creates simulated datasets on the basis of a real cohort study of two treatments constructed from administrative claims data. Our results suggest that regression adjustment and matching weights, regardless of the propensity score model estimation method, provide lower bias and mean squared error in the context of rare binary outcomes. Copyright © 2017 John Wiley & Sons, Ltd.
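
    The matching weights named among the best performers downweight each patient by the probability of the treatment arm they did not receive: mw = min(e, 1 - e) / (Z * e + (1 - Z) * (1 - e)) for propensity score e and treatment indicator Z (Li and Greene's estimator). A simulated sketch, not the paper's plasmode data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 5000
        x = rng.normal(size=(n, 3))  # baseline confounders
        z = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.5, -0.4, 0.3]))))
        y = rng.binomial(1, 0.02 + 0.01 * z)  # rare binary outcome

        e = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]  # propensity
        mw = np.minimum(e, 1 - e) / np.where(z == 1, e, 1 - e)  # matching weights

        # Weighted outcome rates give the marginal relative risk (true RR = 1.5).
        r1 = np.average(y[z == 1], weights=mw[z == 1])
        r0 = np.average(y[z == 0], weights=mw[z == 0])
        print(f"matching-weighted RR = {r1 / r0:.2f}")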

  13. Validation of ICD-9 Code 787.2 for identification of individuals with dysphagia from administrative databases.

    PubMed

    González-Fernández, Marlís; Gardyn, Michael; Wyckoff, Shamolie; Ky, Paul K S; Palmer, Jeffrey B

    2009-12-01

    The aim of this study was to determine the accuracy of dysphagia coding using the International Classification of Diseases version 9 (ICD-9) code 787.2. We used the administrative database of a tertiary hospital and sequential videofluorographic swallowing study (VFSS) reports for patients admitted to the same hospital from January to June 2007. The VFSS reports were abstracted and the hospital's database was queried to abstract the coding associated with the admission during which the VFSS was performed. The VFSS and administrative data were merged for data analysis. Dysphagia was coded (using code 787.2) in 36 of 168 cases that had a VFSS. Of these, 34 had dysphagia diagnosed by VFSS (our gold standard) and one had a prior history of dysphagia. Code 787.2 had sensitivity of 22.8%, specificity of 89.5%, and positive and negative predictive values of 94.4% and 12.9%, respectively. Dysphagia was largely undercoded in this database, but when the code was present those individuals were very likely to be dysphagic. Selection of dysphagic cases using the ICD-9 code is appropriate for within-group comparisons. Absence of the code, however, is not a good predictor of the absence of dysphagia.
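
    The reported figures follow directly from the implied 2x2 table. Note that the 149 VFSS-positive patients below are back-calculated from the stated sensitivity, an inference rather than a number given in the abstract:

        # 2x2 table vs. the VFSS gold standard: 168 patients, 36 coded 787.2
        # (34 of them truly dysphagic), 149 dysphagic by VFSS (inferred).
        tp, fp = 34, 2
        fn, tn = 149 - 34, 168 - 149 - 2

        sensitivity = tp / (tp + fn)  # 34/149
        specificity = tn / (tn + fp)  # 17/19
        ppv = tp / (tp + fp)          # 34/36
        npv = tn / (tn + fn)          # 17/132
        print(f"Se={sensitivity:.1%}  Sp={specificity:.1%}  "
              f"PPV={ppv:.1%}  NPV={npv:.1%}")  # 22.8%, 89.5%, 94.4%, 12.9%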

  14. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6% and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high: 91.5% and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
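
    The classifier itself is small: two inputs per adjacent gene pair (intergenic distance and STRING association score) and one output (same operon or not). The scikit-learn sketch below uses invented training pairs; the authors' actual network, feature scaling, and training sets differ:

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical pairs: [intergenic distance (bp), STRING score];
        # label 1 = the genes belong to the same operon.
        X = np.array([[-20, 900], [5, 850], [30, 700], [250, 150],
                      [400, 100], [120, 300], [10, 950], [600, 50]])
        y = np.array([1, 1, 1, 0, 0, 0, 1, 0])

        net = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(4,),
                                          max_iter=5000, random_state=0))
        net.fit(X, y)

        # Score a new pair: 15 bp apart with a strong functional association.
        print(net.predict([[15, 800]]), net.predict_proba([[15, 800]]).round(2))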

  15. A European Flood Database: facilitating comprehensive flood research beyond administrative boundaries

    NASA Astrophysics Data System (ADS)

    Hall, J.; Arheimer, B.; Aronica, G. T.; Bilibashi, A.; Boháč, M.; Bonacci, O.; Borga, M.; Burlando, P.; Castellarin, A.; Chirico, G. B.; Claps, P.; Fiala, K.; Gaál, L.; Gorbachova, L.; Gül, A.; Hannaford, J.; Kiss, A.; Kjeldsen, T.; Kohnová, S.; Koskela, J. J.; Macdonald, N.; Mavrova-Guirguinova, M.; Ledvinka, O.; Mediero, L.; Merz, B.; Merz, R.; Molnar, P.; Montanari, A.; Osuch, M.; Parajka, J.; Perdigão, R. A. P.; Radevski, I.; Renard, B.; Rogger, M.; Salinas, J. L.; Sauquet, E.; Šraj, M.; Szolgay, J.; Viglione, A.; Volpi, E.; Wilson, D.; Zaimi, K.; Blöschl, G.

    2015-06-01

    The current work addresses one of the key building blocks towards an improved understanding of flood processes and associated changes in flood characteristics and regimes in Europe: the development of a comprehensive, extensive European flood database. The presented work results from ongoing cross-border research collaborations initiated with data collection and joint interpretation in mind. A detailed account of the current state, characteristics and spatial and temporal coverage of the European Flood Database, is presented. At this stage, the hydrological data collection is still growing and consists at this time of annual maximum and daily mean discharge series, from over 7000 hydrometric stations of various data series lengths. Moreover, the database currently comprises data from over 50 different data sources. The time series have been obtained from different national and regional data sources in a collaborative effort of a joint European flood research agreement based on the exchange of data, models and expertise, and from existing international data collections and open source websites. These ongoing efforts are contributing to advancing the understanding of regional flood processes beyond individual country boundaries and to a more coherent flood research in Europe.

  16. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.

  17. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 441.105 Section 441.105 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: REQUIREMENTS AND LIMITS APPLICABLE TO SPECIFIC SERVICES Medicaid for Individuals Age 65 or Over...

  18. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data are cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  19. The Saccharomyces Genome Database: Advanced Searching Methods and Data Mining.

    PubMed

    Cherry, J Michael

    2015-12-02

    At the core of the Saccharomyces Genome Database (SGD) are chromosomal features that encode a product. These include protein-coding genes and major noncoding RNA genes, such as tRNA and rRNA genes. The basic entry point into SGD is a gene or open-reading frame name that leads directly to the locus summary information page. A keyword describing function, phenotype, selective condition, or text from abstracts will also provide a way into the SGD. A DNA or protein sequence can be used to identify a gene or a chromosomal region using BLAST. Protein and DNA sequence identifiers, PubMed and NCBI IDs, author names, and function terms are also valid entry points. The information in SGD has been gathered and is maintained by a group of scientific biocurators and software developers who are devoted to providing researchers with up-to-date information from the published literature, connections to all the major research resources, and tools that allow the data to be explored. Not all of the collected information can be represented or summarized for every possible question; therefore, it is necessary to be able to search the structured data in the database. This protocol describes the YeastMine tool, which provides an advanced search capability via an interactive tool. The SGD also archives results from microarray expression experiments, and a strategy designed to explore these data using the SPELL (Serial Pattern of Expression Levels Locator) tool is provided.

  20. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database instance segment version 8.1.0.0 for solaris 7.

    SciTech Connect

    Dritz, K.

    2002-03-06

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Instance Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to change the password for the SYSTEM account of the database instance after the instance is created, and it discusses the creation of a suitable database instance for ELIST by means other than the installation of the segment. The latter subject is covered in more depth than its introductory discussion in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. By the same token, the need to create a database instance for ELIST by means other than the installation of the segment is expected to be the exception, rather than the rule. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it.

  1. Computer systems and methods for the query and visualization of multidimensional database

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2010-05-11

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
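
    Stripped of patent language, the claim describes querying tuples according to a specification and routing each subset of tuples to the pane addressed by its row and column field values. A toy sketch of that flow (schema and data invented):

        import sqlite3
        from collections import defaultdict

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
        con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                        [("East", 2009, 10.0), ("East", 2010, 12.5),
                         ("West", 2009, 8.0), ("West", 2010, 9.5)])

        # Specification: rows = region, columns = year, so each (region, year)
        # combination addresses one pane of the visual table.
        panes = defaultdict(list)
        for region, year, amount in con.execute("SELECT * FROM sales"):
            panes[(region, year)].append(amount)

        for key, tuples in sorted(panes.items()):
            print(key, "->", tuples)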

  2. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2006-08-08

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.

  3. Data-Based Decision-Making: Developing a Method for Capturing Teachers' Understanding of CBM Graphs

    ERIC Educational Resources Information Center

    Espin, Christine A.; Wayman, Miya Miura; Deno, Stanley L.; McMaster, Kristen L.; de Rooij, Mark

    2017-01-01

    In this special issue, we explore the decision-making aspect of "data-based decision-making". The articles in the issue address a wide range of research questions, designs, methods, and analyses, but all focus on data-based decision-making for students with learning difficulties. In this first article, we introduce the topic of…

  4. Method and system for data clustering for very large databases

    NASA Technical Reports Server (NTRS)

    Zhang, Tian (Inventor); Ramakrishnan, Raghu (Inventor); Livny, Miron (Inventor)

    1998-01-01

    Multi-dimensional data contained in very large databases is efficiently and accurately clustered to determine patterns therein and extract useful information from such patterns. Conventional computer processors with limited memory capacity and conventional operating speeds may be used, allowing massive data sets to be processed in a reasonable time and with reasonable computer resources. The clustering process is organized using a clustering feature tree structure wherein each clustering feature comprises the number of data points in the cluster, the linear sum of the data points in the cluster, and the square sum of the data points in the cluster. A dense region of data points is treated collectively as a single cluster, and points in sparsely occupied regions can be treated as outliers and removed from the clustering feature tree. The clustering can be carried out continuously with new data points being received and processed, and with the clustering feature tree being restructured as necessary to accommodate the information from the newly received data points.
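
    The clustering feature named here, the triple (N, linear sum, square sum), is additive, which is what lets the tree absorb new points incrementally without revisiting raw data; the centroid and an RMS radius fall out of the triple directly. A minimal sketch of this bookkeeping (the textbook formulation, not the patented implementation):

        import numpy as np

        class ClusteringFeature:
            """CF = (N, linear sum LS, square sum SS) of a cluster's points."""
            def __init__(self, point):
                p = np.asarray(point, float)
                self.n, self.ls, self.ss = 1, p.copy(), float(p @ p)

            def add(self, point):
                p = np.asarray(point, float)
                self.n += 1
                self.ls += p
                self.ss += float(p @ p)

            @property
            def centroid(self):
                return self.ls / self.n

            @property
            def radius(self):
                # RMS distance of member points from the centroid:
                # R^2 = SS/N - ||LS/N||^2
                c = self.centroid
                return float(np.sqrt(max(self.ss / self.n - c @ c, 0.0)))

        cf = ClusteringFeature([1.0, 2.0])
        for p in ([1.2, 1.9], [0.9, 2.1], [1.1, 2.0]):
            cf.add(p)
        print(cf.n, cf.centroid, round(cf.radius, 3))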

  5. Development of the Veterans Healthcare Administration (VHA) Ophthalmic Surgical Outcome Database (OSOD) project and the role of ophthalmic nurse reviewers.

    PubMed

    Lara-Smalling, Agueda; Cakiner-Egilmez, Tulay; Miller, Dawn; Redshirt, Ella; Williams, Dale

    2011-01-01

    Currently, ophthalmic surgical cases are not included in the Veterans Administration Surgical Quality Improvement Project data collection. Furthermore, there is no comprehensive protocol in the health system for prospectively measuring outcomes for eye surgery in terms of safety and quality. There are 400,000 operative cases in the system per year. Of those, 48,000 (12%) are ophthalmic surgical cases, with 85% (41,000) of those being cataract cases. The Ophthalmic Surgical Outcome Database Pilot Project was developed to incorporate ophthalmology into VASQIP, thus evaluating risk factors and improving cataract surgical outcomes. Nurse reviewers facilitate the monitoring and measuring of these outcomes. Since its inception in 1778, the Veterans Administration (VA) Health System has provided comprehensive healthcare to millions of deserving veterans throughout the U.S. and its territories. Historically, the quality of healthcare provided by the VA has been the main focus of discussion because it did not meet a standard of care comparable to that of the private sector. Information regarding quality of healthcare services and outcomes data had been unavailable until 1986, when Congress mandated the VA to compare its surgical outcomes to those of the private sector (PL-99-166).[1] Risk adjustment of VA surgical outcomes began in 1987 with the Continuous Improvement in Cardiac Surgery Program (CICSP), in which cardiac surgical outcomes were reported and evaluated.[2] Between 1991 and 1993, the National VA Surgical Risk Study (NVASRS) initiated a validated risk-adjustment model for predicting surgical outcomes and comparative assessment of the quality of surgical care in 44 VA medical centers.[3] The success of NVASRS encouraged the VA to establish an ongoing program for monitoring and improving the quality of surgical care, thus developing the National Surgical Quality Improvement Program (NSQIP) in 1994.[4] According to a prospective study conducted between 1991 and 1997 in 123

  6. Methods for Data-based Delineation of Spatial Regions

    SciTech Connect

    Wilson, John E.

    2012-10-01

    In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are “clumped” together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
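
    Method B in particular reduces to finding connected groups of points under a distance threshold and wrapping each group in a polygon. One plausible implementation uses a k-d tree for the proximity graph and a convex hull for the polygon; the threshold and sample points below are invented, and VSP's actual algorithm may differ:

        import numpy as np
        from scipy.spatial import cKDTree, ConvexHull
        from scipy.sparse.csgraph import connected_components

        rng = np.random.default_rng(3)
        pts = np.vstack([rng.normal((0, 0), 0.3, (20, 2)),
                         rng.normal((5, 5), 0.3, (20, 2))])

        # Points within distance d of each other form one group (connected
        # components of the pairwise proximity graph).
        d = 1.0
        tree = cKDTree(pts)
        graph = tree.sparse_distance_matrix(tree, max_distance=d)
        n_groups, labels = connected_components(graph, directed=False)

        for g in range(n_groups):
            members = pts[labels == g]
            if len(members) >= 3:  # a polygon needs at least three points
                hull = ConvexHull(members)
                print(f"group {g}: polygon with {len(hull.vertices)} vertices")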

  7. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research, an evaluation was made of different methods for content-based image retrieval of logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we compared different retrieval methods. Two of these methods were available from commercial packages, QBIC and Imatch, in which the implementations of the content-based image retrieval methods are not exactly known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding-box and region-based shape methods. In addition, we tested the log-polar method that is available from our own research.
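
    None of the evaluated packages can be reproduced here, but the flavor of contour-shape comparison is easy to illustrate with Hu-moment matching in OpenCV, used below as a stand-in; it is not one of the methods tested in the paper:

        import cv2
        import numpy as np

        # Two synthetic binary "tablet logo" images: a disc and an ellipse.
        a = np.zeros((100, 100), np.uint8)
        b = np.zeros((100, 100), np.uint8)
        cv2.circle(a, (50, 50), 30, 255, -1)
        cv2.ellipse(b, (50, 50), (35, 22), 0, 0, 360, 255, -1)

        # matchShapes compares Hu-moment invariants of the two shapes;
        # smaller scores mean more similar outlines.
        score = cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)
        print(f"shape dissimilarity: {score:.4f}")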

  8. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  9. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  10. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  11. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  12. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  13. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest...

  14. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including...

  15. Methods for 17β-oestradiol administration to rats.

    PubMed

    Isaksson, Ida-Maria; Theodorsson, Annette; Theodorsson, Elvar; Strom, Jakob O

    2011-11-01

    Several studies indicate that the beneficial or harmful effects of oestrogens in stroke are dose-dependent. Rats are amongst the most frequently used animals in these studies, which calls for thoroughly validated methods for administering 17β-oestradiol to rats. In an earlier study we characterised three different administration methods for 17β-oestradiol over 42 days. The present study assesses the concentrations over a shorter time frame, with the addition of a novel peroral method. Female Sprague-Dawley rats were ovariectomised and administered 17β-oestradiol by subcutaneous injections, silastic capsules, pellets and orally (in the nut-cream Nutella(®)), respectively. One group received 17β-oestradiol by silastic capsules without previous washout time. Blood samples were obtained after 30 minutes, 1, 2, 4, 8, 12, 24, 48 and 168 hours and serum 17β-oestradiol (and oestrone sulphate in some samples) was subsequently analysed. For long-term characterisation, one group treated perorally was blood sampled after 2, 7, 14, 21, 28, 35 and 42 days. At sacrifice, uterine horns were weighed and subcutaneous tissue samples were taken for histological assessment. The pellet, silastic capsule and injection groups produced serum 17β-oestradiol concentrations that were initially several orders of magnitude higher than physiological levels, while the peroral groups had 17β-oestradiol levels that were within the physiological range during the entire experiment. The peroral method is a promising option for administering 17β-oestradiol if physiological levels or similarity to women's oral hormone therapy are desired. Uterine weights were found to be a very crude measure of oestrogen exposure.

  16. Circumstance of endoscopic and laparoscopic treatments for gastric cancer in Japan: A review of epidemiological studies using a national administrative database.

    PubMed

    Murata, Atsuhiko; Matsuda, Shinya

    2015-02-16

    Currently, endoscopic submucosal dissection (ESD) and laparoscopic gastrectomy (LG) have become widely accepted and increasingly play important roles in the treatment of gastric cancer. Data from an administrative database associated with the diagnosis procedure combination (DPC) system have revealed the current circumstances of ESD and LG in Japan. Some studies demonstrated that the medical costs and length of stay of patients receiving ESD for gastric cancer have been significantly reduced overall, although both were significantly higher in older patients. With respect to LG, recent reports have shown that it is a cost-beneficial treatment for patients compared with open gastrectomy, and that simultaneous LG and cholecystectomy is a safe procedure for patients with both gastric cancer and gallbladder stones. These epidemiological studies using the administrative database in the DPC system closely reflect the clinical circumstances of endoscopic and surgical treatment for gastric cancer in Japan. However, the DPC database does not contain detailed clinical data such as histological types and lesion sizes of gastric cancers. Linking the DPC database to another, more detailed clinical database may be vital for future research into endoscopic and laparoscopic treatments for gastric cancer.

  17. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations.

    PubMed

    Kim, Seung Won; Kim, Bae-Hwan

    2016-07-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials.

  18. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations

    PubMed Central

    Kim, Seung Won; Kim, Bae-Hwan

    2016-01-01

    Animal testing was used traditionally in the cosmetics industry to confirm product safety, but has begun to be banned; alternative methods to replace animal experiments are either in development, or are being validated, worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team; the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094

  19. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between features vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
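
    At its core, the method thresholds a similarity score between the incoming image's feature vector and its most similar stored neighbor to decide whether the new image adds information. A cosine-similarity sketch (the vectors and cutoff are invented, and the patent does not specify this particular measure):

        import numpy as np

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        stored = [np.array([0.9, 0.1, 0.3]), np.array([0.2, 0.8, 0.5])]
        incoming = np.array([0.88, 0.12, 0.31])

        # Visual similarity parameter: similarity to the closest stored image.
        similarity = max(cosine(incoming, s) for s in stored)
        THRESHOLD = 0.98  # assumed redundancy cutoff
        decision = "skip (redundant)" if similarity > THRESHOLD else "store"
        print(f"similarity = {similarity:.3f} -> {decision}")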

  20. Quantitative Methods for Administrative Decision Making in Junior Colleges.

    ERIC Educational Resources Information Center

    Gold, Benjamin Knox

    With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…

  1. GMOMETHODS: the European Union database of reference methods for GMO analysis.

    PubMed

    Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre

    2012-01-01

    In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with an European Union legislative act. The web application provides search capabilities to retrieve primers and probes sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis.

  2. Effectiveness of Four Methods of Handling Missing Data Using Samples from a National Database.

    ERIC Educational Resources Information Center

    Witta, E. Lea

    The effectiveness of four methods of handling missing data in reproducing the target sample covariance matrix and mean vector was tested using three levels of incomplete cases: 30%, 50%, and 70%. Data were selected from the National Education Longitudinal Study (NELS) database. Three levels of sample sizes (500, 1000, and 2000) were used. The…

  3. Development of a Publicly Available, Comprehensive Database of Fiber and Health Outcomes: Rationale and Methods

    PubMed Central

    Livingston, Kara A.; Chung, Mei; Sawicki, Caleigh M.; Lyle, Barbara J.; Wang, Ding Ding; Roberts, Susan B.; McKeown, Nicola M.

    2016-01-01

    Background Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criteria that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Objectives (1) Develop a database listing studies testing fiber and physiological health outcomes identified by experts at the Ninth Vahouny Conference; (2) Use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers to evaluate evidence linking specific fibers with physiological health outcomes, and identify missing information. Methods To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications then were reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. Conclusions A fiber database was created. This resource will reduce the unnecessary replication of effort in conducting systematic reviews by serving as both a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which this data can be extracted and updated. PMID:27348733

  4. Prevalence and Costs of Multimorbidity by Deprivation Levels in the Basque Country: A Population Based Study Using Health Administrative Databases

    PubMed Central

    Orueta, Juan F.; García-Álvarez, Arturo; García-Goñi, Manuel; Paolucci, Francesco; Nuño-Solinís, Roberto

    2014-01-01

    Background Multimorbidity is a major challenge for healthcare systems. However, its magnitude and impact on healthcare expenditures are still largely unknown. Objective To present an overview of the prevalence and costs of multimorbidity by socioeconomic level in the whole Basque population. Methods We developed a cross-sectional analysis that included all inhabitants of the Basque Country (N = 2,262,698). We utilized data from primary health care electronic medical records, hospital admissions, and outpatient care databases corresponding to a 4-year period. Multimorbidity was defined as the presence of two or more chronic diseases out of a list of 52 of the most important and common chronic conditions given in the literature. We also used socioeconomic and demographic variables such as age, sex, individual healthcare cost, and deprivation level. Predicted adjusted costs were obtained by log-gamma regression models. Results Multimorbidity of chronic diseases was found in 23.61% of the total Basque population and in 66.13% of those older than 65 years. Multimorbid patients account for 63.55% of total healthcare expenditures. The prevalence of multimorbidity is higher in the most deprived areas for all age and sex groups. The annual healthcare cost per patient for any chronic disease depends on the number of coexisting comorbidities, varying on average from €637 for the first pathology to €1,657 for the ninth. Conclusion Multimorbidity is very common in the Basque population, and its prevalence rises with age and with an unfavourable socioeconomic environment. The costs of care for chronic patients with several conditions cannot be described as the sum of the average costs of their individual pathologies; they usually increase dramatically with the number of comorbidities. Given the ageing population, multimorbidity and its consequences should be taken into account in healthcare policy, the organization of care, and medical research.

  5. Ecological Methods in the Study of Administrative Behavior.

    ERIC Educational Resources Information Center

    Scott, Myrtle; Eklund, Susan J.

    Qualitative/naturalistic inquiry intends to discover whatever naturally occurring order exists rather than to test various theories or conceptual frameworks held by the investigator. Naturalistic, ecological data are urgently needed concerning the behavior of educational administrators. Such data can considerably change the knowledge base of the…

  6. The Institute of Public Administration's Document Center: From Paper to Electronic Records--A Full Image Government Documents Database.

    ERIC Educational Resources Information Center

    Al-Zahrani, Rashed S.

    Since its establishment in 1960, the Institute of Public Administration (IPA) in Riyadh, Saudi Arabia has had responsibility for documenting Saudi administrative literature, the official publications of Saudi Arabia, and the literature of regional and international organizations through establishment of the Document Center in 1961. This paper…

  7. Development of a Cast Iron Fatigue Properties Database for use with Modern Design Methods

    SciTech Connect

    DeLa'O, James D.; Gundlach, Richard B.; Tartaglia, John M.

    2003-09-18

    A reliable and comprehensive database of design properties for cast iron is key to full and efficient utilization of this versatile family of high production-volume engineering materials. A database of strain-life fatigue properties and supporting data for a wide range of structural cast irons representing industry-standard quality was developed in this program. The database primarily covers ASTM/SAE standard structural grades of ADI, CGI, ductile iron, and gray iron, as well as an austempered gray iron. Twenty-two carefully chosen materials provided by commercial foundries were tested, and fifteen additional datasets were contributed by private industry. The test materials are principally distinguished on the basis of grade designation; most grades were tested in a 25 mm section size and in a single material condition common for the particular grade. Selected grades were tested in multiple section sizes and/or material conditions to delineate the properties associated with a range of materials for the given grade. The cyclic properties are presented in terms of the conventional strain-life formalism (e.g., SAE J1099). Additionally, cyclic properties for gray iron and CGI are presented in terms of the Downing Model, which was specifically developed to treat the unique stress-strain response associated with gray iron (and, to a lesser extent, with CGI). The test materials were fully characterized in terms of alloy composition, microstructure, and monotonic properties. The CD-ROM database presents the data at various levels of detail, including property summaries for each material, detailed data analyses for each specimen, and raw monotonic and cyclic stress-strain data. The CD-ROM database has been published by the American Foundry Society (AFS) as an AFS Research Publication entitled ''Development of a Cast Iron Fatigue Properties Database for Use in Modern Design Methods'' (ISBN 0-87433-267-2).
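
    The strain-life formalism cited above (SAE J1099) is the standard Basquin plus Coffin-Manson relation. A minimal Python sketch of the formula follows as a reading aid; the coefficients in the example are hypothetical illustrations, not values from the AFS database.

      def strain_amplitude(reversals, modulus, sigma_f, b, eps_f, c):
          """SAE J1099 strain-life relation:
          eps_a = (sigma_f' / modulus) * (2N)**b + eps_f' * (2N)**c,
          where `reversals` is 2N, the number of reversals to failure."""
          elastic = (sigma_f / modulus) * reversals ** b
          plastic = eps_f * reversals ** c
          return elastic + plastic

      # Hypothetical coefficients, for illustration only (not database values):
      print(strain_amplitude(reversals=2e5, modulus=170e3, sigma_f=900.0,
                             b=-0.08, eps_f=0.15, c=-0.55))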

  8. Use of Fibrates Monotherapy in People with Diabetes and High Cardiovascular Risk in Primary Care: A French Nationwide Cohort Study Based on National Administrative Databases

    PubMed Central

    Roussel, Ronan; Chaignot, Christophe; Weill, Alain; Travert, Florence; Hansel, Boris; Marre, Michel; Ricordeau, Philippe; Alla, François; Allemand, Hubert

    2015-01-01

    Background and Aim According to guidelines, diabetic patients with high cardiovascular risk should receive a statin. Despite this consensus, fibrate monotherapy is commonly used in this population. We assessed the frequency and clinical consequences of the use of fibrates for primary prevention in patients with diabetes and high cardiovascular risk. Design Retrospective cohort study based on nationwide data from the medical and administrative databases of the French national health insurance systems (07/01/08–12/31/09) with a follow-up of up to 30 months. Methods Lipid-lowering drug-naive diabetic patients initiating fibrate or statin monotherapy were identified. Patients at high cardiovascular risk were then selected: patients with a diagnosis of diabetes and hypertension, aged over 50 years (men) or 60 years (women), but with no history of cardiovascular events. The composite endpoint comprised myocardial infarction, stroke, amputation, or death. Results Of the 31,652 patients enrolled, 4,058 (12.8%) received a fibrate. Age- and gender-adjusted annual event rates were 2.42% (fibrates) and 2.21% (statins). The proportionality assumption required for the Cox model was not met for the fibrate/statin variable. A multivariate model including all predictors was therefore calculated by dividing data into two time periods, allowing hazard ratios to be calculated before (HR<540) and after 540 days (HR>540) of follow-up. Multivariate analyses showed that fibrates were associated with an increased risk of the endpoint after 540 days: HR<540 = 0.95 (95% CI: 0.78–1.16) and HR>540 = 1.73 (1.28–2.32). Conclusion Fibrate monotherapy is commonly prescribed in diabetic patients with high cardiovascular risk and is associated with poorer outcomes compared to statin therapy. PMID:26398765

  9. A Case for Cases: Using the Case Method in the Preparation of Administrators.

    ERIC Educational Resources Information Center

    Diamantes, Thomas

    This paper describes why the use of a modified case method is useful in teaching concepts of school administration to educators entering public school administration. The paper defines the case method and differentiates it from other techniques and purposes; offers a history of case-study methodology that explains why the method caught on in law…

  10. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

    There are several geochemical databases available on the Internet now. One of the main peculiarities of the stored geochemical information is that each sample carries geographical coordinates, yet as a rule the database software uses this spatial information only in user-interface search procedures. On the other side, GIS software (Geographical Information System software), for example the ARC/INFO software used to create and analyze special geological, geochemical, and geophysical e-maps, works directly with the geographical coordinates of samples. We joined the capabilities of GIS systems and a relational geochemical database through special software. Our geochemical information system was created at the Vernadsky Geological State Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have tested the system with geochemical data on oceanic rocks from the Atlantic and Pacific oceans, about 10,000 chemical analyses. The GIS content consists of e-map covers of the world globe. Parts of these maps are Atlantic Ocean covers: a gravity map (with a 2'' grid), ocean-bottom heat flow, altimetric maps, seismic activity, a tectonic map, and a geological map. Combining this content makes it possible to create new geochemical maps and to combine spatial analysis with numerical geochemical modeling of volcanic processes in an ocean segment. We have tested the information system using thick-client technology. The interface between the Arc/View GIS system and the database resides in a special sequence of multiple SQL queries. The result of these queries is a simple DBF file with geographical coordinates. This file is used at the moment of creating geochemical and other special e-maps for an oceanic region. For geophysical data we used a more complex method: from ARC/View we created a grid cover for the polygon spatial geophysical information.
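
    The thick-client pipeline described above — a sequence of SQL queries against the relational database producing a flat, coordinate-bearing table that the GIS then maps — can be sketched as follows. This is a minimal illustration under assumed names, not the museum's actual software: the samples/analyses schema is hypothetical, and a CSV file stands in for the DBF exchange format.

      import csv
      import sqlite3

      # Hypothetical schema: located samples plus their chemical analyses.
      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE samples  (sample_id TEXT PRIMARY KEY, lat REAL, lon REAL);
          CREATE TABLE analyses (sample_id TEXT, element TEXT, value REAL);
          INSERT INTO samples  VALUES ('AT-001', -0.92, -25.4), ('AT-002', 14.8, -45.0);
          INSERT INTO analyses VALUES ('AT-001', 'SiO2', 49.2), ('AT-002', 'SiO2', 50.7);
      """)

      # The multi-query sequence reduces here to one join that attaches
      # coordinates to each geochemical value, ready to plot as an e-map layer.
      rows = conn.execute("""
          SELECT s.sample_id, s.lat, s.lon, a.element, a.value
          FROM samples AS s JOIN analyses AS a USING (sample_id)
          WHERE a.element = 'SiO2'
      """).fetchall()

      # Export a flat attribute table (the original system wrote DBF; CSV here).
      with open("sio2_points.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["sample_id", "lat", "lon", "element", "value"])
          writer.writerows(rows)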

  11. DOE/MSU composite material fatigue database: Test methods, materials, and analysis

    SciTech Connect

    Mandell, J.F.; Samborsky, D.D.

    1997-12-01

    This report presents a detailed analysis of the results from fatigue studies of wind turbine blade composite materials carried out at Montana State University (MSU) over the last seven years. It is intended to be used in conjunction with the DOE/MSU Composite Materials Fatigue Database. The fatigue testing of composite materials requires the adaptation of standard test methods to the particular composite structure of concern. The stranded fabric E-glass reinforcement used by many blade manufacturers has required the development of several test modifications to obtain valid test data for materials with particular reinforcement details, over the required range of tensile and compressive loadings. Additionally, a novel testing approach to high frequency (100 Hz) testing for high cycle fatigue using minicoupons has been developed and validated. The database for standard coupon tests now includes over 4,100 data points for over 110 materials systems. The report analyzes the database for trends and transitions in static and fatigue behavior with various materials parameters. Parameters explored are reinforcement fabric architecture, fiber content, content of fibers oriented in the load direction, matrix material, and loading parameters (tension, compression, and reversed loading). Significant transitions from good fatigue resistance to poor fatigue resistance are evident in the range of materials currently used in many blades. A preliminary evaluation of knockdowns for selected structural details is also presented. The high frequency database provides a significant set of data for various loading conditions in the longitudinal and transverse directions of unidirectional composites out to 10⁸ cycles. The results are expressed in stress- and strain-based Goodman diagrams suitable for design. A discussion is provided to guide the user of the database in its application to blade design.
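
    The Goodman diagrams mentioned above relate allowable stress amplitude to mean stress. As a reading aid, here is the linear Goodman relation in a few lines of Python; the numbers are hypothetical and are not taken from the DOE/MSU database.

      def goodman_allowable_amplitude(mean_stress, fully_reversed_limit, ultimate_strength):
          """Linear Goodman relation: sigma_a = sigma_ar * (1 - sigma_m / sigma_u),
          with mean stress sigma_m, fully reversed allowable amplitude sigma_ar,
          and ultimate strength sigma_u (all in MPa)."""
          return fully_reversed_limit * (1.0 - mean_stress / ultimate_strength)

      # Hypothetical values for illustration only:
      print(goodman_allowable_amplitude(mean_stress=100.0,
                                        fully_reversed_limit=250.0,
                                        ultimate_strength=800.0))  # 218.75 MPa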

  12. Combining fuzzy logic and voting methods for matching pixel spectra with signature databases

    NASA Astrophysics Data System (ADS)

    Raeth, Peter G.; Pilati, Martin L.

    2002-08-01

    This paper discusses a method for searching a database of known material signatures to find the closest match with an unknown signature. This database search method combines fuzzy logic and voting methods to achieve a high level of classification accuracy with the signatures and data cubes tested. This paper discusses the method in detail, including background and test results. It makes reference to public literature concerning components used by the method but developed elsewhere. This paper results from a project whose main objective is to produce an easily integrated software tool that makes an accurate best guess as to the material(s) indicated by the signature of a pixel found to be interesting according to some analysis method, such as anomaly detection or scene characterization. Anomaly detection examines a spectral cube and determines which pixels are unusual relative to the majority background. Scene characterization finds pixels whose signatures are representative of the unique pixel groups. The current project fully automates the process of determining unknown pixels of interest, taking the signatures from the flagged pixels, searching a database of known signatures, and making a best guess as to the material(s) represented by each pixel's signature. The method ranks the possible materials by order of likelihood with the purpose of accounting for multiple materials existing in the same pixel. In this way it is possible to deliver multiple reportings when more than one material is closely matched within some threshold. This facilitates human analysis and decision-making for production purposes. The implementation facilitates rapid response to interactive analysis needs in support of strategic and tactical operational requirements in both the civil and defense sectors.
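
    The abstract does not spell out the exact fuzzy and voting rules, so the following Python sketch is one plausible reading rather than the authors' algorithm: each band casts a fuzzy vote that decays with the reflectance difference from a library signature, a signature's score is its mean vote, and every material scoring above a threshold is reported so that mixed pixels can return several candidates. All names and tolerances are hypothetical.

      import numpy as np

      def fuzzy_vote_match(pixel, library, tol=0.05, threshold=0.6):
          """Rank library signatures against one pixel spectrum. Per-band
          votes fall linearly from 1 to 0 as the absolute reflectance
          difference grows to `tol`."""
          scores = {}
          for name, ref in library.items():
              votes = np.clip(1.0 - np.abs(pixel - ref) / tol, 0.0, 1.0)
              scores[name] = float(votes.mean())
          ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
          return [(n, s) for n, s in ranked if s >= threshold]

      # Toy 4-band example with a hypothetical signature library:
      library = {"grass":    np.array([0.05, 0.08, 0.06, 0.45]),
                 "concrete": np.array([0.30, 0.32, 0.33, 0.35])}
      pixel = np.array([0.06, 0.08, 0.07, 0.43])
      print(fuzzy_vote_match(pixel, library))  # grass scores ~0.8; concrete filtered out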

  13. Extended likelihood ratio test-based methods for signal detection in a drug class with application to FDA's adverse event reporting system database.

    PubMed

    Zhao, Yueqin; Yi, Min; Tiwari, Ram C

    2016-05-02

    A likelihood ratio test, recently developed for the detection of signals of adverse events for a drug of interest in the FDA Adverse Event Reporting System database, is extended to detect signals of adverse events simultaneously for all the drugs in a drug class. The extended likelihood ratio test methods, based on the Poisson model (Ext-LRT) and the zero-inflated Poisson model (Ext-ZIP-LRT), are discussed and are analytically shown, like the likelihood ratio test method, to control the type-I error and false discovery rate. Simulation studies are performed to evaluate the performance characteristics of Ext-LRT and Ext-ZIP-LRT. The proposed methods are applied to the gadolinium drug class in the FAERS database. An in-house likelihood ratio test tool, incorporating the Ext-LRT methodology, is being developed at the Food and Drug Administration.
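
    The backbone of this family of methods is the per-cell log-likelihood ratio of observed to expected counts under a Poisson model, maximized over adverse-event cells; significance is then assessed against a Monte Carlo null distribution. Below is a simplified sketch of the statistic with toy numbers, not the FDA's in-house tool.

      import numpy as np

      def max_llr(n_ij, n_j, E_ij):
          """One-sided Poisson log-likelihood ratios for one drug:
          LLR_i = n_ij*log(n_ij/E_ij)
                  + (n_j - n_ij)*log((n_j - n_ij)/(n_j - E_ij)),
          set to 0 where n_ij <= E_ij; returns the maximum over cells."""
          n_ij = np.asarray(n_ij, dtype=float)
          E_ij = np.asarray(E_ij, dtype=float)
          llr = (n_ij * np.log(n_ij / E_ij)
                 + (n_j - n_ij) * np.log((n_j - n_ij) / (n_j - E_ij)))
          return float(np.where(n_ij > E_ij, llr, 0.0).max())

      # Toy example: 3 adverse events, 100 total reports for the drug.
      print(max_llr(n_ij=[30, 5, 2], n_j=100, E_ij=[12.0, 6.0, 2.5]))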

  14. Hospitalizations of Infants and Young Children with Down Syndrome: Evidence from Inpatient Person-Records from a Statewide Administrative Database

    ERIC Educational Resources Information Center

    So, S. A.; Urbano, R. C.; Hodapp, R. M.

    2007-01-01

    Background: Although individuals with Down syndrome are increasingly living into the adult years, infants and young children with the syndrome continue to be at increased risk for health problems. Using linked, statewide administrative hospital discharge records of all infants with Down syndrome born over a 3-year period, this study "follows…

  15. Discovery of novel mesangial cell proliferation inhibitors using a three-dimensional database searching method.

    PubMed

    Kurogi, Y; Miyata, K; Okamura, T; Hashimoto, K; Tsutsumi, K; Nasu, M; Moriyasu, M

    2001-07-05

    A three-dimensional pharmacophore model of mesangial cell (MC) proliferation inhibitors was generated from a training set of 4-(diethoxyphosphoryl)methyl-N-(3-phenyl-[1,2,4]thiadiazol-5-yl)benzamide, 2, and its derivatives using the Catalyst/HIPHOP software program. On the basis of the in vitro MC proliferation inhibitory activity, a pharmacophore model was generated with seven features consisting of two hydrophobic regions, two hydrophobic aromatic regions, and three hydrogen bond acceptors. Using this model as a three-dimensional query to search the Maybridge database, 41 structurally novel compounds were identified. Evaluation of MC proliferation inhibitory activity for the available samples of the 41 identified compounds showed over 50% inhibitory activity in the 100 nM range. Interestingly, the compounds newly identified by the 3D database searching method exhibited reduced inhibition of normal proximal tubular epithelial cell proliferation compared to the training set compounds.

  16. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    NASA Astrophysics Data System (ADS)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
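
    The two geometric measures described above can be prototyped compactly. The sketch below assumes the shapely library and uses brute force: the maximum vertex-to-vertex distance stands in for the exact minimum circumscribing circle diameter, and a grid search finds the interior point farthest from the boundary for the maximum inscribed circle. It is an illustration, not the authors' implementation.

      import itertools
      import numpy as np
      from shapely.geometry import Point, Polygon

      def representative_widths(poly, grid_step=0.1):
          """Return (circumscribing diameter, inscribed-circle diameter)."""
          coords = np.asarray(poly.exterior.coords)
          circum_d = max(np.linalg.norm(a - b)
                         for a, b in itertools.combinations(coords, 2))
          minx, miny, maxx, maxy = poly.bounds
          best_r = 0.0
          for x in np.arange(minx, maxx, grid_step):
              for y in np.arange(miny, maxy, grid_step):
                  p = Point(x, y)
                  if poly.contains(p):
                      best_r = max(best_r, p.distance(poly.exterior))
          return circum_d, 2.0 * best_r

      # Toy sidewalk segment, 10 m long and 2 m wide:
      sidewalk = Polygon([(0, 0), (10, 0), (10, 2), (0, 2)])
      print(representative_widths(sidewalk))  # roughly (10.2, 2.0)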

  17. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    NASA Astrophysics Data System (ADS)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-02-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.

  18. Development of Database Assisted Structure Identification (DASI) Methods for Nontargeted Metabolomics

    PubMed Central

    Menikarachchi, Lochana C.; Dubey, Ritvik; Hill, Dennis W.; Brush, Daniel N.; Grant, David F.

    2016-01-01

    Metabolite structure identification remains a significant challenge in nontargeted metabolomics research. One commonly used strategy relies on searching biochemical databases using exact mass. However, this approach fails when the database does not contain the unknown metabolite (i.e., for unknown-unknowns). For these cases, constrained structure generation with combinatorial structure generators provides a potential option. Here we evaluated structure generation constraints based on the specification of: (1) substructures required (i.e., seed structures); (2) substructures not allowed; and (3) filters to remove incorrect structures. Our approach (database assisted structure identification, DASI) used predictive models in MolFind to find candidate structures with chemical and physical properties similar to the unknown. These candidates were then used for seed structure generation using eight different structure generation algorithms. One algorithm was able to generate correct seed structures for 21/39 test compounds. Eleven of these seed structures were large enough to constrain the combinatorial structure generator to fewer than 100,000 structures. In 35/39 cases, at least one algorithm was able to generate a correct seed structure. The DASI method has several limitations and will require further experimental validation and optimization. At present, it seems most useful for identifying the structure of unknown-unknowns with molecular weights <200 Da. PMID:27258318

  19. Evaluation of a CFD Method for Aerodynamic Database Development using the Hyper-X Stack Configuration

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Engelund, Walter; Armand, Sasan; Bittner, Robert

    2004-01-01

    A computational fluid dynamic (CFD) study is performed on the Hyper-X (X-43A) Launch Vehicle stack configuration in support of the aerodynamic database generation in the transonic to hypersonic flow regime. The main aim of the study is the evaluation of a CFD method that can be used to support aerodynamic database development for similar future configurations. The CFD method uses the NASA Langley Research Center developed TetrUSS software, which is based on tetrahedral, unstructured grids. The Navier-Stokes computational method is first evaluated against a set of wind tunnel test data to gain confidence in the code's application to hypersonic Mach number flows. The evaluation includes comparison of the longitudinal stability derivatives on the complete stack configuration (which includes the X-43A/Hyper-X Research Vehicle, the launch vehicle and an adapter connecting the two), detailed surface pressure distributions at selected locations on the stack body and component (rudder, elevons) forces and moments. The CFD method is further used to predict the stack aerodynamic performance at flow conditions where no experimental data is available as well as for component loads for mechanical design and aero-elastic analyses. An excellent match between the computed and the test data over a range of flow conditions provides a computational tool that may be used for future similar hypersonic configurations with confidence.

  20. A novel method to handle the effect of uneven sampling effort in biodiversity databases.

    PubMed

    Pardo, Iker; Pata, María P; Gómez, Daniel; García, María B

    2013-01-01

    How reliable are results on the spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias, and the need for its quantification. Although a number of methods are available for that purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness, using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent capability to discriminate between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with the other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases to enhance the reliability of biodiversity analyses.
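
    The ROC evaluation used here is generic enough to sketch: given a per-cell completeness score from any of the methods and a known well/poorly sampled label (available in simulations), discrimination capability is the area under the ROC curve. A minimal illustration with scikit-learn and made-up scores follows; the numbers are not from the study.

      from sklearn.metrics import roc_auc_score

      # Hypothetical per-cell completeness scores from some estimator, and
      # simulated truth: 1 = well sampled, 0 = poorly sampled.
      scores = [0.92, 0.80, 0.75, 0.40, 0.33, 0.15]
      truth = [1, 1, 1, 0, 1, 0]

      # AUC of 1.0 is perfect discrimination; 0.5 is no better than chance.
      print(roc_auc_score(truth, scores))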

  1. Serotonin-Norepinephrine Reuptake Inhibitors and the Risk of AKI: A Cohort Study of Eight Administrative Databases and Meta-Analysis

    PubMed Central

    Renoux, Christel; Lix, Lisa M.; Patenaude, Valérie; Bresee, Lauren C.; Paterson, J. Michael; Lafrance, Jean-Philippe; Tamim, Hala; Mahmud, Salaheddin M.; Alsabbagh, Mhd. Wasem; Hemmelgarn, Brenda; Dormuth, Colin R.

    2015-01-01

    Background and objectives A safety signal regarding cases of AKI after exposure to serotonin-norepinephrine reuptake inhibitors (SNRIs) was identified by Health Canada. Therefore, this study assessed whether the use of SNRIs increases the risk of AKI compared with selective serotonin reuptake inhibitors (SSRIs) and examined the risk associated with each individual SNRI. Design, setting, participants, & measurements Multiple retrospective population-based cohort studies were conducted within eight administrative databases from Canada, the United States, and the United Kingdom between January 1997 and March 2010. Within each cohort, a nested case-control analysis was performed to estimate incidence rate ratios (RRs) of AKI associated with SNRIs compared with SSRIs using conditional logistic regression, with adjustment for high-dimensional propensity scores. The overall effect across sites was estimated using meta-analytic methods. Results There were 38,974 cases of AKI matched to 384,034 controls. Current use of SNRIs was not associated with a higher risk of AKI compared with SSRIs (fixed-effect RR, 0.97; 95% confidence interval [95% CI], 0.94 to 1.01). Current use of venlafaxine and desvenlafaxine considered together was not associated with a higher risk of AKI (RR, 0.96; 95% CI, 0.92 to 1.00). For current use of duloxetine, there was significant heterogeneity among site-specific estimates such that a random-effects meta-analysis was performed showing a 16% higher risk, although this risk was not statistically significant (RR, 1.16; 95% CI, 0.96 to 1.40). This result is compatible with residual confounding, because there was a substantial imbalance in the prevalence of diabetes between users of duloxetine and users of other SNRIs or SSRIs. After further adjustment by including diabetes as a covariate in the model along with propensity scores, the fixed-effect RR was 1.02 (95% CI, 0.95 to 1.10). Conclusions There is no evidence that use of SNRIs is associated with a

  2. Topical medication utilization and health resources consumption in adult patients affected by psoriasis: findings from the analysis of administrative databases of local health units

    PubMed Central

    Perrone, Valentina; Sangiorgi, Diego; Buda, Stefano; Degli Esposti, Luca

    2017-01-01

    Aim The objectives of this study were to: 1) analyze the drug utilization pattern among adult psoriasis patients who were newly prescribed a topical medication; and 2) assess their adherence to topical therapy and the possibility of switching to other strategies in the treatment process. Methods An observational retrospective analysis was conducted based on administrative databases of two Italian local health units. All adult subjects who were diagnosed with psoriasis or who were newly prescribed a topical medication, with at least one prescription between January 1, 2010, and December 31, 2014, were screened. Only patients who were "non-occasional users of topical drugs" (at least two prescriptions of topical drugs within a time span of 2 years) were considered for the first and second objectives of the analysis. The date of the first prescription of a topical agent was identified as the index date (ID), and patients were then followed for all the time available from the ID (follow-up period). Adherence to therapy was assessed on the basis of cycles of treatment covered in the 6 months before the end of the follow-up period. The mean healthcare costs in patients who switched to disease-modifying antirheumatic drugs (DMARDs) or biologics after the ID were evaluated. Results A total of 17,860 patients with psoriasis who were newly prescribed a topical medication were identified. Of these, 2,477 were identified as "non-occasional users of topical drugs", of whom 70.2% had a prescription for a topical fixed-combination regimen at the ID. Around 19% adhered to their medication, whereas 6% switched to other options of psoriasis treatment. A multivariable logistic regression model shows that patients on fixed-combination treatment were less likely to be non-adherent to treatment and less likely to switch to other treatments. The annual mean pharmaceutical costs were €567.70 and €10,606.10 for patients who switched to DMARDs and biologics, respectively.

  3. Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database.

    PubMed

    Davis, Allan Peter; Johnson, Robin J; Lennon-Hopkins, Kelley; Sciaky, Daniela; Rosenstein, Michael C; Wiegers, Thomas C; Mattingly, Carolyn J

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and manually curate a triad of chemical-gene, chemical-disease and gene-disease interactions. Typically, articles for CTD are selected using a chemical-centric approach by querying PubMed to retrieve a corpus containing the chemical of interest. Although this technique ensures adequate coverage of knowledge about the chemical (i.e. data completeness), it does not necessarily reflect the most current state of all toxicological research in the community at large (i.e. data currency). Keeping databases current with the most recent scientific results, as well as providing a rich historical background from legacy articles, is a challenging process. To address this issue of data currency, CTD designed and tested a journal-centric approach of curation to complement our chemical-centric method. We first identified priority journals based on defined criteria. Next, over 7 weeks, three biocurators reviewed 2425 articles from three consecutive years (2009-2011) of three targeted journals. From this corpus, 1252 articles contained relevant data for CTD and 52 752 interactions were manually curated. Here, we describe our journal selection process, two methods of document delivery for the biocurators and the analysis of the resulting curation metrics, including data currency, and both intra-journal and inter-journal comparisons of research topics. Based on our results, we expect that curation by select journals can (i) be easily incorporated into the curation pipeline to complement our chemical-centric approach; (ii) build content more evenly for chemicals, genes and diseases in CTD (rather than biasing data by chemicals-of-interest); (iii) reflect developing areas in environmental health and (iv) improve overall data currency for chemicals, genes and diseases. Database URL

  4. Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database

    PubMed Central

    Davis, Allan Peter; Johnson, Robin J.; Lennon-Hopkins, Kelley; Sciaky, Daniela; Rosenstein, Michael C.; Wiegers, Thomas C.; Mattingly, Carolyn J.

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and manually curate a triad of chemical–gene, chemical–disease and gene–disease interactions. Typically, articles for CTD are selected using a chemical-centric approach by querying PubMed to retrieve a corpus containing the chemical of interest. Although this technique ensures adequate coverage of knowledge about the chemical (i.e. data completeness), it does not necessarily reflect the most current state of all toxicological research in the community at large (i.e. data currency). Keeping databases current with the most recent scientific results, as well as providing a rich historical background from legacy articles, is a challenging process. To address this issue of data currency, CTD designed and tested a journal-centric approach of curation to complement our chemical-centric method. We first identified priority journals based on defined criteria. Next, over 7 weeks, three biocurators reviewed 2425 articles from three consecutive years (2009–2011) of three targeted journals. From this corpus, 1252 articles contained relevant data for CTD and 52 752 interactions were manually curated. Here, we describe our journal selection process, two methods of document delivery for the biocurators and the analysis of the resulting curation metrics, including data currency, and both intra-journal and inter-journal comparisons of research topics. Based on our results, we expect that curation by select journals can (i) be easily incorporated into the curation pipeline to complement our chemical-centric approach; (ii) build content more evenly for chemicals, genes and diseases in CTD (rather than biasing data by chemicals-of-interest); (iii) reflect developing areas in environmental health and (iv) improve overall data currency for chemicals, genes and diseases. Database

  5. 45 CFR 235.50 - State plan requirements for methods of personnel administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false State plan requirements for methods of personnel administration. 235.50 Section 235.50 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY... established and maintained in public agencies administering or supervising the administration of the...

  6. A Method to Calculate and Analyze Residents' Evaluations by Using a Microcomputer Data-Base Management System.

    ERIC Educational Resources Information Center

    Mills, Myron L.

    1988-01-01

    A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)

  7. Medical procedures and outcomes of Japanese patients with trisomy 18 or trisomy 13: analysis of a nationwide administrative database of hospitalized patients.

    PubMed

    Ishitsuka, Kazue; Matsui, Hiroki; Michihata, Nobuaki; Fushimi, Kiyohide; Nakamura, Tomoo; Yasunaga, Hideo

    2015-08-01

    The choices of aggressive treatment for trisomy 18 (T18) and trisomy 13 (T13) remain controversial. Here, we describe the current medical procedures and outcomes of patients with T18 and T13 from a nationwide administrative database of hospitalized patients in Japan. We used the database to identify eligible patients with T18 (n = 438) and T13 (n = 133) who were first admitted to one of 200 hospitals between July 2010 and March 2013. Patients were divided into admission at day <7 (early neonatal) and admission at day ≥7 (late neonatal and post neonatal) groups, and we described the medical intervention and status at discharge for each group. In the day <7 groups, surgical interventions were performed for 56 (19.9%) T18 patients and 22 (34.4%) T13 patients, including pulmonary artery banding, and procedures for esophageal atresia and omphalocele. None received intracardiac surgery. The rate of patients discharged to home was higher in the day ≥7 groups than the day <7 groups (T18: 72.6 vs. 38.8%; T13: 73.9 vs. 21.9%, respectively). Our data show that a substantial number of patients with trisomy received surgery and were then discharged home, but, of these, a considerable number required home medical care. This included home oxygen therapy, home mechanical ventilation, and tube feeding. These findings will be useful to clinicians or families who care for patients with T18 and T13.

  8. NPEST: a nonparametric method and a database for transcription start site prediction

    PubMed Central

    Tatarinova, Tatiana; Kryshchenko, Alona; Triska, Martin; Hassan, Mehedi; Murphy, Denis; Neely, Michael; Schumitzky, Alan

    2014-01-01

    In this paper we present NPEST, a novel tool for the analysis of expressed sequence tag (EST) distributions and transcription start site (TSS) prediction. This method estimates an unknown probability distribution of ESTs using a maximum likelihood (ML) approach, which is then used to predict positions of TSS. Accurate identification of TSS is an important genomics task, since the position of regulatory elements with respect to the TSS can have large effects on gene regulation, and the performance of promoter motif-finding methods depends on correct identification of TSSs. Our probabilistic approach expands recognition capabilities to multiple TSS per locus, which may be a useful tool to enhance the understanding of alternative splicing mechanisms. This paper presents analysis of simulated data as well as statistical analysis of promoter regions of the model dicot plant Arabidopsis thaliana. Using our statistical tool we analyzed 16,520 loci and developed a database of TSS, which is now publicly available at www.glacombio.net/NPEST. PMID:25197613

  9. NPEST: a nonparametric method and a database for transcription start site prediction.

    PubMed

    Tatarinova, Tatiana; Kryshchenko, Alona; Triska, Martin; Hassan, Mehedi; Murphy, Denis; Neely, Michael; Schumitzky, Alan

    2013-12-01

    In this paper we present NPEST, a novel tool for the analysis of expressed sequence tag (EST) distributions and transcription start site (TSS) prediction. This method estimates an unknown probability distribution of ESTs using a maximum likelihood (ML) approach, which is then used to predict positions of TSS. Accurate identification of TSS is an important genomics task, since the position of regulatory elements with respect to the TSS can have large effects on gene regulation, and the performance of promoter motif-finding methods depends on correct identification of TSSs. Our probabilistic approach expands recognition capabilities to multiple TSS per locus, which may be a useful tool to enhance the understanding of alternative splicing mechanisms. This paper presents analysis of simulated data as well as statistical analysis of promoter regions of the model dicot plant Arabidopsis thaliana. Using our statistical tool we analyzed 16,520 loci and developed a database of TSS, which is now publicly available at www.glacombio.net/NPEST.

  10. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
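
    Goal (1), keeping raster cells equal-sized by adjusting the projection with latitude, can be illustrated with the basic geometry involved: the ground width of one degree of longitude shrinks with cos(latitude), so an equal-ground-size grid must widen its longitudinal step accordingly. The sketch below shows that relationship only; it is not the authors' dynamic projection formula.

      import math

      def lon_step_for_equal_cells(lat_deg, lon_step_equator_deg):
          """Longitudinal step that keeps a cell's east-west ground extent
          roughly equal to its extent at the equator."""
          return lon_step_equator_deg / math.cos(math.radians(lat_deg))

      # A 0.5-degree cell at the equator needs about a 1.0-degree step at 60 N:
      print(lon_step_for_equal_cells(60.0, 0.5))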

  11. A Privacy-Preserved Analytical Method for eHealth Database with Minimized Information Loss

    PubMed Central

    Chen, Ya-Ling; Cheng, Bo-Chao; Chen, Hsueh-Lin; Lin, Chia-I; Liao, Guo-Tan; Hou, Bo-Yu; Hsu, Shih-Chun

    2012-01-01

    Digitizing medical information is an emerging trend that employs information and communication technology (ICT) to manage health records, diagnostic reports, and other medical data more effectively, in order to improve the overall quality of medical services. However, medical information is highly confidential and involves private information; even legitimate access to data raises privacy concerns. Medical records provide health information on an as-needed basis for diagnosis and treatment, and the information is also important for medical research and other health management applications. Traditional privacy risk management systems have focused on reducing re-identification risk, and they do not consider information loss. In addition, such systems cannot identify and isolate data that carries a high risk of privacy violations. This paper proposes the Hiatus Tailor (HT) system, which ensures low re-identification risk for medical records, while providing more authenticated information to database users and identifying high-risk data in the database for better system management. The experimental results demonstrate that the HT system achieves much lower information loss than traditional risk management methods, with the same risk of re-identification. PMID:22969273

  12. Supervised method to build an atlas database for multi-atlas segmentation-propagation

    NASA Astrophysics Data System (ADS)

    Shen, Kaikai; Bourgeat, Pierrick; Fripp, Jurgen; Mériaudeau, Fabrice; Ames, David; Ellis, Kathryn A.; Masters, Colin L.; Villemagne, Victor L.; Rowe, Christopher C.; Salvado, Olivier

    2010-03-01

    Multi-atlas based segmentation-propagation approaches have been shown to obtain accurate parcellation of brain structures. However, this approach requires a large number of manually delineated atlases, which are often not available. We propose a supervised method to build a population-specific atlas database, using the publicly available Internet Brain Segmentation Repository (IBSR). The set of atlases grows iteratively as new atlases are added, so that its segmentation capability may be enhanced in the multi-atlas based approach. Using a dataset of 210 MR images of elderly subjects (170 elderly controls, 40 with Alzheimer's disease) from the Australian Imaging, Biomarkers and Lifestyle (AIBL) study, 40 MR images were segmented to build a population-specific atlas database for the purpose of multi-atlas segmentation-propagation. The population-specific atlases were used to segment the elderly population of 210 MR images and were evaluated in terms of the agreement among the propagated labels. The agreement was measured using the entropy H of the probability image produced when fused by voting rule and the partial moment μ2 of the histogram. Compared with using IBSR atlases, the population-specific atlases obtained a higher agreement when dealing with images of elderly subjects.
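
    The entropy-based agreement measure is standard enough to sketch: assuming H is the Shannon entropy of the per-voxel vote fractions produced by label fusion, zero entropy means all propagated atlases agree and larger values mean dispute. A minimal numpy illustration:

      import numpy as np

      def voting_agreement_entropy(labels):
          """H = -sum_k p_k * log2(p_k), where p_k is the fraction of
          atlases voting for label k at this voxel."""
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      print(voting_agreement_entropy([3, 3, 3, 3]))  # 0.0, all atlases agree
      print(voting_agreement_entropy([3, 3, 5, 7]))  # 1.5, votes split 2/1/1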

  13. A data-based method to determine the regional impact of agriculture on fire seasonality

    NASA Astrophysics Data System (ADS)

    Magi, B. I.; Rabin, S.; Shevliakova, E.; Pacala, S.

    2012-04-01

    Anthropological work and studies of satellite-observed fire occurrence have shown that the timing of human burning practices in many regions does not correspond with what would be expected based on indices of fire weather and fuel load alone. To date, large-scale observed differences in fire seasonality between agricultural land (i.e., cropland or pasture) and non-agricultural land have not been fully quantified. This will be necessary if fire modules in the next generation of dynamic global vegetation models (DGVMs) are to take advantage of those models' ability to keep track of different types of land cover and land use. The work described in this paper compares observed fire seasonality on agricultural and non-agricultural land in 14 world regions, using a statistical method to separate burning on the two different land types. Active fire detections from the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) sensors and burned area estimates from the Global Fire Emissions Database (GFED, version 3.1) serve as observations of fire activity for the years 2000-2009. Global estimates of the areal extent of cropland and pasture are provided by the History Database of the Global Environment (HYDE) database (version 3). We use the TIMESAT analysis program, which is designed to estimate seasonality in remote-sensed data, to determine the length and timing of the fire season. We find quantitative differences in fire seasonality between agricultural and non-agricultural land in many regions. The agricultural fire season in northern hemisphere Africa, for example, begins 1-2 months before the fire season on non-agricultural land. Attributable to a preference for early-season burning on managed land, this pattern is not captured by current global fire models that do not consider land use. Qualitative differences in the number of peaks in fire occurrence are also apparent in several regions - for example, South America north of the equator shows two peaks in non

  14. Integration of first-principles methods and crystallographic database searches for new ferroelectrics: Strategies and explorations

    NASA Astrophysics Data System (ADS)

    Bennett, Joseph W.; Rabe, Karin M.

    2012-11-01

    In this concept paper, the development of strategies for the integration of first-principles methods with crystallographic database mining for the discovery and design of novel ferroelectric materials is discussed, drawing on the results and experience derived from exploratory investigations on three different systems: (1) the double perovskite Sr(Sb1/2Mn1/2)O3 as a candidate semiconducting ferroelectric; (2) polar derivatives of schafarzikite MSb2O4; and (3) ferroelectric semiconductors with formula M2P2(S,Se)6. A variety of avenues for further research and investigation are suggested, including automated structure type classification, low-symmetry improper ferroelectrics, and high-throughput first-principles searches for additional representatives of structural families with desirable functional properties.

  15. Clinical experimentation with aerosol antibiotics: current and future methods of administration

    PubMed Central

    Zarogoulidis, Paul; Kioumis, Ioannis; Porpodis, Konstantinos; Spyratos, Dionysios; Tsakiridis, Kosmas; Huang, Haidong; Li, Qiang; Turner, J Francis; Browning, Robert; Hohenforst-Schmidt, Wolfgang; Zarogoulidis, Konstantinos

    2013-01-01

    Currently almost all antibiotics are administered by the intravenous route. Since several systems and situations require more efficient methods of administration, investigation and experimentation in drug design have produced local treatment modalities. Administration of antibiotics in aerosol form is one of the treatment methods of increasing interest. As the field of drug nanotechnology grows, new molecules have been produced and combined with aerosol production systems. In the current review, we discuss the efficiency of aerosol antibiotic studies along with aerosol production systems. The different parts of the aerosol antibiotic methodology are presented. Additionally, information regarding the drug molecules used is presented, and future applications of this method are discussed. PMID:24115836

  16. Comparative Research: An Approach to Teaching Research Methods in Political Science and Public Administration

    ERIC Educational Resources Information Center

    Engbers, Trent A

    2016-01-01

    The teaching of research methods has been at the core of public administration education for almost 30 years. But since 1990, this journal has published only two articles on the teaching of research methods. Given the increasing emphasis on data driven decision-making, greater insight is needed into the best practices for teaching public…

  17. 76 FR 74804 - Federal Housing Administration (FHA) First Look Sales Method Under the Neighborhood Stabilization...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... availability of a universal NAID to aid eligible purchasers under the First Look Sales method. DATES: The dates... the universal NAID to aid eligible NSP purchasers in the purchase of properties under First Look Sales... URBAN DEVELOPMENT Federal Housing Administration (FHA) First Look Sales Method Under the...

  18. Methods and apparatus for constructing and implementing a universal extension module for processing objects in a database

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)

    2004-01-01

    Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.
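
    The layering described here can be made concrete with a small schematic: the engine talks only to a universal middle tier, which dispatches to whichever registered domain-specific module handles the data type. The Python below is an illustrative sketch of that dispatch pattern with invented names, not the patented implementation.

      from typing import Protocol

      class DomainExtension(Protocol):
          """Bottom tier: domain-specific search/index/retrieval."""
          def search(self, query: str) -> list: ...

      class UniversalExtensionModule:
          """Middle tier: single entry point the engine calls."""
          def __init__(self):
              self._modules = {}

          def register(self, datatype, module):
              self._modules[datatype] = module

          def search(self, datatype, query):
              return self._modules[datatype].search(query)

      class ImageExtension:
          def search(self, query):
              return ["image matching " + repr(query)]  # placeholder retrieval

      # The object-relational engine (top tier) would call only the middle tier:
      uem = UniversalExtensionModule()
      uem.register("image", ImageExtension())
      print(uem.search("image", "sunset"))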

  19. A method to add richness to the National Landslide Database of Great Britain

    NASA Astrophysics Data System (ADS)

    Taylor, Faith; Freeborough, Katy; Malamud, Bruce; Demeritt, David

    2014-05-01

    Landslides in Great Britain (GB) pose a risk to infrastructure, property and livelihoods. Our understanding of where landslide hazard and impact will be greatest is based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in GB by searching electronic archives of local and regional newspapers. In Great Britain, the British Geological Survey (BGS) are responsible for updating and maintaining records of GB landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of approximately 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. Here we aim to supplement the richness of the NLD by (i) identifying additional landslide events and (ii) adding more detail to existing database entries. This is done by systematically searching the LexisNexis digital archive of 568 local and regional newspapers published in the UK. The first step in the methodology was to construct Boolean search criteria that optimised the balance between minimising the number of irrelevant articles (e.g. "a landslide victory") and maximising those referring to landslide events. This keyword search was then applied to the LexisNexis archive of newspapers for all articles published between 1 January and 31 December 2012, resulting in 1,668 articles. These articles were assessed to determine whether they related to a landslide event. Of the 1,668 articles, approximately 30% (~700) referred to landslide events, with others referring to landslides more generally or themes unrelated to landslides. Examples of information obtained from newspaper articles included: date/time of landslide occurrence, spatial location, size, impact, landslide type and triggering mechanism, although the amount of detail and precision attainable from individual articles was variable. Of the 700 articles found for
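
    The Boolean filtering step can be sketched directly; the exact criteria are not given in the abstract, so the terms below are hypothetical stand-ins that require a landslide keyword while excluding the figurative electoral sense.

      import re

      LANDSLIDE = re.compile(r"\b(landslide|landslip|mudslide|rockfall)\b", re.I)
      FIGURATIVE = re.compile(r"\blandslide\s+(victory|win|defeat|majority)\b", re.I)

      def is_candidate(article_text):
          """True if the article likely reports a landslide event."""
          return bool(LANDSLIDE.search(article_text)) and not FIGURATIVE.search(article_text)

      print(is_candidate("A landslip closed the A625 near Sheffield after rain."))  # True
      print(is_candidate("The MP won by a landslide victory."))                     # False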

  20. Leveraging Administrative Data for Program Evaluations: A Method for Linking Data Sets Without Unique Identifiers.

    PubMed

    Lorden, Andrea L; Radcliff, Tiffany A; Jiang, Luohua; Horel, Scott A; Smith, Matthew L; Lorig, Kate; Howell, Benjamin L; Whitelaw, Nancy; Ory, Marcia

    2016-06-01

    In community-based wellness programs, Social Security Numbers (SSNs) are rarely collected to encourage participation and protect participant privacy. One measure of program effectiveness includes changes in health care utilization. For the 65 and over population, health care utilization is captured in Medicare administrative claims data. Therefore, methods as described in this article for linking participant information to administrative data are useful for program evaluations where unique identifiers such as SSN are not available. Following fuzzy matching methodologies, participant information from the National Study of the Chronic Disease Self-Management Program was linked to Medicare administrative data. Linking variables included participant name, date of birth, gender, address, and ZIP code. Seventy-eight percent of participants were linked to their Medicare claims data. Linking program participant information to Medicare administrative data where unique identifiers are not available provides researchers with the ability to leverage claims data to better understand program effects.
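
    A fuzzy-matching linkage of this kind can be sketched with the Python standard library alone: string similarities on name and address are blended with exact agreement on date of birth and gender into a composite score, and a link is accepted above a tuned threshold. The weights and threshold below are hypothetical, not those used in the study.

      from difflib import SequenceMatcher

      def similarity(a, b):
          return SequenceMatcher(None, a.lower(), b.lower()).ratio()

      def match_score(participant, claim):
          """Composite fuzzy-match score over the linking variables."""
          return (0.4 * similarity(participant["name"], claim["name"])
                  + 0.3 * (participant["dob"] == claim["dob"])
                  + 0.1 * (participant["gender"] == claim["gender"])
                  + 0.2 * similarity(participant["address"], claim["address"]))

      p = {"name": "Mary A. Smith", "dob": "1947-03-12", "gender": "F",
           "address": "12 Oak St 77840"}
      c = {"name": "SMITH, MARY", "dob": "1947-03-12", "gender": "F",
           "address": "12 Oak Street 77840"}
      print(round(match_score(p, c), 2))  # accept the link if above a tuned cutoff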

  1. A practical tool for public health surveillance: Semi-automated coding of short injury narratives from large administrative databases using Naïve Bayes algorithms.

    PubMed

    Marucci-Wellman, Helen R; Lehto, Mark R; Corns, Helen L

    2015-11-01

    Public health surveillance programs in the U.S. are undergoing landmark changes with the availability of electronic health records and advancements in information technology. Injury narratives gathered from hospital records, workers compensation claims or national surveys can be very useful for identifying antecedents to injury or emerging risks. However, classifying narratives manually can become prohibitive for large datasets. The purpose of this study was to develop a human-machine system that could be relatively easily tailored to routinely and accurately classify injury narratives from large administrative databases such as workers compensation. We used a semi-automated approach based on two Naïve Bayesian algorithms to classify 15,000 workers compensation narratives into two-digit Bureau of Labor Statistics (BLS) event (leading to injury) codes. Narratives were filtered out for manual review if the algorithms disagreed or made weak predictions. This approach resulted in an overall accuracy of 87%, with consistently high positive predictive values across all two-digit BLS event categories including the very small categories (e.g., exposure to noise, needle sticks). The Naïve Bayes algorithms were able to identify and accurately machine code most narratives leaving only 32% (4853) for manual review. This strategy substantially reduces the need for resources compared with manual review alone.
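
    The human-machine filter described — two Naïve Bayes variants, with narratives routed to manual review when the models disagree or the winning posterior is weak — can be sketched with scikit-learn. The toy narratives, codes, and threshold below are illustrative; the study's features and filtering rules are not reproduced here.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import BernoulliNB, MultinomialNB

      narratives = ["slipped on wet floor", "struck by falling box",
                    "fell from ladder", "hit by forklift"]
      event_codes = ["42", "62", "43", "62"]  # toy two-digit BLS-style codes

      vec = CountVectorizer()
      X = vec.fit_transform(narratives)
      nb1 = MultinomialNB().fit(X, event_codes)
      nb2 = BernoulliNB().fit(X, event_codes)

      def auto_code(text, min_prob=0.5):
          """Machine-code only when both models agree and are confident;
          otherwise flag the narrative for manual review."""
          x = vec.transform([text])
          c1, c2 = nb1.predict(x)[0], nb2.predict(x)[0]
          confident = min(nb1.predict_proba(x).max(), nb2.predict_proba(x).max())
          return c1 if c1 == c2 and confident >= min_prob else "MANUAL_REVIEW"

      print(auto_code("worker fell from a ladder"))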

  2. Leaf mechanical resistance in plant trait databases: comparing the results of two common measurement methods

    PubMed Central

    Enrico, Lucas; Díaz, Sandra; Westoby, Mark; Rice, Barbara L.

    2016-01-01

    Background and Aims The influence of leaf mechanical properties on local ecosystem processes, such as trophic transfer, decomposition and nutrient cycling, has resulted in a growing interest in including leaf mechanical resistance in large-scale databases of plant functional traits. ‘Specific work to shear’ and ‘force to tear’ are two properties commonly used to describe mechanical resistance (toughness or strength) of leaves. Two methodologies have been widely used to measure them across large datasets. This study aimed to assess correlations and standardization between the two methods, as measured by two widely used apparatuses, in order to inter-convert existing data in those global datasets. Methods Specific work to shear (WSS) and force to tear (FT) were measured in leaves of 72 species from south-eastern Australia. The measurements were made including and excluding midribs. Relationships between the variables were tested by Spearman correlations and ordinary least square regressions. Key Results A positive and significant correlation was found between the methods, but coefficients varied according to the inclusion or exclusion of the midrib in the measurements. Equations for prediction varied according to leaf venation pattern. A positive and significant (r = 0·90, P < 0·0001) correlation was also found between WSS values for fresh and rehydrated leaves, which is considered to be of practical relevance. Conclusions In the context of broad-scale ecological hypotheses and used within the constraints recommended here, leaf mechanical resistance data obtained with both methodologies could be pooled together into a single coarser variable, using the equations provided in this paper. However, more detailed datasets of FT cannot be safely filled in with estimations based on WSS, or vice versa. In addition, WSS values of green leaves can be predicted with good accuracy from WSS of rehydrated leaves of the same species. PMID:26530215

  3. Exploring the Ligand-Protein Networks in Traditional Chinese Medicine: Current Databases, Methods, and Applications

    PubMed Central

    Zhao, Mingzhu; Wei, Dong-Qing

    2013-01-01

    Traditional Chinese medicine (TCM), which has thousands of years of clinical application in China and other Asian countries, is the pioneer of the “multicomponent-multitarget” approach and of network pharmacology. Although its efficacy is not in doubt, the underlying mechanisms of TCM are difficult to elucidate convincingly because of its complex composition and unclear pharmacology. Ligand-protein networks have gained significant value in the history of drug discovery, but their application to TCM is still at an early stage. This paper first surveys TCM databases for virtual screening, which have expanded greatly in size and data diversity in recent years. On that basis, different screening methods and strategies for identifying active ingredients and targets of TCM are outlined according to the amount of network information available, on both the ligand-bioactivity and protein-structure sides. Furthermore, successful in silico target identification attempts are discussed in detail, along with experiments exploring the ligand-protein networks of TCM. The paper concludes that ligand-protein networks can be used not only to predict the protein targets of a small molecule but also to explore the mode of action of TCM. PMID:23818932

  4. Similarity landscapes: An improved method for scientific visualization of information from protein and DNA database searches

    SciTech Connect

    Dogget, N.; Myers, G.; Wills, C.J.

    1998-12-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The authors used computer simulations and examination of a variety of databases to address a wide range of evolutionary questions. They found that there is a clear distinction in the evolution of HIV-1 and HIV-2, with the former, more virulent virus evolving more rapidly at a functional level. They discovered highly non-random patterns in the evolution of HIV-1 that can be attributed to a variety of selective pressures. In the course of examining microsatellite DNA (short repeat regions) in microorganisms, the authors found clear differences between prokaryotes and eukaryotes in their distribution, differences that can be tied to different selective pressures. They developed a new method (topiary pruning) for enhancing the phylogenetic information contained in DNA sequences. Most recently, the authors discovered effects in complex rainforest ecosystems that indicate strong frequency-dependent interactions between host species and their parasites, leading to the maintenance of ecosystem variability.

  5. Method of content-based image retrieval for a spinal x-ray image database

    NASA Astrophysics Data System (ADS)

    Krainak, Daniel M.; Long, L. Rodney; Thoma, George R.

    2002-05-01

    The Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM), maintains a digital archive of 17,000 cervical and lumbar spine images collected in the second National Health and Nutrition Examination Survey (NHANES II) conducted by the National Center for Health Statistics (NCHS). Classification of the images for the osteoarthritis research community has been a long-standing goal of researchers at the NLM, collaborators at NCHS, and the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), and the capability to retrieve images based on geometric characteristics of the vertebral bodies is of interest to the vertebral morphometry community. Automated or computer-assisted classification and retrieval methods are highly desirable to offset the high cost of manual classification and manipulation by medical experts. We implemented a prototype system for a database of 118 spine x-rays and health survey text data related to these x-rays. The system supports conventional text retrieval, as well as retrieval based on shape similarity to a user-supplied vertebral image or sketch.

  6. Applying Agile Methods to the Development of a Community-Based Sea Ice Observations Database

    NASA Astrophysics Data System (ADS)

    Pulsifer, P. L.; Collins, J. A.; Kaufman, M.; Eicken, H.; Parsons, M. A.; Gearheard, S.

    2011-12-01

    Local and traditional knowledge and community-based monitoring programs are increasingly being recognized as an important part of establishing an Arctic observing network and of understanding Arctic environmental change. The Seasonal Ice Zone Observing Network (SIZONet, http://www.sizonet.org) project has implemented an integrated program for observing seasonal ice in Alaska. Observation and analysis by local sea ice experts help track seasonal and inter-annual variability of the ice cover and its use by coastal communities. The ELOKA project (http://eloka-arctic.org) is collaborating with SIZONet on the development of a community accessible, Web-based application for collecting and distributing local observations. The SIZONet project is dealing with complicated qualitative and quantitative data collected from a growing number of observers in different communities while concurrently working to design a system that will serve a wide range of different end users including Arctic residents, scientists, educators, and other stakeholders with a need for sea ice information. The benefits of linking and integrating knowledge from communities and university-based researchers are clear; however, developing an information system in this multidisciplinary, multi-participant context is challenging. Participants are geographically distributed, have different levels of technical expertise, and have varying goals for how the system will be used. As previously reported (Pulsifer et al. 2010), new technologies have been used to deal with some of the challenges presented in this complex development context. In this paper, we report on the challenges and innovations related to working as a multi-disciplinary software development team. Specifically, we discuss how Agile software development methods have been used in defining and refining user needs, developing prototypes, and releasing a production level application. We provide an overview of the production application that

  7. A METHOD FOR ESTIMATING GAS PRESSURE IN 3013 CONTAINERS USING AN ISP DATABASE QUERY

    SciTech Connect

    Friday, G; L. G. Peppers, L; D. K. Veirs, D

    2008-07-31

    The U.S. Department of Energy's Integrated Surveillance Program (ISP) is responsible for the storage and surveillance of plutonium-bearing material. During storage, plutonium-bearing material has the potential to generate hydrogen gas from the radiolysis of adsorbed water. The generation of hydrogen gas is a safety concern, especially when a container is breached within a glove box during destructive evaluation. To address this issue, the DOE established a standard (DOE, 2004) that sets the criteria for the stabilization and packaging of material for up to 50 years. The DOE has now packaged most of its excess plutonium for long-term storage in compliance with this standard. As part of this process, it is desirable to know within reasonable certainty the total maximum pressure of hydrogen and other gases within the 3013 container if safety issues are to be addressed and compliance with the DOE standard attained. The principal goal of this investigation is to document the method and query used to estimate total (i.e., hydrogen and other gases) gas pressure within a 3013 container based on the material properties and estimated moisture content contained in the ISP database. Initial attempts to estimate hydrogen gas pressure in 3013 containers were based on G-values (hydrogen gas generation per energy input) derived from small-scale samples. These maximum G-values were used to calculate worst-case pressures based on container material weight, assay, wattage, moisture content, container age, and container volume. This paper documents a revised hydrogen pressure calculation that incorporates new surveillance results and includes a component for gases other than hydrogen. The calculation is produced by executing a query of the ISP database. An example of manual mathematical computations from the pressure equation is compared and evaluated with results from the query. Based on the destructive evaluation of 17 containers, the estimated mean absolute pressure was significantly higher
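
    The flavor of such a bounding calculation can be sketched with the ideal gas law; everything below is a hypothetical stand-in for the actual ISP query and its documented pressure equation, and the inputs are invented:

        AVOGADRO = 6.022e23      # molecules per mole
        EV_PER_JOULE = 6.242e18  # electron volts per joule
        R = 8.314                # J/(mol K)

        def worst_case_pressure_kpa(g_value, wattage, years, free_volume_l,
                                    temp_k=298.0, fill_pressure_kpa=101.3):
            """Ideal-gas pressure from radiolytic H2 plus the initial fill gas.

            g_value: molecules of H2 generated per 100 eV of absorbed energy
            wattage: decay heat of the material (W); years: storage time
            free_volume_l: free gas volume of the container (litres)
            """
            energy_ev = wattage * years * 365.25 * 86400 * EV_PER_JOULE
            moles_h2 = (g_value / 100.0) * energy_ev / AVOGADRO
            pressure_pa = moles_h2 * R * temp_k / (free_volume_l / 1000.0)
            return fill_pressure_kpa + pressure_pa / 1000.0

        print(round(worst_case_pressure_kpa(0.01, wattage=3.0, years=5,
                                            free_volume_l=2.0), 1))  # ~709 kPa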

  8. A Novel Method of Drug Administration to Multiple Zebrafish (Danio rerio) and the Quantification of Withdrawal

    PubMed Central

    Holcombe, Adam; Schalomon, Melike; Hamilton, Trevor James

    2014-01-01

    Anxiety testing in zebrafish is often studied in combination with the application of pharmacological substances. In these studies, fish are routinely netted and transported between home aquaria and dosing tanks. In order to enhance the ease of compound administration, a novel method for transferring fish between tanks for drug administration was developed. Inserts that are designed for spawning were used to transfer groups of fish into the drug solution, allowing accurate dosing of all fish in the group. This increases the precision and efficiency of dosing, which becomes very important in long schedules of repeated drug administration. We implemented this procedure for use in a study examining the behavior of zebrafish in the light/dark test after administering ethanol with differing 21 day schedules. In fish exposed to daily-moderate amounts of alcohol there was a significant difference in location preference after 2 days of withdrawal when compared to the control group. However, a significant difference in location preference in a group exposed to weekly-binge administration was not observed. This protocol can be generalized for use with all types of compounds that are water-soluble and may be used in any situation when the behavior of fish during or after long schedules of drug administration is being examined. The light/dark test is also a valuable method of assessing withdrawal-induced changes in anxiety. PMID:25407925

  9. 29 CFR 1630.7 - Standards, criteria, or methods of administration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Section 1630.7 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION REGULATIONS TO IMPLEMENT THE EQUAL EMPLOYMENT PROVISIONS OF THE AMERICANS WITH DISABILITIES ACT § 1630.7..., criteria, or methods of administration, which are not job-related and consistent with business...

  10. Chaos Theory: A Scientific Basis for Alternative Research Methods in Educational Administration.

    ERIC Educational Resources Information Center

    Peca, Kathy

    This paper has three purposes. First, it places in scientific perspective the growing acceptance in educational administration research of alternative methods to empiricism by an explication of chaos theory and its assumptions. Second, it demonstrates that chaos theory provides a scientific basis for investigation of complex qualitative variables…

  11. Personality and Student Performance on Evaluation Methods Used in Business Administration Courses

    ERIC Educational Resources Information Center

    Lakhal, Sawsen; Sévigny, Serge; Frenette, Éric

    2015-01-01

    The objective of this study was to verify whether personality (Big Five model) influences performance on the evaluation methods used in business administration courses. A sample of 169 students enrolled in two compulsory undergraduate business courses responded to an online questionnaire. As it is difficult within the same course to assess…

  12. Student Discipline Methods and Resources Used by Indiana Secondary School Administrators.

    ERIC Educational Resources Information Center

    Killion, Rocky

    1998-01-01

    A survey of Indiana secondary school administrators indicated that the most effective reactive discipline methods were alternative schools, suspensions, and Saturday schools. The least effective was detention. The number one problem was student tardiness. Larger schools had more students involved with gang activity, drug use, and vandalism. The…

  13. The Use of Quantitative Methods as an Aid to Decision Making in Educational Administration.

    ERIC Educational Resources Information Center

    Alkin, Marvin C.

    Three quantitative methods are outlined, with suggestions for application to particular problem areas of educational administration: (1) The Leontief input-output analysis, incorporating a "transaction table" for displaying relationships between economic outputs and inputs, mainly applicable to budget analysis and planning; (2) linear programming,…

  14. Exploring the ligand-protein networks in traditional chinese medicine: current databases, methods and applications.

    PubMed

    Zhao, Mingzhu; Wei, Dongqing

    2015-01-01

    While the concept of "single component-single target" in drug discovery seems to have come to an end, "multi-component-multi-target" is considered another promising way forward in this field. Traditional Chinese Medicine (TCM), which has thousands of years of clinical application in China and other Asian countries, is the pioneer of the "multi-component-multi-target" approach and of network pharmacology. Hundreds of different components in a TCM prescription can cure disease or relieve patients by modulating a network of potential therapeutic targets. Although its efficacy is not in doubt, the underlying mechanisms of TCM are difficult to elucidate convincingly because of its complex composition and unclear pharmacology. Without thorough investigation of its potential targets and side effects, TCM is unable to generate large-scale medicinal benefits, especially in an era when scientific reductionism and quantification are dominant. Ligand-protein networks have gained significant value in the history of drug discovery, but their application to TCM is still at an early stage. This article first surveys TCM databases for virtual screening, which have expanded greatly in size and data diversity in recent years. On that basis, different screening methods and strategies for identifying active ingredients and targets of TCM are outlined according to the amount of network information available, on both the ligand-bioactivity and protein-structure sides. Furthermore, successful in silico target identification attempts are discussed in detail, along with experiments exploring the ligand-protein networks of TCM. The article concludes that ligand-protein networks can be used not only to predict the protein targets of a small molecule but also to explore the mode of action of TCM.

  15. Engineering design optimization capability using object-oriented programming method with database management system

    SciTech Connect

    Lee, Hueihuang.

    1989-01-01

    Recent advances in computer hardware and software techniques offer opportunities to create large-scale engineering design systems that were once thought impossible or impractical. Incorporating existing software systems into an integrated engineering design system, and creating new capabilities within it, are challenging problems in engineering software design. The creation of such a system is a large and complex project. Furthermore, an engineering design system usually needs to be modified and extended quite often because of continuing developments in engineering theory and practice. Confronted with such a massive, complex, and volatile project, program developers have attempted to devise systematic approaches for completing the software system while maintaining its understandability, modifiability, reliability, and efficiency (Ross, Goodenough, and Irvine, 1975). Considerable effort has been made toward achieving these goals, including the discipline of software engineering, database management techniques, and software design methodologies. Among the software design methods, the object-oriented approach has been very successful in recent years, as reflected in the support for object-oriented programming in popular languages such as Ada (1983) and C++ (Stroustrup, 1986). A new RQP algorithm based on an augmented Lagrangian was implemented in the system in a relatively short time; these capabilities demonstrate the extendibility of IDESIGN-10. The process of developing the new RQP algorithm is presented in depth as a complete demonstration of object-oriented programming in engineering applications. A preliminary evaluation of the algorithm shows that it has potential for engineering applications; however, it needs further development.

  16. Antidepressant adherence patterns in older patients: use of a clustering method on a prescription database.

    PubMed

    Braunstein, David; Hardy, Amélie; Boucherie, Quentin; Frauger, Elisabeth; Blin, Olivier; Gentile, Gaétan; Micallef, Joëlle

    2017-04-01

    According to the World Health Organization, depression will become the second most important cause of disability worldwide by 2020. Our objective was to identify patterns of adherence to antidepressant treatments in older patients using several indicators of adherence and to characterize these patterns in terms of medication exposure. We conducted a retrospective cohort study using the French National Health Insurance reimbursement database. Incident antidepressant users older than 65 years were included from July 1, 2010, to June 30, 2011, and followed up for 18 months. Antidepressant and other psychotropic drugs (opioids, benzodiazepines, antipsychotics, anti-epileptics) were recorded. Adherence to antidepressant treatment was assessed by several measures, including proportion of days covered, discontinuation periods, persistence of treatment, and doses dispensed. Patients were classified according to their adherence patterns using a mixed clustering method. We identified five groups according to antidepressant adherence. One group (n = 7,505, 26.9%) was fully adherent with regard to guidelines on antidepressant use. Two patterns of nonadherent users were identified: irregular but persistent users (n = 5,131, 18.4%) and regular but nonpersistent users (n = 9,037, 32.4%). Serotonin reuptake inhibitors were the most frequently dispensed antidepressant class (70.6%), followed by other antidepressants (43.3%, mainly serotonin-norepinephrine reuptake inhibitors and tianeptine) and tricyclic antidepressants (TCAs) (13.4%). Nonadherent users more frequently had a dispensing of TCA, opioid, and anti-epileptic medication than adherent users. Health policies to improve adherence to antidepressant treatment may require better training of physicians and pharmacists, emphasizing the important role of the continuation period of antidepressant treatment.
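
    Of the adherence indicators listed, proportion of days covered (PDC) is the most mechanical; a minimal Python sketch over dispensing records (the dates and days supplied are invented, and stockpiling or overlap rules are ignored):

        from datetime import date, timedelta

        def proportion_of_days_covered(dispensings, start, end):
            """dispensings: (fill_date, days_supplied) pairs; window is inclusive."""
            covered = set()
            for fill_date, days_supplied in dispensings:
                for d in range(days_supplied):
                    day = fill_date + timedelta(days=d)
                    if start <= day <= end:
                        covered.add(day)
            return len(covered) / ((end - start).days + 1)

        fills = [(date(2010, 7, 1), 30), (date(2010, 8, 5), 30), (date(2010, 10, 1), 30)]
        pdc = proportion_of_days_covered(fills, date(2010, 7, 1), date(2010, 12, 31))
        print(round(pdc, 2))  # 0.49: 90 of 184 days covered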

  17. A comprehensive change detection method for updating the National Land Cover Database to circa 2011

    USGS Publications Warehouse

    Jin, Suming; Yang, Limin; Danielson, Patrick; Homer, Collin G.; Fry, Joyce; Xian, George

    2013-01-01

    The importance of characterizing, quantifying, and monitoring land cover, land use, and their changes has been widely recognized by global and environmental change studies. Since the early 1990s, three U.S. National Land Cover Database (NLCD) products (circa 1992, 2001, and 2006) have been released as free downloads for users. The NLCD 2006 also provides land cover change products between 2001 and 2006. To continue providing updated national land cover and change datasets, a new initiative in developing NLCD 2011 is currently underway. We present a new Comprehensive Change Detection Method (CCDM) designed as a key component for the development of NLCD 2011, together with the research results from two exemplar studies. The CCDM integrates spectral-based change detection algorithms, including a Multi-Index Integrated Change Analysis (MIICA) model and a novel change model called Zone, which extracts change information from two Landsat image pairs. The MIICA model is the core module of the change detection strategy and uses four spectral indices (CV, RCVMAX, dNBR, and dNDVI) to obtain the changes that occurred between two image dates. The CCDM also includes a knowledge-based system, which uses critical information on historical and current land cover conditions and trends and the likelihood of land cover change, to combine the changes from MIICA and Zone. For NLCD 2011, the improved and enhanced change products obtained from the CCDM provide critical information on the location, magnitude, and direction of potential change areas and serve as a basis for further characterizing land cover changes for the nation. An accuracy assessment of the two study areas shows 100% agreement between the CCDM-mapped no-change class and the reference dataset, and 18% and 82% disagreement for the change class for WRS path/rows p22r39 and p33r33, respectively. The strength of the CCDM is that the method is simple, easy to operate, widely applicable, and capable of capturing a variety of natural and
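
    Two of the four MIICA inputs are standard band-ratio differences; a toy NumPy illustration of dNDVI and dNBR from two image dates (the reflectance values and change thresholds are invented, not the NLCD production settings):

        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red)

        def nbr(nir, swir):
            return (nir - swir) / (nir + swir)

        # toy 2x2 reflectance arrays for dates 1 and 2
        nir1 = np.array([[.5, .4], [.45, .5]]); red1 = np.array([[.1, .12], [.1, .1]])
        swir1 = np.array([[.2, .25], [.2, .2]])
        nir2 = np.array([[.3, .4], [.2, .5]]); red2 = np.array([[.15, .12], [.18, .1]])
        swir2 = np.array([[.3, .25], [.35, .2]])

        dndvi = ndvi(nir1, red1) - ndvi(nir2, red2)
        dnbr = nbr(nir1, swir1) - nbr(nir2, swir2)
        change_mask = (np.abs(dndvi) > 0.2) | (np.abs(dnbr) > 0.2)
        print(change_mask)  # pixels flagged as potential change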

  18. Analysis of Institutionally Specific Retention Research: A Comparison between Survey and Institutional Database Methods

    ERIC Educational Resources Information Center

    Caison, Amy L.

    2007-01-01

    This study empirically explores the comparability of traditional survey-based retention research methodology with an alternative approach that relies on data commonly available in institutional student databases. Drawing on Tinto's [Tinto, V. (1993). "Leaving College: Rethinking the Causes and Cures of Student Attrition" (2nd Ed.), The University…

  19. An Interactive Iterative Method for Electronic Searching of Large Literature Databases

    ERIC Educational Resources Information Center

    Hernandez, Marco A.

    2013-01-01

    PubMed® is an on-line literature database hosted by the U.S. National Library of Medicine. Containing over 21 million citations for biomedical literature (both abstracts and full text) in the areas of the life sciences, behavioral studies, chemistry, and bioengineering, PubMed® represents an important tool for researchers. PubMed® searches return

  20. Optimal Dose and Method of Administration of Intravenous Insulin in the Management of Emergency Hyperkalemia: A Systematic Review

    PubMed Central

    Harel, Ziv; Kamel, Kamel S.

    2016-01-01

    Background and Objectives Hyperkalemia is a common electrolyte disorder that can result in fatal cardiac arrhythmias. Despite the importance of insulin as a lifesaving intervention in the treatment of hyperkalemia in an emergency setting, there is no consensus on the dose or the method (bolus or infusion) of its administration. Our aim was to review data in the literature to determine the optimal dose and route of administration of insulin in the management of emergency hyperkalemia. Design, Setting, Participants, & Measurements We searched several databases from their date of inception through February 2015 for eligible articles published in any language. We included any study that reported on the use of insulin in the management of hyperkalemia. Results We identified eleven studies. In seven studies, 10 units of regular insulin was administered (bolus in five studies, infusion in two studies), in one study 12 units of regular insulin was infused over 30 minutes, and in three studies 20 units of regular insulin was infused over 60 minutes. The majority of included studies were biased. There was no statistically significant difference in mean decrease in serum potassium (K+) concentration at 60 minutes between studies in which insulin was administered as an infusion of 20 units over 60 minutes and studies in which 10 units of insulin was administered as a bolus (0.79±0.25 mmol/L versus 0.78±0.25 mmol/L, P = 0.98) or studies in which 10 units of insulin was administered as an infusion (0.79±0.25 mmol/L versus 0.39±0.09 mmol/L, P = 0.1). Almost one fifth of the study population experienced an episode of hypoglycemia. Conclusion The limited data available in the literature shows no statistically significant difference between the different regimens of insulin used to acutely lower serum K+ concentration. Accordingly, 10 units of short acting insulin given intravenously may be used in cases of hyperkalemia. Alternatively, 20 units of short acting insulin may be

  1. Imaging-documented cardiovascular signal database for assessing methods for ischaemia analysis.

    PubMed

    Taddei, A; Emdin, M; Varanini, M; Nassi, G; Bertinelli, M; Picano, E; Marchesi, C

    1997-01-01

    A new database of cardiovascular signals has recently been developed at the CNR Institute of Clinical Physiology in a study based on patients admitted to the Coronary Care Unit for suspected ischaemic heart disease (IHD), who underwent both ECG effort stress test and echo or radionuclide diagnostic imaging procedures associated with pharmacological test of myocardial ischaemia. During stress testing, in addition to 12-lead ECG, arterial blood pressure and respiration signals are measured non-invasively and recorded. Signals and representative image frames at baseline and during ischaemia are stored in the database, which is planned to include 50 cases, annotated beat by beat and archived on CD-ROM. Each case also contains resting ECG and a comprehensive patient clinical record; if possible Holter ECG and coronary arteriography frames.

  2. Methods for long-term 17β-estradiol administration to mice.

    PubMed

    Ingberg, E; Theodorsson, A; Theodorsson, E; Strom, J O

    2012-01-01

    Rodent models constitute a cornerstone in the elucidation of the effects and biological mechanisms of 17β-estradiol. However, a thorough assessment of the methods for long-term administration of 17β-estradiol to mice is lacking. The fact that 17β-estradiol has been demonstrated to exert different effects depending on dose emphasizes the need for validated administration regimens. Therefore, 169 female C57BL/6 mice were ovariectomized and administered 17β-estradiol by one of the two commonly used subcutaneous methods, slow-release pellets (0.18 mg, 60-day release; 0.72 mg, 90-day release) or silastic capsules (with/without convalescence period; silastic laboratory tubing, inner/outer diameter 1.575/3.175 mm, filled with a 14 mm column of 36 μg 17β-estradiol/mL sesame oil), or by a novel peroral method (56 μg 17β-estradiol/day/kg body weight in the hazelnut cream Nutella). Forty animals were used as ovariectomized and intact controls. Serum samples were obtained weekly for five weeks and 17β-estradiol concentrations were measured using radioimmunoassay. The peroral method resulted in steady concentrations within the physiological range (except on one occasion), and the silastic capsules produced predominantly physiological concentrations, although these exceeded the range by at most a factor of three during the first three weeks. The 0.18 mg pellet yielded initial concentrations an order of magnitude above the physiological range that then decreased drastically, and the 0.72 mg pellet produced concentrations 18 to 40 times above the physiological range throughout the experiment. The peroral method and silastic capsules described in this article constitute reliable modes of administration of 17β-estradiol, superior to the widely used commercial pellets.

  3. Low template STR typing: effect of replicate number and consensus method on genotyping reliability and DNA database search results.

    PubMed

    Benschop, Corina C G; van der Beek, Cornelis P; Meiland, Hugo C; van Gorp, Ankie G M; Westen, Antoinette A; Sijen, Titia

    2011-08-01

    To analyze DNA samples with very low DNA concentrations, various methods have been developed that sensitize short tandem repeat (STR) typing. Sensitized DNA typing is accompanied by stochastic amplification effects, such as allele drop-outs and drop-ins. Therefore low template (LT) DNA profiles are interpreted with care. One can either try to infer the genotype by a consensus method that uses alleles confirmed in replicate analyses, or one can use a statistical model to evaluate the strength of the evidence in a direct comparison with a known DNA profile. In this study we focused on the first strategy and we show that the procedure by which the consensus profile is assembled will affect genotyping reliability. In order to gain insight in the roles of replicate number and requested level of reproducibility, we generated six independent amplifications of samples of known donors. The LT methods included both increased cycling and enhanced capillary electrophoresis (CE) injection [1]. Consensus profiles were assembled from two to six of the replications using four methods: composite (include all alleles), n-1 (include alleles detected in all but one replicate), n/2 (include alleles detected in at least half of the replicates) and 2× (include alleles detected twice). We compared the consensus DNA profiles with the DNA profile of the known donor, studied the stochastic amplification effects and examined the effect of the consensus procedure on DNA database search results. From all these analyses we conclude that the accuracy of LT DNA typing and the efficiency of database searching improve when the number of replicates is increased and the consensus method is n/2. The most functional number of replicates within this n/2 method is four (although a replicate number of three suffices for samples showing >25% of the alleles in standard STR typing). This approach was also the optimal strategy for the analysis of 2-person mixtures, although modified search strategies may be
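
    The four inclusion rules compared above are simple to state precisely; a minimal Python sketch for a single locus (the allele labels are invented, and drop-in filtering and mixture handling are omitted):

        from collections import Counter

        def consensus(replicates, method="n/2"):
            """replicates: one allele set per replicate amplification."""
            counts = Counter(a for rep in replicates for a in set(rep))
            n = len(replicates)
            threshold = {"composite": 1, "2x": 2,
                         "n/2": -(-n // 2),   # ceil(n/2)
                         "n-1": n - 1}[method]
            return {allele for allele, c in counts.items() if c >= threshold}

        reps = [{"12", "14"}, {"12"}, {"12", "14", "15"}, {"12", "14"}]
        print(consensus(reps, "n/2"))        # {'12', '14'}: in at least 2 of 4 replicates
        print(consensus(reps, "composite"))  # every allele ever observed, drop-ins included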

  4. A method for fast database search for all k-nucleotide repeats.

    PubMed Central

    Benson, G; Waterman, M S

    1994-01-01

    A significant portion of DNA consists of repeating patterns of various sizes, from very small (one, two and three nucleotides) to very large (over 300 nucleotides). Although the functions of these repeating regions are not well understood, they appear important for understanding the expression, regulation and evolution of DNA. For example, increases in the number of trinucleotide repeats have been associated with human genetic disease, including Fragile-X mental retardation and Huntington's disease. Repeats are also useful as a tool in mapping and identifying DNA; the number of copies of a particular pattern at a site is often variable among individuals (polymorphic) and is therefore helpful in locating genes via linkage studies and also in providing DNA fingerprints of individuals. The number of repeating regions is unknown as is the distribution of pattern sizes. It would be useful to search for such regions in the DNA database in order that they may be studied more fully. The DNA database currently consists of approximately 150 million basepairs and is growing exponentially. Therefore, any program to look for repeats must be efficient and fast. In this paper, we present some new techniques that are useful in recognizing repeating patterns and describe a new program for rapidly detecting repeat regions in the DNA database where the basic unit of the repeat has size up to 32 nucleotides. It is our hope that the examples in this paper will illustrate the unrealized diversity of repeats in DNA and that the program we have developed will be a useful tool for locating new and interesting repeats. PMID:7984436
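
    A naive scan conveys what counts as a tandem repeat here, although the paper's contribution is precisely an algorithm fast enough for whole-database use; an illustrative (slow) Python version:

        def tandem_repeats(seq, max_unit=3, min_copies=3):
            """Yield (start, unit, copies) for tandem repeats with unit size <= max_unit."""
            n = len(seq)
            for unit_len in range(1, max_unit + 1):
                i = 0
                while i + unit_len <= n:
                    unit = seq[i:i + unit_len]
                    copies = 1
                    while seq[i + copies * unit_len:i + (copies + 1) * unit_len] == unit:
                        copies += 1
                    if copies >= min_copies:
                        yield i, unit, copies
                        i += copies * unit_len
                    else:
                        i += 1

        print(list(tandem_repeats("ATCAGCAGCAGGTTTTT")))
        # [(12, 'T', 5), (2, 'CAG', 3)]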

  5. Patch-based denoising method using low-rank technique and targeted database for optical coherence tomography image.

    PubMed

    Liu, Xiaoming; Yang, Zhou; Wang, Jia; Liu, Jun; Zhang, Kai; Hu, Wei

    2017-01-01

    Image denoising is a crucial step before performing segmentation or feature extraction on an image, and it affects the final result in image processing. In recent years, exploiting the self-similarity of images, many patch-based image denoising methods have been proposed, but most of them, known as internal denoising methods, use only the noisy image itself, so their performance is constrained by the limited information available. We propose a patch-based method, which uses a low-rank technique and a targeted database, to denoise optical coherence tomography (OCT) images. When selecting similar patches for a noisy patch, our method combines internal and external denoising by also using other images relevant to the noisy image; the targeted database is made up of these two kinds of images, an improvement over previous methods. Next, we leverage the low-rank technique to denoise the group matrix consisting of the noisy patch and the corresponding similar patches, exploiting the fact that a clean image can be modeled as a low-rank matrix whose rank is much lower than that of the noisy image. After the first-step denoising is accomplished, we take advantage of a Gabor transform, which accounts for the layered structure of OCT retinal images, to construct the image used in the second step. Experimental results demonstrate that our method compares favorably with existing state-of-the-art methods.
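
    The low-rank step can be sketched in isolation; a minimal singular-value soft-thresholding example on a stack of similar patches (patch search, the targeted database, and the Gabor stage are omitted, and the threshold is arbitrary):

        import numpy as np

        def low_rank_denoise(patch_stack, tau):
            """patch_stack: (num_patches, patch_pixels) matrix of similar patches."""
            u, s, vt = np.linalg.svd(patch_stack, full_matrices=False)
            s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold the spectrum
            return (u * s_shrunk) @ vt

        rng = np.random.default_rng(0)
        clean = np.outer(np.ones(20), rng.random(64))        # rank-1 group matrix
        noisy = clean + 0.1 * rng.standard_normal((20, 64))
        denoised = low_rank_denoise(noisy, tau=1.0)
        print(np.linalg.norm(noisy - clean) > np.linalg.norm(denoised - clean))  # True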

  6. Application of the stochastic tunneling method to high throughput database screening

    NASA Astrophysics Data System (ADS)

    Merlitz, H.; Burghardt, B.; Wenzel, W.

    2003-03-01

    The stochastic tunneling technique is applied to screen a database of chemical compounds against the active site of dihydrofolate reductase for lead candidates in the receptor-ligand docking problem. Using an atomistic force field, we consider the ligand's internal rotational degrees of freedom. It is shown that the natural ligand (methotrexate) scores best among 10 000 randomly chosen compounds. We analyze the top-scoring compounds to identify hot-spots of the receptor. We mutate the amino acids that are responsible for the hot-spots of the receptor and verify that its specificity is lost upon modification.
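
    For reference, the energy transformation usually quoted in the stochastic tunneling literature, and assumed here to be the form applied, is

        f_{STUN}(x) = 1 - \exp[-\gamma \, (E(x) - E_0)],

    where E(x) is the force-field energy of configuration x, E_0 is the lowest energy found so far, and \gamma controls how strongly barriers above E_0 are compressed toward 1, letting the search tunnel between minima instead of being trapped behind high barriers.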

  7. A Quality System Database

    NASA Technical Reports Server (NTRS)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  8. Administration on Aging

    MedlinePlus

    Provides aging statistics and data resources, including the Profile of Older Americans, the AGing Integrated Database (AGID), and census data and population estimates.

  9. Method applied to the background analysis of energy data to be considered for the European Reference Life Cycle Database (ELCD).

    PubMed

    Fazio, Simone; Garraín, Daniel; Mathieux, Fabrice; De la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda

    2015-01-01

    Under the framework of the European Platform on Life Cycle Assessment, the European Reference Life-Cycle Database (ELCD), developed by the Joint Research Centre of the European Commission, provides core Life Cycle Inventory (LCI) data from front-running EU-level business associations and other sources. The ELCD contains energy-related data on power and fuels. This study describes the methods to be used for the quality analysis of energy data for European markets (available in third-party LC databases and from authoritative sources) that are, or could be, used in the context of the ELCD. The methodology was developed and tested on the energy datasets most relevant for the EU context, derived from GaBi (the reference database used to derive datasets for the ELCD), Ecoinvent, E3 and Gemis. The criteria for the database selection were the availability of EU-related data, the inclusion of comprehensive datasets on energy products and services, and general acceptance by the LCA community. The proposed approach was based on the quality indicators developed within the International Reference Life Cycle Data System (ILCD) Handbook, further refined to facilitate their use in the analysis of energy systems. The overall Data Quality Rating (DQR) of an energy dataset can be calculated by summing the quality ratings (ranging from 1 to 5, where 1 represents very good and 5 very poor quality) of each quality criteria indicator and dividing by the total number of indicators considered. The quality of each dataset can be estimated for each indicator and then compared across the different databases/sources. The results can be used to highlight the weaknesses of each dataset and to guide further improvements that enhance data quality with regard to the established criteria. This paper describes the application of the methodology to two exemplary datasets in order to show the potential of the methodological approach. The analysis helps LCA
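
    The DQR arithmetic itself is a one-line mean; a small sketch with invented indicator names and ratings on the 1-5 scale described above:

        def data_quality_rating(indicator_ratings):
            """Mean of per-indicator ratings (1 = very good ... 5 = very poor)."""
            return sum(indicator_ratings.values()) / len(indicator_ratings)

        ratings = {  # illustrative ILCD-style criteria for one energy dataset
            "technological representativeness": 2,
            "geographical representativeness": 1,
            "time representativeness": 3,
            "completeness": 2,
            "precision": 2,
            "methodological appropriateness": 1,
        }
        print(round(data_quality_rating(ratings), 2))  # 1.83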

  10. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2014-04-29

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  11. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2011-02-01

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  12. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris [Palo Alto, CA; Tang, Diane L [Palo Alto, CA; Hanrahan, Patrick [Portola Valley, CA

    2012-03-20

    In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.

  13. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L; Hanrahan, Patrick

    2015-03-03

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.

  14. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2015-11-10

    A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.
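
    The recurring structure across these related patents is the mapping from shelf contents to a grid of panes; a toy Python illustration of that idea only, with invented fields and values (not the patented implementation):

        from itertools import product

        columns_shelf = ["region"]        # operands placed on the columns shelf
        rows_shelf = ["year", "product"]  # operands placed on the rows shelf

        fields = {"region": ["East", "West"],
                  "year": [2014, 2015],
                  "product": ["A", "B"]}

        # one pane per combination of discrete shelf values
        panes = list(product(*(fields[f] for f in columns_shelf + rows_shelf)))
        print(len(panes), "panes, e.g.", panes[0])  # 8 panes, e.g. ('East', 2014, 'A')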

  15. A semantic data dictionary method for database schema integration in CIESIN

    NASA Astrophysics Data System (ADS)

    Hinds, N.; Huang, Y.; Ravishankar, C.

    1993-08-01

    CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear part of this mission is providing a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a "join" on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system will be overcoming the heterogeneity, which falls into two broad categories. "Database system" heterogeneity involves differences in data models and packages. "Data semantic" heterogeneity involves differences in terminology between disciplines, which translates into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.

  16. A noninvasive method for evaluating portal circulation by administration of Tl-201 per rectum

    SciTech Connect

    Tonami, N.; Nakajima, K.; Hisada, K.; Tanaka, N.; Kobayashi, K.

    1982-11-01

    A new method for evaluating portal systemic circulation by administration of Tl-201 per rectum was performed in 13 control subjects and in 65 patients with various liver diseases. In normal controls, the liver was visualized on the 0-5-min image whereas the images of other organs such as the heart, spleen, and lungs were very poor. In patients with liver cirrhosis associated with portal-systemic shunt, and in many other patients with hepatocellular damage, the liver was not so clearly visualized, whereas radioactivity in other organs, especially the heart, became evident. The heart-to-liver uptake ratio at 20 min after administration (H/L ratio) was significantly higher in liver cirrhosis than in normals and patients with chronic hepatitis (p<0.001). The patients with esophageal varices showed a significantly higher H/L ratio compared with that in cirrhotic patients without esophageal varices (p<0.001). The H/L ratio also showed a significant difference (p<0.01) between Stage 1 and Stage 3 esophageal varices. Since there were many other patients with hepatocellular damage who had high H/L ratios similar to those in liver cirrhosis, the effect that hepatocellular damage has on the liver uptake of T1-201 is also considered. Our present data suggest that this noninvasive method seems to be useful in evaluating portal-to-systemic shunting.

  18. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE) at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
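
    Of the compared approaches, single kNN imputation is the easiest to sketch; a minimal example with scikit-learn's KNNImputer on a toy trait matrix (the values and neighbor count are illustrative, and the MICE and species-mean variants are not shown):

        import numpy as np
        from sklearn.impute import KNNImputer

        # rows = trees, columns = five traits; NaN marks a gap to fill
        traits = np.array([
            [0.8, 1.9, 22.0, 12.5, 0.55],
            [np.nan, 2.1, 25.0, 11.0, 0.60],
            [0.7, np.nan, 20.0, 13.0, np.nan],
            [0.9, 1.8, np.nan, 12.0, 0.58],
        ])
        filled = KNNImputer(n_neighbors=2).fit_transform(traits)
        print(filled)  # gaps replaced by means of the two nearest rows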

  19. A simple method for enema administration in one-day-old broiler chicks.

    PubMed

    Marietto-Gonçalves, Guilherme Augusto; Grandi, Fabrizio; Andreatti Filho, Raphael Lucio

    2012-01-01

    The present study aimed to describe a simple technique for enema administration in one-day-old broiler chicks. For this purpose we used 455 unsexed healthy birds divided into four groups and submitted to three experimental protocols: in the first, we measured the total length of the large intestine in order to establish a safe insertion distance for the probe; in the second, we evaluated the maximum compliance of the large intestine and the diffusion range; finally, based on the results obtained, we tested the hypothesis in 400 birds in order to standardize the method. Intrarectal administration of 0.2 mL of enema solution with a stainless steel BD-10 gavage probe inserted to a depth of 1.5 cm proved to be a reliable method in one-day-old broiler chicks.

  20. Updating the 2001 National Land Cover Database Impervious Surface Products to 2006 using Landsat Imagery Change Detection Methods

    USGS Publications Warehouse

    Xian, G.; Homer, C.

    2010-01-01

    A prototype method was developed to update the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 to a nominal date of 2006. NLCD 2001 is widely used as a baseline for national land cover and impervious cover conditions. To enable updating of this database in an optimal manner, the methods are designed to be applied scene by scene to individual Landsat scenes. Using conservative change thresholds based on land cover classes, areas of change and no-change were segregated from change vectors calculated from normalized Landsat scenes from 2001 and 2006. By sampling from NLCD 2001 impervious surface in unchanged areas, impervious surface predictions were estimated for changed areas within an urban extent defined by a companion land cover classification. Methods were developed and tested for national application across six study sites containing a variety of urban impervious surface. Results show that the vast majority of impervious surface change associated with urban development was captured, with overall RMSE from 6.86 to 13.12% for these areas. Changes in urban development density were also evaluated by characterizing the categories of change by impervious surface percentile. This prototype method provides a relatively low-cost, flexible approach to generating updated impervious surface estimates using NLCD 2001 as the baseline. © 2010 Elsevier Inc.

  1. Modelos para la Unificacion de Conceptos, Metodos y Procedimientos Administrativos (Guidelines for Uniform Administrative Concepts, Methods, and Procedures).

    ERIC Educational Resources Information Center

    Serrano, Jorge A., Ed.

    These documents, discussed and approved during the first meeting of the university administrators affiliated with the Federation of Private Universities of Central America and Panama (FUPAC), seek to establish uniform administrative concepts, methods, and procedures, particularly with respect to budgetary matters. The documents define relevant…

  2. La Administradora: A Mixed Methods Study of the Resilience of Mexican American Women Administrators at Hispanic Serving Institutions

    ERIC Educational Resources Information Center

    Sanchez-Zamora, Sabrina Suzanne

    2013-01-01

    This mixed methods study explored the resilience of Mexican American women administrators at Hispanic Serving Institutions (HSIs). The women administrators that were considered in this study included department chairs, deans, and vice presidents in a four-year public HSI. There is an underrepresentation of Mexican American women in higher…

  3. MapReduce Implementation of a Hybrid Spectral Library-Database Search Method for Large-Scale Peptide Identification

    SciTech Connect

    Kalyanaraman, Anantharaman; Cannon, William R.; Latt, Benjamin K.; Baxter, Douglas J.

    2011-11-01

    A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours, for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs.
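
    A toy sketch of the map/reduce decomposition only (MR-MSPolygraph's actual scoring function and Hadoop plumbing are not reproduced): map emits scored spectrum-candidate pairs, and reduce keeps the best match per spectrum:

        from functools import reduce

        def score(spectrum, candidate):
            return -abs(spectrum - candidate)  # stand-in similarity score

        def mapper(spectrum, library):
            return [(spectrum, (c, score(spectrum, c))) for c in library]

        def reducer(best, pair):
            spectrum, (candidate, s) = pair
            if spectrum not in best or s > best[spectrum][1]:
                best[spectrum] = (candidate, s)
            return best

        spectra, library = [1.0, 2.2], [0.9, 2.0, 3.1]
        pairs = [p for s in spectra for p in mapper(s, library)]
        print(reduce(reducer, pairs, {}))  # best library match per spectrum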

  4. Universal data-based method for reconstructing complex networks with binary-state dynamics

    NASA Astrophysics Data System (ADS)

    Li, Jingwen; Shen, Zhesi; Wang, Wen-Xu; Grebogi, Celso; Lai, Ying-Cheng

    2017-03-01

    To understand, predict, and control complex networked systems, a prerequisite is to reconstruct the network structure from observable data. Despite recent progress in network reconstruction, binary-state dynamics that are ubiquitous in nature, technology, and society still present an outstanding challenge in this field. Here we offer a framework for reconstructing complex networks with binary-state dynamics by developing a universal data-based linearization approach that is applicable to systems with linear, nonlinear, discontinuous, or stochastic dynamics governed by monotonic functions. The linearization procedure enables us to convert the network reconstruction into a sparse signal reconstruction problem that can be resolved through convex optimization. We demonstrate generally high reconstruction accuracy for a number of complex networks associated with distinct binary-state dynamics from using binary data contaminated by noise and missing data. Our framework is completely data driven, efficient, and robust, and does not require any a priori knowledge about the detailed dynamical process on the network. The framework represents a general paradigm for reconstructing, understanding, and exploiting complex networked systems with binary-state dynamics.
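
    The pivotal step the abstract describes is converting reconstruction into a sparse signal recovery problem solved by convex optimization. As a generic, hedged illustration of that conversion (not the paper's linearization procedure), the following recovers a sparse connection vector from fewer observations than unknowns using L1-penalized regression:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n_nodes, n_obs = 50, 40                  # fewer observations than unknowns
    x_true = np.zeros(n_nodes)               # sparse "who connects to node i" vector
    x_true[rng.choice(n_nodes, 5, replace=False)] = rng.normal(0, 1, 5)

    A = rng.normal(0, 1, (n_obs, n_nodes))   # stand-in for the linearized data matrix
    y = A @ x_true + rng.normal(0, 0.01, n_obs)

    model = Lasso(alpha=0.02).fit(A, y)      # convex L1 problem, as in compressed sensing
    print("recovered:", sorted(np.flatnonzero(np.abs(model.coef_) > 0.05)))
    print("true:     ", sorted(np.flatnonzero(x_true)))
    ```

    A network reconstruction of this kind is typically assembled by solving one such sparse recovery per node, each yielding that node's incoming connections.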

  5. Evaluation of current methods used to analyze the expression profiles of ABC transporters yields an improved drug-discovery database

    PubMed Central

    Orina, Josiah N.; Calcagno, Anna Maria; Wu, Chung-Pu; Varma, Sudhir; Shih, Joanna; Lin, Min; Eichler, Gabriel; Weinstein, John N.; Pommier, Yves; Ambudkar, Suresh V.; Gottesman, Michael M.; Gillet, Jean-Pierre

    2009-01-01

    The development of multidrug resistance (MDR) to chemotherapy remains a major challenge in the treatment of cancer. Resistance exists against every effective anti-cancer drug and can develop by multiple mechanisms. These mechanisms can act individually or synergistically, leading to MDR, in which the cell becomes resistant to a variety of structurally and mechanistically unrelated drugs in addition to the drug initially administered. Although extensive work has been done to characterize MDR mechanisms in vitro, the translation of this knowledge to the clinic has not been successful. Therefore, identifying genes and mechanisms critical to the development of MDR in vivo, and establishing a reliable method for analyzing highly homologous genes from small amounts of tissue, is fundamental to achieving any significant enhancement in our understanding of multidrug resistance mechanisms and could lead to treatments designed to circumvent it. In this study, we use a previously established database that allows the identification of lead compounds, in the early stages of drug discovery, that are not ABC transporter substrates. We believe this can serve as a model for appraising the accuracy and sensitivity of current methods used to analyze the expression profiles of ABC transporters. We found two platforms to be superior methods for the analysis of expression profiles of highly homologous gene superfamilies. This study also led to an improved database by revealing previously unidentified substrates for ABCB1, ABCC1 and ABCG2, transporters that contribute to multidrug resistance. PMID:19584229

  6. AMS method validation for quantitation in pharmacokinetic studies with concomitant extravascular and intravenous administration.

    PubMed

    Lappin, Graham; Seymour, Mark; Young, Graeme; Higton, David; Hill, Howard M

    2011-02-01

    A technique has emerged in the past few years that enables a drug's intravenous pharmacokinetics to be readily obtained in humans without having to conduct extensive toxicology studies by this route of administration or expend protracted effort on formulation. The technique involves the intravenous administration of a low dose of (14)C-labelled drug (termed a tracer dose) concomitantly with a non-labelled extravascular dose given at therapeutic levels. Plasma samples collected over time are analysed to determine the total parent drug concentration by LC-MS (which essentially measures that arising from the oral dose) and by LC followed by accelerator mass spectrometry (AMS) to determine the (14)C-drug concentration (i.e., that arising from the intravenous dose). There are currently no published accounts of how the principles of bioanalytical validation might be applied to intravenous studies using AMS as an analytical technique. The authors describe the primary elements of AMS when used with LC separation and how this off-line technique differs from LC-MS. They then discuss how the principles of bioanalytical validation might be applied to determine the selectivity, accuracy, precision and stability of methods involving LC followed by AMS analysis.

  7. Facilitators and Barriers to Safe Medication Administration to Hospital Inpatients: A Mixed Methods Study of Nurses’ Medication Administration Processes and Systems (the MAPS Study)

    PubMed Central

    McLeod, Monsey; Barber, Nicholas; Franklin, Bryony Dean

    2015-01-01

    Context Research has documented the problem of medication administration errors and their causes. However, little is known about how nurses administer medications safely or how existing systems facilitate or hinder medication administration; this represents a missed opportunity for the implementation of practical, effective, and low-cost strategies to increase safety. Aim To identify system factors that facilitate and/or hinder successful medication administration, focusing on three inter-related areas: nurse practices and workarounds, workflow, and interruptions and distractions. Methods We used a mixed-methods ethnographic approach involving observational fieldwork, field notes, participant narratives, photographs, and spaghetti diagrams to identify system factors that facilitate and/or hinder successful medication administration in three inpatient wards, each from a different English NHS trust. We supplemented this with quantitative data on interruptions and distractions among other established medication safety measures. Findings Overall, 43 nurses on 56 drug rounds were observed. We identified a median of 5.5 interruptions and 9.6 distractions per hour. We identified three interlinked themes that facilitated successful medication administration in some situations but acted as barriers in others: (1) system configurations and features, (2) behaviour types among nurses, and (3) patient interactions. Some system configurations and features acted as a physical constraint for parts of the drug round; however, some system effects were partly dependent on nurses’ inherent behaviour, which we grouped into ‘task focused’ and ‘patient-interaction focused’. The former contributed to a more streamlined workflow with fewer interruptions, while the latter seemed to empower patients to act as a defence barrier against medication errors by being: (1) an active resource of information, (2) a passive information resource, and/or (3) a

  8. Report: Office of Research and Development Needs to Improve Its Method of Measuring Administrative Savings

    EPA Pesticide Factsheets

    Report #11-P-0333, July 14, 2011. ORD’s efforts to reduce its administrative costs are noteworthy, but ORD needs to improve its measurement mechanism for assessing the effectiveness of its initiatives to reduce administrative costs.

  9. Description of two waterborne disease outbreaks in France: a comparative study with data from cohort studies and from health administrative databases.

    PubMed

    Mouly, D; Van Cauteren, D; Vincent, N; Vaissiere, E; Beaudeau, P; Ducrot, C; Gallay, A

    2016-02-01

    Waterborne disease outbreaks (WBDO) of acute gastrointestinal illness (AGI) are a public health concern in France. Their occurrence is probably underestimated due to the lack of a specific surveillance system. The French health insurance database provides an interesting opportunity to improve the detection of these events. A specific algorithm to identify AGI cases from drug payment reimbursement data in the health insurance database has been previously developed. The purpose of our comparative study was to retrospectively assess the ability of the health insurance data to describe WBDO. Data from the health insurance database were compared with data from cohort studies conducted during two WBDO in 2010 and 2012. The temporal distribution of cases, the day of the peak and the duration of the epidemic, as measured using the health insurance data, were similar to the data from one of the two cohort studies. However, the health insurance data accounted for only 54 cases, compared with the 252 cases estimated in the cohort study. The accuracy of using health insurance data to describe WBDO depends on the medical consultation rate in the impacted population; because not everyone who falls ill consults a physician, data analysis underestimates the total number of AGI cases. This data source can nevertheless be considered for the development of a WBDO detection system in France, given its ability to describe an epidemic signal.

  10. Integrated approach for designing medical decision support systems with knowledge extracted from clinical databases by statistical methods.

    PubMed Central

    Krusinska, E.; Babic, A.; Chowdhury, S.; Wigertz, O.; Bodemar, G.; Mathiesen, U.

    1991-01-01

    In clinical research, data are often analysed by a particular method without prior examination of the quality or semantic content that could link the clinical database to the data-analytical (e.g., statistical) procedures. In order to avoid the bias caused by this situation, we propose that the analysis of medical data should be divided into two main steps. In the first, we concentrate on conducting the quality, semantic and structure analyses. In the second step, our aim is to build an appropriate dictionary of data analysis methods for further knowledge extraction. Methods such as robust statistical techniques, procedures for mixed continuous and discrete data, the fuzzy linguistic approach, machine learning and neural networks can be included. The results may be evaluated both by using test samples and by applying other relevant data-analytical techniques to the particular problem under study. PMID:1807621

  11. Business Architecture Development at Public Administration - Insights from Government EA Method Engineering Project in Finland

    NASA Astrophysics Data System (ADS)

    Valtonen, Katariina; Leppänen, Mauri

    Governments worldwide are concerned with the efficient production of services to customers. To improve the quality of services and to make service production more efficient, information and communication technology (ICT) is widely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning that embraces issues from the strategic to the technological level. In this planning, the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to the large number of stakeholders, the wide set of customers, and the solid, hierarchical structures of organizations. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and relate the issues that emerged to the current e-government literature. We offer insights into, and suggestions for, government BA and its development.

  12. Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases

    PubMed Central

    Elgendi, Mohamed

    2013-01-01

    Current state-of-the-art automatic QRS detection methods show high robustness and almost negligible error rates. In return, these methods are usually based on machine-learning approaches that require substantial computational resources. However, simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. Here, a typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design. PMID:24066054
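
    As described, the detector rests on two moving averages, one sized to a QRS complex and one to a full beat. Below is a minimal sketch of that idea; the window lengths (97 ms and 611 ms) and the offset term follow commonly cited choices for this family of detectors, and the qrs_blocks helper with its defaults is this sketch's own naming, not necessarily the paper's exact calibration.

    ```python
    import numpy as np

    def qrs_blocks(ecg, fs, w_qrs=0.097, w_beat=0.611, beta=0.08):
        """Two-moving-average QRS block detector (simplified sketch)."""
        energy = ecg.astype(float) ** 2                    # crude energy signal
        k_qrs = max(1, int(w_qrs * fs))                    # QRS-width window (samples)
        k_beat = max(1, int(w_beat * fs))                  # beat-width window (samples)
        ma = lambda x, k: np.convolve(x, np.ones(k) / k, mode="same")
        threshold = ma(energy, k_beat) + beta * energy.mean()
        return ma(energy, k_qrs) > threshold               # True inside candidate QRS blocks
    ```

    Samples where the short-window average exceeds the long-window average plus a small offset form candidate QRS blocks; a peak search within each block would then yield the R-peak locations.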

  13. Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database

    PubMed Central

    Turner, Erick H.; Knoepflmacher, Daniel; Shapley, Lee

    2012-01-01

    Background Publication bias compromises the validity of evidence-based medicine, yet a growing body of research shows that this problem is widespread. Efficacy data from drug regulatory agencies, e.g., the US Food and Drug Administration (FDA), can serve as a benchmark or control against which data in journal articles can be checked. Thus one may determine whether publication bias is present and quantify the extent to which it inflates apparent drug efficacy. Methods and Findings FDA Drug Approval Packages for eight second-generation antipsychotics—aripiprazole, iloperidone, olanzapine, paliperidone, quetiapine, risperidone, risperidone long-acting injection (risperidone LAI), and ziprasidone—were used to identify a cohort of 24 FDA-registered premarketing trials. The results of these trials according to the FDA were compared with the results conveyed in corresponding journal articles. The relationship between study outcome and publication status was examined, and effect sizes derived from the two data sources were compared. Among the 24 FDA-registered trials, four (17%) were unpublished. Of these, three failed to show that the study drug had a statistical advantage over placebo, and one showed the study drug was statistically inferior to the active comparator. Among the 20 published trials, the five that were not positive, according to the FDA, showed some evidence of outcome reporting bias. However, the association between trial outcome and publication status did not reach statistical significance. Further, the apparent increase in the effect size point estimate due to publication bias was modest (8%) and not statistically significant. On the other hand, the effect size for unpublished trials (0.23, 95% confidence interval 0.07 to 0.39) was less than half that for the published trials (0.47, 95% confidence interval 0.40 to 0.54), a difference that was significant. Conclusions The magnitude of publication bias found for antipsychotics was less than that found

  14. Interference with the Jaffé Method for Creatinine Following 5-Aminolevulinic Acid Administration

    PubMed Central

    Quon, Harry; Grossman, Craig E.; King, Rebecca L.; Putt, Mary; Donaldson, Keri; Kricka, Larry; Finlay, Jarod; Malloy, Kelly; Cengel, Keith A.; Busch, Theresa M.

    2013-01-01

    Background The photosensitizer pro-drug 5-aminolevulinic acid (5-ALA) has been administered systemically for photodynamic therapy. Although several toxicities have been reported, nephrotoxicity has never been observed. Materials and Methods Patients with head and neck mucosal dysplasia were treated in a phase 1 study of escalating light doses in combination with 60 mg/kg of oral 5-ALA. Serum creatinine was measured with the modified Jaffé method or an enzymatic method in the first 24 hours after 5-ALA. Interference by 5-ALA, as well as by its photosensitizing product protoporphyrin IX (PpIX), was assessed. Results Among 11 subjects enrolled to date, 9 of 11 had blood chemistries collected within the first 5 hours, with 7 demonstrating significant grade 3 creatinine elevations (p=0.030). There was no additional evidence of compromised renal function or increased PDT-induced mucositis. Creatinine levels measured by the Jaffé assay increased linearly as a function of the ex-vivo addition of ALA (p<.0001). The exogenous addition of PpIX did not alter creatinine levels. ALA did not interfere with creatinine levels as measured by an enzymatic assay. A total of 4 of the 11 subjects had creatinine levels prospectively measured by both the Jaffé and the enzymatic assays. Only the Jaffé method demonstrated significant elevations as a function of time after ALA administration. Conclusions The transient increase in creatinine after systemic ALA can be attributed, in part if not entirely, to interference of ALA in the Jaffé reaction. Alternative assays should be employed in situations calling for monitoring of kidney function after systemic ALA. PMID:21112550

  15. A new database of source time functions (STFs) extracted from the SCARDEC method

    NASA Astrophysics Data System (ADS)

    Vallée, Martin; Douet, Vincent

    2016-08-01

    The SCARDEC method (Vallée et al., 2011) offers natural access to earthquake source time functions (STFs), together with first-order earthquake source parameters (seismic moment, depth and focal mechanism). This article first presents new approaches and related implementations for automatically providing broadband STFs with the SCARDEC method, for both moderate and very large earthquakes. The updated method has been applied to all earthquakes above magnitude 5.8 contained in the NEIC-PDE catalog since 1992, providing a new consistent catalog of source parameters associated with STFs. This is already a large catalog (2782 events as of 2014/12/31) that we plan to update on a regular basis. It is made available through a web interface whose functionalities are described here.

  16. Comparative study of multimodal intra-subject image registration methods on a publicly available database

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Ghayoor, Ali; Johnson, Hans J.; Sonka, Milan

    2016-03-01

    This work reports on a comparative study between five manual and automated methods for intra-subject pair-wise registration of images from different modalities. The study includes a variety of inter-modal image registrations (MR-CT, PET-CT, PET-MR) utilizing different methods including two manual point-based techniques using rigid and similarity transformations, one automated point-based approach based on Iterative Closest Point (ICP) algorithm, and two automated intensity-based methods using mutual information (MI) and normalized mutual information (NMI). These techniques were employed for inter-modal registration of brain images of 9 subjects from a publicly available dataset, and the results were evaluated qualitatively via checkerboard images and quantitatively using root mean square error and MI criteria. In addition, for each inter-modal registration, a paired t-test was performed on the quantitative results in order to find any significant difference between the results of the studied registration techniques.
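
    Mutual information, used both to drive two of the automated registrations and as an evaluation criterion here, reduces to a joint-histogram computation. A minimal reference implementation (the mutual_information helper and its bin count are assumptions of this sketch, not the study's code) might be:

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Histogram-based mutual information between two co-registered images."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()                  # joint intensity distribution
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
        nz = pxy > 0                               # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
    ```

    Normalized mutual information, the other intensity criterion compared, is typically formed from the same histograms as (H(A) + H(B)) / H(A, B).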

  17. Open Rotor Tone Shielding Methods for System Noise Assessments Using Multiple Databases

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Thomas, Russell H.; Lopes, Leonard V.; Burley, Casey L.; Van Zante, Dale E.

    2014-01-01

    Advanced aircraft designs such as the hybrid wing body, in conjunction with open rotor engines, may allow for significant improvements in the environmental impact of aviation. System noise assessments allow for the prediction of the aircraft noise of such designs while they are still in the conceptual phase. Due to the significant computational requirements of current prediction methods, these assessments still rely on experimental data to account for the interaction of the open rotor tones with the hybrid wing body airframe. Recently, multiple aircraft system noise assessments have been conducted for hybrid wing body designs with open rotor engines. These assessments utilized measured benchmark data from a Propulsion Airframe Aeroacoustic interaction effects test. The measured data demonstrated airframe shielding of open rotor tonal and broadband noise with legacy F7/A7 open rotor blades. Two methods are proposed for improving the use of these data on general open rotor designs in a system noise assessment. The first, direct difference, is a simple octave-band subtraction which does not account for tone distribution within the rotor acoustic signal. The second, tone matching, is a higher-fidelity process incorporating additional physical aspects of the problem, where isolated rotor tones are matched by their directivity to determine tone-by-tone shielding. A case study is conducted with the two methods to assess how well each reproduces the measured data and to identify the merits of each. Both methods perform similarly at the system level and successfully approach the experimental data for the case study. The tone matching method provides additional tools for assessing the quality of the match to the data set. Additionally, a potential path to improve the tone matching method is provided.
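
    The direct-difference method, as described, amounts to subtracting installed (shielded) from isolated octave-band levels to form a shielding correction that is then applied to a new rotor's spectrum. A sketch with entirely illustrative numbers follows; none of these values come from the test data.

    ```python
    import numpy as np

    bands_hz  = np.array([125, 250, 500, 1000, 2000, 4000])       # octave-band centers
    isolated  = np.array([92.0, 95.0, 97.0, 94.0, 90.0, 85.0])    # dB, isolated rotor
    shielded  = np.array([90.0, 91.0, 90.0, 85.0, 80.0, 76.0])    # dB, installed rotor
    new_rotor = np.array([88.0, 93.0, 96.0, 95.0, 91.0, 86.0])    # dB, design under study

    suppression = isolated - shielded       # octave-band shielding, dB
    assessed = new_rotor - suppression      # shielded estimate for the new design
    print(dict(zip(bands_hz.tolist(), np.round(assessed, 1).tolist())))
    ```

    The tone-matching alternative operates tone by tone instead, pairing isolated rotor tones to shielded ones by directivity before differencing, which preserves the tone distribution that octave-band subtraction discards.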

  18. Djeen (Database for Joomla!’s Extensible Engine): a research information management system for flexible multi-technology project administration

    PubMed Central

    2013-01-01

    Background With the advance of post-genomic technologies, the need for tools to manage large-scale data in biology becomes more pressing. This involves annotating and storing data securely, as well as granting permissions flexibly across several technologies (all array types, flow cytometry, proteomics) for collaborative work and data sharing. This task is not easily achieved with most systems available today. Findings We developed Djeen (Database for Joomla!’s Extensible Engine), a new Research Information Management System (RIMS) for collaborative projects. Djeen is a user-friendly application designed to streamline data storage and annotation collaboratively. Its database model, kept simple, is compliant with most technologies and allows storing and managing heterogeneous data within the same system. Advanced permissions are managed through different roles. Templates allow Minimum Information (MI) compliance. Conclusion Djeen allows managing projects associated with heterogeneous data types while enforcing annotation integrity and minimum information. Projects are managed within a hierarchy, and user permissions are finely grained for each project, user and group. Djeen Component source code (version 1.5.1) and installation documentation are available under the CeCILL license from http://sourceforge.net/projects/djeen/files and supplementary material. PMID:23742665

  19. A Comparison of Bibliographic Instruction Methods on CD-ROM Databases.

    ERIC Educational Resources Information Center

    Davis, Dorothy F.

    1993-01-01

    Describes a study of four methods used to instruct college students on searching PsycLIT on CD-ROM: (1) lecture/demonstration; (2) lecture/demonstration using a liquid crystal display (LCD) panel; (3) video; and (4) a computer-based tutorial. Performance data are analyzed, and factors to consider when developing a CD-ROM bibliographic instruction…

  20. Petroleum recovery: Reservoir engineering and recovery methods. (Latest citations from the NTIS bibliographic database). Published Search

    SciTech Connect

    Not Available

    1994-09-01

    The bibliography contains citations concerning field projects and supporting research on petroleum recovery and reservoir technology. Recovery agents and methods are discussed including responsive copolymers, microemulsions, surfactants, steam injection, gas injection, miscible displacement, and thermal processes. Reservoir modeling, simulation, and performance are examined. (Contains 250 citations and includes a subject term index and title list.)

  1. The Corpus: A Data-Based Device for Teaching Field Methods.

    ERIC Educational Resources Information Center

    Stoddart, Kenneth

    1987-01-01

    Notes that one-semester field methods courses in sociology often lack adequate time for students to learn appropriate techniques and still collect and report their data. Describes how undergraduate students bypass this problem by using multiple observations of a single event to quickly form a corpus of ethnographic data. (JDH)

  2. Timing, method and discontinuation of hydrocortisone administration for septic shock patients

    PubMed Central

    Ibarra-Estrada, Miguel A; Chávez-Peña, Quetzalcóatl; Reynoso-Estrella, Claudia I; Rios-Zermeño, Jorge; Aguilera-González, Pável E; García-Soto, Miguel A; Aguirre-Avalos, Guadalupe

    2017-01-01

    AIM To characterize the prescribing patterns for hydrocortisone for patients with septic shock and perform an exploratory analysis in order to identify the variables associated with better outcomes. METHODS This prospective cohort study included 59 patients with septic shock who received stress-dose hydrocortisone. It was performed at 2 critical care units in academic hospitals from June 1st, 2015, to July 31st, 2016. Demographic data, comorbidities, medical management details, adverse effects related to corticosteroids, and outcomes were collected after the critical care physician indicated initiation of hydrocortisone. Univariate comparison between continuous and bolus administration of hydrocortisone was performed, including multivariate analysis, as well as Kaplan-Meier analysis to compare the proportion of shock reversal at 7 d after presentation. Receiver operating characteristic (ROC) curves determined the best cut-off criteria for initiation of hydrocortisone associated with the highest probability of shock reversal. We addressed the effects of the taper strategy for discontinuation of hydrocortisone, noting risk of shock relapse and adverse effects. RESULTS All-cause 30-d mortality was 42%. Hydrocortisone was administered as a continuous infusion in 54.2% of patients; time to reversal of shock was 49 h longer in patients who were given a bolus administration [59 h (range, 47.5-90.5) vs 108 h (range, 63.2-189); P = 0.001]. The maximal dose of norepinephrine after initiation of hydrocortisone was lower in patients on continuous infusion [0.19 μg/kg per minute (range, 0.11-0.28 μg)] compared with patients who were given bolus [0.34 μg/kg per minute (range, 0.16-0.49); P = 0.004]. Kaplan-Meier analysis revealed a higher proportion of shock reversal at 7 d in patients with continuous infusion compared to those given bolus (83% vs 63%; P = 0.004). There was a good correlation between time to initiation of hydrocortisone and time to reversal of shock (r = 0

  3. A new database of Source Time Functions (STFs) extracted from the SCARDEC method

    NASA Astrophysics Data System (ADS)

    Vallée, Martin; Douet, Vincent

    2016-04-01

    The SCARDEC method (Vallée et al., 2011) offers natural access to earthquake source time functions (STFs), together with first-order earthquake source parameters (seismic moment, depth and focal mechanism). We first present here new approaches and related implementations for automatically providing broadband STFs with the SCARDEC method, both for moderate (down to magnitude 5.8) and very large earthquakes. The updated method has been applied to all such earthquakes since 1992, providing a new consistent catalog of source parameters associated with STFs. Applications are expected to be various, as STFs offer quantitative information on the source process, supporting fundamental research on earthquake mechanics as well as more applied studies related to seismic hazard. They can also be seen as a tool for Earth structure analyses, where the excitation of the medium at the source must be known. The catalog now contains 2889 events (including earthquakes up to 2014/12/31), and we plan to update it on a regular basis. It is made available through a web interface whose functionalities are described here.

  4. A robust and quantitative method for tracking liposome contents after intravenous administration.

    PubMed

    Kohli, Aditya G; Kieler-Ferguson, Heidi M; Chan, Darren; Szoka, Francis C

    2014-02-28

    We introduce a method for tracking the rate and extent of delivery of liposome contents in vivo based on encapsulation of 4-methylumbelliferyl phosphate (MU-P), a profluorophore of 4-methylumbelliferone (MU). MU-P is rapidly dephosphorylated by endogenous phosphatases in vivo to form MU after leakage from the liposome. The change in fluorescence spectra when MU-P is converted to MU allows for quantification of entrapped (MU-P) and released (MU) liposome contents by fluorescence or by a sensitive high performance liquid chromatography assay. We define the "cellular availability" of an agent encapsulated in a liposome as the ratio of the amount of released agent in the tissue to the total amount of agent in the tissue; this parameter quantifies the fraction of drug available for therapy. The advantage of this method over existing technologies is the ability to decouple the signals of entrapped and released liposome contents. We validate this method by tracking the circulation and tissue distribution of MU-P loaded liposomes after intravenous administration. We use this assay to compare the cellular availability of liposomes composed of engineered phosphocholine lipids with covalently attached cholesterol, sterol-modified lipids (SML), to liposomes composed of conventional phospholipids and cholesterol. The SML liposomes have similar pharmacokinetic and biodistribution patterns as conventional phospholipid-cholesterol liposomes but a slower rate of contents delivery into the tissue. Thus, MU-P enables the tracking of the rate and extent of liposome contents release in tissues and should facilitate a better understanding of the pharmacodynamics of liposome-encapsulated drugs in animals.
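
    The cellular-availability definition in this abstract is a simple ratio and can be written down directly. The helper name below is hypothetical; the inputs are the tissue amounts of released MU and still-entrapped MU-P (in matching units), e.g. from the HPLC assay.

    ```python
    def cellular_availability(mu_released, mu_p_entrapped):
        """Released agent / total agent in the tissue, per the definition above."""
        return mu_released / (mu_released + mu_p_entrapped)

    # Illustrative numbers: 2.4 units released, 7.6 still entrapped -> 0.24
    print(cellular_availability(2.4, 7.6))
    ```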

  5. A robust and quantitative method for tracking liposome contents after intravenous administration

    PubMed Central

    Chan, Darren; Szoka, Francis C.

    2014-01-01

    We introduce a method for tracking the rate and extent of delivery of liposome contents in vivo based on encapsulation of 4-methylumbelliferyl phosphate (MU-P), a profluorophore of 4-methylumbelliferone (MU). MU-P is rapidly dephosphorylated by endogenous phosphatases in vivo to form MU after leakage from the liposome. The change in fluorescence spectra when MU-P is converted to MU allows for quantification of entrapped (MU-P) and released (MU) liposome contents by fluorescence or by a sensitive high performance liquid chromatography assay. We define the “cellular availability” of an agent encapsulated in a liposome as the ratio of the amount of released agent in the tissue to the total amount of agent in the tissue; this parameter quantifies the fraction of drug available for therapy. The advantage of this method over existing technologies is the ability to decouple the signals of entrapped and released liposome contents. We validate this method by tracking the circulation and tissue distribution of MU-P loaded liposomes after intravenous administration. We use this assay to compare the cellular availability of liposomes composed of engineered phosphocholine lipids with covalently attached cholesterol, sterol-modified lipids (SML), to liposomes composed of conventional phospholipids and cholesterol. The SML liposomes have similar pharmacokinetic and biodistribution patterns as conventional phospholipid-cholesterol liposomes but a slower rate of contents delivery into the tissue. Thus, MU-P enables the tracking of the rate and extent of liposome contents release in tissues and should facilitate a better understanding of the pharmacodynamics of liposome-encapsulated drugs in animals. PMID:24368300

  6. A method for including protein flexibility in protein-ligand docking: improving tools for database mining and virtual screening.

    PubMed

    Broughton, H B

    2000-06-01

    Second-generation methods for docking ligands into their biological receptors, such as FLOG, provide for flexibility of the ligand but not of the receptor. Molecular dynamics based methods, such as free energy perturbation, account for flexibility, solvent effects, etc., but are very time consuming. We combined the use of statistical analysis of conformational samples from short-run protein molecular dynamics with grid-based docking protocols and demonstrated improved performance in two test cases. Our statistical analysis explores the importance of the average strength of a potential interaction with the biological target and optionally applies a weighting depending on the variability in the strength of the interaction seen during dynamics simulation. Using these methods, we improved the number of known ligands recovered in the top-ranked 10% of a database of drug-like molecules, in searches based on the three-dimensional structure of the protein. These methods are able to match the ability of manual docking to assess likely inactivity on steric grounds and indeed to rank-order ligands from a homologous series of cyclooxygenase-2 inhibitors with good correlation to their true activity. Furthermore, these methods reduce the need for human intervention in setting up molecular docking experiments.

  7. Assessment Center Methods in Educational Administration: Past, Present, and Future. UCEA Monograph Series.

    ERIC Educational Resources Information Center

    Wendel, Frederick C.; Sybouts, Ward

    Issues related to the assessment and induction (preparation, recruitment, and selection) of educational administrators are of critical importance because of the never-ending flow of entrants into administration, and because of the complex variables associated with assessment and selection criteria. Accordingly, this monograph traces the…

  8. A Heuristic Method of Optimal Generalized Hypercube Encoding for Pictorial Databases.

    DTIC Science & Technology

    1980-01-15

    Given m, the method of choosing m-1 handles that will generate optimal GHm codes for a point set S is presented. To evaluate the approach, GHm-encoded tuples were generated and their number counted for point sets of different sizes and spaces of different dimensions, in order to find the optimal GHm encoding.

  9. Expanded image database of pistachio x-ray images and classification by conventional methods

    NASA Astrophysics Data System (ADS)

    Keagy, Pamela M.; Schatzki, Thomas F.; Le, Lan Chau; Casasent, David P.; Weber, David

    1996-12-01

    In order to develop sorting methods for insect-damaged pistachio nuts, a large data set of pistachio x-ray images (6,759 nuts) was created. Both film and linescan sensor images were acquired, nuts dissected and internal conditions coded using the U.S. Grade standards and definitions for pistachios. A subset of 1,199 good and 686 insect-damaged nuts was used to calculate and test discriminant functions. Statistical parameters of image histograms were evaluated for inclusion by forward stepwise discrimination. Using three variables in the discriminant function, 89% of test set nuts were correctly identified. Comparable data for six human subjects ranged from 67 to 92%. If the loss of good nuts is held to 1% by requiring a high probability before discarding a nut as insect damaged, approximately half of the insect damage present in clean pistachio nuts may be detected and removed by x-ray inspection.
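
    The classifier described is a discriminant function over image-histogram statistics chosen by forward stepwise selection. A modern analogue can be sketched with scikit-learn's sequential feature selector wrapped around linear discriminant analysis; the synthetic features and all names here are stand-ins, not the study's data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for histogram statistics of nut x-ray images
    rng = np.random.default_rng(3)
    X = rng.normal(size=(600, 10))
    y = (X[:, 0] + 0.8 * X[:, 3] - 0.5 * X[:, 7] + rng.normal(0, 1, 600)) > 0

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    lda = LinearDiscriminantAnalysis()
    sel = SequentialFeatureSelector(lda, n_features_to_select=3).fit(X_tr, y_tr)
    chosen = sel.get_support(indices=True)            # forward-selected features
    lda.fit(X_tr[:, chosen], y_tr)
    print("features:", chosen, "accuracy:", lda.score(X_te[:, chosen], y_te))
    ```

    Holding the loss of good nuts to 1%, as the abstract describes, corresponds to thresholding the discriminant's posterior probability rather than using the default decision boundary.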

  10. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan

    2002-01-01

    Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand et al., 1995; Yang et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.

  11. Superovulation with a single administration of FSH in aluminum hydroxide gel: a novel superovulation method for cattle

    PubMed Central

    KIMURA, Koji

    2016-01-01

    Superovulation (SOV) is a necessary technique to produce large numbers of embryos for embryo transfer. In conventional methods, follicle stimulating hormone (FSH) is administered to donor cattle twice daily for 3 to 4 days. As this method is labor intensive and stressful to the cattle, an improved method has long been desired. We previously developed a novel and simple SOV method, in which the intramuscular injection of a single dose of FSH in aluminum hydroxide gel (AH-gel) induced the growth of multiple follicles, ovulation and the production of multiple embryos. Here we show that AH-gel can efficiently adsorb FSH and release it effectively in the presence of BSA, a major interstitial protein. When a single intramuscular administration of the FSH and AH-gel mixture was given to cattle, multiple follicular growth, ovulation and embryo production were induced. However, the treatments caused indurations at the administration sites in the muscle. To reduce the muscle damage, we investigated alternative administration routes and different amounts of aluminum in the gel. By administering the FSH in AH-gel subcutaneously rather than intramuscularly, the amount of aluminum in the gel could be reduced, thus reducing the size of the induration. Moreover, repeated administrations of FSH with AH-gel did not affect the superovulatory response. These results indicate that a single administration of FSH with AH-gel is an effective, novel and practical method for SOV treatment. PMID:27396385

  12. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  13. Threshold detection for the generalized Pareto distribution: Review of representative methods and application to the NOAA NCDC daily rainfall database

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Mamalakis, Antonios; Puliga, Michelangelo; Deidda, Roberto

    2016-04-01

    In extreme excess modeling, one fits a generalized Pareto (GP) distribution to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as nonparametric methods that are intended to locate the changing point between extreme and nonextreme regions of the data, graphical methods where one studies the dependence of GP-related metrics on the threshold level u, and Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u that a GP distribution model is applicable. Here we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 overcentennial daily rainfall records from the NOAA-NCDC database. We find that nonparametric methods are generally not reliable, while methods that are based on GP asymptotic properties lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low; i.e., on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on preasymptotic properties of the GP distribution. For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results.
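
    Of the approaches reviewed, the graphical parameter-stability check is the easiest to illustrate: fit the GP to the excesses above a range of candidate thresholds and look for the level beyond which the shape estimate settles down. The sketch below does this on synthetic data with SciPy; the distribution parameters and threshold grid are invented for the example, not taken from the NOAA-NCDC analysis.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Synthetic "daily rainfall" with a low shape parameter (~0.15), as in the abstract
    rain = stats.genpareto.rvs(0.15, loc=0, scale=8, size=20000, random_state=rng)

    # Parameter-stability scan: shape estimates should level off near a suitable u
    for u in np.arange(2.0, 14.0, 2.0):
        excess = rain[rain > u] - u
        shape, _, scale = stats.genpareto.fit(excess, floc=0)
        print(f"u={u:5.1f}  n={excess.size:6d}  shape={shape:6.3f}  scale={scale:6.2f}")
    ```

    In practice one would plot the estimates with confidence bands; the 2-12 mm/d threshold range the study reports for daily rainfall is the kind of region such a scan is meant to reveal.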

  14. A curated gluten protein sequence database to support development of proteomics methods for determination of gluten in gluten-free foods.

    PubMed

    Bromilow, Sophie; Gethings, Lee A; Buckley, Mike; Bromley, Mike; Shewry, Peter R; Langridge, James I; Clare Mills, E N

    2017-04-03

    The unique physicochemical properties of wheat gluten enable a diverse range of food products to be manufactured. However, gluten triggers coeliac disease, a condition which is treated using a gluten-free diet. Analytical methods are required to confirm that foods are gluten-free, but current immunoassay-based methods can be unreliable, and proteomic methods offer an alternative. However, proteomic methods require comprehensive, well-annotated sequence databases, which are lacking for gluten. A manually curated database (GluPro V1.0) of gluten proteins, comprising 630 discrete unique full-length protein sequences, has been compiled. It is representative of the different types of gliadin and glutenin components found in gluten. An in silico comparison of their coeliac toxicity was undertaken by analysing the distribution of coeliac-toxic motifs. This demonstrated that whilst the α-gliadin proteins contained more toxic motifs, these were distributed across all gluten protein sub-types. Comparison of annotations observed using a discovery proteomics dataset acquired using ion mobility MS/MS showed that more reliable identifications were obtained using the GluPro V1.0 database than with the complete reviewed Viridiplantae database. This highlights the value of a curated sequence database specifically designed to support proteomic workflows and the development of methods to detect and quantify gluten.
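
    The in silico toxicity comparison reduces to counting known coeliac-toxic motifs along each sequence. A minimal sketch of that scan follows; the motif list holds well-known immunogenic gliadin fragments chosen for illustration rather than the study's curated motif set, and the sequence is a made-up α-gliadin-like fragment.

    ```python
    # Illustrative coeliac-toxic motifs (not the study's curated list)
    MOTIFS = ["PQPQLPY", "QQPFP", "PYPQPQ"]

    def motif_counts(seq):
        """Count overlapping occurrences of each motif in a protein sequence."""
        return {m: sum(seq.startswith(m, i) for i in range(len(seq))) for m in MOTIFS}

    fragment = "LQLQPFPQPQLPYPQPQLPYPQPQLPYPQPQPF"   # made-up gliadin-like sequence
    print(motif_counts(fragment))
    ```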

  15. Updating the 2001 National Land Cover Database land cover classification to 2006 by using Landsat imagery change detection methods

    USGS Publications Warehouse

    Xian, G.; Homer, C.; Fry, J.

    2009-01-01

    The recent release of the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001, which represents the nation's land cover status based on a nominal date of 2001, is widely used as a baseline for national land cover conditions. To enable the updating of this land cover information in a consistent and continuous manner, a prototype method was developed to update land cover by an individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season in 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, land cover classifications at the full NLCD resolution for 2006 areas of change were completed by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain several metropolitan areas including Seattle, Washington; San Diego, California; Sioux Falls, South Dakota; Jackson, Mississippi; and Manchester, New Hampshire. Results from the five study areas show that the vast majority of land cover change was captured and updated with overall land cover classification accuracies of 78.32%, 87.5%, 88.57%, 78.36%, and 83.33% for these areas. The method optimizes mapping efficiency and has the potential to provide users a flexible method to generate updated land cover at national and regional scales by using NLCD 2001 as the baseline. © 2009 Elsevier Inc.

  16. A three-dimensional cellular automata model coupled with finite element method and thermodynamic database for alloy solidification

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Qin, R. S.; Chen, D. F.

    2013-08-01

    A three-dimensional (3D) cellular automata (CA) model has been developed for the simulation of microstructure evolution in alloy solidification. The governing rule for the CA model is associated with the phase transition driving force which is obtained via a thermodynamic database. This determines the migration rate of the non-equilibrium solid-liquid (SL) interface and is calculated according to the local temperature and chemical composition. The curvature of the interface and the anisotropic property of the surface energy are taken into consideration. A 3D finite element (FE) method is applied for the calculation of transient heat and mass transfer. Numerical calculations for the solidification of Fe-1.5 wt% C alloy have been performed. The morphological evolution of dendrites, carbon segregation and temperature distribution in both isothermal and non-isothermal conditions are studied. The parameters affecting the growth of equiaxed and columnar dendrites are discussed. The calculated results are verified using the analytical model and previous experiments. The method provides a sophisticated approach to the solidification of multi-phase and multi-component systems.
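
    To make the cellular-automaton idea concrete, here is a deliberately toy 2D growth sketch: liquid cells touching the solid front are captured with a fixed probability standing in for the local phase-transition driving force. Everything here (grid size, capture probability, neighborhood) is invented for illustration; the paper's model is 3D, obtains the driving force from a thermodynamic database, includes curvature and anisotropy, and couples to a finite element solution of heat and mass transfer.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, steps, capture_p = 101, 60, 0.12      # capture_p: stand-in for the driving force
    solid = np.zeros((n, n), bool)
    solid[n // 2, n // 2] = True             # seed nucleus at the center

    for _ in range(steps):
        # Von Neumann neighborhood: any solid neighbor puts a liquid cell on the front
        nb = (np.roll(solid, 1, 0) | np.roll(solid, -1, 0)
              | np.roll(solid, 1, 1) | np.roll(solid, -1, 1))
        frontier = nb & ~solid
        solid |= frontier & (rng.random((n, n)) < capture_p)

    print("solid fraction:", solid.mean())
    ```

    In the full model, the capture probability would vary cell by cell with local temperature and composition, which is what produces dendritic rather than roughly circular growth.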

  17. Healthcare usage and economic impact of non-treated obesity in Italy: findings from a retrospective administrative and clinical database analysis

    PubMed Central

    Colao, Annamaria; Lucchese, Marcello; D'Adamo, Monica; Savastano, Silvia; Facchiano, Enrico; Veronesi, Chiara; Blini, Valerio; Degli Esposti, Luca

    2017-01-01

    Objectives Investigate the prevalence of obesity in Italy and examine its resource consumption and economic impact on the Italian national healthcare system (NHS). Design Retrospective, observational and real-life study. Setting Data from three health units from Northern (Bergamo, Lombardy), Central (Grosseto, Tuscany) and Southern (Naples, Campania) Italy. Participants All patients aged ≥18 years with at least one recorded body mass index (BMI) measurement between 1 January 2009 and 31 December 2012 were included. Interventions Information retrieved from the databases included primary care data, medical prescriptions, specialist consultations and hospital discharge records from 2009–2013. Costs associated with these data were also calculated. Data are presented for two time periods (1 year after BMI measurement and study end). Primary and secondary outcome measures Primary—to estimate health resources consumption and the associated economic impact on the Italian NHS. Secondary—the prevalence and characteristics of subjects by BMI category. Results 20 159 adult subjects with at least one documented BMI measurement. Subjects with BMI ≥30 kg/m2 were defined as obese. The prevalence of obesity was 22.2% (N=4471) and increased with age. At the 1-year observation period, obese subjects who did not receive treatment for their obesity experienced longer durations of hospitalisation (median length: 5 days vs 3 days), used more prescription drugs (75.0% vs 57.7%), required more specialised outpatient healthcare (mean number: 5.3 vs 4.4) and were associated with greater costs, primarily owing to prescription drugs and hospital admissions (mean annual cost per year per patient: €460.6 vs €288.0 for drug prescriptions, €422.7 vs € 279.2 for hospitalisations and €283.2 vs €251.7 for outpatient care), compared with normal weight subjects. Similar findings were observed for the period up to data cut-off (mean follow-up of 2.7

  18. Different profiles of quercetin metabolites in rat plasma: comparison of two administration methods.

    PubMed

    Kawai, Yoshichika; Saito, Satomi; Nishikawa, Tomomi; Ishisaka, Akari; Murota, Kaeko; Terao, Junji

    2009-03-23

    The bioavailability of polyphenols in humans and rodents has been discussed with regard to their biological activity. We found different profiles of quercetin metabolites in rat plasma between two administration procedures. A single intragastric administration (50 mg/kg) resulted in the appearance of a variety of metabolites in the plasma, whereas only one major fraction was detected with free access to the diet (1% quercetin). The methylated/non-methylated metabolites ratio was much higher in the free-access group. Mass spectrometric analyses showed that the fraction from free access contained highly conjugated quercetin metabolites such as sulfo-glucuronides of quercetin and methylquercetin. The metabolite profile of human plasma after an intake of onion was similar to that with intragastric administration in rats. In vitro oxidation of human low-density lipoprotein showed that methylation of the catechol moiety of quercetin significantly attenuated its antioxidative activity. These results might provide information about the bioavailability of quercetin when conducting animal experiments.

  19. Characterization of Listeria monocytogenes recovered from imported cheese contributed to the National PulseNet Database by the U.S. Food and Drug Administration from 2001 to 2008.

    PubMed

    Timbo, Babgaleh B; Keys, Christine; Klontz, Karl

    2010-08-01

    Imported foods must meet the same U.S. Food and Drug Administration (FDA) standards as domestic foods. The FDA determines whether an imported food is in compliance with the Federal Food, Drug, and Cosmetic Act. Pursuant to its regulatory activities, the FDA conducts compliance surveillance on imported foods offered for entry into U.S. commerce. The National PulseNet Database is the molecular surveillance network for foodborne infections and is widely used to provide real-time subtyping support to epidemiologic investigations of foodborne diseases. FDA laboratories use pulsed-field gel electrophoresis to subtype foodborne pathogens recovered from imported foods and submit the molecular patterns to the National PulseNet Database at the Centers for Disease Control and Prevention. Sixty isolates of Listeria monocytogenes recorded in the FDA Field Accomplishment and Compliance Tracking System from 2001 to 2008 were recovered from cheese imported from the following countries: Mexico (n=21 isolates), Italy (19), Israel (9), Portugal (5), Colombia (3), Greece (2), and Spain (1). We observed genetic diversity among the L. monocytogenes isolates and genetic relatedness among strains recovered from imported cheese products entering the United States from different countries. Consistent characterization of L. monocytogenes isolates recovered from imported cheeses, accompanied by epidemiologic investigations to ascertain human illness associated with these strains, could be helpful in the control of listeriosis acquired from imported cheeses.

  20. Faculty and Administrator Perspectives of Merit Pay Compensation Systems in Private Higher Education: A Mixed Methods Analysis

    ERIC Educational Resources Information Center

    Power, Anne L.

    2013-01-01

    The purpose of this explanatory sequential mixed methods study is to explore faculty and administrator perspectives of faculty merit pay compensation systems in private, higher education institutions. The study focuses on 10 small, private, four-year institutions which are religiously affiliated. All institutions are located in Nebraska, Iowa, and…

  1. The Relative Value of Skills, Knowledge, and Teaching Methods in Explaining Master of Business Administration (MBA) Program Return on Investment

    ERIC Educational Resources Information Center

    van Auken, Stuart; Wells, Ludmilla Gricenko; Chrysler, Earl

    2005-01-01

    In this article, the authors provide insight into alumni perceptions of Master of Business Administration (MBA) program return on investment (ROI). They sought to assess the relative value of skills, knowledge, and teaching methods in explaining ROI. By developing insight into the drivers of ROI, the real utility of MBA program ingredients can be…

  2. An iterative calibration method with prediction of post-translational modifications for the construction of a two-dimensional electrophoresis database of mouse mammary gland proteins.

    PubMed

    Aksu, Sevil; Scheler, Christian; Focks, Nicole; Leenders, Frauke; Theuring, Franz; Salnikow, Johann; Jungblut, Peter R

    2002-10-01

    Protein databases serve as general reference resources providing an orientation on two-dimensional electrophoresis (2-DE) patterns of interest. The intention behind constructing a 2-DE database of the water-soluble proteins from wild-type mouse mammary gland tissue was to create a reference before going on to investigate cancer-associated protein variations. This database is intended as a model system for mouse tissue that is open to transgenic or knockout experiments. Proteins were separated and characterized in terms of their molecular weight (M(r)) and isoelectric point (pI) by high-resolution 2-DE. The proteins were identified using prevalent proteomics methods. One method was peptide mass fingerprinting by matrix-assisted laser desorption/ionization-mass spectrometry. Another was N-terminal sequencing by Edman degradation. N-terminal sequencing allowed M(r) and pI values to be specified more accurately, so the master gel could be calibrated more systematically and precisely. This permits the prediction of possible post-translational modifications of some proteins. The mouse mammary gland 2-DE protein database presently contains 66 identified protein spots, which are clickable on the gel pattern. This relational database is accessible on the WWW under the URL: http://www.mpiib-berlin.mpg.de/2D-PAGE.

  3. The Effects of Applying Alternative Research Methods to Educational Administration Theory and Practice.

    ERIC Educational Resources Information Center

    Peca, Kathy

    Ways in which the application of positivistic, phenomenological, ethnomethodological, and critical theories affect educational administration theory and practice are explored in this paper. A review of literature concludes that positivism separates practice from abstract theory; phenomenology offers a different view of reality; ethnomethodology is…

  4. Multi-Sample Pooling and Illumina Genome Analyzer Sequencing Methods to Determine Gene Sequence Variation for Database Development

    PubMed Central

    Margraf, Rebecca L.; Durtschi, Jacob D.; Dames, Shale; Pattison, David C.; Stephens, Jack E.; Mao, Rong; Voelkerding, Karl V.

    2010-01-01

    Determination of sequence variation within a genetic locus to develop clinically relevant databases is critical for molecular assay design and clinical test interpretation, so multisample pooling for Illumina genome analyzer (GA) sequencing was investigated using the RET proto-oncogene as a model. Samples were Sanger-sequenced for RET exons 10, 11, and 13–16. Ten samples with 13 known unique variants (“singleton variants” within the pool) and seven common changes were amplified and then equimolar-pooled before sequencing on a single flow cell lane, generating 36-base reads. For comparison, a single “control” sample was run in a different lane. After alignment, a 24-base quality-score screening threshold and trimming of three bases from the 3′ read ends yielded low background error rates with a 27% decrease in aligned read coverage. Sequencing data were evaluated using an established variant detection method (percent variant reads), by the presented subtractive correction method, and with SNPSeeker software. In total, 41 variants (of which 23 were singleton variants) were detected in the 10-sample pool data, which included all Sanger-identified variants. The 23 singleton variants were detected near the expected 5% allele frequency (average 5.17%±0.90% variant reads), well above the highest background error (1.25%). Based on background error rates, read coverage, simulated 30-, 40-, and 50-sample pool data, expected singleton allele frequencies within pools, and variant detection methods, ≥30 samples (which demonstrated a minimum 1% variant reads for singletons) could be pooled to reliably detect singleton variants by GA sequencing. PMID:20808642
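
    The percent-variant-reads screen above is simple arithmetic: a singleton allele in an N-sample pool is expected in 1/(2N) of the reads, so a 10-sample pool puts singletons near 5%, well above the 1.25% background error reported. A hedged sketch of that screen follows; the screen_positions helper and the read counts are made up for illustration.

    ```python
    def screen_positions(counts, n_samples=10, error_rate=0.0125, tol=0.03):
        """Flag positions whose variant-read fraction exceeds background error.

        counts: {position: (variant_reads, total_reads)}. Also report whether
        the frequency is close to the expected singleton frequency 1/(2N).
        """
        expected = 1.0 / (2 * n_samples)     # one variant allele among 2N alleles
        calls = []
        for pos, (var_reads, total_reads) in counts.items():
            freq = var_reads / total_reads
            if freq > error_rate:            # above the highest observed background
                calls.append((pos, freq, abs(freq - expected) < tol))
        return calls

    depth = {101: (52, 1000), 202: (8, 1000), 303: (480, 1000)}
    for pos, freq, singleton_like in screen_positions(depth):
        print(pos, f"{freq:.3f}", "singleton-like:", singleton_like)
    ```

    Position 202 falls below the error floor and is discarded, while 303 looks like a common variant rather than a singleton.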

  5. GlycoFish: A Database of Zebrafish N-linked Glycoproteins Identified Using SPEG Method Coupled with LC/MS

    PubMed Central

    Baycin-Hizal, Deniz; Tian, Yuan; Akan, Ilhan; Jacobson, Elena; Clark, Dean; Wu, Alexander; Jampol, Russell; Palter, Karen; Betenbaugh, Michael; Zhang, Hui

    2011-01-01

    Zebrafish (Danio rerio) is a model organism to study the mechanisms and pathways of human disorders. Many dysfunctions in neurological, developmental, and neuromuscular systems are due to glycosylation deficiencies, but the glycoproteins involved in zebrafish embryonic development have not been established. In this study, a mass spectrometry-based glycoproteomic characterization of zebrafish embryos was performed to identify the N-linked glycoproteins and N-linked glycosylation sites. To increase the number of glycopeptides, proteins from zebrafish were digested with two different proteases, chymotrypsin and trypsin, into peptides of different length. The N-glycosylated peptides of zebrafish were then captured by the solid phase extraction of N-linked glycopeptides (SPEG) method and the peptides were identified with an LTQ OrbiTrap Velos mass spectrometer. From 265 unique glycopeptides containing 269 consensus N-X-S/T glycosites, we identified 169 different N-glycosylated proteins. The identified glycoproteins were highly abundant in proteins belonging to the transporter, cell adhesion, and ion channel/ion binding categories, which are important to embryonic, organ, and central nervous system development. These proteomics data will expand our knowledge about glycoproteins in zebrafish and may be used to elucidate the role glycosylation plays in cellular processes and disease. The glycoprotein data are available through the GlycoFish database (http://betenbaugh.jhu.edu/GlycoFish) introduced in this paper. PMID:21591763
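
    The N-X-S/T consensus sequon mentioned above (where X is any residue except proline) can be located in a protein sequence with a few lines of code. A minimal illustrative sketch, not part of the GlycoFish pipeline:

```python
import re

def find_sequons(protein_seq):
    """Return 1-based positions of N-X-S/T sequons (X != P), the consensus
    motif used to filter candidate N-linked glycosylation sites."""
    # The lookahead keeps overlapping matches such as in 'NNST'.
    return [m.start() + 1 for m in re.finditer(r'N(?=[^P][ST])', protein_seq)]

# N at position 8 is rejected because the X residue is proline.
print(find_sequons("MKNVSAANPTQNGSW"))  # -> [3, 12]
```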

  6. Custom database development and biomarker discovery methods for MALDI-TOF mass spectrometry-based identification of high-consequence bacterial pathogens.

    PubMed

    Tracz, Dobryan M; Tyler, Andrea D; Cunningham, Ian; Antonation, Kym S; Corbett, Cindi R

    2017-03-01

    A high-quality custom database of MALDI-TOF mass spectral profiles was developed with the goal of improving clinical diagnostic identification of high-consequence bacterial pathogens. A biomarker discovery method is presented for identifying and evaluating MALDI-TOF MS spectra to potentially differentiate biothreat bacteria from less-pathogenic near-neighbour species.

  7. Image Databases.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…

  8. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  9. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  10. IDPredictor: predict database links in biomedical database.

    PubMed

    Mehlhorn, Hendrik; Lange, Matthias; Scholz, Uwe; Schreiber, Falk

    2012-06-26

    Knowledge found in biomedical databases, in particular in Web information systems, is a major bioinformatics resource. In general, this biological knowledge is represented worldwide in a network of databases. These data are spread among thousands of databases, which overlap in content but differ substantially with respect to content detail, interface, formats, and data structure. To support the functional annotation of lab data, such as protein sequences, metabolites, or DNA sequences, as well as semi-automated data exploration in information retrieval environments, an integrated view of databases is essential. Search engines have the potential to assist in data retrieval from these structured sources, but fall short of providing comprehensive knowledge beyond the individual interlinked databases. A prerequisite for supporting the concept of an integrated data view is to acquire insights into the cross-references among database entities. This is hampered by the fact that only a fraction of all possible cross-references are explicitly tagged in the particular biomedical information systems. In this work, we investigate to what extent an automated construction of an integrated data network is possible. We propose a method that predicts and extracts cross-references from multiple life science databases and possible referenced data targets. We study the retrieval quality of our method and report first, promising results. The method is implemented as the tool IDPredictor, which is published under the DOI 10.5447/IPK/2012/4 and is freely available at the URL: http://dx.doi.org/10.5447/IPK/2012/4.
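
    A common building block for this kind of link prediction is pattern-based recognition of foreign accession numbers in entry text. The sketch below is a hypothetical illustration of that idea, not IDPredictor's actual implementation; the patterns and example accessions are assumptions:

```python
import re

# Hypothetical accession-number patterns for a few well-known databases;
# a real system would use many more, with per-database validation.
PATTERNS = {
    "UniProt": re.compile(r'\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b'),
    "PDB":     re.compile(r'\b[0-9][A-Za-z0-9]{3}\b'),
    "GenBank": re.compile(r'\b[A-Z]{1,2}[0-9]{5,6}\b'),
}

def predict_links(entry_text):
    """Return candidate (database, accession) cross-references found in
    free text -- a first step toward an integrated data network."""
    hits = []
    for db, pattern in PATTERNS.items():
        hits.extend((db, acc) for acc in pattern.findall(entry_text))
    return hits

print(predict_links("binds P12345; structure 1ABC; see also X81322"))
```

    Note that the formats overlap (P12345 is a valid token for both the UniProt and GenBank patterns), which illustrates why naive matching is ambiguous and why the retrieval quality of any predictor has to be evaluated.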

  11. [Explore method about post-marketing safety re-evaluation of Chinese patent medicines based on HIS database in real world].

    PubMed

    Yang, Wei; Xie, Yanming; Zhuang, Yan

    2011-10-01

    Many kinds of traditional Chinese patent medicines are used in clinical practice, and many adverse events have been reported by clinical professionals. The safety of Chinese patent medicines is among the foremost concerns of patients and physicians. Many researchers inside and outside China have studied methods for the post-marketing safety re-evaluation of Chinese medicines; however, the use of data from hospital information systems (HIS) to re-evaluate the safety of marketed Chinese patent medicines remains rare. A real-world HIS database is a rich resource for researching medicine safety. This study planned to analyze HIS data selected from ten top general hospitals in Beijing, forming a large real-world HIS database with a capacity of 1,000,000 cases after a series of data cleaning and integration procedures. This study could be a new project using HIS-based information to evaluate the safety of traditional Chinese medicine. A clear protocol has been completed as the first step of the whole study. The protocol is as follows. First, separate each Chinese patent medicine present in the total HIS database into its own database. Second, select related laboratory test indexes as the safety-evaluation outcomes, such as routine blood, routine urine, routine feces, conventional coagulation, liver function, and kidney function tests. Third, use data mining methods to analyze which of the selected safety outcomes changed abnormally before and after use of the Chinese patent medicine. Finally, judge the relationship between those abnormal changes and the Chinese patent medicine. We hope this method provides useful information to researchers interested in the safety evaluation of traditional Chinese medicine.
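
    Protocol step three amounts to comparing laboratory values before and after exposure. A minimal pandas sketch under assumed column names and a single upper reference limit (purely illustrative; the study's actual data mining is more elaborate):

```python
import pandas as pd

def abnormal_change_rate(labs: pd.DataFrame, upper_limit: float) -> float:
    """labs columns: patient_id, phase ('before'/'after'), value.
    Returns the share of patients whose value was normal before
    exposure but exceeded the upper reference limit afterwards."""
    wide = labs.pivot_table(index="patient_id", columns="phase",
                            values="value", aggfunc="mean")
    newly_abnormal = (wide["before"] <= upper_limit) & (wide["after"] > upper_limit)
    return newly_abnormal.mean()

# Hypothetical ALT values (U/L) for two patients, before and after exposure.
labs = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "phase": ["before", "after", "before", "after"],
    "value": [28.0, 95.0, 31.0, 33.0],
})
print(abnormal_change_rate(labs, upper_limit=40.0))  # -> 0.5
```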

  12. Automated Psychological Testing: Method of Administration, Need for Approval, and Measures of Anxiety.

    ERIC Educational Resources Information Center

    Davis, Caroline; Cowles, Michael

    1989-01-01

    Computerized and paper-and-pencil versions of four standard personality inventories administered to 147 undergraduates were compared for: (1) test-retest reliability; (2) scores; (3) trait anxiety; (4) interaction between method and social desirability; and (5) preferences concerning method of testing. Doubts concerning the efficacy of…

  13. A General Method for Evaluating Deep Brain Stimulation Effects on Intravenous Methamphetamine Self-Administration

    PubMed Central

    Batra, Vinita; Guerin, Glenn F.; Goeders, Nicholas E.; Wilden, Jessica A.

    2016-01-01

    Substance use disorders, particularly to methamphetamine, are devastating, relapsing diseases that disproportionally affect young people. There is a need for novel, effective and practical treatment strategies that are validated in animal models. Neuromodulation, including deep brain stimulation (DBS) therapy, refers to the use of electricity to influence pathological neuronal activity and has shown promise for psychiatric disorders, including drug dependence. DBS in clinical practice involves the continuous delivery of stimulation into brain structures using an implantable pacemaker-like system that is programmed externally by a physician to alleviate symptoms. This treatment will be limited in methamphetamine users due to challenging psychosocial situations. Electrical treatments that can be delivered intermittently, non-invasively and remotely from the drug-use setting will be more realistic. This article describes the delivery of intracranial electrical stimulation that is temporally and spatially separate from the drug-use environment for the treatment of IV methamphetamine dependence. Methamphetamine dependence is rapidly developed in rodents using an operant paradigm of intravenous (IV) self-administration that incorporates a period of extended access to drug and demonstrates both escalation of use and high motivation to obtain drug. PMID:26863392

  14. A General Method for Evaluating Deep Brain Stimulation Effects on Intravenous Methamphetamine Self-Administration.

    PubMed

    Batra, Vinita; Guerin, Glenn F; Goeders, Nicholas E; Wilden, Jessica A

    2016-01-22

    Substance use disorders, particularly to methamphetamine, are devastating, relapsing diseases that disproportionally affect young people. There is a need for novel, effective and practical treatment strategies that are validated in animal models. Neuromodulation, including deep brain stimulation (DBS) therapy, refers to the use of electricity to influence pathological neuronal activity and has shown promise for psychiatric disorders, including drug dependence. DBS in clinical practice involves the continuous delivery of stimulation into brain structures using an implantable pacemaker-like system that is programmed externally by a physician to alleviate symptoms. This treatment will be limited in methamphetamine users due to challenging psychosocial situations. Electrical treatments that can be delivered intermittently, non-invasively and remotely from the drug-use setting will be more realistic. This article describes the delivery of intracranial electrical stimulation that is temporally and spatially separate from the drug-use environment for the treatment of IV methamphetamine dependence. Methamphetamine dependence is rapidly developed in rodents using an operant paradigm of intravenous (IV) self-administration that incorporates a period of extended access to drug and demonstrates both escalation of use and high motivation to obtain drug.

  15. Intraaortic administration of protamine: Method for heparin neutralization after cardiopulmonary bypass

    PubMed Central

    Aris, Alejandro; Solanes, Heriberto; Bonnin, Jose O.; Garin, Rosa; Caralps, Jose M.

    1981-01-01

    Hypotension associated with neutralizing heparin by intravenous protamine sulfate may be prevented by administering the drug intraarterially. Forty patients underwent cardiac surgery with extracorporeal circulation in our hospital; each received a rapid injection of nondiluted protamine sulfate in the aortic root to reverse the effects of heparin. To maintain the blood volume at a constant level, volume expanders and inotropic drugs were avoided. The intraaortic injections ranged in duration from 0.2 min to 2.8 min, with a mean of 1.1 min. The mean systolic pressure dropped only from 92 mm Hg (SD ± 21) before protamine injection to 85 mm Hg (SD ± 23) after injection (p < 0.0001). In seven patients (18%), no hypotension was evident; in the remaining patients, the systolic pressure returned to preinjection values within a mean of 2.2 min. Coagulation was observed within 3 to 4 min (mean = 2.2 min) after the initiation of injection. This study indicates that intraaortic administration of protamine is a rapid and safe technique for heparin reversal after cardiopulmonary bypass. PMID:15216222

  16. Estimating the administrative cost of regulatory noncompliance: a pilot method for quantifying the value of prevention.

    PubMed

    Emery, R J; Charlton, M A; Mathis, J L

    2000-05-01

    Routine regulatory inspections provide a valuable independent quality assurance review of radiation protection programs that ultimately serves to improve overall program performance. But when an item of noncompliance is noted, regardless of its significance or severity, the ensuing notice of violation (NOV) results in an added cost to both the permit holder and the regulatory authority. Such added costs may be tangible, in the form of added work to process and resolve the NOV, or intangible, in the form of damage to organizational reputation or worker morale. If the portion of the tangible costs incurred by a regulatory agency for issuing NOVs could be quantified, the analysis could aid in the identification of agency resources that might be dedicated to other areas such as prevention. Ideally, any prevention activities would reduce the overall number of NOVs issued without impacting the routine inspection process. In this study, the administrative costs of NOV issuance and resolution were estimated by obtaining data from the professional staff of the Texas Department of Health, Bureau of Radiation Control (TDH-BRC). Based on a focus group model, the data indicate that approximately $106,000 in TDH-BRC personnel resources were expended to process and resolve the 6,800 NOVs issued in Texas during 1997 inspection activities, or roughly $15.60 per NOV. The study's findings imply that an incremental decrease in the number of NOVs issued would result in corresponding savings of agency resources. Suggested prevention activities that might be financed through any resource savings include the dissemination of common violation data to permit holders or training for improving correspondence with regulatory agencies. The significance of this exercise is that any savings experienced by an agency could enhance permittee compliance without impacting the routine inspection process.

  17. A Novel Forensic Tool for the Characterization and Comparison of Printing Ink Evidence: Development and Evaluation of a Searchable Database Using Data Fusion of Spectrochemical Methods.

    PubMed

    Trejos, Tatiana; Torrione, Peter; Corzo, Ruthmara; Raeva, Ana; Subedi, Kiran; Williamson, Rhett; Yoo, Jong; Almirall, Jose

    2016-05-01

    A searchable printing ink database was designed and validated as a tool to improve the chemical information gathered from the analysis of ink evidence. The database contains 319 samples from printing sources that represent some of the global diversity in toner, inkjet, offset, and intaglio inks. Five analytical methods were used to generate data to populate the searchable database including FTIR, SEM-EDS, LA-ICP-MS, DART-MS, and Py-GC-MS. The search algorithm based on partial least-squares discriminant analysis generates a similarity "score" used for the association between similar samples. The performance of a particular analytical method to associate similar inks was found to be dependent on the ink type with LA-ICP-MS performing best, followed by SEM-EDS and DART-MS methods, while FTIR and Py-GC-MS were less useful in association but were still useful for classification purposes. Data fusion of data collected from two complementary methods (i.e., LA-ICP-MS and DART-MS) improves the classification and association of similar inks.
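
    PLS-DA scoring of the kind described can be approximated by fitting partial least squares against one-hot class labels and treating the predicted responses as similarity scores. A minimal scikit-learn sketch with synthetic stand-in features (not the database's actual search algorithm):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustrative PLS-DA sketch: fit PLS on one-hot class labels; the
# predicted responses act as similarity "scores" for associating a
# questioned ink with a class of known inks.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))          # stand-in spectral/elemental features
y = np.repeat([0, 1, 2], 10)          # three ink classes, 10 samples each
Y = np.eye(3)[y]                      # one-hot encoding of the classes

pls = PLSRegression(n_components=4).fit(X, Y)

questioned = rng.normal(size=(1, 8))  # features of a questioned ink
scores = pls.predict(questioned)[0]   # one similarity score per class
print(scores, "-> best match: class", int(np.argmax(scores)))
```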

  18. CoryneRegNet 6.0--Updated database content, new analysis methods and novel features focusing on community demands.

    PubMed

    Pauling, Josch; Röttger, Richard; Tauch, Andreas; Azevedo, Vasco; Baumbach, Jan

    2012-01-01

    Post-genomic analysis techniques such as next-generation sequencing have produced vast amounts of data about microorganisms, including genetic sequences, their functional annotations, and gene regulatory interactions. The latter are genetic mechanisms that control a cell's characteristics, for instance, pathogenicity as well as survival and reproduction strategies. CoryneRegNet is the reference database and analysis platform for corynebacterial gene regulatory networks. In this article we introduce the updated version 6.0 of CoryneRegNet and describe the updated database content, which now includes 6352 corynebacterial regulatory interactions, compared with 4928 interactions in release 5.0 and 3235 in release 4.0. We also demonstrate how we support the community by integrating analysis and visualization features for transiently imported custom data, such as gene regulatory interactions. Furthermore, with release 6.0, we provide easy-to-use functions that allow the user to submit data for persistent storage in the CoryneRegNet database. Thus, it offers important options to its users in terms of community demands. CoryneRegNet is publicly available at http://www.coryneregnet.de.

  19. [A method for identifying people with a high level of frailty by using a population database, Varese, Italy].

    PubMed

    Pisani, Salvatore; Gambino, Maria; Balconi, Lorena; Degli Stefani, Cristina; Speziali, Sabina; Bonarrigo, Domenico

    2016-01-01

    For over 10 years, the Lombardy Region (Italy) has operated a system for classifying all persons registered with the healthcare system (the database of persons registered with a general practitioner) according to their use of major healthcare services (hospitalizations, outpatient consultations, pharmaceuticals) and whether they are exempt from copayment fees for disease-specific medications and healthcare services. The present study was conducted by the local health authorities of the province of Varese (Lombardy Region, Italy) on the 894,039 persons registered in the database, of whom 258,770 (28.9%) had at least one chronic condition, 104,731 (11.7%) had multiple chronic conditions, and 195,296 (21.8%) were elderly. The aim was to evaluate death rates in different subgroups of patients in the database, including persons with chronic diseases and elderly persons. Standardized mortality rates were calculated for the year 2012. Compared with the general population, the relative risk for mortality was 4.1 (95% confidence interval 4.0-4.2) in the elderly and 1.3 (95% confidence interval 1.3-1.4) in chronic patients. This confirms that elderly persons have a higher level of frailty than patients with chronic conditions. Mortality was 28 times higher in elderly persons over 74 years of age affected by high-cost conditions such as cancer and cardiac disease than in the general population.

  20. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.
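
    The claimed identification step is a minimum-distance match between the measured feature vector and stored reference vectors. A minimal sketch with illustrative feature values (the patent does not disclose these numbers):

```python
import math

# Minimal sketch of minimum-distance load identification: match a
# measured load feature vector against a database of reference vectors.
# Load types and feature values are made up for illustration.
LOAD_DB = {
    "resistive":    (1.00, 0.02, 0.01, 0.99),
    "motor-driven": (0.80, 0.35, 0.20, 0.75),
    "electronic":   (0.60, 0.10, 0.55, 0.40),
}

def identify_load(measured):
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(LOAD_DB, key=lambda name: dist(LOAD_DB[name], measured))

print(identify_load((0.82, 0.30, 0.22, 0.70)))  # -> motor-driven
```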

  1. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    SciTech Connect

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
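
    The self-organizing-map variant differs in that each load type corresponds to several neurons, and identification picks the best-matching neuron by weight-vector distance. A minimal sketch, assuming the weight vectors have already been trained (all values illustrative):

```python
import numpy as np

# Sketch of the self-organizing-map variant: each neuron holds a weight
# vector and maps to a load type; identification picks the neuron whose
# weights are the minimal distance from the measured feature vector.
weights = np.array([[1.0, 0.0, 0.0, 1.0],   # neuron 0 -> resistive
                    [0.8, 0.3, 0.2, 0.8],   # neuron 1 -> motor-driven
                    [0.6, 0.1, 0.5, 0.4]])  # neuron 2 -> electronic
neuron_to_type = ["resistive", "motor-driven", "electronic"]

def identify(feature_vec):
    # Best-matching unit: neuron with the minimal Euclidean distance.
    bmu = np.argmin(np.linalg.norm(weights - feature_vec, axis=1))
    return neuron_to_type[bmu]

print(identify(np.array([0.62, 0.12, 0.48, 0.42])))  # -> electronic
```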

  2. Perception of Teachers and Administrators on the Teaching Methods That Influence the Acquisition of Generic Skills

    ERIC Educational Resources Information Center

    Audu, R.; Bin Kamin, Yusri; Bin Musta'amal, Aede Hatib; Bin Saud, Muhammad Sukri; Hamid, Mohd. Zolkifli Abd.

    2014-01-01

    This study is designed to identify the most significant teaching methods that influence the acquisition of generic skills of mechanical engineering trades students at technical college level. Descriptive survey research design was utilized in carrying out the study. One hundred and ninety (190) respondents comprised of mechanical engineering…

  3. An AMS method to determine analyte recovery from pharmacokinetic studies with concomitant extravascular and intravenous administration.

    PubMed

    Lappin, Graham; Seymour, Mark; Young, Graeme; Higton, David; Hill, Howard M

    2011-02-01

    The absolute bioavailability, clearance, and volume of distribution of a drug can be investigated by administering a very low dose of the (14)C-drug intravenously along with a therapeutic nonlabeled dose by the extravascular route (typically orally). The total drug concentration is measured by an assay such as LC-MS and the (14)C-drug is measured by accelerator MS (AMS). In another article in this issue, a method validation is proposed where AMS was used as the analytical assay. Part of the validation is to assess the recovery of the analyte being measured, as this has a direct impact on its quantification. In this article, a method of internal standardization is described in which the UV response of the nonlabeled analyte, spiked in excess into the matrix being analyzed, is used as the internal standard. Knowing the recovery of a (14)C-labeled analyte is important when determining its mass concentration from (14)C:(12)C isotopic ratio data using AMS, and this method allows the recovery to be ascertained for each individual sample analyzed.
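
    The internal-standardization arithmetic described above can be expressed in two steps: recovery from the UV responses, then correction of the AMS-derived concentration. A sketch under the assumption of a simple ratio form (variable names and values illustrative):

```python
# Sketch of the internal-standardization arithmetic described above
# (the simple ratio form and all names are assumptions, not the
# published method's exact equations).
def analyte_recovery(uv_area_processed, uv_area_reference):
    """Recovery of the nonlabeled analyte spiked in excess, from its UV
    response in a processed sample vs. an unprocessed reference."""
    return uv_area_processed / uv_area_reference

def corrected_concentration(ams_concentration, recovery):
    """Correct the 14C-derived mass concentration for analyte loss."""
    return ams_concentration / recovery

rec = analyte_recovery(8.1e4, 9.0e4)           # 90% recovery
print(rec, corrected_concentration(2.7, rec))  # 0.9, 3.0 (e.g. pg/ml)
```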

  4. A Case Study of Qualitative Research: Methods and Administrative Impact. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Schoen, Jane; Warner, Sean

    A case study in program evaluation that demonstrates the effectiveness of qualitative research methods is presented. Over a 5-year period, the Union for Experimenting Colleges and Universities in Ohio offered a baccalaureate program (University Without Walls) to local employees of a national manufacturing firm. The institutional research office…

  5. 75 FR 41225 - Federal Housing Administration (FHA) First Look Sales Method for Grantees, Nonprofit...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-15

    ... Stabilization Program (NSP). HERA authorizes the Secretary to specify alternative requirements to any provision... Method may fall under the voluntary acquisition exclusion at 49 CFR 24.101(b)(3). That provision exempts.... Eligible NSP purchasers are required to document compliance with the tenant protection provisions of...

  6. A Systematic Review of Validated Methods for Identifying Cerebrovascular Accident or Transient Ischemic Attack Using Administrative Data

    PubMed Central

    Andrade, Susan E.; Harrold, Leslie R.; Tjia, Jennifer; Cutrona, Sarah L.; Saczynski, Jane S.; Dodd, Katherine S.; Goldberg, Robert J.; Gurwitz, Jerry H.

    2012-01-01

    Purpose: To perform a systematic review of the validity of algorithms for identifying cerebrovascular accidents (CVAs) or transient ischemic attacks (TIAs) using administrative and claims data. Methods: PubMed and Iowa Drug Information Service (IDIS) searches of the English language literature were performed to identify studies published between 1990 and 2010 that evaluated the validity of algorithms for identifying CVAs (ischemic and hemorrhagic strokes, intracranial hemorrhage and subarachnoid hemorrhage) and/or TIAs in administrative data. Two study investigators independently reviewed the abstracts and articles to determine relevant studies according to pre-specified criteria. Results: A total of 35 articles met the criteria for evaluation. Of these, 26 articles provided data to evaluate the validity of stroke, 7 reported the validity of TIA, 5 reported the validity of intracranial bleeds (intracerebral hemorrhage and subarachnoid hemorrhage), and 10 studies reported the validity of algorithms to identify the composite endpoints of stroke/TIA or cerebrovascular disease. Positive predictive values (PPVs) varied depending on the specific outcomes and algorithms evaluated. Specific algorithms to evaluate the presence of stroke and intracranial bleeds were found to have high PPVs (80% or greater). Algorithms to evaluate TIAs in adult populations were generally found to have PPVs of 70% or greater. Conclusions: The algorithms and definitions to identify CVAs and TIAs using administrative and claims data differ greatly in the published literature. The choice of the algorithm employed should be determined by the stroke subtype of interest. PMID:22262598

  7. Experiment Databases

    NASA Astrophysics Data System (ADS)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Besides running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queryable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab, or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  8. Plant Genome Duplication Database.

    PubMed

    Lee, Tae-Ho; Kim, Junah; Robertson, Jon S; Paterson, Andrew H

    2017-01-01

    Genome duplication, widespread in flowering plants, is a driving force in evolution. Genome alignments between/within genomes facilitate identification of homologous regions and individual genes to investigate evolutionary consequences of genome duplication. PGDD (the Plant Genome Duplication Database), a public web service database, provides intra- or interplant genome alignment information. At present, PGDD contains information for 47 plants whose genome sequences have been released. Here, we describe methods for identification and estimation of dates of genome duplication and speciation by functions of PGDD. The database is freely available at http://chibba.agtec.uga.edu/duplication/.

  9. A Method to Compare ICF and SNOMED CT for Coverage of U.S. Social Security Administration's Disability Listing Criteria.

    PubMed

    Tu, Samson W; Nyulas, Csongor I; Tudorache, Tania; Musen, Mark A

    2015-01-01

    We developed a method to evaluate the extent to which the International Classification of Function, Disability, and Health (ICF) and SNOMED CT cover concepts used in the disability listing criteria of the U.S. Social Security Administration's "Blue Book." First we decomposed the criteria into their constituent concepts and relationships. We defined different types of mappings and manually mapped the recognized concepts and relationships to either ICF or SNOMED CT. We defined various metrics for measuring the coverage of each terminology, taking into account the effects of inexact matches and frequency of occurrence. We validated our method by mapping the terms in the disability criteria of Adult Listings, Chapter 12 (Mental Disorders). SNOMED CT dominates ICF in almost all the metrics that we have computed. The method is applicable for determining any terminology's coverage of eligibility criteria.
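
    The abstract does not give the metric definitions, but a frequency-weighted coverage score with partial credit for inexact matches, of the kind described, might look like the following sketch (the credit values and structure are assumptions):

```python
# Hypothetical coverage metric of the kind the abstract describes:
# weight each criterion concept by how often it occurs, and give
# partial credit for inexact terminology matches.
MATCH_CREDIT = {"exact": 1.0, "partial": 0.5, "none": 0.0}

def weighted_coverage(mappings):
    """mappings: list of (frequency, match_type) pairs, one per concept
    recognized in the disability listing criteria."""
    total = sum(freq for freq, _ in mappings)
    covered = sum(freq * MATCH_CREDIT[match] for freq, match in mappings)
    return covered / total if total else 0.0

# e.g. one frequent concept matched exactly, one partially, one not at all
print(weighted_coverage([(12, "exact"), (5, "partial"), (2, "none")]))
```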

  10. Development of an Aerodynamic Analysis Method and Database for the SLS Service Module Panel Jettison Event Utilizing Inviscid CFD and MATLAB

    NASA Technical Reports Server (NTRS)

    Applebaum, Michael P.; Hall, Leslie, H.; Eppard, William M.; Purinton, David C.; Campbell, John R.; Blevins, John A.

    2015-01-01

    This paper describes the development, testing, and utilization of an aerodynamic force and moment database for the Space Launch System (SLS) Service Module (SM) panel jettison event. The database is a combination of inviscid Computational Fluid Dynamic (CFD) data and MATLAB code written to query the data at input values of vehicle/SM panel parameters and return the aerodynamic force and moment coefficients of the panels as they are jettisoned from the vehicle. The database encompasses over 5000 CFD simulations with the panels either in the initial stages of separation where they are hinged to the vehicle, in close proximity to the vehicle, or far enough from the vehicle that body interference effects are neglected. A series of viscous CFD check cases were performed to assess the accuracy of the Euler solutions for this class of problem and good agreement was obtained. The ultimate goal of the panel jettison database was to create a tool that could be coupled with any 6-Degree-Of-Freedom (DOF) dynamics model to rapidly predict SM panel separation from the SLS vehicle in a quasi-unsteady manner. Results are presented for panel jettison simulations that utilize the database at various SLS flight conditions. These results compare favorably to an approach that directly couples a 6-DOF model with the Cart3D Euler flow solver and obtains solutions for the panels at exact locations. This paper demonstrates a method of using inviscid CFD simulations coupled with a 6-DOF model that provides adequate fidelity to capture the physics of this complex multiple moving-body panel separation event.
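
    The query side of such a database is essentially multidimensional interpolation over tabulated coefficients. A minimal Python sketch using scipy, with made-up grids and parameter names standing in for the actual vehicle/SM panel parameters:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative stand-in for the database query pattern: tabulated CFD
# force coefficients on a grid of panel-separation parameters, queried
# by interpolation inside a 6-DOF propagation loop. Grids and values
# here are invented for demonstration only.
mach = np.linspace(0.5, 3.0, 6)        # freestream Mach number
distance = np.linspace(0.0, 5.0, 11)   # panel distance from vehicle (m)
angle = np.linspace(-30.0, 30.0, 13)   # panel pitch angle (deg)

table = np.random.default_rng(1).normal(size=(6, 11, 13))  # e.g. CN values

cn_lookup = RegularGridInterpolator((mach, distance, angle), table)

# A 6-DOF integrator would call this once per time step per panel.
print(cn_lookup([[1.6, 0.8, 4.0]]))
```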

  11. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is smart: it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.

  12. Glycoproteomic and glycomic databases.

    PubMed

    Baycin Hizal, Deniz; Wolozny, Daniel; Colao, Joseph; Jacobson, Elena; Tian, Yuan; Krag, Sharon S; Betenbaugh, Michael J; Zhang, Hui

    2014-01-01

    Protein glycosylation serves critical roles in the cellular and biological processes of many organisms. Aberrant glycosylation has been associated with many illnesses such as hereditary and chronic diseases like cancer, cardiovascular diseases, neurological disorders, and immunological disorders. Emerging mass spectrometry (MS) technologies that enable the high-throughput identification of glycoproteins and glycans have accelerated the analysis and made possible the creation of dynamic and expanding databases. Although glycosylation-related databases have been established by many laboratories and institutions, they are not yet widely known in the community. Our study reviews 15 different publicly available databases and identifies their key elements so that users can identify the most applicable platform for their analytical needs. These databases include biological information on the experimentally identified glycans and glycopeptides from various cells and organisms such as human, rat, mouse, fly, and zebrafish. The features of these databases (7 for glycoproteomic data, 6 for glycomic data, and 2 for glycan-binding proteins) are summarized, including the enrichment techniques that are used for glycoproteome and glycan identification. Furthermore, databases such as Unipep, GlycoFly, and GlycoFish, recently established by our group, are introduced. The unique features of each database, such as the analytical methods used and the bioinformatics tools available, are summarized. This information will be a valuable resource for the glycobiology community, as it presents the analytical methods and glycosylation-related databases together in one compendium. It also represents a step toward the desired long-term goal of integrating the different databases of glycosylation in order to better characterize and categorize glycoproteins and glycans for biomedical research.

  13. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes (Click here for List) of the International Union for Pure and Applied Chemistry(IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  14. Human mapping databases.

    PubMed

    Talbot, C; Cuticchia, A J

    2001-05-01

    This unit concentrates on the data contained within two human genome databases, GDB (Genome Database) and OMIM (Online Mendelian Inheritance in Man), and includes discussion of different methods for submitting and accessing data. An understanding of electronic mail, FTP, and the use of a World Wide Web (WWW) navigational tool such as Netscape or Internet Explorer is a prerequisite for utilizing the information in this unit.

  15. Rapid and reliable screening method for detection of 70 pesticides in whole blood by gas chromatography-mass spectrometry using a constructed calibration-locking database.

    PubMed

    Kudo, Keiko; Nagamatsu, Kumi; Umehara, Takahiro; Usumoto, Yosuke; Sameshima, Naomi; Tsuji, Akiko; Ikeda, Noriaki

    2012-03-01

    Pesticide poisoning is one of the most common causes of death by poisoning in Japan, and various kinds of pesticides including organophosphates, carbamates, and pyrethroids are listed as causative substances. The purpose of our study was to develop a rapid and reliable screening method for various kinds of pesticides in whole blood by using a unique calibration-locking database and gas chromatography-mass spectrometry. A database of 70 pesticides was constructed using NAGINATA™ software with parameters such as mass spectrum, retention time, qualifier ion/target ion ratio (QT ratio), and calibration curve. Diazepam-d(5) was used as the internal standard for construction of each calibration curve within the range of 0.01-5.0 μg/ml. We examined the applicability of the constructed database by analyzing whole blood samples spiked with 70 pesticides. The pesticides in blood were extracted with hexane under acidic conditions or with an enhanced polymer column (Focus™), subjected to GC-MS, and screened against the pesticide database. Among the 70 pesticides examined, 66 and 62 were successfully identified at levels of 1 and 0.1 μg/ml, respectively, by hexane extraction, and 63 and 51 were identified by the Focus column, without the use of standard compounds. The time required for data analysis was significantly reduced. Since the established method can produce qualitative and semi-quantitative data without the need for standard substances, this new screening method using NAGINATA™ should be useful for confirming the presence of pesticides in blood in future clinical and forensic cases.
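
    Conceptually, screening against a calibration-locking database means matching retention time and QT ratio within tolerances, then semi-quantitating from the stored calibration curve. The sketch below illustrates that logic only; the entries, tolerances, and linear calibration form are assumptions, not NAGINATA's internals:

```python
# Illustrative calibration-locking screening sketch. Database entries,
# tolerances, and the linear calibration form are all assumptions.
DB = {
    # name: (retention_time_min, qt_ratio, slope, intercept)
    "malathion":    (12.41, 0.32, 1.8e5, 2.0e3),
    "fenitrothion": (11.87, 0.45, 2.1e5, 1.5e3),
}

def screen(rt, qt_ratio, target_area, rt_tol=0.10, qt_tol=0.08):
    """Match a peak by retention time and QT ratio, then estimate the
    concentration (ug/ml) from the stored calibration curve."""
    for name, (db_rt, db_qt, slope, intercept) in DB.items():
        if abs(rt - db_rt) <= rt_tol and abs(qt_ratio - db_qt) <= qt_tol:
            conc = (target_area - intercept) / slope  # semi-quantitation
            return name, round(conc, 3)
    return None

print(screen(12.45, 0.35, 9.2e4))  # -> ('malathion', 0.5)
```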

  16. Challenges of the Administrative Consultation Wiki Research Project as a Learning and Competences Development Method for MPA Students

    ERIC Educational Resources Information Center

    Kovac, Polonca; Stare, Janez

    2015-01-01

    Administrative Consultation Wiki (ACW) is a project run under the auspices of the Faculty of Administration and the Ministry of Public Administration in Slovenia since 2009. A crucial component thereof is the involvement of students of Master of Public Administration (MPA) degree programs to offer them an opportunity to develop competences in…

  17. The induction of ovulation by pulsatile administration of GnRH: an appropriate method in hypothalamic amenorrhea.

    PubMed

    Christou, Fotini; Pitteloud, Nelly; Gomez, Fulgencio

    2017-03-06

    The induction of ovulation by means of a pump that assures the pulsatile administration of GnRH is a well-known method that applies to women suffering from amenorrhea of hypothalamic origin. Although a simple and efficient method to establish fertility, it is underused. Twelve patients with this condition who desired pregnancy were treated: 1 with Kallmann syndrome, 4 with normosmic isolated hypogonadotropic hypogonadism, and 7 with functional hypothalamic amenorrhea. They underwent one or more cycles of pulsatile GnRH at a frequency of one pulse every 90 minutes, either by the intravenous or the subcutaneous route. An initial dose of 5 μg per pulse was administered by the intravenous route and 15 μg per pulse by the subcutaneous route. The treatment was monitored by regular measurement of gonadotropins, estradiol, and progesterone, and the development of follicles and ovulation was monitored by intravaginal ultrasonography. All the patients had documented ovulation after a mean of 17 days on pump stimulation. Single ovulation occurred in 30 of 33 treatment cycles, irrespective of the route of administration. Ovulation resulted in 10 pregnancies in 7 patients (2 pregnancies in 3 of them), distributed across the 3 diagnostic categories. For comparison, a patient with PCOS treated similarly showed a premature LH surge without ovulation.

  18. Administrators: Nursing Home Administrator

    ERIC Educational Resources Information Center

    Kahl, Anne

    1976-01-01

    Responsibilities, skills needed, training needed, earnings, employment outlook, and sources of additional information are outlined for the administrator who holds the top management job in a nursing home. (JT)

  19. Unconventional Methods and Materials for Preparing Educational Administrators. ERIC/CEM-UCEA Series on Administrator Preparation. ERIC/CEM State-of-the-Knowledge Series, Number Fifteen. UCEA Monograph Series, Number Two.

    ERIC Educational Resources Information Center

    Wynn, Richard

    In this monograph, the author describes the variety of new and innovative instructional methods and materials being used to prepare educational administrators. Because the subject is new and the nomenclature surrounding it imprecise, the author defines his terms. An outline of the history of unconventional instructional methods and the rationale…

  20. Administrative Uses of the Microcomputer.

    ERIC Educational Resources Information Center

    Spuck, Dennis W.; Atkinson, Gene

    1983-01-01

    An outline of microcomputer applications for administrative computing in education is followed by discussions of aspects of office automation, database management systems, management information systems, administrative computer systems, and software. Several potential problems relating to administrative computing in education are identified.…

  1. The use of databases and registries to enhance colonoscopy quality.

    PubMed

    Logan, Judith R; Lieberman, David A

    2010-10-01

    Administrative databases, registries, and clinical databases are designed for different purposes and therefore have different advantages and disadvantages in providing data for enhancing quality. Administrative databases provide the advantages of size, availability, and generalizability, but are subject to constraints inherent in the coding systems used and from data collection methods optimized for billing. Registries are designed for research and quality reporting but require significant investment from participants for secondary data collection and quality control. Electronic health records contain all of the data needed for quality research and measurement, but that data is too often locked in narrative text and unavailable for analysis. National mandates for electronic health record implementation and functionality will likely change this landscape in the near future.

  2. A novel method for detecting inpatient pediatric asthma encounters using administrative data.

    PubMed

    Knighton, Andrew J; Flood, Andrew; Harmon, Brian; Smith, Patti; Crosby, Carrie; Payne, Nathaniel R

    2014-08-01

    Multiple methods for detecting asthma encounters are used today in public surveillance, quality reporting, and clinical research. Failure to detect asthma encounters can make it difficult to measure the scope and effectiveness of hospital or community-based interventions important in comparative effectiveness research and accountable care. Given the pairing of asthma with certain respiratory conditions, the objective of this study was to develop and test an asthma detection algorithm with specificity and sensitivity using 2 criteria: (1) principal discharge diagnosis and (2) asthma diagnosis code position. A medical record review was conducted (n=191) as the gold standard for identifying asthma encounters given objective criteria. The study team observed that for certain principal respiratory diagnoses (n=110), the odds ratio that encounters were for asthma when asthma was coded in the second or third code position was not significantly different from when asthma was coded as the principal diagnosis, 0.36 (P=0.42) and 0.18 (P=0.14), respectively. In contrast, the observed odds ratio was significantly different when asthma was coded in the fourth or fifth positions (P<.001). This difference remained after adjusting for covariates. Including encounters with asthma in 1 of the first 3 positions increased the detection sensitivity to 0.84 [95% CI: 0.76-0.92] while increasing the false positive rate to 0.19 [95% CI: 0.07-0.31]. Use of the proposed algorithm significantly improved the reporting accuracy [0.83, 95% CI: 0.76-0.90] over use of (1) the principal diagnosis alone [0.55, 95% CI: 0.46-0.64] or (2) all encounters with asthma [0.66, 95% CI: 0.57-0.75]. Bed days resulting from asthma encounters increased 64% over use of the principal diagnosis alone. Given these findings, an algorithm using certain respiratory principal diagnoses and asthma diagnosis code position can reliably improve asthma encounter detection for population-based health
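
    The rule the findings support is easy to state in code: count an encounter as asthma when asthma appears in one of the first three diagnosis positions together with a qualifying principal respiratory diagnosis. A minimal sketch with illustrative ICD-9-CM code sets (the study's exact code lists are not given in the abstract):

```python
# Sketch of the detection rule supported by the study: an encounter
# counts as asthma when an asthma code sits in the first three
# diagnosis positions and the principal diagnosis is asthma or one of
# the paired respiratory conditions. Code sets here are illustrative.
ASTHMA_CODES = {"493.01", "493.02", "493.91", "493.92"}
PAIRED_RESPIRATORY = {"466.0", "486", "465.9"}  # e.g. bronchiolitis, pneumonia

def is_asthma_encounter(diagnosis_codes):
    """diagnosis_codes: list ordered by position, index 0 = principal."""
    principal = diagnosis_codes[0]
    if principal in ASTHMA_CODES:
        return True
    # Asthma in position 2 or 3 with a qualifying principal diagnosis.
    return (principal in PAIRED_RESPIRATORY and
            any(code in ASTHMA_CODES for code in diagnosis_codes[1:3]))

print(is_asthma_encounter(["486", "493.92", "276.51"]))           # True
print(is_asthma_encounter(["486", "276.51", "599.0", "493.92"]))  # False
```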

  3. A Mixed Method Study Measuring the Perceptions of Administrators, Classroom Teachers and Professional Staff on the Use of iPads in a Midwest School District

    ERIC Educational Resources Information Center

    Beckerle, Andrea Laux

    2013-01-01

    The purpose of this mixed methods study was to assess the perceptions of classroom teachers, administrators and professional support staff in one Midwest school district regarding the usefulness and effectiveness of the iPad device as an instructional and support tool within the classroom. The need to address classroom teacher, administrator and…

  4. Databases as an information service

    NASA Technical Reports Server (NTRS)

    Vincent, D. A.

    1983-01-01

    The relationship of databases to information services, and the range of information services users and their needs for information is explored and discussed. It is argued that for database information to be valuable to a broad range of users, it is essential that access methods be provided that are relatively unstructured and natural to information services users who are interested in the information contained in databases, but who are not willing to learn and use traditional structured query languages. Unless this ease of use of databases is considered in the design and application process, the potential benefits from using database systems may not be realized.

  5. A global database of lake surface temperatures collected by in situ and satellite methods from 1985-2009

    NASA Astrophysics Data System (ADS)

    Sharma, Sapna; Gray, Derek K.; Read, Jordan S.; O'Reilly, Catherine M.; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie E.; Hook, Simon; Lenters, John D.; Livingstone, David M.; McIntyre, Peter B.; Adrian, Rita; Allan, Mathew G.; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John; Baron, Jill S.; Brookes, Justin; Chen, Yuwei; Daly, Robert; Dokulil, Martin; Dong, Bo; Ewing, Kye; de Eyto, Elvira; Hamilton, David; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyne; Hetherington, Amy L.; Higgins, Scott N.; Hixson, Eric; Izmest'Eva, Lyubov R.; Jones, Benjamin M.; Kangur, Külli; Kasprzak, Peter; Köster, Olivier; Kraemer, Benjamin M.; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P.; Paterson, Andrew M.; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A.; Salmaso, Nico; Samal, Nihar R.; Schindler, Daniel E.; Schladow, Geoffrey; Schmidt, Silke R.; Schultz, Tracey; Silow, Eugene A.; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A.; Williamson, Craig E.; Woo, Kara H.

    2015-03-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985-2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues.

  6. A global database of lake surface temperatures collected by in situ and satellite methods from 1985–2009

    PubMed Central

    Sharma, Sapna; Gray, Derek K; Read, Jordan S; O’Reilly, Catherine M; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie E; Hook, Simon; Lenters, John D; Livingstone, David M; McIntyre, Peter B; Adrian, Rita; Allan, Mathew G; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John; Baron, Jill S; Brookes, Justin; Chen, Yuwei; Daly, Robert; Dokulil, Martin; Dong, Bo; Ewing, Kye; de Eyto, Elvira; Hamilton, David; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyne; Hetherington, Amy L; Higgins, Scott N; Hixson, Eric; Izmest’eva, Lyubov R; Jones, Benjamin M; Kangur, Külli; Kasprzak, Peter; Köster, Olivier; Kraemer, Benjamin M; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P; Paterson, Andrew M; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A; Salmaso, Nico; Samal, Nihar R; Schindler, Daniel E; Schladow, Geoffrey; Schmidt, Silke R; Schultz, Tracey; Silow, Eugene A; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A; Williamson, Craig E; Woo, Kara H

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985–2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues. PMID:25977814

  7. A global database of lake surface temperatures collected by in situ and satellite methods from 1985–2009

    USGS Publications Warehouse

    Sharma, Sapna; Gray, Derek; Read, Jordan S.; O'Reilly, Catherine; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie; Hook, Simon; Lenters, John; Livingstone, David M.; McIntyre, Peter B.; Adrian, Rita; Allan, Mathew; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John E.; Baron, Jill S.; Brookes, Justin D; Chen, Yuwei; Daly, Robert; Ewing, Kye; de Eyto, Elvira; Dokulil, Martin; Hamilton, David B.; Havens, Karl; Haydon, Shane; Hetzenaeur, Harald; Heneberry, Jocelyn; Hetherington, Amy; Higgins, Scott; Hixson, Eric; Izmest'eva, Lyubov; Jones, Benjamin M.; Kangur, Kulli; Kasprzak, Peter; Kraemer, Benjamin; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Dörthe Müller-Navarra,; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Pius Niederhauser,; North, Ryan P.; Andrew Paterson,; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Lars Rudstam,; Rusak, James A.; Salmaso, Nico; Samal, Nihar R.; Daniel E. Schindler,; Geoffrey Schladow,; Schmidt, Silke R.; Tracey Schultz,; Silow, Eugene A.; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A.; Craig E. Williamson,; Kara H. Woo,

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985–2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues.

  8. A global database of lake surface temperatures collected by in situ and satellite methods from 1985-2009.

    PubMed

    Sharma, Sapna; Gray, Derek K; Read, Jordan S; O'Reilly, Catherine M; Schneider, Philipp; Qudrat, Anam; Gries, Corinna; Stefanoff, Samantha; Hampton, Stephanie E; Hook, Simon; Lenters, John D; Livingstone, David M; McIntyre, Peter B; Adrian, Rita; Allan, Mathew G; Anneville, Orlane; Arvola, Lauri; Austin, Jay; Bailey, John; Baron, Jill S; Brookes, Justin; Chen, Yuwei; Daly, Robert; Dokulil, Martin; Dong, Bo; Ewing, Kye; de Eyto, Elvira; Hamilton, David; Havens, Karl; Haydon, Shane; Hetzenauer, Harald; Heneberry, Jocelyne; Hetherington, Amy L; Higgins, Scott N; Hixson, Eric; Izmest'eva, Lyubov R; Jones, Benjamin M; Kangur, Külli; Kasprzak, Peter; Köster, Olivier; Kraemer, Benjamin M; Kumagai, Michio; Kuusisto, Esko; Leshkevich, George; May, Linda; MacIntyre, Sally; Müller-Navarra, Dörthe; Naumenko, Mikhail; Noges, Peeter; Noges, Tiina; Niederhauser, Pius; North, Ryan P; Paterson, Andrew M; Plisnier, Pierre-Denis; Rigosi, Anna; Rimmer, Alon; Rogora, Michela; Rudstam, Lars; Rusak, James A; Salmaso, Nico; Samal, Nihar R; Schindler, Daniel E; Schladow, Geoffrey; Schmidt, Silke R; Schultz, Tracey; Silow, Eugene A; Straile, Dietmar; Teubner, Katrin; Verburg, Piet; Voutilainen, Ari; Watkinson, Andrew; Weyhenmeyer, Gesa A; Williamson, Craig E; Woo, Kara H

    2015-01-01

    Global environmental change has influenced lake surface temperatures, a key driver of ecosystem structure and function. Recent studies have suggested significant warming of water temperatures in individual lakes across many different regions around the world. However, the spatial and temporal coherence associated with the magnitude of these trends remains unclear. Thus, a global data set of water temperature is required to understand and synthesize global, long-term trends in surface water temperatures of inland bodies of water. We assembled a database of summer lake surface temperatures for 291 lakes collected in situ and/or by satellites for the period 1985-2009. In addition, corresponding climatic drivers (air temperatures, solar radiation, and cloud cover) and geomorphometric characteristics (latitude, longitude, elevation, lake surface area, maximum depth, mean depth, and volume) that influence lake surface temperatures were compiled for each lake. This unique dataset offers an invaluable baseline perspective on global-scale lake thermal conditions as environmental change continues.
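
    A table of this shape lends itself to simple per-lake trend estimation. Below is a minimal Python sketch, assuming a flat file and column names ("lake", "year", "summer_lst_c") invented for illustration; it is not the published analysis.

        # Hypothetical sketch: per-lake warming trends from a lake/year/temperature table.
        import numpy as np
        import pandas as pd

        df = pd.read_csv("lake_surface_temps.csv")  # hypothetical file name

        def warming_trend(group):
            # Least-squares slope of summer surface temperature vs. year (deg C per year).
            slope, _ = np.polyfit(group["year"], group["summer_lst_c"], 1)
            return slope

        trends = df.groupby("lake").apply(warming_trend)
        print(trends.describe())  # distribution of 1985-2009 trends across lakes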

  9. Mapping the literature of nursing administration

    PubMed Central

    Galganski, Carol J.

    2006-01-01

    Objectives: As part of Phase I of a project to map the literature of nursing, sponsored by the Nursing and Allied Health Resources Section of the Medical Library Association, this study identifies the core literature cited in nursing administration and the indexing services that provide access to the core journals. The results of this study will assist librarians and end users searching for information related to this nursing discipline, as well as database producers who might consider adding specific titles to their indexing services. Methods: Using the common methodology described in the overview article, five source journals for nursing administration were identified and selected for citation analysis over a three-year period, 1996 to 1998, to identify the most frequently cited titles according to Bradford's Law of Scattering. From this core of most productive journal titles, the bibliographic databases that provide the best access to these titles were identified. Results: Results reveal that nursing administration literature relies most heavily on journal articles and on those titles identified as core nursing administrative titles. When the indexing coverage of nine services is compared, PubMed/MEDLINE and CINAHL provide the most comprehensive coverage of this nursing discipline. Conclusions: No one indexing service adequately covers this nursing discipline. Researchers needing comprehensive coverage in this area must search more than one database to effectively research their projects. While PubMed/MEDLINE and CINAHL provide more coverage for this discipline than the other indexing services, none is sufficiently broad in scope to provide indexing of nursing, health care management, and medical literature in a single file. Nurse administrators using the literature to research current work issues need to review not only the nursing titles covered by CINAHL but should also include the major weekly medical titles, core titles in health care administration, and

  10. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  11. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  12. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  13. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  14. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet...

  15. Atomic Databases

    NASA Astrophysics Data System (ADS)

    Mendoza, Claudio

    2000-10-01

    Atomic and molecular data are required in a variety of fields ranging from traditional astronomy, atmospheric science and fusion research to fast-growing technologies such as lasers, lighting, low-temperature plasmas, plasma-assisted etching and radiotherapy. In this context, there are research groups, both theoretical and experimental, scattered around the world that attend to most of this data demand, but the implementation of atomic databases has grown independently out of sheer necessity. In some cases the latter has been associated with the data production process or with data centers involved in data collection and evaluation, but sometimes it has been the result of individual initiatives that have been quite successful. In any case, the development and maintenance of atomic databases call for a number of skills and an entrepreneurial spirit that are not usually associated with most physics researchers. In this report we present some of the highlights in this area from the past five years and discuss what we think are the main issues that have to be addressed.

  16. Towards an enhanced use of soil databases for assessing water availability in (sub)tropical regions using fractal-based methods

    NASA Astrophysics Data System (ADS)

    Botula Manyala, Y.; Baetens, J.; Baert, G.; Van Ranst, E.; Cornelis, W.

    2012-12-01

    Following the completion of numerous elaborate soil surveys in many (sub)tropical regions of the African continent during the past decades, vast databases with soil properties of the prevailing soil orders in these regions have been assembled in order to support agricultural stakeholders throughout crucial decision-making processes. Unfortunately, even though soil hydraulic properties are of primary interest for designing sustainable farming practices and guiding crop choice and irrigation scheduling, a substantial share of the soil surveys is restricted to the collection of soil chemical properties. This bias principally originates from the fact that soil chemical characteristics like pH, organic carbon/matter (OC/OM), cation exchange capacity (CEC) and base saturation (BS) can be determined readily. On the other hand, determination of the hydraulic properties of a soil in the field or in the lab is much more time-consuming, particularly for the soil-water retention curve (SWRC), which is generally considered one of the most important physical properties since it constitutes the footprint of a soil. Owing to the incompleteness of most soil databases in (sub)tropical regions, either much valuable information is discarded, because the assessment of meaningful indices in land evaluation, such as the soil available water capacity (AWC) and the hydraulic conductivity, is based only upon those soil samples for which hydraulic properties were measured, or one has to resort to pedotransfer functions (PTFs). The latter are equations for deducing hydraulic properties of a soil from physico-chemical data that are commonly available in soil survey reports (sand, silt, clay, OC/OM, CEC, etc.). Yet, such PTFs are only locally applicable because their derivation rests on statistical or machine learning techniques and has no physical basis. Recently, however, physically based, and hence globally applicable, fractal methods have been put forward for assessing a soil's SWRC based upon its
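
    As a hedged illustration of the fractal approach referred to above, the sketch below evaluates a water-retention curve of the general Tyler-Wheatcraft form; the parameter values and function name are invented, and the method put forward in the abstract may differ in detail.

        # Fractal-form soil-water retention curve (illustrative parameters only).
        import numpy as np

        def fractal_swrc(psi, psi_b=1.0, theta_s=0.45, d=2.8):
            """Water content at matric suction psi (same units as psi_b).

            theta/theta_s = (psi_b/psi)**(3 - d) for psi > psi_b, else theta_s,
            where d is the fractal dimension of the pore space.
            """
            psi = np.asarray(psi, dtype=float)
            se = np.where(psi > psi_b, (psi_b / psi) ** (3.0 - d), 1.0)
            return theta_s * se

        print(fractal_swrc([0.5, 10.0, 100.0, 1500.0]))  # drying from saturation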

  17. Water pollution in the pulp and paper industry: Treatment methods. (Latest citations from the NTIS database). Published Search

    SciTech Connect

    Not Available

    1993-07-01

    The bibliography contains citations concerning waste treatment methods for the pulp and paper industry. Some of these methods are: sedimentation, flotation, filtration, coagulation, adsorption, and general concentration processes. (Contains a minimum of 142 citations and includes a subject term index and title list.)

  18. Databases and data mining

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  19. Intensity of regionally applied tastes in relation to administration method: an investigation based on the "taste strips" test.

    PubMed

    Manzi, Brian; Hummel, Thomas

    2014-02-01

    To compare various methods of applying regional taste stimuli to the tongue. "Taste strips" are a clinical tool to determine gustatory function. How a patient perceives the chemical environment in the mouth is a result of many factors, such as taste bud distribution and interactions between the cranial nerves. To date, there have been few studies describing the different approaches to administering taste strips to maximize taste identification accuracy and intensity. This is a normative value acquisition pilot and single-center study. The investigation involved 30 participants reporting a normal sense of smell and taste (18 women, 12 men, mean age 33 years). The taste test was based on spoon-shaped filter paper strips impregnated with four taste qualities (sweet, sour, salty, and bitter) at concentrations shown to be easily detectable by young healthy subjects. The strips were administered using three methods (held stationary on the tip of the tongue, applied across the tongue, held in the mouth), resulting in a total of 12 trials per participant. Subjects identified the taste from a list of four descriptors (sweet, sour, salty, bitter) and ranked the intensity on a scale from 0 to 10. Statistical analyses were performed on the accuracy of taste identification and rated intensities. The participants perceived, in order of most to least intense: salt, sour, bitter, sweet. Of the four tastes, sour was consistently the least accurately identified. Presenting the taste strip inside the closed mouth of the participants produced the least accurate taste identification, whereas moving the taste strip across the tongue led to a significant increase in intensity for the sweet taste. In this study of 30 subjects at the second concentration, optimized accuracy and intensity of taste identification were observed when taste strips were administered laterally across the anterior third of the extended tongue. Further studies are required on more subjects and the additional concentrations

  20. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deCarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on the efficiency and analysis capabilities of existing database software, with greater flexibility and documentation. It offers flexibility in the type of data that can be stored, and supports efficient retrieval across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., Gravity Recovery And Climate Experiment (GRACE) data). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  1. [Changes in features of diabetes care in Hungary in the period of years 2001-2014. Aims and methods of the database analysis of the National Health Insurance Fund].

    PubMed

    Jermendy, György; Kempler, Péter; Abonyi-Tóth, Zsolt; Rokszin, György; Wittmann, István

    2016-08-01

    In the last couple of years, database analyses have become increasingly popular among clinical-epidemiological investigations. In Hungary, the National Health Insurance Fund serves as the central database of all medical attendances in state departments and purchases of drug prescriptions in pharmacies. Data from in- and outpatient departments, as well as those from pharmacies, are regularly collected in this database, which is public and accessible on request. The aim of this retrospective study was to investigate the database of the National Health Insurance Fund in order to analyze diabetes-associated morbidity and mortality in the period 2001-2014. Moreover, data on therapeutic costs, features of hospitalizations and the practice of antidiabetic treatment were examined. The authors report here on the method of the database analysis. It is to be hoped that the upcoming results of this investigation will add new data to recent knowledge about diabetes care in Hungary. Orv. Hetil., 2016, 157(32), 1259-1265.

  2. Medical database security evaluation.

    PubMed

    Pangalos, G J

    1993-01-01

    Users of medical information systems need confidence in the security of the system they are using. They also need a method to evaluate and compare its security capabilities. Every system has its own requirements for maintaining confidentiality, integrity and availability. In order to meet these requirements a number of security functions must be specified covering areas such as access control, auditing, error recovery, etc. Appropriate confidence in these functions is also required. The 'trust' in trusted computer systems rests on their ability to prove that their secure mechanisms work as advertised and cannot be disabled or diverted. The general framework and requirements for medical database security and a number of parameters of the evaluation problem are presented and discussed. The problem of database security evaluation is then discussed, and a number of specific proposals are presented, based on a number of existing medical database security systems.

  3. School Administrator and Parent Perceptions of School, Family, and Community Partnerships in Middle School: A Mixed Methods Study

    ERIC Educational Resources Information Center

    LeBlanc, Jackie

    2011-01-01

    The purpose of this research was to identify, analyze, and compare the perceptions of parents and school administrators in regard to school-family partnerships in three middle schools in the State of Louisiana. The study investigated the similarities and dissimilarities between parent and school administrator perceptions, probed to determine…

  4. Nanocomposites based on Soluplus and Angelica gigas Nakai extract fabricated by an electrohydrodynamic method for oral administration.

    PubMed

    Lee, Jeong-Jun; Nam, Suyeong; Park, Ju-Hwan; Lee, Song Yi; Jeong, Jae Young; Lee, Jae-Young; Kang, Wie-Soo; Yoon, In-Soo; Kim, Dae-Duk; Cho, Hyun-Jong

    2016-12-15

    Nanocomposites (NCs) based on Soluplus (SP) were fabricated by an electrohydrodynamic (EHD) method for the oral delivery of Angelica gigas Nakai (AGN). Nano-sized particles were obtained after dispersing the resultant, produced by the EHD technique, in the aqueous environment. AGN/SP2 (AGN:SP=1:2, w/w) NC dispersion in aqueous media exhibited a 130nm mean diameter, narrow size distribution, and robust stability in the tested concentration range of the ethanol extract of AGN (AGN EtOH ext) and at pH 1.2 and 6.8. Amorphization of the components of AGN and their interactions with SP in the AGN/SP2 NC formulation were demonstrated by X-ray diffractometry (XRD) analysis. The released amounts of decursin (D) and decursinol angelate (DA), major components of AGN, from NCs were improved compared with those from the AGN EtOH ext group at both pH 1.2 and 6.8. As D and DA can be metabolized into decursinol (DOH) in the liver after oral administration, the DOH concentrations in plasma were quantitatively determined to evaluate the oral absorption of AGN. In a pharmacokinetic study in rats, higher oral absorption and the maximum concentration in plasma (Cmax) were presented in the AGN/SP2 NC group compared with the AGN EtOH ext and AGN NC groups. These findings indicate the successful application of developed SP-based NCs for the oral delivery of AGN.

  5. Analysing factors related to slipping, stumbling, and falling accidents at work: Application of data mining methods to Finnish occupational accidents and diseases statistics database.

    PubMed

    Nenonen, Noora

    2013-03-01

    The utilisation of data mining methods has become common in many fields. In occupational accident analysis, however, these methods are still rarely exploited. This study applies methods of data mining (decision tree and association rules) to the Finnish national occupational accidents and diseases statistics database to analyse factors related to slipping, stumbling, and falling (SSF) accidents at work from 2006 to 2007. SSF accidents at work constitute a large proportion (22%) of all accidents at work in Finland. In addition, they are more likely to result in longer periods of incapacity for work than other workplace accidents. The most important factor influencing whether or not an accident at work is related to SSF is the specific physical activity of movement. In addition, the risk of SSF accidents at work seems to depend on the occupation and the age of the worker. The results were in line with previous research. Hence the application of data mining methods was considered successful. The results did not reveal anything unexpected though. Nevertheless, because of the capability to illustrate a large dataset and relationships between variables easily, data mining methods were seen as a useful supplementary method in analysing occupational accident data.
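
    As a toy illustration of the decision tree part of such an analysis, the sketch below fits a tree to a few invented accident records; the field names and categories are stand-ins, not those of the Finnish database.

        # Decision tree on categorical accident records (invented data).
        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier

        records = pd.DataFrame({
            "physical_activity": ["movement", "handling", "movement", "tool_work"],
            "occupation":        ["nurse", "builder", "cleaner", "machinist"],
            "age_group":         ["50-59", "30-39", "60+", "40-49"],
            "ssf_accident":      [1, 0, 1, 0],  # slipping/stumbling/falling outcome
        })

        X = pd.get_dummies(records.drop(columns="ssf_accident"))
        tree = DecisionTreeClassifier(max_depth=3).fit(X, records["ssf_accident"])
        print(dict(zip(X.columns, tree.feature_importances_)))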

  6. Validation of White-Matter Lesion Change Detection Methods on a Novel Publicly Available MRI Image Database.

    PubMed

    Lesjak, Žiga; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2016-10-01

    Changes in white-matter lesions (WMLs) are good predictors of the progression of neurodegenerative diseases like multiple sclerosis (MS). The changes can be monitored with longitudinal magnetic resonance (MR) imaging, and the need for their accurate and reliable quantification has led to the development of several automated MR image analysis methods. However, an objective comparison of the methods is difficult, because publicly unavailable validation datasets with ground truth and different sets of performance metrics were used. In this study, we acquired longitudinal MR datasets of 20 MS patients, in which brain regions were extracted, spatially aligned and intensity normalized. Two expert raters then delineated and jointly revised the WML changes on subtracted baseline and follow-up MR images to obtain ground truth WML segmentations. The main contribution of this paper is an objective, quantitative and systematic evaluation of two unsupervised and one supervised intensity-based change detection methods on the publicly available datasets with ground truth segmentations, using common pre- and post-processing steps and common evaluation metrics. In addition, different combinations of the two main steps of the studied change detection methods, i.e., dissimilarity map construction and its segmentation, were tested to identify the best performing combination.
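
    The two-step structure named above (dissimilarity map construction, then its segmentation) can be sketched crudely with subtraction and robust thresholding; real change detection methods are far more elaborate, and the arrays below merely stand in for co-registered, intensity-normalized scans.

        # Crude subtract-and-threshold change detection (illustration only).
        import numpy as np

        def detect_wml_changes(baseline, followup, k=3.0):
            diff = followup.astype(float) - baseline.astype(float)  # dissimilarity map
            med = np.median(diff)
            mad = np.median(np.abs(diff - med)) + 1e-9              # robust spread
            return np.abs(diff - med) > k * 1.4826 * mad            # segmentation

        rng = np.random.default_rng(0)
        base = rng.normal(100, 5, (64, 64))
        follow = base.copy()
        follow[20:24, 30:34] += 40  # simulated lesion growth
        print(detect_wml_changes(base, follow).sum(), "changed voxels")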

  7. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture. A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are kept consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
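
    A toy schema can suggest how states, models, and goals might live in one consistent store, in the spirit of the tool described above; the table and column names below are this example's inventions, not the tool's actual design.

        # Minimal state/model/goal schema (hypothetical, SQLite in memory).
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE state (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE model (id INTEGER PRIMARY KEY,
                            state_id INTEGER NOT NULL REFERENCES state(id),
                            description TEXT);
        CREATE TABLE goal  (id INTEGER PRIMARY KEY,
                            state_id INTEGER NOT NULL REFERENCES state(id),
                            constraint_expr TEXT);
        """)
        conn.execute("INSERT INTO state (name) VALUES ('battery_charge')")
        conn.execute("INSERT INTO model (state_id, description) "
                     "VALUES (1, 'charge decreases with load current')")
        conn.execute("INSERT INTO goal (state_id, constraint_expr) "
                     "VALUES (1, 'charge > 0.3 during eclipse')")
        print(conn.execute("SELECT state.name, model.description FROM state "
                           "JOIN model ON model.state_id = state.id").fetchall())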

  8. 41 CFR 302-7.304 - When HHG are shipped under the actual expense method, and PBP&E as an administrative expense, in...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-TRANSPORTATION AND TEMPORARY STORAGE OF HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E) Agency Responsibilities § 302-7.304 When HHG are shipped under the actual expense method, and PBP&E as an... under the actual expense method, and PBP&E as an administrative expense, in the same lot, are...

  9. Performance evaluation of an automatic segmentation method of cerebral arteries in MRA images by use of a large image database

    NASA Astrophysics Data System (ADS)

    Uchiyama, Yoshikazu; Asano, Tatsunori; Hara, Takeshi; Fujita, Hiroshi; Kinosada, Yasutomi; Asano, Takahiko; Kato, Hiroki; Kanematsu, Masayuki; Hoshi, Hiroaki; Iwama, Toru

    2009-02-01

    The detection of cerebrovascular diseases such as unruptured aneurysm, stenosis, and occlusion is a major application of magnetic resonance angiography (MRA). However, their accurate detection is often difficult for radiologists. Therefore, several computer-aided diagnosis (CAD) schemes have been developed to assist radiologists with image interpretation. The purpose of this study was to develop a computerized method for segmenting cerebral arteries, which is an essential component of CAD schemes. For the segmentation of vessel regions, we first used a gray-level transformation to calibrate voxel values. To adjust for variations in the positioning of patients, registration was subsequently employed to maximize the overlap of the vessel regions in the target image and reference image. The vessel regions were then segmented from the background using gray-level thresholding and region growing techniques. Finally, rule-based schemes with features such as size, shape, and anatomical location were employed to distinguish between vessel regions and false positives. Our method was applied to 854 clinical cases obtained from two different hospitals. Acceptable segmentation of the cerebral arteries was attained in 97.1% (829/854) of the MRA studies. Therefore, our computerized method would be useful in CAD schemes for the detection of cerebrovascular diseases in MRA images.
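
    Of the steps listed above, region growing is the easiest to sketch in isolation. The fragment below grows a region from a seed within a gray-level tolerance; it is purely illustrative and omits the calibration, registration, and rule-based false-positive removal that the method relies on.

        # 4-connected region growing from a seed (illustration only).
        import numpy as np
        from collections import deque

        def region_grow(img, seed, tol=10.0):
            grown = np.zeros(img.shape, dtype=bool)
            ref = float(img[seed])
            q = deque([seed])
            while q:
                y, x = q.popleft()
                if grown[y, x] or abs(float(img[y, x]) - ref) > tol:
                    continue
                grown[y, x] = True
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                        q.append((ny, nx))
            return grown

        img = np.zeros((32, 32))
        img[10:20, 10:20] = 200.0  # bright "vessel" patch
        print(region_grow(img, (15, 15)).sum())  # -> 100 pixels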

  10. Data manipulation in heterogeneous databases

    SciTech Connect

    Chatterjee, A.; Segev, A.

    1991-10-01

    Many important information systems applications require access to data stored in multiple heterogeneous databases. This paper examines a problem in inter-database data manipulation within a heterogeneous environment, where conventional techniques are no longer useful. To solve the problem, a broader definition of the join operator is proposed, and a method to probabilistically estimate the accuracy of the join is discussed.
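
    The flavor of such a broadened join can be suggested with a toy approximate join, where tuples are paired when a string-similarity score clears a threshold and the score doubles as a rough confidence estimate; this illustrates the general idea, not the paper's actual operator.

        # Approximate join across two sources with a similarity threshold.
        from difflib import SequenceMatcher

        left = [("A1", "Jon Smith"), ("A2", "Mary Jones")]
        right = [("B7", "John Smith"), ("B9", "M. Jones")]

        def approx_join(l, r, threshold=0.7):
            for lid, lname in l:
                for rid, rname in r:
                    p = SequenceMatcher(None, lname.lower(), rname.lower()).ratio()
                    if p >= threshold:
                        yield lid, rid, round(p, 2)  # p ~ crude match confidence

        print(list(approx_join(left, right)))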

  11. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Background Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Method Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g., name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available in public sources. Conclusion Our findings have shown that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed. PMID:26560127

  12. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  13. A Database of Formation Enthalpies of Nitrogen Species by Compound Methods (CBS-QB3, CBS-APNO, G3, G4).

    PubMed

    Simmie, John M

    2015-10-22

    Accurate thermochemical data for compounds containing C/H/N/O are required to underpin kinetics simulation and modeling of the reactions of these species in different environments. There is a dearth of experimental data, so computational quantum chemistry has stepped in to fill this breach and to verify whether particular experiments are in need of revision. A number of composite model chemistries (CBS-QB3, CBS-APNO, G3, and G4) are used to compute theoretical atomization energies and hence enthalpies of formation at 0 and 298.15 K, and these are benchmarked against the best available compendium of values, the Active Thermochemical Tables or ATcT. In general the agreement is very good for some 28 species, with the only discrepancy being for hydrazine. It is shown that, although individually the methods do not perform that well, collectively the mean unsigned error is < 1.7 kJ mol⁻¹; hence, this approach provides a useful tool to screen published values and validate new experimental results. Using multiple model chemistries does have some drawbacks but can produce good results even for challenging molecules like HOON and CN2O2. The results for these smaller validated molecules are then used as anchors for determining the formation enthalpies of larger species, such as methylated hydrazines and diazenes and five- and six-membered heterocyclics, via carefully chosen isodesmic working reactions, with the aim of resolving some discrepancies in the literature and establishing a properly validated database. This expanded database could be useful in testing the performance of computationally less-demanding density functional methods with newer functionals that have the capacity to treat much larger systems than those tested here.
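
    The atomization-energy route named above reduces to simple arithmetic once the pieces are computed. A worked sketch for H2O, with approximate round-number inputs (not values from the paper):

        # Formation enthalpy at 0 K from a theoretical atomization energy.
        # dHf(molecule, 0 K) = sum of atomic dHf(0 K) - atomization energy D0.
        dHf_atom_0K = {"H": 216.0, "O": 246.8}  # kJ/mol, approximate
        atoms_in_h2o = {"H": 2, "O": 1}
        D0_h2o = 917.8                          # kJ/mol, approximate

        dHf_h2o_0K = sum(n * dHf_atom_0K[a] for a, n in atoms_in_h2o.items()) - D0_h2o
        print(round(dHf_h2o_0K, 1))             # ~ -239.0; accepted value ~ -238.9 kJ/mol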

  14. 76 FR 59170 - Hartford Financial Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, CT...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-23

    ... Employment and Training Administration Hartford Financial Services, Inc., Corporate/EIT/CTO Database...) applicable to workers and former workers Hartford Financial Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, Connecticut (The Hartford, Corporate/EIT/CTO Database Management...

  15. 78 FR 25134 - Sixteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-29

    ... Federal Aviation Administration Sixteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  16. 78 FR 51809 - Seventeenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-21

    ... Federal Aviation Administration Seventeenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  17. 78 FR 8684 - Fifteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint with EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... Federal Aviation Administration Fifteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint with EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  18. 78 FR 66418 - Eighteenth Meeting: RTCA Special Committee 217-Aeronautical Databases Joint With EUROCAE WG-44...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-05

    ... Federal Aviation Administration Eighteenth Meeting: RTCA Special Committee 217--Aeronautical Databases Joint With EUROCAE WG-44--Aeronautical Databases AGENCY: Federal Aviation Administration (FAA), U.S. Department of Transportation (DOT). ACTION: Notice of RTCA Special Committee 217--Aeronautical...

  19. Tri-party agreement databases, access mechanism and procedures. Revision 2

    SciTech Connect

    Brulotte, P.J.

    1996-01-01

    This document contains the information required for the Washington State Department of Ecology (Ecology) and the U.S. Environmental Protection Agency (EPA) to access databases related to the Hanford Federal Facility Agreement and Consent Order (Tri-Party Agreement). It identifies the procedure required to obtain access to the Hanford Site computer networks and the Tri-Party Agreement related databases. It addresses security requirements, access methods, database availability dates, database access procedures, and the minimum computer hardware and software configurations required to operate within the Hanford Site networks. This document supersedes any previous agreements, including the Administrative Agreement to Provide Computer Access to U.S. Environmental Protection Agency (EPA) and the Administrative Agreement to Provide Computer Access to Washington State Department of Ecology (Ecology), which were signed by the U.S. Department of Energy (DOE), Richland Operations Office (RL) in June 1990. Access approval to EPA and Ecology is extended by RL to include all Tri-Party Agreement relevant databases named in this document via the documented access method and date. Access to databases and systems not listed in this document will be granted as determined necessary and negotiated among Ecology, EPA, and RL through the Tri-Party Agreement Project Managers. The Tri-Party Agreement Project Managers are the primary points of contact for all activities to be carried out under the Tri-Party Agreement Action Plan. Access to the Tri-Party Agreement related databases and systems does not provide or imply any ownership on behalf of Ecology or EPA, whether public or private, of either the database or the system. Access to identified systems and databases does not include access to network/system administrative control information, network maps, etc.

  20. The Exoplanet Orbit Database

    NASA Astrophysics Data System (ADS)

    Wright, J. T.; Fakhouri, O.; Marcy, G. W.; Han, E.; Feng, Y.; Johnson, John Asher; Howard, A. W.; Fischer, D. A.; Valenti, J. A.; Anderson, J.; Piskunov, N.

    2011-04-01

    We present a database of well-determined orbital parameters of exoplanets and their host stars' properties. This database comprises spectroscopic orbital elements measured for 427 planets orbiting 363 stars from radial velocity and transit measurements as reported in the literature. We have also compiled fundamental transit parameters, stellar parameters, and the method used for each planet's discovery. This Exoplanet Orbit Database includes all planets with robust, well measured orbital parameters reported in peer-reviewed articles. The database is available in a searchable, filterable, and sortable form online through the Exoplanets Data Explorer table, and the data can be plotted and explored through the Exoplanet Data Explorer plotter. We use the Data Explorer to generate publication-ready plots, giving three examples of the signatures of exoplanet migration and dynamical evolution: we illustrate the character of the apparent correlation between mass and period in exoplanet orbits, the different selection biases between radial velocity and transit surveys, and the fact that multiplanet systems show a distinct semimajor-axis distribution from apparently singleton systems.

  1. Estimation of rhG-CSF absorption kinetics after subcutaneous administration using a modified Wagner-Nelson method with a nonlinear elimination model.

    PubMed

    Hayashi, N; Aso, H; Higashida, M; Kinoshita, H; Ohdo, S; Yukawa, E; Higuchi, S

    2001-05-01

    The clearance of recombinant human granulocyte-colony stimulating factor (rhG-CSF) is known to decrease with dose increase, and to be saturable. The average clearance after intravenous administration will be lower than that after subcutaneous administration. Therefore, the apparent absolute bioavailability with subcutaneous administration calculated from the AUC ratio is expected to be an underestimate. The absorption pharmacokinetics after subcutaneous administration was examined using the results of the bioequivalency study between two rhG-CSF formulations with a dose of 2 microg/kg. The analysis was performed using a modified Wagner-Nelson method with the nonlinear elimination model. The apparent absolute bioavailability for subcutaneous administration was 56.9 and 67.5% for each formulation, and the ratio between them was approximately 120%. The true absolute bioavailability was, however, estimated to be 89.8 and 96.9%, respectively, and the ratio was approximately 108%. The absorption pattern was applied to other doses, and the predicted clearance values for subcutaneous and intravenous administrations were then similar to the values for several doses reported in the literature. The underestimation of bioavailability was around 30%, and the amplification of difference was 2.5 times, from 8 to 20%, because of the nonlinear pharmacokinetics. The neutrophil increases for each formulation were identical, despite the different bioavailabilities. The reason for this is probably that the amount eliminated through the saturable process, which might indicate the amount consumed by the G-CSF receptor, was identical for each formulation.
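
    A modified Wagner-Nelson calculation of this general type replaces the linear elimination term with a Michaelis-Menten one, so the cumulative amount eliminated must be integrated numerically. A hedged numerical sketch, with an invented concentration profile and parameter values:

        # Wagner-Nelson-style fraction absorbed with Michaelis-Menten elimination.
        import numpy as np

        t = np.linspace(0, 24, 97)                          # h
        c = 8.0 * (np.exp(-0.15 * t) - np.exp(-0.8 * t))    # ng/mL, invented profile
        vmax, km = 2.0, 5.0                                 # assumed MM parameters

        elim = vmax * c / (km + c)                          # nonlinear elimination rate
        cum_elim = np.concatenate(([0.0], np.cumsum(
            0.5 * (elim[1:] + elim[:-1]) * np.diff(t))))    # trapezoidal integral
        absorbed = c + cum_elim                             # amount absorbed per volume
        print((absorbed / absorbed[-1])[::16].round(3))     # fraction absorbed vs. time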

  2. Energies and 2'-Hydroxyl Group Orientations of RNA Backbone Conformations. Benchmark CCSD(T)/CBS Database, Electronic Analysis, and Assessment of DFT Methods and MD Simulations.

    PubMed

    Mládek, Arnošt; Banáš, Pavel; Jurečka, Petr; Otyepka, Michal; Zgarbová, Marie; Šponer, Jiří

    2014-01-14

    The sugar-phosphate backbone is an electronically complex molecular segment that imparts to RNA molecules the high flexibility and architectonic heterogeneity necessary for their biological functions. The structural variability of RNA molecules is amplified by the presence of the 2'-hydroxyl group, capable of forming a multitude of intra- and intermolecular interactions. Bioinformatics studies based on the X-ray structure database revealed that the RNA backbone samples at least 46 substates known as rotameric families. The present study provides a comprehensive analysis of RNA backbone conformational preferences and 2'-hydroxyl group orientations. First, we create a benchmark database of estimated CCSD(T)/CBS relative energies of all rotameric families and test the performance of dispersion-corrected DFT-D3 methods and molecular mechanics in vacuum and in continuum solvent. The performance of the DFT-D3 methods is in general quite satisfactory. The B-LYP-D3 method provides the best trade-off between accuracy and computational demands. B3-LYP-D3 slightly outperforms the new PW6B95-D3 and MPW1B95-D3 and is the second most accurate density functional of the study. The best agreement with CCSD(T)/CBS is provided by the DSD-B-LYP-D3 double-hybrid functional, although its large-scale applications may be limited by high computational costs. Molecular mechanics does not reproduce the fine energy differences between the RNA backbone substates. We also demonstrate that the differences in the magnitude of the hyperconjugation effect do not correlate with the energy ranking of the backbone conformations. Further, we investigated the 2'-hydroxyl group orientation preferences. For all families, we conducted QM and MM rigid scans of the hydroxyl group in gas phase and solvent. We then carried out a set of explicit solvent MD simulations of folded RNAs and analyzed the 2'-hydroxyl group orientations of different backbone families in MD. The solvent energy profiles determined primarily by the sugar pucker match well with the

  3. JICST Factual Database: JICST DNA Database

    NASA Astrophysics Data System (ADS)

    Shirokizawa, Yoshiko; Abe, Atsushi

    Japan Information Center of Science and Technology (JICST) has started the on-line service of DNA database in October 1988. This database is composed of EMBL Nucleotide Sequence Library and Genetic Sequence Data Bank. The authors outline the database system, data items and search commands. Examples of retrieval session are presented.

  4. Asbestos Exposure Assessment Database

    NASA Technical Reports Server (NTRS)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC), which will compile all of the exposure assessment data into a well-organized, navigable format. The data include sample types, sample durations, crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of the industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data have been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  5. Administrative Synergy

    ERIC Educational Resources Information Center

    Hewitt, Kimberly Kappler; Weckstein, Daniel K.

    2012-01-01

    One of the biggest obstacles to overcome in creating and sustaining an administrative professional learning community (PLC) is time. Administrators are constantly deluged by the tyranny of the urgent. It is a Herculean task to carve out time for PLCs, but it is imperative to do so. In this article, the authors describe how an administrative PLC…

  6. Pharmacokinetic Comparative Study of Gastrodin and Rhynchophylline after Oral Administration of Different Prescriptions of Yizhi Tablets in Rats by an HPLC-ESI/MS Method

    PubMed Central

    Ge, Zhaohui; Liang, Qionglin; Wang, Yiming; Luo, Guoan

    2014-01-01

    Pharmacokinetic characteristics of rhynchophylline (RIN), gastrodin (GAS), and gastrodigenin (p-hydroxybenzyl alcohol, HBA) were investigated after oral administration of different prescriptions of Yizhi: Yizhi tablets, or the effective parts of tianma (total saponins from Gastrodiae, EPT) and gouteng (rhynchophylla alkaloids, EPG). At different predetermined time points after administration, the concentrations of GAS, HBA, and RIN in rat plasma were determined by an HPLC-ESI/MS method, and the main pharmacokinetic parameters were investigated. The results showed that the pharmacokinetic parameters Cmax and AUC0-∞ differed significantly (P < 0.05) after oral administration of the different prescriptions of Yizhi. The data indicated that the pharmacokinetic processes of GAS, HBA, and RIN in rats would interact with each other or be affected by other components in Yizhi. The rationality of the compatibility of Uncaria and Gastrodia elata as a classic "herb pair" has been verified from the pharmacokinetic viewpoint. PMID:25610474
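
    The noncompartmental quantities compared above (Cmax, AUC) are straightforward to compute from a concentration-time profile. A sketch with an invented profile, not data from the study:

        # Cmax and trapezoidal AUC(0-t) from a concentration-time profile.
        import numpy as np

        t = np.array([0, 0.25, 0.5, 1, 2, 4, 8, 12])    # h
        c = np.array([0, 45, 80, 95, 70, 40, 12, 4.0])  # ng/mL, invented

        cmax, tmax = c.max(), t[c.argmax()]
        auc = np.trapz(c, t)                            # trapezoidal rule
        print(f"Cmax={cmax} ng/mL at {tmax} h; AUC(0-12 h)={auc:.1f} ng*h/mL")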

  7. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, the Lee method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  8. Software Application for Supporting the Education of Database Systems

    ERIC Educational Resources Information Center

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in the Oracle Database Management System environment. The application has two parts: one is the database schema and its content, and the other is a C# application. The schema is used to administer and store the tasks and the…

  9. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at...

  10. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at...

  11. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  12. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  13. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at...

  14. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and...

  15. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at...

  16. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at...

  17. Quantitative Structure activity Relationship Analysis of Pyridinone HIV-1 Reverse Transcriptase Inhibitors using the k Nearest Neighbor Method and QSAR-based Database Mining

    NASA Astrophysics Data System (ADS)

    Medina-Franco, Jose Luis; Golbraikh, Alexander; Oloff, Scott; Castillo, Rafael; Tropsha, Alexander

    2005-04-01

    We have developed quantitative structure-activity relationship (QSAR) models for 44 non-nucleoside HIV-1 reverse transcriptase inhibitors (NNRTIs) of the pyridinone derivative type. The k nearest neighbor (kNN) variable selection approach was used. This method utilizes multiple descriptors such as molecular connectivity indices, which are derived from two-dimensional molecular topology. The modeling process entailed extensive validation, including the randomization of the target property (Y-randomization) test and the division of the dataset into multiple training and test sets to establish the external predictive power of the training set models. QSAR models with high internal and external accuracy were generated, with leave-one-out cross-validated R² (q²) values ranging between 0.5 and 0.8 for the training sets and R² values exceeding 0.6 for the test sets. The best models with the highest internal and external predictive power were used to search the National Cancer Institute database. Derivatives of the pyrazolo[3,4-d]pyrimidine and phenothiazine type were identified as promising novel NNRTI leads. Several candidates were docked into the binding pocket of nevirapine with the AutoDock (version 3.0) software. Docking results suggested that these types of compounds could bind in the NNRTI binding site in a similar mode to the known non-nucleoside inhibitor nevirapine.
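
    The q² statistic used above for internal validation is a leave-one-out analogue of R². A sketch of its computation for a kNN regressor, with random stand-ins for descriptors and activities:

        # Leave-one-out cross-validated q^2 for a kNN QSAR-style model.
        import numpy as np
        from sklearn.model_selection import LeaveOneOut
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)
        X = rng.normal(size=(44, 10))                        # 44 compounds, 10 descriptors
        y = 1.5 * X[:, 0] + rng.normal(scale=0.3, size=44)   # synthetic activity

        pred = np.empty_like(y)
        for train, test in LeaveOneOut().split(X):
            model = KNeighborsRegressor(n_neighbors=3).fit(X[train], y[train])
            pred[test] = model.predict(X[test])

        q2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
        print(round(q2, 3))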

  18. A universal support vector machines based method for automatic event location in waveforms and video-movies: applications to massive nuclear fusion databases.

    PubMed

    Vega, J; Murari, A; González, S

    2010-02-01

    Big physics experiments can collect terabytes (even petabytes) of data on a continuous or long-pulse basis. The measurement systems that follow the temporal evolution of physical quantities translate their observations into very large time-series datasets and video-movies. This article describes a universal and automatic technique to recognize and locate, inside waveforms and video-movies, both signal segments with data of potential interest for specific investigations and singular events. The method is based on regression estimation of the signals using support vector machines. A small number of the samples appear as outliers in the regression process, and these samples allow the identification of both special signatures and singular points. Results are given for the database of the JET fusion device: location of sawteeth in soft x-ray signals to automate the plasma incremental diffusivity computation, identification of plasma disruptive behaviors with automatic determination of the time instant, and, finally, recognition of potentially interesting plasma events from infrared video-movies.
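
    The idea of flagging regression outliers as candidate events can be sketched with an off-the-shelf support vector regressor; the signal, kernel settings, and threshold below are synthetic choices, not those of the article.

        # Locate a singular event as a large SVR residual (illustration only).
        import numpy as np
        from sklearn.svm import SVR

        t = np.linspace(0, 10, 500)
        sig = np.sin(t)
        sig[250] += 2.0  # injected singular event

        model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(t.reshape(-1, 1), sig)
        resid = np.abs(sig - model.predict(t.reshape(-1, 1)))
        print(np.where(resid > 5 * resid.mean())[0])  # flags samples near index 250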

  19. Investigating potential for effects of environmental endocrine disrupters on wild populations of amphibians in UK and Japan: status of historical databases and review of methods.

    PubMed

    Pickford, Daniel B; Larroze, Severine; Takase, Minoru; Mitsui, Naoko; Tooi, Osamu; Santo, Noriaki

    2007-01-01

    Concern over global declines among amphibians has resulted in increased interest in the effects of environmental contaminants on amphibian populations and, more recently, has stimulated research on the potential adverse effects of environmental endocrine disrupters in amphibians. Laboratory studies of the effects of single chemicals on endocrine-relevant endpoints in amphibian, mainly anuran, models are valuable in characterizing sensitivity at the individual level and may yield useful bioassays for screening chemicals for endocrine toxicity (for example, thyroid-disrupting activity). Nevertheless, in the UK and Japan, as in many other countries, it has yet to be demonstrated unequivocally that the exposure of native amphibians to endocrine disrupting environmental contaminants results in adverse effects at the population level. Assessing the potential for such effects is likely to require an ecoepidemiological approach to investigate associations between predicted or actual exposure of amphibians to (endocrine disrupting) environmental contaminants and biologically meaningful responses at the population level. In turn, this demands recent but relatively long-term population trend data. We review two potential sources of such data for widespread UK anurans that could be used in such investigations: records for common frogs and common toads in several databases maintained by the Biological Records Centre (UK Government Centre for Ecology and Hydrology), and adult toad count data from 'Toads on Roads' schemes registered with the UK wildlife charity 'Froglife'. There were few abundance data in the BRC databases that could be used for this purpose, while count data from the Toads on Roads schemes are potentially confounded by the effects of local topology on detection probabilities and by the operation of nonchemical anthropogenic stressors. For Japan, local and regional surveys of amphibians and national ecological censuses gathering amphibian data were reviewed to

  20. Fast decision tree-based method to index large DNA-protein sequence databases using hybrid distributed-shared memory programming model.

    PubMed

    Jaber, Khalid Mohammad; Abdullah, Rosni; Rashid, Nur'Aini Abdul

    2014-01-01

    In recent times, the size of biological databases has increased significantly, with continuous growth in the number of users and the rate of queries, such that some databases have reached the terabyte size. There is therefore an increasing need to access databases at the fastest possible rates. In this paper, the decision tree indexing model was parallelised (PDTIM), using a hybrid of distributed and shared memory on a resident database, with horizontal and vertical growth through the Message Passing Interface (MPI) and POSIX Threads (PThreads), to accelerate the index building time. The PDTIM was implemented using 1, 2, 4 and 5 processors on 1, 2, 3 and 4 threads respectively. The results show that the hybrid technique improved the speedup compared to a sequential version. It can be concluded from the results that the proposed PDTIM is appropriate for large data sets, in terms of index building time.
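
    A toy analogue of the horizontal partitioning described above: worker processes (standing in for MPI ranks) each index one shard of a sequence list, and the partial k-mer indexes are merged afterwards. Pure illustration; the paper's index is a decision tree, not the hash map used here.

        # Parallel shard indexing with merged partial indexes (toy example).
        from collections import defaultdict
        from multiprocessing import Pool

        SEQS = ["ACGTAC", "CGTACG", "TTACGA", "ACGACG"]

        def index_shard(args):
            offset, seqs = args
            idx = defaultdict(list)
            for i, s in enumerate(seqs, start=offset):
                for j in range(len(s) - 2):
                    idx[s[j:j + 3]].append(i)  # 3-mer -> sequence ids
            return dict(idx)

        if __name__ == "__main__":
            shards = [(0, SEQS[:2]), (2, SEQS[2:])]
            merged = defaultdict(list)
            with Pool(2) as pool:
                for part in pool.map(index_shard, shards):
                    for kmer, ids in part.items():
                        merged[kmer].extend(ids)
            print(merged["ACG"])  # ids of sequences containing 'ACG'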

  1. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single particle analysis, 2-D crystallography, helical reconstruction and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  2. EMEN2: an object oriented database and electronic lab notebook.

    PubMed

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J

    2013-02-01

    Transmission electron microscopy and associated methods, such as single particle analysis, two-dimensional crystallography, helical reconstruction, and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source.

  3. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  4. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This paper offers an elementary description of database characteristics and then surveys databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  5. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool to aid not only in studying the mitochondrion but also the diseases associated with it.

  6. Identification and determination of major flavonoids in rat urine by HPLC-UV and HPLC-MS methods following oral administration of Dalbergia odorifera extract.

    PubMed

    Liu, Rongxia; Wang, Wei; Wang, Qiao; Bi, Kaishun; Guo, Dean

    2006-01-01

    Flavonoids are the main active constituents of Dalbergia odorifera. The excretion of the major flavonoids in rat urine after oral administration of D. odorifera extract was investigated by HPLC-UV and HPLC-MS methods. Using the HPLC-MS technique, 18 flavonoids, including five isoflavones, four isoflavanones, four neoflavones, two flavanones, two chalcones and one isoflavanonol, were identified in free form in a urine sample, based on direct comparison of the corresponding tR, UV maximum absorbance (lambda(max)) values and MS data with those of authentic standards. The amounts of the prominent flavonoids, (3R)-4'-methoxy-2',3,7-trihydroxyisoflavanone and vestitone, were determined by HPLC-UV with the internal standard method, and the validation procedure confirmed that it afforded reliable analysis of these two analytes in urine after oral administration of D. odorifera extract.

  7. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. Here, "database" means any text information that can be stored on a computer: medical or police records, patents, journals, papers, and so on. Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and facilitates the user's ability to understand the volume of content therein. While employing the process to identify research opportunities, it became obvious that this promising technology has potential applications in business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships, and associations. The database tomography process would also be a powerful component in competitive intelligence, national security intelligence, and patent analysis. User interest and involvement cannot be overemphasized.
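
    The first two stages the abstract names, word frequency and word proximity analysis, can be sketched as follows. This is only a minimal illustration of the idea with an arbitrary window size, not Kostoff and Eberhart's algorithm.

        # Sketch of the first two stages: word frequency, then co-occurrence
        # counts within a fixed proximity window (window size is arbitrary).
        from collections import Counter

        text = ("database tomography extracts themes and relationships "
                "from the full text of large databases of records")
        words = text.lower().split()

        freq = Counter(words)                 # word frequency analysis

        window = 3                            # proximity window, in words
        pairs = Counter()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                pairs[tuple(sorted((w, words[j])))] += 1

        print(freq.most_common(5))
        print(pairs.most_common(5))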

  8. Peroral administration of 5-bromo-2-deoxyuridine in drinking water is not a reliable method for labeling proliferating S-phase cells in rats.

    PubMed

    Ševc, Juraj; Matiašová, Anna; Smoleková, Ivana; Jendželovský, Rastislav; Mikeš, Jaromír; Tomášová, Lenka; Kútna, Viera; Daxnerová, Zuzana; Fedoročko, Peter

    2015-01-01

    In rodents, peroral (p.o.) administration of 5-bromo-2'-deoxyuridine (BrdU) dissolved in drinking water is a widely used method for labeling newly formed cells over a prolonged time period. Despite the broad applicability of this method, the pharmacokinetics of BrdU in rats or mice after p.o. administration remains unknown. Moreover, the p.o. route of administration may be limited by the relatively low amount of BrdU consumed over 24 h and the characteristic drinking pattern of rats, with water intake observed predominantly during the dark phase. Therefore, we investigated the reliability of staining proliferating S-phase cells with BrdU after p.o. administration (1 mg/ml) to rats under both in vitro and in vivo conditions. Flow cytometric analysis of tumor cells co-cultivated with sera from experimental animals exposed to BrdU dissolved in drinking water or 25% orange juice revealed that the concentration of BrdU in the blood sera of rats throughout the day was below the detection limits of our assay. Ingested BrdU was sufficient to label only approximately 4.2±0.3% (water) or 4.2±0.3% (25% juice) of all S-phase cells. Analysis of data from in vivo conditions indicates that only 7.6±3.3% or 15.5±2.3% of all S-phase cells in the dentate gyrus of the hippocampus were labeled in animals administered drinking water containing BrdU during the light and dark phases of the day, respectively. In addition, the intensity of BrdU-positive nuclei in animals receiving p.o. administration of BrdU was significantly lower than in control animals intraperitoneally injected with BrdU. Our data indicate that the conventional approach of p.o. administration of BrdU in drinking water to rats provides highly inaccurate information about the number of proliferating cells in target tissues. Therefore, other administration routes, such as osmotic minipumps, should be considered for labeling proliferating cells over a prolonged time period.

  9. Administrative Ecology

    ERIC Educational Resources Information Center

    McGarity, Augustus C., III; Maulding, Wanda

    2007-01-01

    This article discusses how all four facets of administrative ecology help dispel the claims about the "impossibility" of the superintendency. These are personal ecology, professional ecology, organizational ecology, and community ecology. Using today's superintendency as an administrative platform, current literature describes a preponderance of…

  10. A Curated Database of Rodent Uterotrophic Bioactivity

    PubMed Central

    Kleinstreuer, Nicole C.; Ceger, Patricia C.; Allen, David G.; Strickland, Judy; Chang, Xiaoqing; Hamm, Jonathan T.; Casey, Warren M.

    2015-01-01

    Background: Novel in vitro methods are being developed to identify chemicals that may interfere with estrogen receptor (ER) signaling, but the results are difficult to put into biological context because of reliance on reference chemicals established using results from other in vitro assays and because of the lack of high-quality in vivo reference data. The Organisation for Economic Co-operation and Development (OECD)-validated rodent uterotrophic bioassay is considered the “gold standard” for identifying potential ER agonists. Objectives: We performed a comprehensive literature review to identify and evaluate data from uterotrophic studies and to analyze study variability. Methods: We reviewed 670 articles with results from 2,615 uterotrophic bioassays using 235 unique chemicals. Study descriptors, such as species/strain, route of administration, dosing regimen, lowest effect level, and test outcome, were captured in a database of uterotrophic results. Studies were assessed for adherence to six criteria that were based on uterotrophic regulatory test guidelines. Studies meeting all six criteria (458 bioassays on 118 unique chemicals) were considered guideline-like (GL) and were subsequently analyzed. Results: The immature rat model was used for 76% of the GL studies. Active outcomes were more prevalent across rat models (74% active) than across mouse models (36% active). Of the 70 chemicals with at least two GL studies, 18 (26%) had discordant outcomes and were classified as both active and inactive. Many discordant results were attributable to differences in study design (e.g., injection vs. oral dosing). Conclusions: This uterotrophic database provides a valuable resource for understanding in vivo outcome variability and for evaluating the performance of in vitro assays that measure estrogenic activity. Citation: Kleinstreuer NC, Ceger PC, Allen DG, Strickland J, Chang X, Hamm JT, Casey WM. 2016. A curated database of rodent uterotrophic bioactivity. Environ

  11. Ketoconazole ion-exchange fiber complex: a novel method to reduce the individual difference of bioavailability in oral administration caused by gastric anacidity.

    PubMed

    Xin, Che; Li-hong, Wang; Jing, Yuan; Yang, Yang; Yue, Yuan; Qi-fang, Wang; San-ming, Li

    2013-01-01

    Water-insoluble, faintly alkaline drugs often present absorption problems in the gastrointestinal tract when administered orally to patients with gastric anacidity. The purpose of the present study was to develop a novel method, based on the ion exchange of ion-exchange fibers, to improve the peroral absorption of such drugs. The water-insoluble, faintly alkaline drug ketoconazole was used as a model drug. Ketoconazole and the active groups of the ion-exchange fibers combined into ion pairs through an acid-base reaction. This drug carrier did not release the drug in deionized water, but in an aqueous solution containing other ions it released the drug by ion exchange. As confirmed by X-ray diffraction and differential scanning calorimetry (DSC), the ketoconazole bound to the ion-exchange fibers was dispersed at a molecular level. The improved dissolution of the ketoconazole ion-exchange fiber complexes likely originates from this highly dispersed state. Furthermore, owing to this highly dispersed state, the complexes significantly decreased the inter-individual difference in absorption of orally administered ketoconazole caused by fluctuations in the acidity of the gastric fluid.

  12. The Danish Cardiac Rehabilitation Database

    PubMed Central

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole

    2016-01-01

    Aim of database The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). Study population Hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising patient-level data from the latter date onward. Main variables Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected relate to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-reported outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Descriptive data Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. Conclusion The DHRD is an online, national quality improvement database on CR for patients with CHD, with mandatory registration of data at both the patient and the program level. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients. PMID:27822083

  13. Leadership Styles of Nursing Home Administrators and Their Association with Staff Turnover

    ERIC Educational Resources Information Center

    Donoghue, Christopher; Castle, Nicholas G.

    2009-01-01

    Purpose: The purpose of this study was to examine the associations between nursing home administrator (NHA) leadership style and staff turnover. Design and Methods: We analyzed primary data from a survey of 2,900 NHAs conducted in 2005. The Online Survey Certification and Reporting database and the Area Resource File were utilized to extract…

  14. OSSI-PET: Open-Access Database of Simulated [(11)C]Raclopride Scans for the Inveon Preclinical PET Scanner: Application to the Optimization of Reconstruction Methods for Dynamic Studies.

    PubMed

    Garcia, Marie-Paule; Charil, Arnaud; Callaghan, Paul; Wimberley, Catriona; Busso, Florian; Gregoire, Marie-Claude; Bardies, Manuel; Reilhac, Anthonin

    2016-07-01

    A wide range of medical imaging applications benefits from the availability of realistic ground truth data. In the case of positron emission tomography (PET), ground truth data are crucial for validating processing algorithms and assessing their performance. The design of such ground truth data often relies on Monte Carlo simulation techniques. Since the creation of a large dataset is not trivial, both in terms of computing time and realism, we propose the OSSI-PET database containing 350 simulated [(11)C]Raclopride dynamic scans for rats, created specifically for the Inveon preclinical PET scanner. The originality of this database lies in the availability of several groups of scans with controlled biological variations in the striata. In addition, each group consists of a large number of realizations (i.e., noise replicates). We present the construction methodology of this database using rat pharmacokinetic and anatomical models. A first application using the OSSI-PET database is presented. Several commonly used reconstruction techniques were compared in terms of image quality, accuracy and variability of the activity estimates and of the computed kinetic parameters. The results showed that the OP-OSEM3D iterative reconstruction method outperformed the other tested methods. Analytical methods such as FBP2D and 3DRP also produced satisfactory results. However, FORE followed by OSEM2D reconstruction should be avoided. Beyond illustrating the potential of the database, this application will help scientists understand the different sources of noise and bias that can occur at the different steps of processing and will be very useful for choosing appropriate reconstruction methods and parameters.

  15. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of case capture within the population and time period of interest and the accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in the distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpreting the findings appropriately.

  16. Tests of methods for evaluating bibliographic databases: an analysis of the National Library of Medicine's handling of literatures in the medical behavioral sciences.

    PubMed

    Griffith, B C; White, H D; Drott, M C; Saye, J D

    1986-07-01

    This article reports on five separate studies designed for the National Library of Medicine (NLM) to develop and test methodologies for evaluating the products of large databases. The methodologies were tested on literatures of the medical behavioral sciences (MBS). One of these studies examined how well NLM covered MBS monographic literature using CATLINE and OCLC. Another examined MBS journal and serial literature coverage in MEDLINE and other MBS-related databases available through DIALOG. These two studies used 1010 items derived from the reference lists of sixty-one journals, and tested for gaps and overlaps in coverage in the various databases. A third study examined the quality of the indexing NLM provides to MBS literatures and developed a measure of indexing as a system component. The final two studies explored how well MEDLINE retrieved documents on topics submitted by MBS professionals and how online searchers viewed MEDLINE (and other systems and databases) in handling MBS topics. The five studies yielded both broad research outcomes and specific recommendations to NLM.

  17. The Education of Librarians for Data Administration.

    ERIC Educational Resources Information Center

    Koenig, Michael E. D.; Kochoff, Stephen T.

    1983-01-01

    Argues that the increasing importance of database management systems (DBMS) and recognition of the information dependency of business planning are creating new job opportunities for librarians/information technicians. Highlights include development and functions of DBMSs, data and database administration, potential for librarians, and implications…

  18. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  19. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...

  20. Aviation Safety Issues Database

    NASA Technical Reports Server (NTRS)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders, such as the Commercial Aviation Safety Team (CAST), have already used the database. This broader interest was the genesis of making the database publicly accessible and writing this report.

  1. Scopus database: a review.

    PubMed

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  2. [DICOM data conversion technology research for database].

    PubMed

    Wang, Shiyu; Lin, Hao

    2010-12-01

    A comprehensive medical image platform built for networked access to medical images, measurements, and virtual surgery navigation needs the support of medical image databases. The medical image database we built contains two-dimensional images and three-dimensional models. Conventional databases that simply store DICOM files do not meet these requirements. We therefore convert DICOM files into BMP images plus the indispensable data elements, and then use the BMP images and data elements to reconstruct the three-dimensional model. The reliability of this DICOM data conversion was verified, and on this basis a human hip joint medical image database was built. Experimental results show that this method of building the medical image database not only meets the requirements of the database application but also greatly reduces the amount of database storage.
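
    The conversion step described here, splitting a DICOM file into a displayable bitmap plus the data elements needed for reconstruction, might look roughly like the following sketch using pydicom and Pillow. The file name and the particular elements retained are assumptions for illustration, not the authors' element list.

        # Sketch: split a DICOM file into an 8-bit BMP plus the data elements
        # kept for 3-D reconstruction. File name and element list are
        # illustrative assumptions.
        import numpy as np
        import pydicom
        from PIL import Image

        ds = pydicom.dcmread("slice001.dcm")   # hypothetical input file

        meta = {
            "rows": ds.Rows,
            "cols": ds.Columns,
            "pixel_spacing": getattr(ds, "PixelSpacing", None),
            "slice_thickness": getattr(ds, "SliceThickness", None),
        }

        # Normalize the raw pixel data to 8-bit grayscale and save as BMP.
        arr = ds.pixel_array.astype(np.float32)
        rng = float(np.ptp(arr)) or 1.0
        arr8 = ((arr - arr.min()) / rng * 255).astype(np.uint8)
        Image.fromarray(arr8).save("slice001.bmp")
        print(meta)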

  3. JICST Factual Database

    NASA Astrophysics Data System (ADS)

    Hayase, Shuichi; Okano, Keiko

    Japan Information Center of Science and Technology (JICST) started the online service of the JICST Crystal Structure Database (JICST CR) in January 1990. This database provides information on the atomic positions in a crystal and related information about the crystal. The database system and the crystal data in JICST CR are outlined in this manuscript.

  4. [Evaluation of the Association of Hand-Foot Syndrome with Anticancer Drugs Using the US Food and Drug Administration Adverse Event Reporting System (FAERS) and Japanese Adverse Drug Event Report (JADER) Databases].

    PubMed

    Sasaoka, Sayaka; Matsui, Toshinobu; Abe, Junko; Umetsu, Ryogo; Kato, Yamato; Ueda, Natsumi; Hane, Yuuki; Motooka, Yumi; Hatahira, Haruna; Kinosada, Yasutomi; Nakamura, Mitsuhiro

    2016-01-01

    The Japanese Ministry of Health, Labor, and Welfare lists hand-foot syndrome as a serious adverse drug event. Therefore, we evaluated its association with anticancer drug therapy using case reports in the Japanese Adverse Drug Event Report (JADER) and the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS). In addition, we calculated the reporting odds ratio (ROR) of anticancer drugs potentially associated with hand-foot syndrome, and applied the Weibull shape parameter to time-to-event data from JADER. We found that JADER contained 338,224 reports from April 2004 to November 2014, while FAERS contained 5,821,354 reports from January 2004 to June 2014. In JADER, the RORs, with 95% confidence intervals (CIs), of hand-foot syndrome for capecitabine, tegafur-gimeracil-oteracil, fluorouracil, sorafenib, and regorafenib were 63.60 (95% CI, 56.19-71.99), 1.30 (95% CI, 0.89-1.89), 0.48 (95% CI, 0.30-0.77), 26.10 (95% CI, 22.86-29.80), and 133.27 (95% CI, 112.85-157.39), respectively. Adverse event symptoms of hand-foot syndrome were observed with most anticancer drugs, which carry warnings of the propensity to cause these effects in their drug information literature. The time-to-event analysis using the Weibull shape parameter revealed differences in the time-dependency of the adverse events of each drug. Therefore, anticancer drugs should be used carefully in clinical practice, and patients may require careful monitoring for symptoms of hand-foot syndrome.
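
    The reporting odds ratio used in studies like this is conventionally computed from a 2x2 disproportionality table, with an approximate 95% CI taken on the log scale. A minimal sketch follows; the counts are fabricated for illustration and are not the JADER or FAERS figures behind the values quoted above.

        # Reporting odds ratio with an approximate 95% CI from a 2x2 table.
        # Counts are fabricated, not the JADER/FAERS figures quoted above.
        import math

        a, b = 150, 850      # target drug: reports with / without the event
        c, d = 900, 98100    # all other drugs: with / without the event

        ror = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo, hi = (math.exp(math.log(ror) + z * se) for z in (-1.96, 1.96))
        print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")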

  5. EMU Lessons Learned Database

    NASA Technical Reports Server (NTRS)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  6. Development and Validation of a HPLC Method for the Determination of Cyclosporine A in New Bioadhesive Nanoparticles for Oral Administration

    PubMed Central

    Pecchio, M.; Salman, H.; Irache, J. M.; Renedo, M. J.; Dios-Viéitez, M. C.

    2014-01-01

    A simple and reliable high performance liquid chromatography method was developed and validated for the rapid determination of cyclosporine A in new pharmaceutical dosage forms based on the use of poly(methylvinylether-co-maleic anhydride) nanoparticles. The chromatographic separation was achieved using an Ultrabase C18 column (250×4.6 mm, 5 μm), which was kept at 75°. The gradient mobile phase consisted of acetonitrile and water at a flow rate of 1 ml/min. The effluent was monitored at 205 nm using a diode array detector. The method exhibited linearity over the assayed concentration range (22-250 μg/ml) and demonstrated good intraday and interday precision and accuracy (relative standard deviations were less than 6.5% and the deviation from theoretical values was below 5.5%). The detection limit was 1.36 μg/ml. This method was also applied to the quantitative analysis of cyclosporine A released from poly(methylvinylether-co-maleic anhydride) nanoparticles. PMID:24843186
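
    Validation work of this kind typically checks calibration linearity and estimates a detection limit; one common approach, the ICH Q2 estimate LOD = 3.3·sigma/slope, can be sketched as below. The calibration points are invented for illustration and are not the paper's data.

        # Calibration-curve linearity and LOD = 3.3*sigma/slope (ICH Q2 style).
        # The calibration points below are invented, not the paper's data.
        import numpy as np

        conc = np.array([22, 50, 100, 150, 200, 250], dtype=float)  # ug/ml
        area = np.array([1.10, 2.52, 5.01, 7.55, 9.98, 12.60])      # peak area

        slope, intercept = np.polyfit(conc, area, 1)
        pred = slope * conc + intercept
        residual_sd = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))
        r = np.corrcoef(conc, area)[0, 1]

        lod = 3.3 * residual_sd / slope
        print(f"r = {r:.4f}, estimated LOD ~ {lod:.2f} ug/ml")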

  7. A rapid method to assess the coverage of the mass drug administration of diethylcarbamazine in the program to eliminate lymphatic filariasis in India.

    PubMed

    Babu, B V

    2005-01-01

    A rapid method to assess the coverage of mass drug administration (MDA) in the program to eliminate lymphatic filariasis needs to be developed for monitoring and evaluation of the program. This study attempted to develop and test a method of rapid assessment of coverage by using the existing resources of the program. This is based on the data obtained from the randomly selected health workers and drug distributors involved in the drug distribution process and the data of a household coverage survey of the program. The MDA coverage rate obtained through the evaluation survey was highly correlated with the rates obtained from health workers and drug distributors as a rapid assessment. Thus, MDA coverages assessed through health workers and drug distributors can give a good coverage estimate. The involvement of the existing human resources of the program in this rapid method of assessing MDA coverage was cost-effective.

  8. Comparing the effects of reflexology methods and Ibuprofen administration on dysmenorrhea in female students of Isfahan University of Medical Sciences

    PubMed Central

    Valiani, Mahboubeh; Babaei, Elaheh; Heshmat, Reza; Zare, Zahra

    2010-01-01

    BACKGROUND: Dysmenorrhea or menstrual pain is one of the most common disorders, experienced by 50% of women of reproductive age. Adverse effects of medical treatments and their failure rate of 20-25% have caused many women to seek other complementary and alternative treatment methods for primary dysmenorrhea. Hence, this study aimed to compare and determine the efficacy of reflexology and Ibuprofen in reducing the intensity and duration of menstrual pain. METHODS: This was a quasi-experimental clinical trial on 68 students with primary dysmenorrhea living in Isfahan University of Medical Sciences' dormitories. Simple random sampling was done considering the inclusion criteria, and the students were then randomly divided into two groups. In the reflexology group, the subjects received 10 reflexology sessions (40 minutes each) in two consecutive menstrual cycles. The Ibuprofen group received Ibuprofen (400 mg) once every eight hours for 3 days during 3 consecutive menstrual cycles. To assess the severity of dysmenorrhea, the standard McGill Pain Questionnaire, visual analog scale (VAS) and pain rating index (PRI) were used. RESULTS: The two groups had no statistically significant difference in demographic characteristics (p > 0.05). The reflexology method was associated with a greater reduction in the intensity and duration of menstrual pain than Ibuprofen therapy. Independent and paired t-tests showed a significant difference between the two groups in the intensity and duration of menstrual pain, measured using VAS and PRI, in each of the 3 cycles (p < 0.05). CONCLUSIONS: Considering the results of the study, reflexology was superior to Ibuprofen in reducing dysmenorrhea, and its treatment effect continued even after discontinuing the intervention in the third cycle. Therefore, considering that reflexology is a non-invasive, easy and cheap technique, it seems that it

  9. Determination of bergenin in human plasma after oral administration by HPLC-MS/MS method and its pharmacokinetic study.

    PubMed

    Wang, Jin; Wang, Ben-Jie; Wei, Chun-Min; Yuan, Gui-Yan; Zhang, Rui; Liu, Hui; Zhang, Xiu-Mei; Guo, Rui-Chen

    2009-02-01

    A highly sensitive, simple and selective high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was developed and applied to the determination of bergenin concentration in human plasma. Bergenin and the internal standard (IS) thiamphenicol were extracted from plasma with ethyl acetate, separated on a C(18) reversed-phase column, eluted with a mobile phase of acetonitrile-water, ionized by negative ion pneumatically assisted electrospray and detected in multiple-reaction monitoring mode using the precursor --> product ion transitions m/z 327.1 --> 192 for bergenin and m/z 354 --> 185.1 for the IS. The calibration curve for bergenin was linear over the range 0.25-60 ng mL(-1), with a lower limit of quantification of 0.25 ng mL(-1), and the intra/inter-day relative standard deviation (RSD) was less than 10%. The method is suitable for the determination of low bergenin concentrations in human plasma after therapeutic oral doses, and was successfully used for the first time for pharmacokinetic studies in healthy Chinese volunteers.

  10. ORNL RAIL & BARGE DB. Network Database

    SciTech Connect

    Johnson, P.

    1991-07-01

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Rail Administration (FRA) rail database. The database consists of 96 subnetworks. Each of the subnetworks represents an individual railroad, a waterway system, or a composite group of small railroads. Two subnetworks represent waterways: one covers barge/intercoastal traffic, and the other covers coastal merchant marine traffic with access through the Great Lakes/Saint Lawrence Seaway, Atlantic and Gulf Coasts, the Panama Canal, and Pacific Coast. Two other subnetworks represent small shortline railroads and terminal railroad operations. One subnetwork is maintained for the representation of Amtrak operations. The remaining 91 subnetworks represent individual or corporate groups of railroads. Coordinate locations are included as part of the database. The rail portion of the database is similar to the original FRA rail network. The waterway coordinates are greatly enhanced in the current release. Inland waterway representation was extracted from the 1:2,000,000 United States Geological Survey data. An important aspect of the database is the transfer file, which identifies where two railroads interline traffic between their systems. Also included are locations where rail/waterway intermodal transfers could occur. Other files in the database include a translation table from Association of American Railroads (AAR) codes to the 96 subnetworks, a list of names of the 96 subnetworks, and a file of names for a large proportion of the nodes in the network.

  11. ORNL RAIL & BARGE DB. Network Database

    SciTech Connect

    Johnson, P.

    1992-03-16

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Rail Administration (FRA) rail database. The database consists of 96 subnetworks. Each of the subnetworks represents an individual railroad, a waterway system, or a composite group of small railroads. Two subnetworks represent waterways: one covers barge/intercoastal traffic, and the other covers coastal merchant marine traffic with access through the Great Lakes/Saint Lawrence Seaway, Atlantic and Gulf Coasts, the Panama Canal, and Pacific Coast. Two other subnetworks represent small shortline railroads and terminal railroad operations. One subnetwork is maintained for the representation of Amtrak operations. The remaining 91 subnetworks represent individual or corporate groups of railroads. Coordinate locations are included as part of the database. The rail portion of the database is similar to the original FRA rail network. The waterway coordinates are greatly enhanced in the current release. Inland waterway representation was extracted from the 1:2,000,000 United States Geological Survey data. An important aspect of the database is the transfer file, which identifies where two railroads interline traffic between their systems. Also included are locations where rail/waterway intermodal transfers could occur. Other files in the database include a translation table from Association of American Railroads (AAR) codes to the 96 subnetworks, a list of names of the 96 subnetworks, and a file of names for a large proportion of the nodes in the network.

  12. The NCBI Taxonomy database.

    PubMed

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI, and provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system, and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.

  13. Databases and registers: useful tools for research, no studies.

    PubMed

    Curbelo, Rafael J; Loza, Estíbaliz; de Yébenes, Maria Jesús García; Carmona, Loreto

    2014-04-01

    There are many misunderstandings about databases. Database is a commonly misused term in reference to any set of data entered into a computer. However, true databases serve a main purpose, organising data. They do so by establishing several layers of relationships; databases are hierarchical. Databases commonly organise data over different levels and over time, where time can be measured as the time between visits, or between treatments, or adverse events, etc. In this sense, medical databases are closely related to longitudinal observational studies, as databases allow the introduction of data on the same patient over time. Basically, we could establish four types of databases in medicine, depending on their purpose: (1) administrative databases, (2) clinical databases, (3) registers, and (4) study-oriented databases. But a database is a useful tool for a large variety of studies, not a type of study itself. Different types of databases serve very different purposes, and a clear understanding of the different research designs mentioned in this paper would prevent many of the databases we launch from being just a lot of work and very little science.

  14. HPLC method for comparative study on tissue distribution in rat after oral administration of salvianolic acid B and phenolic acids from Salvia miltiorrhiza.

    PubMed

    Xu, Man; Fu, Gang; Qiao, Xue; Wu, Wan-Ying; Guo, Hui; Liu, Ai-Hua; Sun, Jiang-Hao; Guo, De-An

    2007-10-01

    A sensitive and selective high-performance liquid chromatography method was developed and validated to determine the prototype of salvianolic acid B and the metabolites of phenolic acids (protocatechuic acid, vanillic acid and ferulic acid) in rat tissues after oral administration of total phenolic acids and salvianolic acid B extracted from the roots of Salvia miltiorrhiza, respectively. The tissue samples were treated with a simple liquid-liquid extraction prior to HPLC. Analysis of the extract was performed on a reversed-phase C(18) column with a mobile phase consisting of acetonitrile and 0.05% trifluoroacetic acid. The calibration curves for the four phenolic acids were linear in the given concentration ranges. The intra-day and inter-day relative standard deviations in the measurement of quality control samples were less than 10% and the accuracies were in the range of 88-115%. The average recoveries from all the tissues ranged from 78.0 to 111.8%. This method was successfully applied to evaluate the distribution of the four phenolic acids in rat tissues after oral administration of total phenolic acids of Salvia miltiorrhiza or salvianolic acid B, and the possible metabolic pathway was illustrated.

  15. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  16. Engineering Administration.

    ERIC Educational Resources Information Center

    Naval Personnel Program Support Activity, Washington, DC.

    This book is intended to acquaint naval engineering officers with their duties in the engineering department. Standard shipboard organizations are analyzed in connection with personnel assignments, division operations, and watch systems. Detailed descriptions are included for the administration of directives, ship's bills, damage control, training…

  17. Administrative IT

    ERIC Educational Resources Information Center

    Grayson, Katherine, Ed.

    2006-01-01

    When it comes to Administrative IT solutions and processes, best practices range across the spectrum. Enterprise resource planning (ERP), student information systems (SIS), and tech support are prominent and continuing areas of focus. But widespread change can also be accomplished via the implementation of campuswide document imaging and sharing,…

  18. PPD - Proteome Profile Database.

    PubMed

    Sakharkar, Kishore R; Chow, Vincent T K

    2004-01-01

    With the complete sequencing of multiple genomes, sequence analysis methods have been extended from single genes and proteins to the simultaneous analysis of multiple genes and proteins. There is therefore a demand for user-friendly software tools that allow mining of these enormous datasets. PPD is a WWW-based database for comparative analysis of protein lengths in completely sequenced prokaryotic and eukaryotic genomes. PPD's core objective is to create protein classification tables based on the lengths of proteins, given a specified set of organisms and parameters. The interface can also generate information on changes in proteins of specific length distributions. This feature is of importance when the user's interest is focused on evolutionarily related organisms or on organisms with similar or related tissue specificity or lifestyle. PPD is available at: PPD Home.
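
    The protein-length classification tables PPD builds can be approximated with a short script that bins sequence lengths from a proteome FASTA file; the input file name and bin edges below are arbitrary assumptions for illustration.

        # Bin protein lengths from a proteome FASTA into classification
        # counts. Input file name and bin edges are arbitrary assumptions.
        from collections import Counter
        from Bio import SeqIO

        bins = [(0, 100), (100, 300), (300, 600), (600, 10000)]
        counts = Counter()
        for rec in SeqIO.parse("proteome.faa", "fasta"):
            n = len(rec.seq)
            for lo, hi in bins:
                if lo <= n < hi:
                    counts[f"{lo}-{hi} aa"] += 1
                    break
        print(dict(counts))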

  19. Data exploration systems for databases

    NASA Technical Reports Server (NTRS)

    Greene, Richard J.; Hield, Christopher

    1992-01-01

    Data exploration systems apply machine learning techniques, multivariate statistical methods, information theory, and database theory to databases to identify significant relationships among the data and summarize information. The result of applying data exploration systems should be a better understanding of the structure of the data and a perspective of the data enabling an analyst to form hypotheses for interpreting the data. This paper argues that data exploration systems need a minimum amount of domain knowledge to guide both the statistical strategy and the interpretation of the resulting patterns discovered by these systems.
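
    A minimal example of the kind of relationship discovery such systems automate, here reduced to flagging strongly correlated numeric column pairs with pandas, follows; the table contents and the 0.9 threshold are fabricated for illustration.

        # Flag strongly correlated numeric column pairs as candidate
        # relationships. The table contents are fabricated for illustration.
        import pandas as pd

        df = pd.DataFrame({
            "dose":     [1, 2, 3, 4, 5, 6],
            "response": [2.1, 3.9, 6.2, 8.1, 9.8, 12.2],
            "site_id":  [3, 1, 4, 1, 5, 9],
        })

        corr = df.corr(numeric_only=True)
        for a in corr.columns:
            for b in corr.columns:
                if a < b and abs(corr.loc[a, b]) > 0.9:
                    print(f"candidate relationship: {a} ~ {b} "
                          f"(r = {corr.loc[a, b]:.2f})")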

  20. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    SciTech Connect

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks from

  1. Implementing a Microcomputer Database Management System.

    ERIC Educational Resources Information Center

    Manock, John J.; Crater, K. Lynne

    1985-01-01

    Current issues in selecting, structuring, and implementing microcomputer database management systems in research administration offices are discussed, and their capabilities are illustrated with the system used by the University of North Carolina at Wilmington. Trends in microcomputer technology and their likely impact on research administration…

  2. Report: EPA Needs to Strengthen Financial Database Security Oversight and Monitor Compliance

    EPA Pesticide Factsheets

    Report #2007-P-00017, March 29, 2007. Weaknesses in how EPA offices monitor databases for known security vulnerabilities, communicate the status of critical system patches, and monitor the access to database administrator accounts and privileges.

  3. Making Work Easy: Administrative Applications of Microcomputers.

    ERIC Educational Resources Information Center

    Schneider, Gail Thierbach

    This survey of the administrative applications of microcomputers identifies word processing, database management, spreadsheet functions, and graphics as four areas in which microcomputer use will reduce repetition, improve cost efficiency, minimize paperwork, enhance filing and retrieval systems, and save time. This will allow administrators and…

  4. The Microcomputer in the Administrative Office.

    ERIC Educational Resources Information Center

    Huntington, Fred

    1983-01-01

    Discusses microcomputer uses for administrative computing in education at site level and central office and recommends that administrators start with a word processing program for time management, an electronic spreadsheet for financial accounting, a database management system for inventories, and self-written programs to alleviate paper…

  5. WDDD: Worm Developmental Dynamics Database.

    PubMed

    Kyoda, Koji; Adachi, Eru; Masuda, Eriko; Nagai, Yoko; Suzuki, Yoko; Oguro, Taeko; Urai, Mitsuru; Arai, Ryoko; Furukawa, Mari; Shimada, Kumiko; Kuramochi, Junko; Nagai, Eriko; Onami, Shuichi

    2013-01-01

    During animal development, cells undergo dynamic changes in position and gene expression. A collection of quantitative information about morphological dynamics under a wide variety of gene perturbations would provide a rich resource for understanding the molecular mechanisms of development. Here, we created a database, the Worm Developmental Dynamics Database (http://so.qbic.riken.jp/wddd/), which stores a collection of quantitative information about cell division dynamics in early Caenorhabditis elegans embryos with single genes silenced by RNA-mediated interference. The information contains the three-dimensional coordinate values of the outlines of nuclear regions and the dynamics of the outlines over time. The database provides free access to 50 sets of quantitative data for wild-type embryos and 136 sets of quantitative data for RNA-mediated interference embryos corresponding to 72 of the 97 essential embryonic genes on chromosome III. The database also provides sets of four-dimensional differential interference contrast microscopy images on which the quantitative data were based. The database will provide a novel opportunity for the development of computational methods to obtain fresh insights into the mechanisms of development. The quantitative information and microscopy images can be synchronously viewed through a web browser, which is designed for easy access by experimental biologists.

  6. Developing a DNA variant database.

    PubMed

    Fung, David C Y

    2008-01-01

    Disease- and locus-specific variant databases have been a valuable resource for clinical and research geneticists. With the recent rapid developments in technologies, the number of DNA variants detected in a typical molecular genetics laboratory easily exceeds 1,000. To keep track of the growing inventory of DNA variants, many laboratories employ information technology to store the data as well as to distribute the data and its associated information to clinicians and researchers via the Web. While such a database is a valuable resource, hosting a web-accessible database requires collaboration between bioinformaticians and biologists and careful planning to ensure its usability and availability. In this chapter, a series of tutorials on building a local DNA variant database from a sample dataset is provided. The tutorials do not include programming details on building a web interface or on constructing the web application necessary for web hosting. Instead, an introduction to the two commonly used methods for hosting web-accessible variant databases is given. Apart from the tutorials, this chapter also considers the resources and planning required to make a variant database project successful.
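
    As a starting point for the kind of local variant database the chapter describes, a minimal relational schema might look like the following SQLite sketch; the table and column names are illustrative assumptions, not the chapter's actual schema.

        # Minimal relational schema for a local variant database in SQLite.
        # Table and column names are illustrative assumptions.
        import sqlite3

        con = sqlite3.connect("variants.db")
        con.executescript("""
        CREATE TABLE IF NOT EXISTS gene (
            gene_id INTEGER PRIMARY KEY,
            symbol  TEXT UNIQUE NOT NULL
        );
        CREATE TABLE IF NOT EXISTS variant (
            variant_id     INTEGER PRIMARY KEY,
            gene_id        INTEGER NOT NULL REFERENCES gene(gene_id),
            hgvs_notation  TEXT NOT NULL,
            classification TEXT CHECK (classification IN
                ('benign', 'likely benign', 'VUS',
                 'likely pathogenic', 'pathogenic'))
        );
        """)
        con.execute("INSERT OR IGNORE INTO gene (symbol) VALUES ('BRCA1')")
        con.execute("""INSERT INTO variant (gene_id, hgvs_notation, classification)
                       SELECT gene_id, 'c.68_69del', 'pathogenic'
                       FROM gene WHERE symbol = 'BRCA1'""")
        con.commit()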

  7. WDDD: Worm Developmental Dynamics Database

    PubMed Central

    Kyoda, Koji; Adachi, Eru; Masuda, Eriko; Nagai, Yoko; Suzuki, Yoko; Oguro, Taeko; Urai, Mitsuru; Arai, Ryoko; Furukawa, Mari; Shimada, Kumiko; Kuramochi, Junko; Nagai, Eriko; Onami, Shuichi

    2013-01-01

    During animal development, cells undergo dynamic changes in position and gene expression. A collection of quantitative information about morphological dynamics under a wide variety of gene perturbations would provide a rich resource for understanding the molecular mechanisms of development. Here, we created a database, the Worm Developmental Dynamics Database (http://so.qbic.riken.jp/wddd/), which stores a collection of quantitative information about cell division dynamics in early Caenorhabditis elegans embryos with single genes silenced by RNA-mediated interference. The information contains the three-dimensional coordinate values of the outlines of nuclear regions and the dynamics of the outlines over time. The database provides free access to 50 sets of quantitative data for wild-type embryos and 136 sets of quantitative data for RNA-mediated interference embryos corresponding to 72 of the 97 essential embryonic genes on chromosome III. The database also provides sets of four-dimensional differential interference contrast microscopy images on which the quantitative data were based. The database will provide a novel opportunity for the development of computational methods to obtain fresh insights into the mechanisms of development. The quantitative information and microscopy images can be synchronously viewed through a web browser, which is designed for easy access by experimental biologists. PMID:23172286

  8. Searching NCBI Databases Using Entrez.

    PubMed

    Gibney, Gretchen; Baxevanis, Andreas D

    2011-10-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two basic protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An alternate protocol builds upon the first basic protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The support protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.
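
    Alongside the web interface these protocols describe, the same Entrez text searches can be issued programmatically; a minimal sketch using Biopython's Bio.Entrez wrapper around the NCBI E-utilities follows, with a placeholder query term and contact address.

        # Text-based Entrez search issued programmatically via Biopython.
        # The query term and email address are placeholders.
        from Bio import Entrez

        Entrez.email = "you@example.org"   # NCBI asks for a contact address

        handle = Entrez.esearch(db="pubmed", term="database administration",
                                retmax=5)
        result = Entrez.read(handle)
        handle.close()

        for pmid in result["IdList"]:
            summary = Entrez.read(Entrez.esummary(db="pubmed", id=pmid))
            print(pmid, summary[0]["Title"])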

  9. Searching NCBI databases using Entrez.

    PubMed

    Baxevanis, Andreas D

    2008-12-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two Basic Protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An Alternate Protocol builds upon the first Basic Protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The Support Protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.

  10. Searching NCBI databases using Entrez.

    PubMed

    Gibney, Gretchen; Baxevanis, Andreas D

    2011-06-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two basic protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An alternate protocol builds upon the first basic protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The support protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.

  11. Evaluation of different preparation methods for a preservative free triamcinolone acetonide preparation for intravitreal administration: a validated stability indicating HPLC-method.

    PubMed

    Korodi, T; Lachmann, B; Kopelent-Frank, H

    2010-12-01

    Intravitreally applied triamcinolone acetonide (TA) is used to treat a variety of macular diseases. Commercially available products of TA are mainly intended for intramuscular application and contain benzyl alcohol (BA) as a bacteriostatic preservative. Since this agent damages ocular tissues, different methods such as filtration techniques and centrifugation are usually used to eliminate BA from commercial products (40 mg/mL TA, 9.9 mg/mL BA). In this study, we evaluated these methods with regard to their ability to eliminate benzyl alcohol and to guarantee standard doses of triamcinolone acetonide. A new formulation without BA (TA 40 mg/mL) was developed according to the following criteria: autoclavability, stability, and suitability for intravitreal use. For QA/QC evaluation, a new rapid and simple HPLC procedure (C18 RP column, mobile phase consisting of methanol-water, 48:52, v/v) to quantify the respective compounds was developed and validated according to ICH guidelines. The HPLC method was proven to be selective, linear, precise and accurate. Analysis of preparations based on commercial products undergoing different filtration techniques showed variable results: TA concentrations of 22-80% of the declared amount were found, and the BA content was not reduced to safe levels (up to 39% of the initial content remained). Centrifugation methods decreased the concentration of the preservative adequately; however, agglomerated TA crystals were observed, leading to irreproducible and deviating particle sizes that are potentially harmful in ocular use. The newly developed preservative-free formulation (TA 40 mg/mL) delivered uniform doses of TA, revealed no drug loss during forced light exposure, and was proven to be stable, sterile and bacterial endotoxin free after autoclaving and after storage for three months. The new formulation may offer an alternative for the in-house production of intravitreally applicable TA preparations in hospital pharmacies and should enhance

  12. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  13. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Ensuring that the data collection process does not introduce inaccuracies helps assure the overall quality of subsequent analyses. Data management involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data, while protecting the data by implementing appropriate security levels. A properly designed database provides access to up-to-date, accurate information. Database design is an important component of application design: time spent designing databases properly is rewarded with a solid foundation on which the rest of the application can be built.
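    To make the design points above concrete, the following is a minimal sketch of schema-level quality controls using Python's built-in sqlite3 module; the tables, columns, and constraints are hypothetical illustrations, not the schema used by the authors.

```python
import sqlite3

# Hypothetical schema sketch: constraints catch data-entry errors at
# collection time, the kind of design-time quality control the abstract
# argues for. All names and value sets are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforce referential integrity
conn.executescript("""
CREATE TABLE patient (
    patient_id    INTEGER PRIMARY KEY,
    year_of_birth INTEGER CHECK (year_of_birth BETWEEN 1900 AND 2025)
);
CREATE TABLE diagnosis (
    diagnosis_id INTEGER PRIMARY KEY,
    patient_id   INTEGER NOT NULL REFERENCES patient(patient_id),
    stage        TEXT CHECK (stage IN ('0', 'I', 'II', 'III', 'IV')),
    diagnosed_on TEXT NOT NULL              -- ISO-8601 date string
);
""")
conn.execute("INSERT INTO patient VALUES (1, 1962)")
# The next line would raise sqlite3.IntegrityError: 'V' is not a valid
# stage, so the error is caught at entry rather than in later analyses.
# conn.execute("INSERT INTO diagnosis VALUES (1, 1, 'V', '2015-03-02')")
```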

  14. 2010 Worldwide Gasification Database

    DOE Data Explorer

    The 2010 Worldwide Gasification Database describes the current world gasification industry and identifies near-term planned capacity additions. The database lists gasification projects and includes information (e.g., plant location, number and type of gasifiers, syngas capacity, feedstock, and products). The database reveals that the worldwide gasification capacity has continued to grow for the past several decades and is now at 70,817 megawatts thermal (MWth) of syngas output at 144 operating plants with a total of 412 gasifiers.

  15. ITS-90 Thermocouple Database

    National Institute of Standards and Technology Data Gateway

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  16. Development and Validation of a Qualitative Method for Target Screening of 448 Pesticide Residues in Fruits and Vegetables Using UHPLC/ESI Q-Orbitrap Based on Data-Independent Acquisition and Compound Database.

    PubMed

    Wang, Jian; Chow, Willis; Chang, James; Wong, Jon W

    2017-01-18

    A semiautomated qualitative method for target screening of 448 pesticide residues in fruits and vegetables was developed and validated using ultrahigh-performance liquid chromatography coupled with electrospray ionization quadrupole Orbitrap high-resolution mass spectrometry (UHPLC/ESI Q-Orbitrap). The Q-Orbitrap Full MS/dd-MS(2) (data dependent acquisition) mode was used to acquire product-ion spectra of individual pesticides to build a compound database or an MS library, while its Full MS/DIA (data independent acquisition) mode was utilized for sample data acquisition from fruit and vegetable matrices fortified with pesticides at 10 and 100 μg/kg for target screening purposes. Accurate mass, retention time and response threshold were the three key parameters in the compound database used to detect incurred pesticide residues in samples. The concepts and practical aspects of in-spectrum mass correction or solvent background lock-mass correction, retention time alignment and response threshold adjustment are discussed in the context of building a functional and working compound database for target screening. The validated target screening method is capable of screening at least 94% and 99% of the 448 pesticides at 10 and 100 μg/kg, respectively, in fruits and vegetables without having to evaluate every compound manually during data processing, which significantly reduces the workload in routine practice.
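    The matching logic this abstract describes, where a hit requires agreement on accurate mass, retention time, and a minimum response, can be sketched briefly; the ppm and retention-time tolerances and the two database entries below are invented for illustration and are not the validated values from the study.

```python
# Hypothetical sketch of target screening against a compound database:
# a detected peak is a hit when its accurate mass falls within a ppm
# tolerance of a database entry, its retention time within a window,
# and its response above the entry's threshold. Values are invented.
COMPOUND_DB = [
    # (name, database m/z, retention time in min, response threshold)
    ("carbaryl", 202.0863, 5.21, 1.0e4),
    ("atrazine", 216.1010, 6.02, 1.0e4),
]

def screen_peak(mz, rt, response, ppm_tol=5.0, rt_tol=0.25):
    hits = []
    for name, db_mz, db_rt, threshold in COMPOUND_DB:
        ppm_error = abs(mz - db_mz) / db_mz * 1e6
        if ppm_error <= ppm_tol and abs(rt - db_rt) <= rt_tol and response >= threshold:
            hits.append((name, round(ppm_error, 2)))
    return hits

print(screen_peak(202.0861, 5.25, 5.0e4))   # [('carbaryl', 0.99)]
```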

  17. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used in designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.
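    A brief sketch of two of the practices mentioned, transactional updates and role-based permissions, is given below in Python with sqlite3; since SQLite has no user accounts, the role check is simulated in application code, whereas a server DBMS like the one described would use GRANT/REVOKE. All names are hypothetical.

```python
import sqlite3

# Hypothetical sketch: role-based write permission plus transactional
# updates, so a failed edit never leaves the feature table half-written.
ROLE_PERMISSIONS = {"viewer": {"SELECT"}, "editor": {"SELECT", "INSERT", "UPDATE"}}

def run_write(conn, role, sql, params=()):
    # Permission check simulated here; a server DBMS would use GRANT/REVOKE.
    if not {"INSERT", "UPDATE"} & ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not modify the database")
    with conn:                     # transaction: commits on success,
        conn.execute(sql, params)  # rolls back automatically on error

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE karst_feature (id INTEGER PRIMARY KEY, kind TEXT)")
run_write(conn, "editor", "INSERT INTO karst_feature (kind) VALUES (?)", ("sinkhole",))
print(conn.execute("SELECT COUNT(*) FROM karst_feature").fetchone()[0])  # 1
```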

  18. Mugshot Identification Database (MID)

    National Institute of Standards and Technology Data Gateway

    NIST Mugshot Identification Database (MID) (PC database for purchase)   NIST Special Database 18 is being distributed for use in development and testing of automated mugshot identification systems. The database consists of three CD-ROMs, containing a total of 3248 images of variable size using lossless compression. A newer version of the compression/decompression software on the CD-ROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  19. Databases for Microbiologists

    DOE PAGES

    Zhulin, Igor B.

    2015-05-26

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  20. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  1. HIV Sequence Databases

    PubMed Central

    Kuiken, Carla; Korber, Bette; Shafer, Robert W.

    2008-01-01

    Two important databases are often used in HIV genetic research, the HIV Sequence Database in Los Alamos, which collects all sequences and focuses on annotation and data analysis, and the HIV RT/Protease Sequence Database in Stanford, which collects sequences associated with the development of viral resistance against anti-retroviral drugs and focuses on analysis of those sequences. The types of data and services these two databases offer, the tools they provide, and the way they are set up and operated are described in detail. PMID:12875108

  2. Physics-Based GOES Satellite Product for Use in NREL's National Solar Radiation Database: Preprint

    SciTech Connect

    Sengupta, M.; Habte, A.; Gotseff, P.; Weekley, A.; Lopez, A.; Molling, C.; Heidinger, A.

    2014-07-01

    The National Renewable Energy Laboratory (NREL), University of Wisconsin, and National Oceanic and Atmospheric Administration are collaborating to investigate the integration of the Satellite Algorithm for Shortwave Radiation Budget (SASRAB) products into future versions of NREL's 4-km by 4-km gridded National Solar Radiation Database (NSRDB). This paper describes a method to select an improved clear-sky model that could replace the current SASRAB global horizontal irradiance and direct normal irradiances reported during clear-sky conditions.

  3. dbQSNP: a database of SNPs in human promoter regions with allele frequency information determined by single-strand conformation polymorphism-based methods.

    PubMed

    Tahira, Tomoko; Baba, Shingo; Higasa, Koichiro; Kukita, Yoji; Suzuki, Yutaka; Sugano, Sumio; Hayashi, Kenshi

    2005-08-01

    We present a database, dbQSNP (http://qsnp.gen.kyushu-u.ac.jp/), that provides sequence and allele frequency information for single-nucleotide polymorphisms (SNPs) located in the promoter regions of human genes, which were defined by the 5' ends of full-length cDNA clones. We searched for the SNPs in these regions by sequencing or single-strand conformation polymorphism (SSCP) analysis. The allele frequencies of the identified SNPs in two ethnic groups were quantified by SSCP analyses of pooled DNA samples. The accuracy of our estimation is supported by strong correlations between the frequencies in our data and those in other databases for the same ethnic groups. The frequencies vary considerably between the two ethnic groups studied, suggesting the need for population-based collections and allele frequency determination of SNPs in, e.g., association studies of diseases. We show profiles of SNP densities that are characteristic of transcription start site regions. A fraction of the SNPs revealed a significantly different allele frequency between the groups, suggesting differential selection of the genes involved.

  4. An Introduction to Database Management Systems.

    ERIC Educational Resources Information Center

    Warden, William H., III; Warden, Bette M.

    1984-01-01

    Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…

  5. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does nothing to link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations, as sketched below. The credibility of the entire model often rests on the credibility and traceability of the data.
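    A minimal sketch of the linking idea, a shared metadata key joining each Basic Event to its data source and to the manipulation applied to it, is shown below with plain Python dictionaries; the identifiers and the stress-factor manipulation are hypothetical.

```python
# Hypothetical sketch: a unique metadata key ties each Basic Event in the
# model to its selected data source, and the manipulation (here a stress
# factor) lives alongside the link, so every rate stays traceable.
data_sources = {
    "DS-042": {"failure_rate": 1.2e-6, "reference": "vendor test report"},
}
basic_events = {
    "BE-PUMP-FTR": {"source_key": "DS-042", "stress_factor": 2.0},
}

def effective_rate(event_id):
    event = basic_events[event_id]
    base = data_sources[event["source_key"]]["failure_rate"]
    return base * event["stress_factor"]

print(effective_rate("BE-PUMP-FTR"))   # 2.4e-06, traceable back to DS-042
```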

  6. Paleoepidemiologic investigation of Legionnaires disease at Wadsworth Veterans Administration Hospital by using three typing methods for comparison of legionellae from clinical and environmental sources.

    PubMed Central

    Edelstein, P H; Nakahama, C; Tobin, J O; Calarco, K; Beer, K B; Joly, J R; Selander, R K

    1986-01-01

    Multilocus enzyme electrophoresis, monoclonal antibody typing for Legionella pneumophila serogroup 1, and plasmid analysis were used to type 89 L. pneumophila strains isolated from nosocomial cases of Legionnaires disease at the Veterans Administration Wadsworth Medical Center (VAWMC) and from the hospital environment. Twelve L. pneumophila clinical isolates, obtained from patients at non-VAWMC hospitals, were also typed by the same methods to determine typing specificity. Seventy-nine percent of 33 VAWMC L. pneumophila serogroup 1 clinical isolates and 70% of 23 environmental isolates were found in only one of the five monoclonal subgroups. Similar clustering was found for the other two typing methods, with excellent correlation between all methods. Enzyme electrophoretic typing divided the isolates into the greatest number of distinct groups, resulting in the identification of 10 different L. pneumophila types and 5 types not belonging to L. pneumophila, which probably constitute an undescribed Legionella species; 7 clinical and 34 environmental VAWMC isolates and 2 non-VAWMC clinical isolates were found to be members of the new species. Twelve different plasmid patterns were found; 95% of VAWMC clinical isolates contained plasmids. Major VAWMC epidemic-bacterial types were common in the hospital potable-water distribution system and cooling towers. Strains of L. pneumophila which persisted after disinfection of contaminated environmental sites were of a different type from the prechlorination strains. All three typing methods were useful in the epidemiologic analysis of the VAWMC outbreak. PMID:3711303

  7. Automated tools for cross-referencing large databases. Final report

    SciTech Connect

    Clapp, N E; Green, P L; Bell, D

    1997-05-01

    A Cooperative Research and Development Agreement (CRADA) was funded with TRESP Associates, Inc., to develop a limited prototype software package operating on one platform (e.g., a personal computer, small workstation, or other selected device) to demonstrate the concepts of using an automated database application to improve the process of detecting fraud and abuse of the welfare system. An analysis was performed on Tennessee's welfare administration system. This analysis was undertaken to determine if the incidence of welfare waste, fraud, and abuse could be reduced and if the administrative process could be improved to reduce benefits overpayment errors. The analysis revealed a general inability to obtain timely data to support the verification of a welfare recipient's economic status and eligibility for benefits. It has been concluded that the provision of more modern computer-based tools and the establishment of electronic links to other state and federal data sources could increase staff efficiency, reduce the incidence of out-of-date information provided to welfare assistance staff, and make much of the new data required available in real time. Electronic data links have been proposed to allow near-real-time access to data residing in databases located in other states and at federal agency data repositories. The ability to provide these improvements to the local office staff would require the provision of additional computers, software, and electronic data links within each of the offices and the establishment of approved methods of accessing remote databases and transferring potentially sensitive data. In addition, investigations will be required to ascertain if existing laws would allow such data transfers, and if not, what changed or new laws would be required. The benefits, in both cost and efficiency, to the state of Tennessee of having electronically-enhanced welfare system administration and control are expected to result in a rapid return on investment.

  8. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Read-only database. 1400.13 Section... RELEASE PREVENTION REQUIREMENTS; RISK MANAGEMENT PROGRAMS UNDER THE CLEAN AIR ACT SECTION 112(r)(7... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to...

  9. Video Databases: An Emerging Tool in Business Education

    ERIC Educational Resources Information Center

    MacKinnon, Gregory; Vibert, Conor

    2014-01-01

    A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…

  10. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

    Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods:  We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved

  11. Time Dependent Antinociceptive Effects of Morphine and Tramadol in the Hot Plate Test: Using Different Methods of Drug Administration in Female Rats

    PubMed Central

    Gholami, Morteza; Saboory, Ehsan; Mehraban, Sogol; Niakani, Afsaneh; Banihabib, Nafiseh; Azad, Mohamad-Reza; Fereidoni, Javid

    2015-01-01

    Morphine and tramadol, which have analgesic effects, can be administered acutely or chronically. This study investigated the effects of these drugs at various times using different methods of administration (intraperitoneal, oral, acute and chronic). Sixty adult female rats were divided into six groups. They received saline, morphine or tramadol (20 to 125 mg/Kg) daily for 15 days. A hot plate test was performed for the rats at the 1st, 8th and 15th days. After drug withdrawal, the hot plate test was repeated at the 17th, 19th, and 22nd days. There was a significant effect of day, drug, group, and their interaction (P<0.001). On the 1st day (d1), both morphine and tramadol increased the hot plate time compared to the saline groups (P<0.001), while there was no difference between the administration methods of morphine and/or tramadol. On the 8th day (d8), morphine and tramadol produced their most powerful analgesic effect compared to the other experimental days (P<0.001). On the 15th day (d15), their effects diminished compared to d8. After drug withdrawal, the analgesic effects of morphine and tramadol disappeared. It can be concluded that the analgesic effect of morphine and tramadol increases with repeated use. Thereafter, it may gradually decrease and reach a level comparable to d1. The present data also indicate that although the analgesic effect of morphine and tramadol is dose- and time-dependent, chronic exposure to them may not lead to altered nociceptive responses later in life. PMID:25561936

  12. Creating a VAPEPS database: A VAPEPS tutorial

    NASA Technical Reports Server (NTRS)

    Graves, George

    1989-01-01

    A procedural method is outlined for creating a Vibroacoustic Payload Environment Prediction System (VAPEPS) Database. The method of presentation employs flowcharts of sequential VAPEPS Commands used to create a VAPEPS Database. The commands are accompanied by explanatory text to the right of the command in order to minimize the need for repetitive reference to the VAPEPS user's manual. The method is demonstrated by examples of varying complexity. It is assumed that the reader has acquired a basic knowledge of the VAPEPS software program.

  13. Consumer Product Category Database

    EPA Pesticide Factsheets

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  14. BioImaging Database

    SciTech Connect

    David Nix, Lisa Simirenko

    2006-10-25

    The BioImaging Database (BID) is a relational database developed to store the data and meta-data for the 3D gene expression in early Drosophila embryo development on a cellular level. The schema was written to be used with the MySQL DBMS but with minor modifications can be used on any SQL-compliant relational DBMS.

  15. Biological Macromolecule Crystallization Database

    National Institute of Standards and Technology Data Gateway

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  16. Online Database Searching Workbook.

    ERIC Educational Resources Information Center

    Littlejohn, Alice C.; Parker, Joan M.

    Designed primarily for use by first-time searchers, this workbook provides an overview of online searching. Following a brief introduction which defines online searching, databases, and database producers, five steps in carrying out a successful search are described: (1) identifying the main concepts of the search statement; (2) selecting a…

  17. HIV Structural Database

    National Institute of Standards and Technology Data Gateway

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  18. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  19. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  20. Morchella MLST database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Welcome to the Morchella MLST database. This dedicated database was set up at the CBS-KNAW Biodiversity Center by Vincent Robert in February 2012, using BioloMICS software (Robert et al., 2011), to facilitate DNA sequence-based identifications of Morchella species via the Internet. The current datab...

  1. Knowledge Discovery in Databases.

    ERIC Educational Resources Information Center

    Norton, M. Jay

    1999-01-01

    Knowledge discovery in databases (KDD) revolves around the investigation and creation of knowledge, processes, algorithms, and mechanisms for retrieving knowledge from data collections. The article is an introductory overview of KDD. The rationale and environment of its development and applications are discussed. Issues related to database design…

  2. Ionic Liquids Database- (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 Ionic Liquids Database- (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from the publications on experimental investigations of thermodynamic, and transport properties of ionic liquids as well as binary and ternary mixtures containing ionic liquids.

  3. Database Reviews: Legal Information.

    ERIC Educational Resources Information Center

    Seiser, Virginia

    Detailed reviews of two legal information databases--"Laborlaw I" and "Legal Resource Index"--are presented in this paper. Each database review begins with a bibliographic entry listing the title; producer; vendor; cost per hour contact time; offline print cost per citation; time period covered; frequency of updates; and size…

  4. Comparison of Sample Preparation Methods, Instrumentation Platforms, and Contemporary Commercial Databases for Identification of Clinically Relevant Mycobacteria by Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry

    PubMed Central

    Wilen, Craig B.; McMullen, Allison R.

    2015-01-01

    When mycobacteria are recovered in clinical specimens, timely species-level identification is required to establish the clinical significance of the isolate and facilitate optimization of antimicrobial therapy. Matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) has recently been reported to be a reliable and expedited method for identification of mycobacteria, although various specimen preparation techniques and databases for analysis are reported across studies. Here we compared two MALDI-TOF MS instrumentation platforms and three databases: Bruker Biotyper Real Time Classification 3.1 (Biotyper), Vitek MS Plus Saramis Premium (Saramis), and Vitek MS v3.0. We evaluated two sample preparation techniques and demonstrate that extraction methods are not interchangeable across different platforms or databases. Once testing parameters were established, a panel of 157 mycobacterial isolates (including 16 Mycobacterium tuberculosis isolates) was evaluated, demonstrating that with the appropriate specimen preparation, all three methods provide reliable identification for most species. Using a score cutoff value of ≥1.8, the Biotyper correctly identified 133 (84.7%) isolates with no misidentifications. Using a confidence value of ≥90%, Saramis correctly identified 134 (85.4%) isolates with one misidentification and Vitek MS v3.0 correctly identified 140 (89.2%) isolates with one misidentification. The levels of accuracy were not significantly different across the three platforms (P = 0.14). In addition, we show that Vitek MS v3.0 requires modestly fewer repeat analyses than the Biotyper and Saramis methods (P = 0.04), which may have implications for laboratory workflow. PMID:25972426
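    The acceptance rule used in the comparison is simple enough to state as code; the cutoff values below are the ones quoted in the abstract, while the function itself is a hypothetical illustration.

```python
# Sketch of the acceptance rule described above: an identification is
# reported only when the platform's score meets its cutoff. The cutoff
# values are quoted from the abstract; the function is illustrative.
CUTOFFS = {"biotyper_score": 1.8, "saramis_confidence_pct": 90.0}

def accept(platform, value):
    return value >= CUTOFFS[platform]

print(accept("biotyper_score", 2.05))          # True: report identification
print(accept("saramis_confidence_pct", 85.0))  # False: no reliable ID
```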

  5. The use of regression equations for quality control in a pesticide physical property database

    NASA Astrophysics Data System (ADS)

    Johnson, Bruce; Johnson, Carol; Seiber, James

    1995-01-01

    Quality control is a crucial aspect of database management, particularly for physicochemical parameters that are widely used in modeling environmental fate processes. Complete rechecking of original studies to verify environmental fate parameters is time consuming and difficult. This paper evaluates an alternative, more efficient approach to identifying database errors. The approach focuses verification efforts on a targeted subset of entries by making use of the relationship between water solubility (S) and the soil organic carbon partition coefficient (Koc). Two regression equations, one selected from the literature and one calculated from entries in the database, were used to evaluate the reasonableness of (S, Koc) pairs in a control group compared to a targeted outlier group, drawn from a total of 59 pesticides. Our hypothesis was that (S, Koc) pairs that lay far from the regression line were more likely to be in error than those that fit the regression. Database values were checked against original studies. Identified errors in the database included coding mistakes, miscalculations, and incorrect chemical identification codes. The error rate in outlier (S, Koc) pairs was about twice that of pairs that conformed to the regression equation; however, the error rate differential was probably not large enough to justify the use of this quality control method. Through our close scrutiny of database entries we were able to identify administrative practices that led to mistakes in the database. Resolution of these problems will significantly decrease the number of future mistakes.
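    The quality-control idea lends itself to a short sketch: regress log10(Koc) on log10(S), then flag pairs with large residuals for verification against the original study. The data points and the 1.5-standard-deviation flagging factor below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of the quality-control idea: regress log10(Koc) on
# log10(S) and flag (S, Koc) pairs with large residuals for manual
# verification against the original studies. All values are invented.
log_s   = np.log10([1.0, 10.0, 100.0, 1000.0, 5.0])         # solubility, mg/L
log_koc = np.log10([8000.0, 900.0, 110.0, 12.0, 200000.0])  # last pair suspect

slope, intercept = np.polyfit(log_s, log_koc, 1)
residuals = log_koc - (slope * log_s + intercept)
flagged = np.abs(residuals) > 1.5 * residuals.std()   # factor chosen for illustration

for s, k, bad in zip(log_s, log_koc, flagged):
    if bad:
        print(f"verify against original study: log S={s:.2f}, log Koc={k:.2f}")
```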

  6. Obscured by administrative data? Racial disparities in occupational injury.

    PubMed

    Sabbath, Erika L; Boden, Leslie I; Williams, Jessica Ar; Hashimoto, Dean; Hopcia, Karen; Sorensen, Glorian

    2017-03-01

    Objectives: Underreporting of occupational injuries is well documented, but underreporting patterns may vary by worker characteristics, obscuring disparities. We tested for racial and ethnic differences in injury reporting patterns by comparing injuries reported via a research survey and an administrative injury database in the same group of healthcare workers in the US. Methods: We used data from a cohort of 1568 hospital patient-care workers who were asked via survey whether they had been injured at work during the year prior (self-reported injury; N=244). Using the hospital's injury database, we determined whether the same workers had reported injuries to the hospital's occupational health service during that year (administratively reported injury; N=126). We compared data sources to test for racial and ethnic differences in injury reporting practices. Results: In logistic regression models adjusted for demographic and occupational characteristics, black workers' odds of injury as measured by self-report data were 1.91 [95% confidence interval (95% CI) 1.04-3.49] compared with white workers. The same black workers' odds of injury as measured by administrative data were 1.22 (95% CI 0.54-2.77) compared with white workers. Conclusions: The undercount of occupational injuries in administrative versus self-report data may be greater among black compared to white workers, leading to underestimates of racial disparities in workplace injury.
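    The analysis pattern, fitting the same adjusted logistic model to both outcome measures and comparing the odds ratios on the race indicator, can be sketched as follows; the data are simulated and the use of statsmodels is an assumption, not the authors' toolchain.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical sketch: fit the same adjusted logistic model twice, once
# per outcome source, and compare the odds ratios on the race indicator.
# All data below are simulated; effect sizes are not the study's.
rng = np.random.default_rng(0)
n = 1568
black = rng.integers(0, 2, n)
age = rng.normal(40, 10, n)
X = sm.add_constant(np.column_stack([black, age]))

def odds_ratio(outcome):
    fit = sm.Logit(outcome, X).fit(disp=False)
    return np.exp(fit.params[1])          # OR for the race indicator

self_report  = rng.binomial(1, 0.10 + 0.06 * black)   # disparity visible
admin_report = rng.binomial(1, 0.07 + 0.01 * black)   # disparity damped
print(odds_ratio(self_report), odds_ratio(admin_report))
```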

  7. National Database of Geriatrics

    PubMed Central

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    Aim of database: The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. Study population: The database population consists of patients who were admitted to a geriatric hospital unit. Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14–15,000 admissions per year, and the database completeness has been stable at 90% during the past 5 years. Main variables: An important part of the geriatric approach is the interdisciplinary collaboration. Indicators, therefore, reflect the combined efforts directed toward the geriatric patient. The indicators include Barthel index, body mass index, de Morton Mobility Index, Chair Stand, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. Descriptive data: Descriptive patient-related data include information about home, mobility aid, need of fall and/or cognitive diagnosing, and categorization of cause (general geriatric, orthogeriatric, or neurogeriatric). Conclusion: The National Database of Geriatrics covers ∼90% of geriatric admissions in Danish hospitals and provides valuable information about a large and increasing patient population in the health care system. PMID:27822120

  8. Hazard Analysis Database Report

    SciTech Connect

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: data from the results of the hazard evaluations, and (2) Hazard Topography Database: data from the system familiarization and hazard identification.

  9. Development and Validation of an UPLC-Q-TOF-MS Method for Quantification of Fuziline in Beagle Dog After Intragastric and Intravenous Administration.

    PubMed

    Gong, Xiao-hong; Li, Yan; Li, Yun-xia; Yuan, An; Zhao, Meng-jie; Zhang, Ruo-qi; Zeng, Dai-wen; Peng, Cheng

    2016-03-01

    A specific and sensitive UPLC-Q-TOF-MS method operated in the positive ion mode was developed and validated for the quantification of Fuziline in Beagle dog plasma. Fuziline and the internal standard Neoline were separated on an Acquity UPLC BEH C18 column with a total run time of 4 min, using gradient elution at a flow rate of 0.25 mL/min. The calibration curves for Fuziline showed good linearity in the concentrations ranging from 2 to 400 ng/mL with correlation coefficients (r) greater than 0.9971. The lower limit of quantification was 0.8 ng/mL. Intra- and interbatch relative standard deviations ranged from 2.11 to 3.11% and 3.12 to 3.81%, respectively. Fuziline was stable under different sample storage and processing conditions. The developed method was successfully applied to the comparative pharmacokinetic study of Fuziline in Beagle dogs after intravenous and oral administration. The low absolute bioavailability of Fuziline (1.45 ± 0.76%) suggested extensive metabolic transformation in Beagle dogs.
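    How a calibration curve like the one reported here is evaluated can be shown in a few lines; the concentration levels span the 2-400 ng/mL range from the abstract, but the response values are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of calibration-curve evaluation: fit response
# against nominal concentration and report slope, intercept, and r.
# Concentrations follow the abstract's 2-400 ng/mL range; responses
# are invented.
conc = np.array([2.0, 5.0, 20.0, 50.0, 100.0, 200.0, 400.0])   # ng/mL
resp = np.array([0.011, 0.026, 0.100, 0.255, 0.500, 1.010, 1.990])

slope, intercept = np.polyfit(conc, resp, 1)
r = np.corrcoef(conc, resp)[0, 1]
print(f"response = {slope:.5f} * conc + {intercept:.5f}, r = {r:.4f}")

back_calc = (resp - intercept) / slope    # back-calculated concentrations
accuracy = 100.0 * back_calc / conc       # % of nominal at each level
```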

  10. Validation of an UPLC-MS-MS Method for Quantitative Analysis of Vincristine in Human Urine After Intravenous Administration of Vincristine Sulfate Liposome Injection.

    PubMed

    Yang, Fen; Wang, Hongyun; Hu, Pei; Jiang, Ji

    2015-07-01

    Vincristine sulfate liposome injection (VSLI) is a liposomal formulation of vincristine (VCR) sulfate, being developed for the systemic treatment of cancer. In this paper, we have developed and validated a method to quantify VCR in human urine to obtain the urinary excretion of VCR after intravenous administration of VSLI. The analyte was extracted from urine samples using liquid-liquid extraction after addition of vinblastine (VBL, used as internal standard) and chromatographed on an Acquity UPLC HSS T3 column with a gradient mobile phase at a flow rate of 0.4 mL/min. The multiple reaction monitoring transitions of m/z 413.2 → 353.2 and m/z 406.2 → 271.6 were used to quantify VCR and VBL, respectively. The lower limit of quantification was 0.5 ng/mL with a precision (RSD%) of 5.7% and an accuracy (RE%) of 6.7%. The calibration curve was linear up to 100.0 ng/mL. Intraday precision and accuracy ranged from 0.8 to 11.0% and from -12.4 to 11.3%, respectively. Interassay precision and accuracy ranged from 8.0 to 10.1% and from -7.7 to 3.6%, respectively. No significant matrix effect was observed for VCR. The method was successfully applied to a pharmacokinetic study of VSLI to investigate the route and extent of VCR urinary excretion in Chinese subjects with lymphoma.

  11. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  12. JICST Factual Database

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuaki; Shimura, Kazuki; Monma, Yoshio; Sakamoto, Masao; Morishita, Hiroshi; Kanazawa, Kenji

    The Japan Information Center of Science and Technology (JICST) started the online service of the JICST/NRIM Materials Strength Database for Engineering Steels and Alloys (JICST ME) in March 1990. This database was developed under joint research between JICST and the National Research Institute for Metals (NRIM). It provides material strength data (creep, fatigue, etc.) for engineering steels and alloys. Data can be searched and displayed online, analyzed statistically, and the results plotted on a graphic display. The database system and the data in JICST ME are described.

  13. Investigating the mechanism of hepatocellular carcinoma progression by constructing genetic and epigenetic networks using NGS data identification and big database mining method.

    PubMed

    Li, Cheng-Wei; Chang, Ping-Yao; Chen, Bor-Sen

    2016-11-29

    The mechanisms leading to the development and progression of hepatocellular carcinoma (HCC) are complicated and regulated genetically and epigenetically. The recent advancement in high-throughput sequencing has facilitated investigations into the role of genetic and epigenetic regulations in hepatocarcinogenesis. Therefore, we used systems biology and big database mining to construct genetic and epigenetic networks (GENs) using the information about mRNA, miRNA, and methylation profiles of HCC patients. Our approach involves analyzing gene regulatory networks (GRNs), protein-protein networks (PPINs), and epigenetic networks at different stages of hepatocarcinogenesis. The core GENs, influencing each stage of HCC, were extracted via principal network projection (PNP). The pathways during different stages of HCC were compared. We observed that extracellular signals were further transduced to transcription factors (TFs), resulting in the aberrant regulation of their target genes, in turn inducing mechanisms that are responsible for HCC progression, including cell proliferation, anti-apoptosis, aberrant cell cycle, cell survival, and metastasis. We also selected potential multiple drugs specific to prominent epigenetic network markers of each stage of HCC: lestaurtinib, dinaciclib, and perifosine against the NTRK2, MYC, and AKT1 markers influencing HCC progression from stage I to stage II; celecoxib, axitinib, and vinblastine against the DDIT3, PDGFB, and JUN markers influencing HCC progression from stage II to stage III; and atiprimod, celastrol, and bortezomib against STAT3, IL1B, and NFKB1 markers influencing HCC progression from stage III to stage IV.

  14. Method and system for normalizing biometric variations to authenticate users from a public database and that ensures individual biometric data privacy

    DOEpatents

    Strait, Robert S.; Pearson, Peter K.; Sengupta, Sailes K.

    2000-01-01

    A password system comprises a set of codewords spaced apart from one another by a Hamming distance (HD) that exceeds twice the variability that can be projected for a series of biometric measurements for a particular individual and that is less than the HD that can be encountered between two individuals. To enroll an individual, a biometric measurement is taken and exclusive-ORed with a random codeword to produce a "reference value." To verify the individual later, a biometric measurement is taken and exclusive-ORed with the reference value to reproduce the original random codeword or its approximation. If the reproduced value is not a codeword, the nearest codeword to it is found, and the bits that were corrected to produce the codeword are also toggled in the biometric measurement taken and the codeword generated during enrollment. The correction scheme can be implemented by any conventional error correction code such as a Reed-Muller code R(m,n). In the implementation using a hand geometry device, an R(2,5) code has been used in this invention. Such codeword and biometric measurement can then be used to see if the individual is an authorized user. Conventional Diffie-Hellman public key encryption schemes and hashing procedures can then be used to secure the communications lines carrying the biometric information and to secure the database of authorized users.
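    The enroll/verify structure described in the abstract can be sketched with bit vectors. For brevity the sketch below substitutes a 3x repetition code for the Reed-Muller R(2,5) code named above, so it illustrates the scheme's shape, not the patented implementation.

```python
import numpy as np

# Sketch of the enroll/verify flow, with a simple 3x repetition code
# standing in for the Reed-Muller R(2,5) code named in the patent.
K = 8  # message bits; codewords are 24 bits, correcting 1 flip per triple

def encode(msg):           # repetition code: repeat each bit three times
    return np.repeat(msg, 3)

def decode(word):          # majority vote over each triple
    return (word.reshape(-1, 3).sum(axis=1) >= 2).astype(np.uint8)

rng = np.random.default_rng(1)
codeword = encode(rng.integers(0, 2, K, dtype=np.uint8))

biometric_enroll = rng.integers(0, 2, 3 * K, dtype=np.uint8)
reference = biometric_enroll ^ codeword          # stored; hides the biometric

biometric_verify = biometric_enroll.copy()
biometric_verify[5] ^= 1                         # small measurement variation
noisy = biometric_verify ^ reference             # approx. original codeword
recovered = encode(decode(noisy))                # snap to nearest codeword
print("match:", np.array_equal(recovered, codeword))   # True
```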

  15. Investigating the mechanism of hepatocellular carcinoma progression by constructing genetic and epigenetic networks using NGS data identification and big database mining method

    PubMed Central

    Li, Cheng-Wei; Chang, Ping-Yao; Chen, Bor-Sen

    2016-01-01

    The mechanisms leading to the development and progression of hepatocellular carcinoma (HCC) are complicated and regulated genetically and epigenetically. The recent advancement in high-throughput sequencing has facilitated investigations into the role of genetic and epigenetic regulations in hepatocarcinogenesis. Therefore, we used systems biology and big database mining to construct genetic and epigenetic networks (GENs) using the information about mRNA, miRNA, and methylation profiles of HCC patients. Our approach involves analyzing gene regulatory networks (GRNs), protein-protein networks (PPINs), and epigenetic networks at different stages of hepatocarcinogenesis. The core GENs, influencing each stage of HCC, were extracted via principal network projection (PNP). The pathways during different stages of HCC were compared. We observed that extracellular signals were further transduced to transcription factors (TFs), resulting in the aberrant regulation of their target genes, in turn inducing mechanisms that are responsible for HCC progression, including cell proliferation, anti-apoptosis, aberrant cell cycle, cell survival, and metastasis. We also selected potential multiple drugs specific to prominent epigenetic network markers of each stage of HCC: lestaurtinib, dinaciclib, and perifosine against the NTRK2, MYC, and AKT1 markers influencing HCC progression from stage I to stage II; celecoxib, axitinib, and vinblastine against the DDIT3, PDGFB, and JUN markers influencing HCC progression from stage II to stage III; and atiprimod, celastrol, and bortezomib against STAT3, IL1B, and NFKB1 markers influencing HCC progression from stage III to stage IV. PMID:27821810

  16. Numeric Databases in the Sciences.

    ERIC Educational Resources Information Center

    Meschel, S. V.

    1984-01-01

    Provides exploration into types of numeric databases available (also known as source databases, nonbibliographic databases, data-files, data-banks, fact banks); examines differences and similarities between bibliographic and numeric databases; identifies disciplines that utilize numeric databases; and surveys representative examples in the…

  17. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  18. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  19. Chemical Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.

  20. Hawaii bibliographic database

    NASA Astrophysics Data System (ADS)

    Wright, Thomas L.; Takahashi, Taeko Jane

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  1. Enhancing medical database security.

    PubMed

    Pangalos, G; Khair, M; Bozios, L

    1994-08-01

    A methodology for the enhancement of database security in a hospital environment is presented in this paper which is based on both the discretionary and the mandatory database security policies. In this way the advantages of both approaches are combined to enhance medical database security. An appropriate classification of the different types of users according to their different needs and roles and a User Role Definition Hierarchy has been used. The experience obtained from the experimental implementation of the proposed methodology in a major general hospital is briefly discussed. The implementation has shown that the combined discretionary and mandatory security enforcement effectively limits the unauthorized access to the medical database, without severely restricting the capabilities of the system.
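    A toy sketch of how the combined policies interact is given below: a mandatory label check and a discretionary role grant must both pass before access is allowed. The labels, roles, and record fields are invented; the paper's User Role Definition Hierarchy is richer than this flat example.

```python
# Hypothetical sketch of combining the two policies described above:
# a mandatory check (the user's clearance must dominate the record's
# sensitivity label) plus a discretionary check (the user's role must
# have been granted access). Labels, roles, and fields are invented.
LABELS = {"public": 0, "internal": 1, "medical-confidential": 2}

def may_read(user, record):
    mandatory = LABELS[user["clearance"]] >= LABELS[record["label"]]
    discretionary = user["role"] in record["granted_roles"]
    return mandatory and discretionary

nurse = {"role": "ward-nurse", "clearance": "internal"}
record = {"label": "medical-confidential",
          "granted_roles": {"ward-nurse", "physician"}}
print(may_read(nurse, record))   # False: clearance too low despite the grant
```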

  2. Uranium Location Database Compilation

    EPA Pesticide Factsheets

    EPA has compiled mine location information from federal, state, and Tribal agencies into a single database as part of its investigation into the potential environmental hazards of wastes from abandoned uranium mines in the western United States.

  3. Livestock Anaerobic Digester Database

    EPA Pesticide Factsheets

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  4. Hawaii bibliographic database

    USGS Publications Warehouse

    Wright, T.L.; Takahashi, T.J.

    1998-01-01

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  5. Nuclear Science References Database

    SciTech Connect

    Pritychenko, B.; Běták, E.; Singh, B.; Totans, J.

    2014-06-15

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  6. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern.

  7. Querying genomic databases

    SciTech Connect

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.

  8. Database computing in HEP

    SciTech Connect

    Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A. (Dept. of Physics); Grossman, R.; Qin, X. (Dept. of Mathematics, Statistics and Computer Science); Cormell, L.; Leibold, P.; Liu, D

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  9. Steam Properties Database

    National Institute of Standards and Technology Data Gateway

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  10. Web Based Database Processing for Turkish Navy Officers in USA

    DTIC Science & Technology

    2002-09-01

    2. Solution Proposed by This Thesis All the officers must prepare paper-based educational, administrative and personal reports according to DKY...Fundamentals, Design, And Implementation, Prentice-Hall, 2000 Topuz, Rasim, Web-based database applications: An educational, Administrative Management

  11. The comprehensive peptaibiotics database.

    PubMed

    Stoppacher, Norbert; Neumann, Nora K N; Burgstaller, Lukas; Zeilinger, Susanne; Degenkolb, Thomas; Brückner, Hans; Schuhmacher, Rainer

    2013-05-01

    Peptaibiotics are nonribosomally biosynthesized peptides, which - according to definition - contain the marker amino acid α-aminoisobutyric acid (Aib) and possess antibiotic properties. Since they were first described in 1958, a constantly increasing number of peptaibiotics have been reported and investigated, with a particular emphasis on hypocrealean fungi. Starting from the existing online 'Peptaibol Database', first published in 1997, an exhaustive literature survey of all known peptaibiotics was carried out and resulted in a list of 1043 peptaibiotics. The gathered information was compiled and used to create the new 'The Comprehensive Peptaibiotics Database', which is presented here. The database was devised as a software tool based on Microsoft (MS) Access. It is freely available from the internet at http://peptaibiotics-database.boku.ac.at and can easily be installed and operated on any computer offering a Windows XP/7 environment. It provides useful information on characteristic properties of the peptaibiotics included, such as peptide category, group name of the microheterogeneous mixture to which the peptide belongs, amino acid sequence, sequence length, producing fungus, peptide subfamily, molecular formula, and monoisotopic mass. All these characteristics can be used and combined for automated search within the database, which makes The Comprehensive Peptaibiotics Database a versatile tool for the retrieval of valuable information about peptaibiotics. Sequence data are included as of December 14, 2012.

  12. Drinking Water Database

    NASA Technical Reports Server (NTRS)

    Murray, ShaTerea R.

    2004-01-01

    This summer I had the opportunity to work in the Environmental Management Office (EMO) under the Chemical Sampling and Analysis Team, or CS&AT. This team's mission is to support Glenn Research Center (GRC) and EMO by providing chemical sampling and analysis services and expert consulting. Services include sampling and chemical analysis of water, soil, fuels, oils, paint, insulation materials, etc. One of this team's major projects is the Drinking Water Project, which covers Glenn's water coolers and ten percent of its sinks every two years. For the past two summers an intern had been putting together a database for this team to record the tests they had performed. She had successfully created a database but hadn't worked out all the quirks. So this summer William Wilder (an intern from Cleveland State University) and I worked together to perfect her database. We began by finding out exactly what every member of the team thought about the database and what they would change, if anything. After collecting this data we both had to take some courses in Microsoft Access in order to fix the problems. Next we looked at exactly how the database worked, from the outside inward. Then we began trying to change the database, but we quickly found out that this would be virtually impossible.

  13. The Transporter Classification Database

    PubMed Central

    Saier, Milton H.; Reddy, Vamsee S.; Tamang, Dorjee G.; Västermark, Åke

    2014-01-01

    The Transporter Classification Database (TCDB; http://www.tcdb.org) serves as a common reference point for transport protein research. The database contains more than 10 000 non-redundant proteins that represent all currently recognized families of transmembrane molecular transport systems. Proteins in TCDB are organized in a five-level hierarchical system, where the first two levels are the class and subclass, the second two are the family and subfamily, and the last one is the transport system. Superfamilies that contain multiple families are included as hyperlinks to the five-tier TC hierarchy. TCDB includes proteins from all types of living organisms and is the only transporter classification system that is both universal and recognized by the International Union of Biochemistry and Molecular Biology. It has been expanded by manual curation, contains extensive text descriptions providing structural, functional, mechanistic and evolutionary information, is supported by unique software and is interconnected to many other relevant databases. TCDB is of increasing usefulness to the international scientific community and can serve as a model for the expansion of database technologies. This manuscript describes an update of the database descriptions previously featured in NAR database issues. PMID:24225317
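
    TCDB accessions encode the five-level hierarchy described above as dot-separated fields (e.g., 2.A.1.1.1: class, subclass, family, subfamily, transport system). A minimal sketch of splitting such an identifier follows; the helper and type names are our own, not TCDB software.

      # Split a TC number into the five hierarchy levels named in the record.
      from collections import namedtuple

      TCNumber = namedtuple("TCNumber", "cls subclass family subfamily system")

      def parse_tc(tc):
          parts = tc.split(".")
          if len(parts) != 5:
              raise ValueError(f"expected five dot-separated levels, got {tc!r}")
          return TCNumber(*parts)

      print(parse_tc("2.A.1.1.1"))
      # TCNumber(cls='2', subclass='A', family='1', subfamily='1', system='1')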

  14. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing themselves with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  15. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing themselves with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. Database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find the source selection criteria particularly useful and may apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  16. Crude Oil Analysis Database

    DOE Data Explorer

    Shay, Johanna Y.

    The composition and physical properties of crude oil vary widely from one reservoir to another within an oil field, as well as from one field or region to another. Although all oils consist of hydrocarbons and their derivatives, the proportions of various types of compounds differ greatly. This makes some oils more suitable than others for specific refining processes and uses. To take advantage of this diversity, one needs access to information in a large database of crude oil analyses. The Crude Oil Analysis Database (COADB) currently satisfies this need by offering 9,056 crude oil analyses. Of these, 8,500 are United States domestic oils. The database contains results of analysis of the general properties and chemical composition, as well as the field, formation, and geographic location of the crude oil sample. [Taken from the Introduction to COAMDATA_DESC.pdf, part of the zipped software and database file at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the zipped file to your PC. When opened, it will contain PDF documents and a large Excel spreadsheet. It will also contain the database in Microsoft Access 2002.

  17. Administrative Methods for Reducing Crime in Primary and Secondary Schools: A Regression Analysis of the U.S. Department of Education School Survey of Crime and Safety

    ERIC Educational Resources Information Center

    Noonan, James H.

    2011-01-01

    Since the 1999 Columbine High School shooting, school administrators have been tasked with creating positive educational environments while also maximizing the safety of students and staff. However, limited resources require school administrators to employ only those safety policies which are actually effective in reducing crime. In order to help…

  18. Databases: Peter's Picks and Pans.

    ERIC Educational Resources Information Center

    Jacso, Peter

    1995-01-01

    Reviews the best and worst in databases on disk, CD-ROM, and online, and offers judgments and observations on database characteristics. Two databases are praised and three are criticized. (Author/JMV)

  19. REal-time COsmic Ray Database (RECORD)

    NASA Astrophysics Data System (ADS)

    Usoskin, I.; Kozlov, Valery; Ksenofontov, Leonid; Kudela, Karel; Starodubtsev, Sergei; Turpanov, Alexey; Yanke, Victor

    2003-07-01

    In this paper we present the first distributed REal-time COsmic Ray Database (RECORD). The aim of the project is to develop a unified database in which data from different neutron monitors are collected together in a unified format, and to provide users with several commonly used data access methods. The database contains not only original cosmic ray data but also auxiliary data necessary for scientific data analysis. Currently the database includes the Lomnický Štít, Moscow, Oulu, Tixie Bay, and Yakutsk stations. The main database server is located at IKFIA SB RAS (Yakutsk), but there will be several mirrors of the database. The database and all its mirrors are updated on a near-real-time (1 hour) basis. The data access software includes a WWW interface, Perl scripts, and a C library which may be linked to a user program. Most of the frequently used functions are implemented, making the database operable for users without knowledge of the SQL language. A draft of the data representation standard is suggested, based on common practice in the neutron monitor community. The database engine is the freely distributed open-source PostgreSQL server, coupled with a set of replication tools developed at the Bioengineering division of the IRCCS E. Medea, Italy.
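
    Since the record names PostgreSQL as the engine behind the WWW/Perl/C access layer, a hedged sketch of what programmatic access might look like follows. The table and column names (nm_data, station, ts, count_rate) and the connection details are assumptions for illustration, not the RECORD schema.

      # Hypothetical query against a PostgreSQL-backed neutron monitor database.
      import psycopg2  # standard PostgreSQL driver for Python

      def hourly_counts(station, start, end):
          with psycopg2.connect(dbname="record") as conn:  # assumed DB name
              with conn.cursor() as cur:
                  cur.execute(
                      "SELECT ts, count_rate FROM nm_data "
                      "WHERE station = %s AND ts BETWEEN %s AND %s ORDER BY ts",
                      (station, start, end))
                  return cur.fetchall()

      rows = hourly_counts("Oulu", "2003-07-01", "2003-07-02")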

  20. Study on the pharmacokinetics profiles of polyphyllin I and its bioavailability enhancement through co-administration with P-glycoprotein inhibitors by LC-MS/MS method.

    PubMed

    Zhu, He; Zhu, Si-Can; Shakya, Shailendra; Mao, Qian; Ding, Chuan-Hua; Long, Min-Hui; Li, Song-Lin

    2015-03-25

    Polyphyllin I (PPI), one of the steroidal saponins in Paris polyphylla, is a promising natural anticancer candidate. Although the anticancer activity of PPI has been well demonstrated, information regarding the pharmacokinetics and bioavailability is limited. In this study, a series of reliable and rapid liquid chromatography-tandem mass spectrometry methods were developed and successfully applied to determinate PPI in rat plasma, cell incubation media and cell homogenate. Then the pharmacokinetics of PPI in rats was studied and the result revealed that PPI was slowly eliminated with low oral bioavailability (about 0.62%) at a dose of 50 mg/kg, and when co-administrated with verapamil (VPL) and cyclosporine A (CYA), the oral bioavailability of PPI could increase from 0.62% to 3.52% and 3.79% respectively. In addition, in vitro studies showed that with the presence of VPL and CYA in Caco-2 cells, the efflux ratio of PPI decreased from 12.5 to 2.96 and 2.22, and the intracellular concentrations increased 5.8- and 5.0-fold respectively. These results demonstrated that PPI, with poor oral bioavailability, is greatly impeded by P-gp efflux, and inhibition of P-gp can enhance its bioavailability.

  1. QIS: A Framework for Biomedical Database Federation

    PubMed Central

    Marenco, Luis; Wang, Tzuu-Yi; Shepherd, Gordon; Miller, Perry L.; Nadkarni, Prakash

    2004-01-01

    Query Integrator System (QIS) is a database mediator framework intended to address robust data integration from continuously changing heterogeneous data sources in the biosciences. Currently in the advanced prototype stage, it is being used on a production basis to integrate data from neuroscience databases developed for the SenseLab project at Yale University with external neuroscience and genomics databases. The QIS framework uses standard technologies and is intended to be deployable by administrators with a moderate level of technological expertise: It comes with various tools, such as interfaces for the design of distributed queries. The QIS architecture is based on a set of distributed network-based servers, data source servers, integration servers, and ontology servers, that exchange metadata as well as mappings of both metadata and data elements to elements in an ontology. Metadata version difference determination coupled with decomposition of stored queries is used as the basis for partial query recovery when the schema of data sources alters. PMID:15298995

  2. MAC, material accounting database user guide

    SciTech Connect

    Russell, V.K.

    1994-09-22

    The K Basins Material Accounting (MAC) database system user guide describes the user features and functions, and the document is structured like the database menus. This document presents the MAC database system user instructions which explain how to record the movements and configuration of canisters and materials within the K Basins on the computer, the mechanics of handling encapsulation tracking, and administrative functions associated with the system. This document includes the user instructions, which also serve as the software requirements specification for the system implemented on the microcomputer. This includes suggested user keystrokes, examples of screens displayed by the system, and reports generated by the system. It shows how the system is organized, via menus and screens. It does not explain system design nor provide programmer instructions.

  3. The CARLSBAD database: a confederated database of chemical bioactivities.

    PubMed

    Mathias, Stephen L; Hines-Kay, Jarrett; Yang, Jeremy J; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G; Ursu, Oleg; Oprea, Tudor I

    2013-01-01

    Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because of subsetting different data in a variety of formats; use of different bioactivity metrics; use of different identifiers for chemicals and proteins; and having to access different query interfaces, respectively. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical-protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical-protein target pair. Two types of scaffold perception methods have been implemented and are available for datamining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439 985 unique chemical structures, mapped onto 1 420 889 unique bioactivities, and annotated with 277 140 HierS scaffolds and 54 135 MCES chemical patterns, respectively. Of the 890 323 unique structure-target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure-target values: 94 975 are aggregated from two bioactivities, 14 544 from three, 7 930 from four and 2214 have five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1435 unique chemical structures of active pharmaceutical ingredients (i.e. 'drugs'). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. The CARLSBAD database

  4. The CARLSBAD Database: A Confederated Database of Chemical Bioactivities

    PubMed Central

    Mathias, Stephen L.; Hines-Kay, Jarrett; Yang, Jeremy J.; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G.; Ursu, Oleg; Oprea, Tudor I.

    2013-01-01

    Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because of subsetting different data in a variety of formats; use of different bioactivity metrics; use of different identifiers for chemicals and proteins; and having to access different query interfaces, respectively. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical–protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical–protein target pair. Two types of scaffold perception methods have been implemented and are available for datamining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439 985 unique chemical structures, mapped onto 1 420 889 unique bioactivities, and annotated with 277 140 HierS scaffolds and 54 135 MCES chemical patterns, respectively. Of the 890 323 unique structure–target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure–target values: 94 975 are aggregated from two bioactivities, 14 544 from three, 7 930 from four and 2214 have five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1435 unique chemical structures of active pharmaceutical ingredients (i.e. ‘drugs’). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. The CARLSBAD
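
    The collapsing of multiple structure-target values into a single normalized activity value, as both CARLSBAD records above describe, can be pictured with the toy sketch below. The choice of the median as the aggregate is our assumption for illustration, not necessarily CARLSBAD's actual normalization rule, and the records are invented.

      # Aggregate several activity values per chemical-target pair into one.
      from collections import defaultdict
      from statistics import median

      records = [  # (chemical_id, target_id, activity) -- toy data
          ("chem1", "tgtA", 6.1), ("chem1", "tgtA", 6.5),
          ("chem1", "tgtA", 5.9), ("chem2", "tgtA", 7.2),
      ]

      by_pair = defaultdict(list)
      for chem, tgt, value in records:
          by_pair[(chem, tgt)].append(value)

      aggregated = {pair: median(vals) for pair, vals in by_pair.items()}
      print(aggregated)  # one value per unique chemical-target pair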

  5. Great Basin paleontological database

    USGS Publications Warehouse

    Zhang, N.; Blodgett, R.B.; Hofstra, A.H.

    2008-01-01

    The U.S. Geological Survey has constructed a paleontological database for the Great Basin physiographic province that can be served over the World Wide Web for data entry, queries, displays, and retrievals. It is similar to the web-database solution that we constructed for Alaskan paleontological data (www.alaskafossil.org). The first phase of this effort, a paleontological bibliography for Nevada and portions of adjacent states in the Great Basin, has recently been completed. In addition, we are also compiling paleontological reports (known as E&R reports) of the U.S. Geological Survey, which are another extensive source of legacy data for this region. Initial population of the database benefited from a recently published conodont data set and is otherwise focused on Devonian and Mississippian localities, because strata of this age host important sedimentary exhalative (sedex) Au, Zn, and barite resources and enormous Carlin-type Au deposits. In addition, these strata are the most important petroleum source rocks in the region, and record the transition from extension to contraction associated with the Antler orogeny, the Alamo meteorite impact, and biotic crises associated with global oceanic anoxic events. The finished product will provide an invaluable tool for future geologic mapping, paleontological research, and mineral resource investigations in the Great Basin, making paleontological data acquired over nearly the past 150 yr readily available over the World Wide Web. A description of the structure of the database and the web interface developed for this effort are provided herein. This database is being used as a model for a National Paleontological Database (which we are currently developing for the U.S. Geological Survey) as well as for other paleontological databases now being developed in other parts of the globe. © 2008 Geological Society of America.

  6. GOTTCHA Database, Version 1

    SciTech Connect

    Freitas, Tracey; Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed, and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- the most computationally intensive phase, since an all-vs-all comparison is made -- the number of comparisons is first reduced by four methods: (a) prefix restriction, (b) overlap detection, (c) overlap restriction, and (d) result recording. In prefix restriction, only k-mers with the same prefix are compared. Within that group, potential overlaps of k-mer suffixes that would result in a non-empty set intersection are screened for. If such an overlap exists, the region which intersects is
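
    The three phases can be mimicked in miniature with Python sets standing in for the bit vectors; prefix indexing and the other comparison-reduction tricks are omitted, and the sequences are toy strings, so this is a conceptual sketch rather than the GOTTCHA implementation.

      # Phase I: decompose each genome into overlapping 24-mers.
      # Phase II: intersect against all other genomes to find shared k-mers.
      # Phase III: subtract the shared k-mers, leaving unique signatures.
      K = 24

      def kmers(seq, k=K):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      genomes = {"taxonA": "ACGT" * 20, "taxonB": "ACGTTGCA" * 10}  # toy data

      decomposed = {name: kmers(seq) for name, seq in genomes.items()}

      signatures = {}
      for name, ks in decomposed.items():
          shared = set().union(*(v for n, v in decomposed.items() if n != name))
          signatures[name] = ks - shared

      print({n: len(s) for n, s in signatures.items()})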

  7. 75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... Federal Aviation Administration Passenger Facility Charge Database System for Air Carrier Reporting AGENCY... interested parties of the availability of the Passenger Facility Charge (PFC) database system to report PFC... public agency. The FAA has developed a national PFC database system in order to more easily track the...

  8. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  9. The Chandra Bibliography Database

    NASA Astrophysics Data System (ADS)

    Rots, A. H.; Winkelman, S. L.; Paltani, S.; Blecksmith, S. E.; Bright, J. D.

    2004-07-01

    Early in the mission, the Chandra Data Archive started the development of a bibliography database, tracking publications in refereed journals and on-line conference proceedings that are based on Chandra observations, allowing our users to link directly to articles in the ADS from our archive, and to link to the relevant data in the archive from the ADS entries. Subsequently, we have been working closely with the ADS and other data centers, in the context of the ADEC-ITWG, on standardizing literature-data linking. We have also extended our bibliography database to include all Chandra-related articles, and we are keeping track of the number of citations of each paper. Obviously, in addition to providing valuable services to our users, this database allows us to extract a wide variety of statistical information. The project comprises five components: the bibliography database proper, a maintenance database, an interactive maintenance tool, a user browsing interface, and a web services component for exchanging information with the ADS. All of these elements are nearly mission-independent, and we intend to make the package as a whole available for use by other data centers. The capabilities thus provided represent support for an essential component of the Virtual Observatory.

  10. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  11. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James L.; Christiansen, Eric L.; Lear, Dana M.

    2011-01-01

    With three missions outstanding, the Shuttle Hypervelocity Impact Database has nearly 3000 entries. The data is divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. Details and insights on the contents of the database, including examples of descriptive statistics, will be provided. Post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will also be discussed. Potential enhancements to the database structure and availability of the data for other researchers will be addressed in the Future Work section. A related database of returned surfaces from the International Space Station will also be introduced.

  12. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James I.; Christiansen, Eric I.; Lear, Dana M.

    2011-01-01

    With three flights remaining on the manifest, the Shuttle Hypervelocity Impact Database has over 2800 entries. The data is currently divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. The paper will provide details and insights on the contents of the database, including examples of descriptive statistics using the impact data. A discussion of post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will be presented. Future work to be discussed will be possible enhancements to the database structure and availability of the data for other researchers. A related database of ISS returned surfaces that is under development will also be introduced.

  13. Solutions for medical databases optimal exploitation.

    PubMed

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses methods to apply OLAP techniques to multidimensional databases that leverage an existing, performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistic support for data warehousing techniques. The transformations have low computational complexity in practice and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies in current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.
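
    As a pocket-sized illustration of practical pre-aggregation, the sketch below sums facts once at a fine hierarchy level and derives a coarser total from the stored aggregates instead of the raw rows; the dimension names and figures are invented, not taken from the paper.

      # Roll up visits from department level to hospital level using stored
      # department aggregates rather than re-scanning the raw fact rows.
      from collections import Counter

      facts = [("cardiology", 12), ("cardiology", 7), ("radiology", 5)]
      dept_to_hospital = {"cardiology": "hospitalX", "radiology": "hospitalX"}

      dept_totals = Counter()
      for dept, visits in facts:
          dept_totals[dept] += visits        # pre-aggregated once, then stored

      hospital_totals = Counter()
      for dept, total in dept_totals.items():
          hospital_totals[dept_to_hospital[dept]] += total  # reuse aggregates

      print(dept_totals, hospital_totals)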

  14. Comparison of Multiple Methods for Determination of FCGR3A/B Genomic Copy Numbers in HapMap Asian Populations with Two Public Databases

    PubMed Central

    Qi, Yuan-yuan; Zhou, Xu-jie; Bu, Ding-fang; Hou, Ping; Lv, Ji-cheng; Zhang, Hong

    2016-01-01

    Low FCGR3 copy numbers (CNs) have been associated with susceptibility to several systemic autoimmune diseases. However, inconsistent associations have been reported, and errors caused by unreliable methods have been suggested as the major cause. In large-scale case-control association studies, a robust copy number determination method is thus warranted, which was the main focus of the current study. In the present study, FCGR3 CNs of 90 HapMap Asians were first checked using four assays: the paralog ratio test combined with restriction enzyme digest variant ratio (PRT-REDVR), real-time quantitative PCR (qPCR) using a TaqMan assay, real-time qPCR using SYBR Green dye, and short tandem repeat (STR) analysis. To make the comparison reproducible, the results were compared with recently released sequencing data from the 1000 Genomes Project as well as whole-genome tiling BAC array data. The tendencies of samples called inconsistently by these methods were also characterized. The refined in-house TaqMan qPCR assay showed the highest correlation with array-CGH results (r = 0.726, p < 0.001) and the highest concordance rate with 1000 Genomes sequencing data (FCGR3A 91.76%, FCGR3B 85.88%, and FCGR3 81.18%). For samples with copy number variations, comprehensive analysis of multiple methods was required to improve detection accuracy. All these methods tended to call copy numbers higher than those from direct sequencing. All four PCR-based CN determination methods (qPCR using TaqMan probes or SYBR Green, PRT, STR) were prone to estimation errors and thus may lead to artificial associations in large-scale case-control association studies. However, in contrast to previous reports, we observed that a properly refined TaqMan qPCR assay was not inferior to, and was even more accurate than, PRT when using sequencing data as the reference. PMID:28083015
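
    A concordance rate of the kind quoted above is just the fraction of samples whose call matches the sequencing-based reference; the sketch below computes it for invented copy-number calls.

      # Fraction of samples where an assay's CN call equals the reference call.
      def concordance(calls, reference):
          matches = sum(c == r for c, r in zip(calls, reference))
          return matches / len(reference)

      taqman = [2, 2, 3, 2, 1, 2, 2, 2]    # hypothetical FCGR3B CN calls
      seq_ref = [2, 2, 3, 1, 1, 2, 2, 2]   # hypothetical sequencing reference
      print(f"{concordance(taqman, seq_ref):.2%}")  # 87.50%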

  15. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of various geoscience databases. Unfortunately, the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps lack these drawbacks, but they could hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); and the contributor that sent the result. Each contributor has their own profile, which allows one to estimate the reliability of the data. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted as a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Latitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised on entering. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
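
    Given the upload format quoted in the abstract (semicolon-separated fields: station name, latitude, longitude, station type, parameter type, value, date), one line can be parsed as sketched below; the sample station and values are invented.

      # Parse one upload line in the stated semicolon-separated format.
      import csv, io

      line = "Lake Example;56.123456;085.654321;lake;pH;7.4;2011-08-15"

      reader = csv.reader(io.StringIO(line), delimiter=";")
      name, lat, lon, stype, ptype, value, date = next(reader)
      record = {"name": name, "lat": float(lat), "lon": float(lon),
                "station_type": stype, "parameter": ptype,
                "value": float(value), "date": date}
      print(record)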

  16. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134a, R-141b, R-142b, R-143a, R-152a, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses polyalkylene glycol (PAG), ester, and other lubricants. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits.

  17. The PROSITE database

    PubMed Central

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S.; Pagni, Marco; Sigrist, Christian J. A.

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible online. PMID:16381852

  18. Mouse genome database 2016

    PubMed Central

    Bult, Carol J.; Eppig, Janan T.; Blake, Judith A.; Kadin, James A.; Richardson, Joel E.

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  19. Mouse genome database 2016.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-04

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  20. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

    Medical databases deal with dynamic, heterogeneous and fuzzy data. The modeling of such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows for the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association, and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288

  1. National Ambient Radiation Database

    SciTech Connect

    Dziuban, J.; Sears, R.

    2003-02-25

    The U.S. Environmental Protection Agency (EPA) recently developed a searchable database and website for the Environmental Radiation Ambient Monitoring System (ERAMS) data. This site contains nationwide radiation monitoring data for air particulates, precipitation, drinking water, surface water and pasteurized milk. It provides location-specific as well as national information on environmental radioactivity across several media, offering high quality data for assessing public exposure and environmental impacts resulting from nuclear emergencies as well as baseline data during routine conditions. The database and website are accessible at www.epa.gov/enviro/. The site contains (1) a query for the general public, which is easy to use and limits the amount of information provided but includes the ability to graph the data with risk benchmarks; (2) a query for a more technical user, which allows access to all of the data in the database; and (3) background information on ERAMS.

  2. Synthesized Population Databases: A US Geospatial Database for Agent-Based Models.

    PubMed

    Wheaton, William D; Cajka, James C; Chasteen, Bernadette M; Wagener, Diane K; Cooley, Philip C; Ganapathi, Laxminarayana; Roberts, Douglas J; Allpress, Justine L

    2009-05-01

    Agent-based models simulate large-scale social systems. They assign behaviors and activities to "agents" (individuals) within the population being modeled and then allow the agents to interact with the environment and each other in complex simulations. Agent-based models are frequently used to simulate infectious disease outbreaks, among other uses. RTI used and extended an iterative proportional fitting method to generate a synthesized, geospatially explicit, human agent database that represents the US population in the 50 states and the District of Columbia in the year 2000. Each agent is assigned to a household; other agents make up the household occupants. For this database, RTI developed the methods for (1) generating synthesized households and persons, (2) assigning agents to schools and workplaces so that complex interactions among agents as they go about their daily activities can be taken into account, and (3) generating synthesized human agents who occupy group quarters (military bases, college dormitories, prisons, nursing homes). In this report, we describe both the methods used to generate the synthesized population database and the final data structure and data content of the database. This information will provide researchers with the information they need to use the database in developing agent-based models. Portions of the synthesized agent database are available to any user upon request. RTI will extract a portion (a county, region, or state) of the database for users who wish to use this database in their own agent-based models.
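
    The iterative proportional fitting step mentioned above can be shown in bare-bones form: a seed table is alternately rescaled so its row and column sums match target marginals. The numbers below are toy values, and RTI's extended method involves far more than this core loop.

      # Core IPF loop: alternate row and column rescaling until sums converge.
      def ipf(seed, row_targets, col_targets, iters=50):
          table = [row[:] for row in seed]
          for _ in range(iters):
              for i, rt in enumerate(row_targets):      # match row sums
                  s = sum(table[i])
                  table[i] = [v * rt / s for v in table[i]]
              for j, ct in enumerate(col_targets):      # match column sums
                  s = sum(row[j] for row in table)
                  for row in table:
                      row[j] *= ct / s
          return table

      fitted = ipf([[1, 1], [1, 1]], row_targets=[60, 40], col_targets=[30, 70])
      print(fitted)  # rows sum to ~60/40, columns to ~30/70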

  3. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  4. Database Management System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  5. JICST Factual Database(1)

    NASA Astrophysics Data System (ADS)

    Kurosawa, Shinji

    The outline of the JICST factual database (JOIS-F), whose service JICST started in January 1988, and its online service are described in this paper. First, the author recounts the circumstances from 1973, when its planning was started, to the present, and its relation to the "Project by Special Coordination Funds for Promoting Science and Technology". Secondly, databases now under development, which aim to start service in fiscal 1988 or fiscal 1989 and cover DNA, metallic material strength, crystal structure, chemical substance regulations, and so forth, are described. Lastly, the online service is briefly explained.

  6. Drycleaner Database - Region 7

    EPA Pesticide Factsheets

    THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB), which tracks all Region 7 drycleaners who notify Region 7 subject to Maximum Achievable Control Technology (MACT) standards. The Air and Waste Management Division is the primary managing entity for this database. This work falls under objectives of EPA's 2003-2008 Strategic Plan (Goal 4) for Healthy Communities & Ecosystems, which are to reduce chemical and/or pesticide risks at facilities.

  7. The Genopolis Microarray Database

    PubMed Central

    Splendiani, Andrea; Brandizi, Marco; Even, Gael; Beretta, Ottavio; Pavelka, Norman; Pelizzola, Mattia; Mayhaus, Manuel; Foti, Maria; Mauri, Giancarlo; Ricciardi-Castagnoli, Paola

    2007-01-01

    Background Gene expression databases are key resources for microarray data management and analysis, and the importance of a proper annotation of their content is well understood. Public repositories as well as microarray database systems that can be implemented by single laboratories exist. However, there is not yet a tool that can easily support a collaborative environment where different users with different rights of access to data can interact to define a common, highly coherent content. The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments related to a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic and macrophage cell functions and host-parasite interactions. Results The Genopolis Database system allows the community to build an object-based MIAME-compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamic definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows precise control of the visibility of the database content to different subgroups in the community and facilitates exports of its content to public repositories. It provides an interactive user interface for data analysis: this allows users to visualize data matrices based on functional lists and sample characterization, and to navigate to other data matrices defined by similarity of expression values as well as functional characterizations of the genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users. Conclusion The Genopolis Database supports a community in building a common coherent knowledge base and analysing it. This fills a gap between a local

  8. Databases for plant phosphoproteomics.

    PubMed

    Schulze, Waltraud X; Yao, Qiuming; Xu, Dong

    2015-01-01

    Phosphorylation is the most studied posttranslational modification involved in signal transduction in stress responses, development, and growth. In recent years, large-scale phosphoproteomic studies have been carried out using various model plants and several growth and stress conditions. Here we present an overview of online resources for plant phosphoproteomic databases: PhosPhAt as a resource for Arabidopsis phosphoproteins, P3DB as a resource expanding to crop plants, and the Medicago PhosphoProtein Database as a resource for the model plant Medicago truncatula.

  9. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database (CHIDB), built with a demand-oriented database design method, which brings ground-based spectra, standardized hyperspectral cubes, and spectral analysis together to meet the needs of several applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; some data mining ideas and functions were incorporated into CHIDB to make it more suitable for service in the agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed in an MVC architecture comprising five main functional modules: Data Importer/Exporter, Image/Spectrum Viewer, Data Processor, Parameter Extractor, and On-line Analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and some advanced spectral image data processing technologies are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.

  10. Thermodynamics of Enzyme-Catalyzed Reactions Database

    National Institute of Standards and Technology Data Gateway

    SRD 74 Thermodynamics of Enzyme-Catalyzed Reactions Database (Web, free access)   The Thermodynamics of Enzyme-Catalyzed Reactions Database contains thermodynamic data on enzyme-catalyzed reactions that have been recently published in the Journal of Physical and Chemical Reference Data (JPCRD). For each reaction the following information is provided: the reference for the data, the reaction studied, the name of the enzyme used and its Enzyme Commission number, the method of measurement, the data and an evaluation thereof.

  11. Selective access and editing in a database

    NASA Technical Reports Server (NTRS)

    Maluf, David A. (Inventor); Gawdiak, Yuri O. (Inventor)

    2010-01-01

    Method and system for providing selective access to different portions of a database by different subgroups of database users. Where N users are involved, up to 2^N - 1 distinguishable access subgroups in a group space can be formed, where no two access subgroups have the same members. Two or more members of a given access subgroup can edit, substantially simultaneously, a document accessible to each member.
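
    The 2^N - 1 figure is simply the count of non-empty subsets of the user set, which the snippet below verifies for three hypothetical users.

      # Enumerate the distinguishable access subgroups: all non-empty subsets.
      from itertools import combinations

      def access_subgroups(users):
          for r in range(1, len(users) + 1):
              for combo in combinations(users, r):
                  yield set(combo)

      users = ["alice", "bob", "carol"]        # N = 3, hypothetical names
      groups = list(access_subgroups(users))
      print(len(groups), 2 ** len(users) - 1)  # 7 7 -- no two sets are equal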

  12. The peptaibiotics database--a comprehensive online resource.

    PubMed

    Neumann, Nora K N; Stoppacher, Norbert; Zeilinger, Susanne; Degenkolb, Thomas; Brückner, Hans; Schuhmacher, Rainer

    2015-05-01

    In this work, we present the 'Peptaibiotics Database' (PDB), a comprehensive online resource intended to cover all Aib-containing non-ribosomal fungal peptides currently described in the scientific literature. This database extends and updates the recently published 'Comprehensive Peptaibiotics Database' and currently consists of 1,297 peptaibiotic sequences. In a literature survey, a total of 235 peptaibiotic sequences published between January 2013 and June 2014 were compiled and added to the list of 1,062 peptides in the recently published 'Comprehensive Peptaibiotics Database'. The presented database is intended as a public resource freely accessible to the scientific community at peptaibiotics-database.boku.ac.at. The search options of the previously published repository and the presentation of sequence motif searches have been extended significantly. All of the available search options can be combined to create complex database queries. As a public repository, the presented database enables the easy upload of new peptaibiotic sequences and the correction of existing information. In addition, an administrative interface for maintaining the content of the database has been implemented, and the design of the database can easily be extended to store additional information to accommodate future needs of the 'peptaibiomics community'.

  13. Weathering Database Technology

    ERIC Educational Resources Information Center

    Snyder, Robert

    2005-01-01

    Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…

  14. Danish Gynecological Cancer Database

    PubMed Central

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie; Jensen, Pernille Tine; Thranov, Ingrid Regitze; Hare-Bruun, Helle; Seibæk, Lene; Høgdall, Claus

    2016-01-01

    Aim of database The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database whose aim is to monitor the treatment quality of Danish gynecological cancer patients and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures for gynecological cancer. Study population DGCD was initiated January 1, 2005, and includes all patients treated at Danish hospitals for cancer of the ovaries, peritoneum, fallopian tubes, cervix, vulva, vagina, and uterus, including rare histological types. Main variables DGCD data are organized within separate data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish "Pathology Registry", the "National Patient Registry", and the "Cause of Death Registry" using the unique Danish personal identification number (CPR number). Descriptive data Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation is the registration of oncological treatment data, which is incomplete for a large number of patients. Conclusion The very complete collection of available data from multiple registries forms one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation and completeness of data. The success of the DGCD is illustrated through annual reports, high coverage, and several peer-reviewed DGCD-based publications. PMID:27822089
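
    Linkage of the kind described, joining database records to registry records on the CPR number, reduces to a keyed join; the sketch below uses invented identifiers and fields purely for illustration.

      # Join DGCD rows to pathology-registry rows on the personal identifier.
      dgcd = {"0101701234": {"diagnosis": "ovarian cancer",
                             "surgery": "2015-03-02"}}   # invented CPR and data
      pathology = {"0101701234": {"histology": "serous adenocarcinoma"}}

      linked = {cpr: {**rec, **pathology.get(cpr, {})}
                for cpr, rec in dgcd.items()}
      print(linked)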

  15. Uranium Location Database

    EPA Pesticide Factsheets

    A GIS-compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata was cooperatively compiled from Federal and State agency data sets and enables the user to conduct geographic and analytical studies on mine impacts on the public and the environment.

  16. Patent Family Databases.

    ERIC Educational Resources Information Center

    Simmons, Edlyn S.

    1985-01-01

    Reports on retrieval of patent information online and includes definition of patent family, basic and equivalent patents, "parents and children" applications, designated states, patent family databases--International Patent Documentation Center, World Patents Index, APIPAT (American Petroleum Institute), CLAIMS (IFI/Plenum). A table…

  17. Diatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 114 Diatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.
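
    The abstract enumerates exactly what is stored per transition; a minimal record type capturing those fields might look like the following. The field names are illustrative, not NIST's actual schema; the CO line values are standard literature numbers.

        from dataclasses import dataclass

        @dataclass
        class SpectralLine:
            species: str            # isotopic molecular species
            quantum_numbers: str    # assigned quantum numbers
            frequency_mhz: float    # observed frequency
            uncertainty_mhz: float  # estimated measurement uncertainty
            reference: str          # literature citation

        # Well-known CO J=1-0 rotational transition as a sample record.
        line = SpectralLine("12C16O", "J=1-0", 115271.2018, 0.0005,
                            "(literature citation)")
        print(line)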

  18. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  19. MARC and Relational Databases.

    ERIC Educational Resources Information Center

    Llorens, Jose; Trenor, Asuncion

    1993-01-01

    Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)
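
    A generic sketch (not the OSI-based solution the article presents) of one common way to normalize MARC's tag/indicator/subfield structure into relational tables, using Python's sqlite3:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE record   (id INTEGER PRIMARY KEY, leader TEXT);
        CREATE TABLE field    (id INTEGER PRIMARY KEY,
                               record_id INTEGER REFERENCES record(id),
                               tag TEXT, ind1 TEXT, ind2 TEXT);
        CREATE TABLE subfield (field_id INTEGER REFERENCES field(id),
                               code TEXT, value TEXT);
        """)
        conn.execute("INSERT INTO record VALUES (1, '00000nam a2200000 a 4500')")
        conn.execute("INSERT INTO field VALUES (1, 1, '245', '1', '0')")  # title
        conn.execute("INSERT INTO subfield VALUES (1, 'a', 'Example title :')")
        conn.execute("INSERT INTO subfield VALUES (1, 'b', 'a subtitle')")

        # Reassemble the 245 (title) field from its subfields.
        print(conn.execute("SELECT group_concat(value, ' ') FROM subfield "
                           "WHERE field_id = 1").fetchone()[0])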

  20. Hydrocarbon Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  1. Danish Urogynaecological Database

    PubMed Central

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database was established to ensure a high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery, amounting to ~5,200 procedures per year. The variables are collected along the course of treatment of the patient, from referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynecological findings, type of operation, complications if relevant, implants used if relevant, and 3–6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is maintained by the steering committee for the database and is published in an annual report, which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as the gold standard; the positive predictive value was above 90%. The data are used as a quality-monitoring tool by the hospitals and in a number of scientific studies of specific urogynecological topics, broader epidemiological topics, and the use of patient-reported outcome measures. PMID:27826217

  2. Food composition databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Food composition is the determination of what is in the foods we eat and is the critical bridge between nutrition, health promotion and disease prevention and food production. Compilation of data into useable databases is essential to the development of dietary guidance for individuals and populat...

  3. The CEBAF Element Database

    SciTech Connect

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
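
    The abstract does not show the CED's actual Oracle schema, but the "introspective" idea it describes (new element types and properties defined on-the-fly, with no table changes) is essentially an entity-attribute-value layout. A minimal sketch in Python's sqlite3, with invented names throughout; the Workspace Manager history mechanism is not reproduced:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE property   (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
        CREATE TABLE element    (id INTEGER PRIMARY KEY, type_id INTEGER, name TEXT);
        CREATE TABLE prop_value (element_id INTEGER, property_id INTEGER, val TEXT);
        """)
        conn.execute("INSERT INTO element_type VALUES (1, 'Quadrupole')")
        conn.execute("INSERT INTO property VALUES (1, 1, 'length_m')")
        conn.execute("INSERT INTO element VALUES (1, 1, 'Q1')")
        conn.execute("INSERT INTO prop_value VALUES (1, 1, '0.3')")

        # Defining a brand-new property later is an INSERT, not an ALTER TABLE.
        conn.execute("INSERT INTO property VALUES (2, 1, 'field_gradient')")

        print(conn.execute("""
            SELECT e.name, p.name, v.val FROM prop_value v
            JOIN element e ON e.id = v.element_id
            JOIN property p ON p.id = v.property_id""").fetchall())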

  4. Triatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 117 Triatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  5. A comparison of the BAX system method to the U.S. Food and Drug Administration's Bacteriological Analytical Manual and International Organization for Standardization reference methods for the detection of Salmonella in a variety of soy ingredients.

    PubMed

    Belete, Tamrat; Crowley, Erin; Bird, Patrick; Gensic, Joseph; Wallace, F Morgan

    2014-10-01

    The performances of two DuPont BAX System PCR assays for detecting Salmonella on a variety of low-moisture soy ingredients were evaluated against the U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA BAM) method or the International Organization for Standardization (ISO) 6579 reference method. These evaluations were conducted as a single laboratory validation at an ISO 17025 accredited third-party laboratory. Validations were conducted on five soy ingredients: isolated soy protein (ISP), soy fiber, fluid soy lecithin, deoiled soy lecithin, and soy nuggets, using a paired-study design. The ISP was analyzed as both 25- and 375-g composite test portions, whereas all other sample matrices were analyzed as 375-g composite test portions. To evaluate 25-g test portions of ISP, the test material was inoculated using Salmonella enterica subsp. enterica serovar Mbandaka (Q Laboratories isolate 11031.1). Salmonella enterica subsp. enterica serovar Tennessee (Q Laboratories isolate 11031.3) was used for all other trials. For each trial of the method comparison, 25 samples were analyzed for each matrix: 5 uninoculated controls and 20 samples inoculated at low levels (0.2 to 2 CFU per test portion) that were targeted to achieve fractionally positive results (25 to 75%). Using McNemar's chi-square analysis, no significant difference at P ≥ 0.05 (χ² ≤ 3.84) was observed between the number of positives obtained by the BAX System and the reference methods for all five test matrices evaluated. These studies indicate that the BAX System PCR assays, in combination with the single buffered peptone water primary enrichment and subsequent brain heart infusion regrowth step, demonstrate equivalent sensitivity and robustness compared with the FDA BAM and ISO reference methods for both 25- and 375-g composite samples. Moreover, there was no observed reduction of sensitivity in the larger 375-g composite samples for all five matrices.
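
    For concreteness, the McNemar statistic used above compares only the discordant pairs from the paired design; a minimal sketch with invented counts follows (3.84 is the 5% chi-square critical value with one degree of freedom cited in the abstract):

        # McNemar's chi-square for paired method comparisons (no continuity
        # correction); b and c are the discordant counts: positive by one
        # method and negative by the other.
        def mcnemar_chi2(b: int, c: int) -> float:
            return 0.0 if b + c == 0 else (b - c) ** 2 / (b + c)

        # Hypothetical trial: 3 samples positive only by the candidate
        # method, 1 positive only by the reference method.
        chi2 = mcnemar_chi2(b=3, c=1)
        print(chi2, "no significant difference" if chi2 <= 3.84 else "significant")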

  6. JDD, Inc. Database

    NASA Technical Reports Server (NTRS)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide their clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). This company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance to all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers compensation, safety and health training) relating to job safety. My summer assignment was not considered "groundbreaking research" like that of many other summer interns in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using a Microsoft Excel program to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted of only data (training field index, employees who were present at these training courses and who was absent) from the training certification courses. Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  7. Tautomerism in large databases

    PubMed Central

    Sitzmann, Markus; Ihlenfeldt, Wolf-Dietrich

    2010-01-01

    We have used the Chemical Structure DataBase (CSDB) of the NCI CADD Group, an aggregated collection of over 150 small-molecule databases totaling 103.5 million structure records, to conduct tautomerism analyses on one of the largest currently existing sets of real (i.e. not computer-generated) compounds. This analysis was carried out using calculable chemical structure identifiers developed by the NCI CADD Group, based on hash codes available in the chemoinformatics toolkit CACTVS and a newly developed scoring scheme to define a canonical tautomer for any encountered structure. CACTVS’s tautomerism definition, a set of 21 transform rules expressed in SMIRKS line notation, was used; it takes a comprehensive stance as to the possible types of tautomeric interconversion included. Tautomerism was found to be possible for more than 2/3 of the unique structures in the CSDB. A total of 680 million tautomers were calculated from, and including, the original structure records. Tautomeric overlap within the same individual database (i.e. at least one other entry was present that was really only a different tautomeric representation of the same compound) was found at an average rate of 0.3% of the original structure records, with values as high as nearly 2% for some of the databases in CSDB. Projected onto the set of unique structures (by FICuS identifier), this still occurred in about 1.5% of the cases. Tautomeric overlap across all constituent databases in CSDB was found for nearly 10% of the records in the collection. PMID:20512400
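
    The CACTVS toolkit and hash codes used in the study are not shown here, but the same kind of canonical-tautomer normalization can be sketched with the open-source RDKit (a substitute toolkit with its own, different rule set). Two tautomeric inputs map to one canonical form, which is how duplicate-by-tautomer records can be detected:

        from rdkit import Chem
        from rdkit.Chem.MolStandardize import rdMolStandardize

        enumerator = rdMolStandardize.TautomerEnumerator()

        # 2-hydroxypyridine and 2-pyridone: two tautomers of one compound.
        for smiles in ("Oc1ccccn1", "O=c1cccc[nH]1"):
            mol = Chem.MolFromSmiles(smiles)
            canonical = enumerator.Canonicalize(mol)
            print(smiles, "->", Chem.MolToSmiles(canonical))
        # Both inputs print the same canonical SMILES, so entries differing
        # only in tautomeric form collapse to one identifier.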

  8. Computer-Based Administrative Support Systems: The Stanford Experience.

    ERIC Educational Resources Information Center

    Massy, William F.

    1983-01-01

    Computer-based administrative support tools are having a profound effect on the management of colleges and universities. Several such systems at Stanford University are discussed, including modeling, database management systems, networking, and electronic mail. (JN)

  9. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    NASA Technical Reports Server (NTRS)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using Extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition, or changing of individual items without affecting the entire ground system. Also, using XML should allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS), which is often limiting
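
    A toy fragment showing the general idea of a ground-system-independent XML database; the tag and attribute names are invented, not the actual JWST format:

        import xml.etree.ElementTree as ET

        # Hypothetical command/telemetry definitions; any laboratory tool
        # can parse this without knowing the central system's internals.
        doc = """
        <database version="1.0">
          <telemetry mnemonic="TMP_A1" type="float" units="K">
            <description>Mirror segment temperature</description>
          </telemetry>
          <command mnemonic="HTR_ON" opcode="0x2A"/>
        </database>
        """
        root = ET.fromstring(doc)
        for tm in root.iter("telemetry"):
            print("TM:", tm.get("mnemonic"), tm.get("units"),
                  tm.findtext("description"))
        for cmd in root.iter("command"):
            print("CMD:", cmd.get("mnemonic"), cmd.get("opcode"))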

  10. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    NASA Astrophysics Data System (ADS)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files, and search commands. An example of an online session is presented.

  11. Pay Equity and the Administrative Staff.

    ERIC Educational Resources Information Center

    Risher, Howard W.; Toller, John M.

    1989-01-01

    In a study conducted for the University of Connecticut, an analysis of the CUPA Administrative Compensation Survey database for 23 public universities was used to study pay equity issues. Job evaluation and internal equity, market analysis, individual salary adjustments, and planning a pay equity study are discussed. (MLW)

  12. People & Change: Success in Implementing Administrative Systems.

    ERIC Educational Resources Information Center

    Hannan, Cecil

    The implementation of an administrative on-line database system for the San Diego Community College District (SDCCD) is explained in this report. The report begins by describing the SDCCD environment, a multi-campus district under the direction of a chancellor and a cabinet. District headcount in Fall 1981 consisted of over 100,000 students and…

  13. DVD Database Astronomical Manuscripts in Georgia

    NASA Astrophysics Data System (ADS)

    Simonia, I.; Simonia, Ts.; Abuladze, T.; Chkhikvadze, N.; Samkurashvili, L.; Pataridze, K.

    2016-06-01

    Little-known and unknown Georgian, Persian, and Arabic astronomical manuscripts of the IX-XIX centuries are kept in the centers, archives, and libraries of Georgia. These manuscripts take the form of treatises, handbooks, texts, tables, and fragments, and comprise various theories, cosmological models, star catalogs, calendars, and methods of observation. We investigated this large body of material and published the DVD database Astronomical Manuscripts in Georgia. This unique database contains information about astronomical manuscripts as original works, and also contains descriptions of Georgian translations of Byzantine, Arabic, and other sources. The present paper is dedicated to a description of the obtained results and of the DVD database. Copies of the published DVD database are kept in the collections of the libraries of: Ilia State University, Georgia; the Royal Observatory, Edinburgh, UK; the Congress of the USA; and other centers.

  14. On data-based analysis of extreme events in multidimensional non-stationary meteorological systems: Based on advanced time series analysis methods and general extreme value theory

    NASA Astrophysics Data System (ADS)

    Kaiser, O.; Horenko, I.

    2012-04-01

    Given an observed series of extreme events, we are interested in capturing the significant trend in the underlying dynamics. Since the character of such systems is strongly non-linear and non-stationary, the detection of significant characteristics and their attribution is a complex task. A standard statistical tool to describe the probability distribution of extreme events is the general extreme value (GEV) theory. While the univariate stationary GEV distribution is well studied, and fitting the data to the model parameters can be done with likelihood techniques and Bayesian methods (Coles, '01; Davison, Ramesh, '00), the analysis of non-stationary extremes is based on a priori assumptions about the trend behavior (e.g., a linear combination of external factors/polynomials (Coles, '01)). Additionally, the analysis of multivariate, non-stationary extreme events remains a strong challenge, since analysis without strong a priori assumptions is limited to low-dimensional cases (Nychka, Cooley, '09). We introduce the FEM-GEV approach, which is based on GEV theory and advanced Finite Element time series analysis Methods (FEM) (Horenko, '10-11). The main idea of the FEM framework is to interpolate the corresponding non-stationary model parameters adaptively by a linear convex combination of K local stationary models and a switching process between them. To apply the FEM framework to a time series of extremes, we extend FEM by defining the model parameters with respect to the GEV distribution; as external factors we consider global atmospheric patterns. The optimal number of local models K and the best combination of external factors are estimated using the Akaike Information Criterion. The FEM-GEV approach allows us to study the non-stationary dynamics of the GEV parameters without a priori assumptions on the trend behavior and also captures the non-linear, non-stationary dependence on external factors. A series of extremes has, by definition, no connection to the real time scale; for this reason, the results of FEM-GEV can be only
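
    The full FEM-GEV machinery (K local models plus a switching process) is beyond a short sketch, but its stationary building block, a maximum-likelihood GEV fit to block maxima, can be illustrated with scipy on synthetic data:

        import numpy as np
        from scipy import stats

        # Synthetic "annual maxima" drawn from a known GEV distribution.
        rng = np.random.default_rng(0)
        maxima = stats.genextreme.rvs(c=-0.1, loc=30.0, scale=5.0,
                                      size=200, random_state=rng)

        # Maximum-likelihood estimates of shape, location, and scale;
        # FEM-GEV interpolates such local fits over time instead of
        # assuming one stationary parameter set.
        shape, loc, scale = stats.genextreme.fit(maxima)
        print(f"shape={shape:.3f} loc={loc:.2f} scale={scale:.2f}")

        # 100-block return level: the quantile exceeded once per 100 blocks.
        print(stats.genextreme.ppf(0.99, shape, loc, scale))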

  15. Investigation of the Bioequivalence of Rosuvastatin 20 mg Tablets after a Single Oral Administration in Mediterranean Arabs Using a Validated LC-MS/MS Method

    PubMed Central

    Zaid, Abdel Naser; Al Ramahi, Rowa; Cortesi, Rita; Mousa, Ayman; Jaradat, Nidal; Ghazal, Nadia; Bustami, Rana

    2016-01-01

    There is a wide inter-individual response to statin therapy including rosuvastatin calcium (RC), and it has been hypothesized that genetic differences may contribute to these variations. In fact, several studies have shown that pharmacokinetic (PK) parameters for RC are affected by race. The aim of this study is to demonstrate the interchangeability between two generic RC 20 mg film-coated tablets under fasting conditions among Mediterranean Arabs and to compare the pharmacokinetic results with Asian and Caucasian subjects from other studies. A single oral RC 20 mg dose, randomized, open-label, two-way crossover design study was conducted in 30 healthy Mediterranean Arab volunteers. Blood samples were collected prior to dosing and over a 72-h period. Concentrations in plasma were quantified using a validated liquid chromatography tandem mass spectrometry method. Twenty-six volunteers completed the study. Statistical comparison of the main PK parameters showed no significant difference between the generic and branded products. The point estimates (ratios of geometric mean %) were 107.73 (96.57–120.17), 103.61 (94.03–114.16), and 104.23 (94.84–114.54) for the peak plasma concentration (Cmax), the area under the curve (AUC0→last), and AUC0→∞, respectively. The 90% confidence intervals were within the pre-defined limits of 80%–125% as specified by the Food and Drug Administration and European Medicines Agency for bioequivalence studies. Both formulations were well-tolerated and no serious adverse events were reported. The PK results (AUC0→last and Cmax) were close to those of the Caucasian subjects. This study showed that the test and reference products met the regulatory criteria for bioequivalence following a 20 mg oral dose of RC under fasting conditions. Both formulations also showed comparable safety results. The PK results of the test and reference in the study subjects fall within the acceptable interval of 80%–125% and they were very close to the
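
    For illustration, the 80%-125% decision rule reported above is applied to the geometric mean ratio and its 90% confidence interval, computed on the log scale. A simplified paired sketch with invented Cmax values; a real 2x2 crossover analysis would also model period and sequence effects:

        import numpy as np
        from scipy import stats

        cmax_test = np.array([18.2, 22.5, 19.8, 25.1, 21.0, 17.6])  # invented
        cmax_ref = np.array([17.0, 21.9, 20.5, 23.8, 19.4, 18.1])   # invented

        logr = np.log(cmax_test) - np.log(cmax_ref)  # within-subject log ratios
        n = len(logr)
        se = logr.std(ddof=1) / np.sqrt(n)
        t90 = stats.t.ppf(0.95, df=n - 1)            # two-sided 90% CI

        gmr = np.exp(logr.mean()) * 100
        lo = np.exp(logr.mean() - t90 * se) * 100
        hi = np.exp(logr.mean() + t90 * se) * 100
        print(f"GMR {gmr:.1f}% (90% CI {lo:.1f}% to {hi:.1f}%)")
        print("bioequivalent" if lo >= 80.0 and hi <= 125.0 else "not shown")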

  16. Investigation of the Bioequivalence of Rosuvastatin 20 mg Tablets after a Single Oral Administration in Mediterranean Arabs Using a Validated LC-MS/MS Method.

    PubMed

    Zaid, Abdel Naser; Al Ramahi, Rowa; Cortesi, Rita; Mousa, Ayman; Jaradat, Nidal; Ghazal, Nadia; Bustami, Rana

    2016-06-30

    There is a wide inter-individual response to statin therapy including rosuvastatin calcium (RC), and it has been hypothesized that genetic differences may contribute to these variations. In fact, several studies have shown that pharmacokinetic (PK) parameters for RC are affected by race. The aim of this study is to demonstrate the interchangeability between two generic RC 20 mg film-coated tablets under fasting conditions among Mediterranean Arabs and to compare the pharmacokinetic results with Asian and Caucasian subjects from other studies. A single oral RC 20 mg dose, randomized, open-label, two-way crossover design study was conducted in 30 healthy Mediterranean Arab volunteers. Blood samples were collected prior to dosing and over a 72-h period. Concentrations in plasma were quantified using a validated liquid chromatography tandem mass spectrometry method. Twenty-six volunteers completed the study. Statistical comparison of the main PK parameters showed no significant difference between the generic and branded products. The point estimates (ratios of geometric mean %) were 107.73 (96.57-120.17), 103.61 (94.03-114.16), and 104.23 (94.84-114.54) for the peak plasma concentration (Cmax), the area under the curve (AUC0→last), and AUC0→∞, respectively. The 90% confidence intervals were within the pre-defined limits of 80%-125% as specified by the Food and Drug Administration and European Medicines Agency for bioequivalence studies. Both formulations were well-tolerated and no serious adverse events were reported. The PK results (AUC0→last and Cmax) were close to those of the Caucasian subjects. This study showed that the test and reference products met the regulatory criteria for bioequivalence following a 20 mg oral dose of RC under fasting conditions. Both formulations also showed comparable safety results. The PK results of the test and reference in the study subjects fall within the acceptable interval of 80%-125% and they were very close to the results among

  17. Tying Diverse Student and Administrative Data Bases Together.

    ERIC Educational Resources Information Center

    Rokicki, Phillip S.; Springer, Peggy

    Guidelines for colleges seeking to interface their diverse student and administrative databases are presented. The following key concepts are addressed that need to be understood before solving the problem of sharing and transmitting data between diverse databases: effective management of data from an institutional and a user point of view; the…

  18. QDB: Validated Plasma Chemistries Database

    NASA Astrophysics Data System (ADS)

    Rahimi, Sara; Hamilton, James; Hill, Christian; Tennyson, Jonathan; UCL Team

    2016-09-01

    One of the most challenging recurring problems when modelling plasmas is the lack of data. This lack of complete and validated datasets hinders research on plasma processes and curbs the development of industrial applications. We will describe the QDB project, which aims to fill this missing link by providing a platform for the exchange and validation of chemistry datasets. The database will collate published data on both electron scattering and heavy-particle reactions, and also facilitates and encourages peer-to-peer data sharing by its users. The data platform is rigorously supported by methodical validation of the datasets and by an automated chemistry generator; this methodology identifies missing reactions in chemistries which, although important, are currently unreported in the literature, and employs mathematical methods to analyze the importance of these reactions. Gaps in the datasets are filled using in-house theoretical methods.

  19. Senior Administrators Should Have Administrative Contracts.

    ERIC Educational Resources Information Center

    Posner, Gary J.

    1987-01-01

    Recognizing that termination is viewed by the employee as the equivalent to capital punishment of a career, an administrative contract can reduce the emotional and financial entanglements that often result. Administrative contracts are described. (MLW)

  20. Knowledge Discovery from Databases: An Introductory Review.

    ERIC Educational Resources Information Center

    Vickery, Brian

    1997-01-01

    Introduces new procedures being used to extract knowledge from databases and discusses rationales for developing knowledge discovery methods. Methods are described for such techniques as classification, clustering, and the detection of deviations from pre-established norms. Examines potential uses of knowledge discovery in the information field.…
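
    As a minimal sketch of one technique the review names, detection of deviations from a pre-established norm, points far from the mean of a toy series can be flagged:

        import numpy as np

        # Toy measurement series with one anomalous value.
        values = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 17.5, 10.2])
        z = (values - values.mean()) / values.std(ddof=1)
        print("deviations:", values[np.abs(z) > 2.0])  # -> [17.5]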