Sample records for administrative databases methods

  1. Validated methods for identifying tuberculosis patients in health administrative databases: systematic review.

    PubMed

    Ronald, L A; Ling, D I; FitzGerald, J M; Schwartzman, K; Bartlett-Esquilant, G; Boivin, J-F; Benedetti, A; Menzies, D

    2017-05-01

    An increasing number of studies are using health administrative databases for tuberculosis (TB) research. However, there are limitations to using such databases for identifying patients with TB. To summarise validated methods for identifying TB in health administrative databases. We conducted a systematic literature search in two databases (Ovid Medline and Embase, January 1980-January 2016). We limited the search to diagnostic accuracy studies assessing algorithms derived from drug prescription, International Classification of Diseases (ICD) diagnostic code and/or laboratory data for identifying patients with TB in health administrative databases. The search identified 2413 unique citations. Of the 40 full-text articles reviewed, we included 14 in our review. Algorithms and diagnostic accuracy outcomes to identify TB varied widely across studies, with positive predictive value ranging from 1.3% to 100% and sensitivity ranging from 20% to 100%. Diagnostic accuracy measures of algorithms using out-patient, in-patient and/or laboratory data to identify patients with TB in health administrative databases vary widely across studies. Use solely of ICD diagnostic codes to identify TB, particularly when using out-patient records, is likely to lead to incorrect estimates of case numbers, given the current limitations of ICD systems in coding TB.
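
    The wide spread in PPV and sensitivity reported above comes straight from validation 2x2 tables, and PPV in particular is driven by disease prevalence. Below is a minimal sketch of that arithmetic; all counts are hypothetical, not taken from any reviewed study.

    ```python
    # Hypothetical illustration: accuracy of an ICD-code algorithm for TB
    # against a reference standard (e.g., lab confirmation). Counts invented.

    def diagnostic_accuracy(tp, fp, fn, tn):
        """Return sensitivity, specificity, and PPV from 2x2 counts."""
        sensitivity = tp / (tp + fn)   # flagged among true cases
        specificity = tn / (tn + fp)   # unflagged among non-cases
        ppv = tp / (tp + fp)           # true cases among flagged
        return sensitivity, specificity, ppv

    # 80 true TB cases flagged, 20 missed, 120 false positives, 9780 true negatives
    sens, spec, ppv = diagnostic_accuracy(tp=80, fp=120, fn=20, tn=9780)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}")
    # PPV is only 40% despite ~98.8% specificity: for a rare disease such as
    # TB, even a few false positives swamp the true positives.
    ```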

  2. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  3. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  4. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  5. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  6. 47 CFR 52.25 - Database architecture and administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database architecture and administration. 52.25... (CONTINUED) NUMBERING Number Portability § 52.25 Database architecture and administration. (a) The North... databases for the provision of long-term database methods for number portability. (b) All telecommunications...

  7. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  8. CDS - Database Administrator's Guide

    NASA Astrophysics Data System (ADS)

    Day, J. P.

    This guide aims to instruct the CDS database administrator in the CDS file system, the CDS index files, and the procedure for assimilating a new CDS tape into the database. It is assumed that the administrator has read SUN/79.

  9. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  10. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer a TV bands database. Each database administrator shall: (a) Maintain a database that...

  11. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  12. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  13. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  14. 47 CFR 15.715 - TV bands database administrator.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administrator. 15.715 Section... Band Devices § 15.715 TV bands database administrator. The Commission will designate one or more entities to administer the TV bands database(s). The Commission may, at its discretion, permit the...

  15. Validating abortion procedure coding in Canadian administrative databases.

    PubMed

    Samiedaluie, Saied; Peterson, Sandra; Brant, Rollin; Kaczorowski, Janusz; Norman, Wendy V

    2016-07-12

    The British Columbia (BC) Ministry of Health collects abortion procedure data in the Medical Services Plan (MSP) physician billings database and in the hospital information Discharge Abstracts Database (DAD). Our study seeks to validate abortion procedure coding in these databases. Two randomized controlled trials enrolled a cohort of 1031 women undergoing abortion. The researcher-collected database includes both enrollment and follow-up chart review data. The study cohort was linked to MSP and DAD data to identify all abortion events captured in the administrative databases. We compared clinical chart data on abortion procedures with health administrative data. We considered a match to occur if an abortion-related code was found in administrative data within 30 days of the date of the same event documented in a clinical chart. Among 1158 abortion events performed during the enrollment and follow-up periods, 99.1 % were found in at least one of the administrative data sources. The sensitivities for the two databases, evaluated against this gold standard, were 97.7 % (95 % confidence interval (CI): 96.6-98.5) for the MSP database and 91.9 % (95 % CI: 90.0-93.4) for the DAD. Abortion events coded in the BC health administrative databases are highly accurate. Single-payer health administrative databases at the provincial level in Canada have the potential to offer valid data reflecting abortion events. ClinicalTrials.gov Identifier NCT01174225; Current Controlled Trials ISRCTN19506752.
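
    The 30-day matching rule described above is easy to express in code. A minimal sketch follows; the dates and the event representation are invented, not the study's actual data layout.

    ```python
    # Sketch of the matching rule: an administrative record matches a charted
    # abortion event if an abortion-related code appears within 30 days of the
    # charted date. All dates are fabricated.
    from datetime import date, timedelta

    chart_events = [date(2012, 6, 1), date(2013, 2, 14)]   # gold standard (charts)
    admin_events = [date(2012, 6, 20), date(2014, 1, 5)]   # coded in MSP/DAD

    WINDOW = timedelta(days=30)

    def matched(chart_date, admin_dates, window=WINDOW):
        return any(abs(a - chart_date) <= window for a in admin_dates)

    hits = sum(matched(c, admin_events) for c in chart_events)
    print(f"matched {hits}/{len(chart_events)} charted events "
          f"(sensitivity {hits / len(chart_events):.0%})")
    ```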

  16. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database administrator...

  17. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database administrator...

  18. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database administrator...

  19. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database administrator...

  20. 47 CFR 15.714 - TV bands database administration fees.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false TV bands database administration fees. 15.714 Section 15.714 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES Television Band Devices § 15.714 TV bands database administration fees. (a) A TV bands database administrator...

  21. Use of administrative medical databases in population-based research.

    PubMed

    Gavrielov-Yusim, Natalie; Friger, Michael

    2014-03-01

    Administrative medical databases are massive repositories of data collected in healthcare for various purposes. Such databases are maintained in hospitals, health maintenance organisations and health insurance organisations. Administrative databases may contain medical claims for reimbursement, records of health services, medical procedures, prescriptions, and diagnosis information. It is clear that such systems may provide a valuable variety of clinical and demographic information as well as an ongoing process of data collection. In general, data collection in these databases is not initially intended or planned for research purposes. Nonetheless, administrative databases may be used as a robust research tool. In this article, we address the subject of public health research that employs administrative data. We discuss the biases and the limitations of such research, as well as other important epidemiological and biostatistical key points specific to administrative database studies.

  22. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  23. Morphinome Database - The database of proteins altered by morphine administration - An update.

    PubMed

    Bodzon-Kulakowska, Anna; Padrtova, Tereza; Drabik, Anna; Ner-Kluza, Joanna; Antolak, Anna; Kulakowski, Konrad; Suder, Piotr

    2018-04-13

    Morphine is considered a gold standard in pain treatment. Nevertheless, its use can be associated with severe side effects, including drug addiction. It is therefore very important to understand the molecular mechanism of morphine action in order to develop new methods of pain therapy, or at least to attenuate the side effects of opioid usage. Proteomics allows for the identification of proteins involved in certain biological processes, but the number of items identified in a single study is usually overwhelming. Thus, researchers face the difficult problem of choosing the proteins which are really important for the investigated processes and worth further study. Therefore, based on 29 published articles, we created a database of proteins regulated by morphine administration - the Morphinome Database (addiction-proteomics.org). This web tool makes it possible to browse proteins identified across different proteomics studies. Moreover, the collection and organization of such a vast amount of data allows us to find the same proteins identified in various studies and to create a ranking based on the frequency of their identification. The STRING and KEGG databases indicated the metabolic pathways in which those molecules are involved. This means that those molecular pathways appear to be strongly affected by morphine administration and could be important targets for further investigation. The data on proteins identified by different proteomics studies of molecular changes caused by morphine administration (29 published articles) were gathered in the Morphinome Database. Unification of these data allowed for the identification of proteins indicated several times by distinct proteomics studies, meaning that they are well verified and important for the entire process. Those proteins might now be considered promising targets for more detailed studies of their role in the molecular mechanism of morphine action.
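
    The frequency ranking described above is essentially a cross-study count. A minimal sketch, with fabricated study contents rather than the Morphinome data:

    ```python
    # Count how often each protein is identified across independent proteomics
    # studies and rank by frequency; repeatedly identified proteins are the
    # best-verified candidates. Protein names and study contents are fabricated.
    from collections import Counter

    studies = [
        {"GFAP", "ALDOA", "ENO1"},   # proteins reported by study 1
        {"GFAP", "ENO1"},            # study 2
        {"GFAP", "ALDOA"},           # study 3
    ]

    ranking = Counter(protein for study in studies for protein in study)
    for protein, n_studies in ranking.most_common():
        print(protein, n_studies)    # GFAP 3, ALDOA 2, ENO1 2
    ```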

  24. Redis database administration tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez, J. J.

    2013-02-13

    MyRedis is a product of the Lorenz subproject under the ASC Scientific Data Management effort. MyRedis is a web-based utility designed to allow easy administration of instances of Redis databases. It can be used to view and manipulate data as well as run commands directly against a variety of different Redis hosts.
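
    MyRedis itself is a web utility, but the administrative operations it wraps can be sketched with the redis-py client; host, port, and key names below are placeholders, and this is an analogy rather than MyRedis code.

    ```python
    # Minimal sketch of Redis administration from Python: inspect and
    # manipulate data, and run commands directly against a host.
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

    r.set("example:key", "value")                 # manipulate data
    print(r.get("example:key"))                   # view data
    print(r.execute_command("DBSIZE"))            # run a raw command
    print(r.info("memory")["used_memory_human"])  # server-level administration
    ```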

  25. The Dutch Hospital Standardised Mortality Ratio (HSMR) method and cardiac surgery: benchmarking in a national cohort using hospital administration data versus a clinical database

    PubMed Central

    Siregar, S; Pouw, M E; Moons, K G M; Versteegh, M I M; Bots, M L; van der Graaf, Y; Kalkman, C J; van Herwerden, L A; Groenwold, R H H

    2014-01-01

    Objective: To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods: Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted from The Netherlands Association for Cardio-Thoracic Surgery database and the Hospital Discharge Registry. The number of cardiac surgery interventions was compared between both databases. The European System for Cardiac Operative Risk Evaluation and hospital standardised mortality ratio models were updated in the study population and compared using the c-statistic, calibration plots and the Brier score. Results: The number of cardiac surgery interventions performed could not be assessed using the administrative database as the intervention code was incorrect in 1.4–26.3%, depending on the type of intervention. In 7.3% no intervention code was registered. The updated administrative model was inferior to the updated clinical model with respect to discrimination (c-statistic of 0.77 vs 0.85, p<0.001) and calibration (Brier score of 2.8% vs 2.6%, p<0.001, maximum score 3.0%). Two average-performing hospitals according to the clinical model became outliers when benchmarking was performed using the administrative model. Conclusions: In cardiac surgery, administrative data are less suitable than clinical data for the purpose of benchmarking. The use of either administrative or clinical risk-adjustment models can affect the outlier status of hospitals. Risk-adjustment models including procedure-specific clinical risk factors are recommended. PMID:24334377
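
    The two model-comparison metrics used above are standard; a minimal sketch of computing them with scikit-learn on toy predictions (labels and probabilities are invented, chosen only so the "clinical" model scores better):

    ```python
    # Discrimination (c-statistic / AUC) and prediction accuracy (Brier score)
    # for two competing risk models. All data are fabricated.
    from sklearn.metrics import roc_auc_score, brier_score_loss

    y_true  = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # in-hospital mortality
    p_admin = [0.2, 0.3, 0.1, 0.2, 0.4, 0.5, 0.2, 0.3, 0.3, 0.1]    # admin model
    p_clin  = [0.05, 0.1, 0.1, 0.7, 0.1, 0.8, 0.05, 0.2, 0.6, 0.1]  # clinical model

    for name, p in [("administrative", p_admin), ("clinical", p_clin)]:
        print(name,
              f"c-statistic={roc_auc_score(y_true, p):.2f}",
              f"Brier={brier_score_loss(y_true, p):.3f}")
    ```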

  26. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false TRS User Registration Database and... Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall validate... Database on a per-call basis. Emergency 911 calls are excepted from this requirement. (i) Validation shall...

  27. 47 CFR 64.615 - TRS User Registration Database and administrator.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false TRS User Registration Database and... Registration Database and administrator. (a) TRS User Registration Database. (1) VRS providers shall validate... Database on a per-call basis. Emergency 911 calls are excepted from this requirement. (i) Validation shall...

  28. A Database System for Course Administration.

    ERIC Educational Resources Information Center

    Benbasat, Izak; And Others

    1982-01-01

    Describes a computer-assisted testing system which produces multiple-choice examinations for a college course in business administration. The system uses SPIRES (Stanford Public Information REtrieval System) to manage a database of questions and related data, mark-sense cards for machine grading tests, and ACL (Audit Command Language) to…

  29. Expanding the use of administrative claims databases in conducting clinical real-world evidence studies in multiple sclerosis.

    PubMed

    Capkun, Gorana; Lahoz, Raquel; Verdun, Elisabetta; Song, Xue; Chen, Weston; Korn, Jonathan R; Dahlke, Frank; Freitas, Rita; Fraeman, Kathy; Simeone, Jason; Johnson, Barbara H; Nordstrom, Beth

    2015-05-01

    Administrative claims databases provide a wealth of data for assessing the effect of treatments in clinical practice. Our aim was to propose methodology for real-world studies in multiple sclerosis (MS) using these databases. In three large US administrative claims databases: MarketScan, PharMetrics Plus and Department of Defense (DoD), patients with MS were selected using an algorithm identified in the published literature and refined for accuracy. Algorithms for detecting newly diagnosed ('incident') MS cases were also refined and tested. Methodology based on resource and treatment use was developed to differentiate between relapses with and without hospitalization. When various patient selection criteria were applied to the MarketScan database, an algorithm requiring two MS diagnoses at least 30 days apart was identified as the preferred method of selecting patient cohorts. Attempts to detect incident MS cases were confounded by the limited continuous enrollment of patients in these databases. Relapse detection algorithms identified similar proportions of patients in the MarketScan and PharMetrics Plus databases experiencing relapses with (2% in both databases) and without (15-20%) hospitalization in the 1 year follow-up period, providing findings in the range of those in the published literature. Additional validation of the algorithms proposed here would increase their credibility. The methods suggested in this study offer a good foundation for performing real-world research in MS using administrative claims databases, potentially allowing evidence from different studies to be compared and combined more systematically than in current research practice.
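
    The preferred patient-selection algorithm above (two MS diagnoses at least 30 days apart) can be sketched directly against a claims table; the column names and toy data are assumptions, not the actual database schemas.

    ```python
    # Apply the "two diagnoses >= 30 days apart" rule to claims rows.
    import pandas as pd

    claims = pd.DataFrame({
        "patient_id": [1, 1, 2, 2, 3],
        "dx_code":    ["340"] * 5,   # ICD-9 340 = multiple sclerosis
        "svc_date":   pd.to_datetime(["2013-01-05", "2013-03-01",
                                      "2013-06-01", "2013-06-10", "2013-09-09"]),
    })

    ms = claims[claims["dx_code"] == "340"]
    span = ms.groupby("patient_id")["svc_date"].agg(lambda s: (s.max() - s.min()).days)
    cohort = span[span >= 30].index.tolist()
    print("patients meeting the rule:", cohort)   # -> [1]
    ```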

  30. Regulatory administrative databases in FDA's Center for Biologics Evaluation and Research: convergence toward a unified database.

    PubMed

    Smith, Jeffrey K

    2013-04-01

    Regulatory administrative database systems within the Food and Drug Administration's (FDA) Center for Biologics Evaluation and Research (CBER) are essential to supporting its core mission, as a regulatory agency. Such systems are used within FDA to manage information and processes surrounding the processing, review, and tracking of investigational and marketed product submissions. This is an area of increasing interest in the pharmaceutical industry and has been a topic at trade association conferences (Buckley 2012). Such databases in CBER are complex, not for the type or relevance of the data to any particular scientific discipline but because of the variety of regulatory submission types and processes the systems support using the data. Commonalities among different data domains of CBER's regulatory administrative databases are discussed. These commonalities have evolved enough to constitute real database convergence and provide a valuable asset for business process intelligence. Balancing review workload across staff, exploring areas of risk in review capacity, process improvement, and presenting a clear and comprehensive landscape of review obligations are just some of the opportunities of such intelligence. This convergence has been occurring in the presence of usual forces that tend to drive information technology (IT) systems development toward separate stovepipes and data silos. CBER has achieved a significant level of convergence through a gradual process, using a clear goal, agreed upon development practices, and transparency of database objects, rather than through a single, discrete project or IT vendor solution. This approach offers a path forward for FDA systems toward a unified database.

  31. The usefulness of administrative databases for identifying disease cohorts is increased with a multivariate model.

    PubMed

    van Walraven, Carl; Austin, Peter C; Manuel, Douglas; Knoll, Greg; Jennings, Allison; Forster, Alan J

    2010-12-01

    Administrative databases commonly use codes to indicate diagnoses. These codes alone are often inadequate to accurately identify patients with particular conditions. In this study, we determined whether we could quantify the probability that a person has a particular disease (in this case, renal failure) using other routinely collected information available in an administrative data set. This would allow the accurate identification of a disease cohort in an administrative database. We determined whether the patients in 100,000 randomly selected hospitalizations had kidney disease (defined as two or more sequential serum creatinines, or the single admission creatinine, indicating a calculated glomerular filtration rate less than 60 mL/min/1.73 m²). The independent association of patient- and hospitalization-level variables with renal failure was measured using a multivariate logistic regression model in a random 50% sample of the patients. The model was validated in the remaining patients. Twenty thousand seven hundred thirteen patients had kidney disease (20.7%). A diagnostic code of kidney disease was strongly associated with kidney disease (relative risk: 34.4), but the accuracy of the code was poor (sensitivity: 37.9%; specificity: 98.9%). Twenty-nine patient- and hospitalization-level variables entered the kidney disease model. This model had excellent discrimination (c-statistic: 90.1%) and accurately predicted the probability of true renal failure. The probability threshold that maximized sensitivity and specificity for the identification of true kidney disease was 21.3% (sensitivity: 80.0%; specificity: 82.2%). Multiple variables available in administrative databases can be combined to quantify the probability that a person has a particular disease. This process permits accurate identification of a disease cohort in an administrative database. These methods may be extended to other diagnoses or procedures and could both facilitate and clarify the use of
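
    A minimal sketch of this approach on simulated data: fit a logistic model over routinely collected variables, then choose the probability cut-point that maximizes sensitivity plus specificity (Youden's J), as the study does at 21.3%.

    ```python
    # Logistic model over administrative variables, then Youden threshold.
    # All data are simulated; nothing here reproduces the study's model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 5))                   # patient/hospital variables
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] - 1.4   # true signal, ~20% prevalence
    y = rng.random(n) < 1 / (1 + np.exp(-logit))  # "true" kidney disease

    p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    fpr, tpr, thresholds = roc_curve(y, p)
    best = np.argmax(tpr - fpr)                   # Youden's J = sens + spec - 1
    print(f"threshold={thresholds[best]:.3f}, "
          f"sens={tpr[best]:.1%}, spec={1 - fpr[best]:.1%}")
    ```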

  32. A review of accessibility of administrative healthcare databases in the Asia-Pacific region

    PubMed Central

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    Objective: We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. Methods: The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Results: Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3–6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but

  33. The use of administrative health care databases to identify patients with rheumatoid arthritis

    PubMed Central

    Hanly, John G; Thompson, Kara; Skedgel, Chris

    2015-01-01

    Objective: To validate and compare the decision rules to identify rheumatoid arthritis (RA) in administrative databases. Methods: A study was performed using administrative health care data from a population of 1 million people who had access to universal health care. Information was available on hospital discharge abstracts and physician billings. RA cases in health administrative databases were matched 1:4 by age and sex to randomly selected controls without inflammatory arthritis. Seven case definitions were applied to identify RA cases in the health administrative data, and their performance was compared with the diagnosis by a rheumatologist. The validation study was conducted on a sample of individuals with administrative data who received a rheumatologist consultation at the Arthritis Center of Nova Scotia. Results: We identified 535 RA cases and 2,140 non-RA, noninflammatory arthritis controls. Using the rheumatologist's diagnosis as the gold standard, the overall accuracy of the case definitions for RA cases varied between 68.9% and 82.9%, with a kappa statistic between 0.26 and 0.53. The sensitivity and specificity varied from 20.7% to 94.8% and 62.5% to 98.5%, respectively. In a reference population of 1 million, the estimated annual number of incident cases of RA was between 176 and 1,610 and the annual number of prevalent cases was between 1,384 and 5,722. Conclusion: The accuracy of case definitions for the identification of RA cases from rheumatology clinics using administrative health care databases is variable when compared to a rheumatologist's assessment. This should be considered when comparing results across studies. This variability may also be used as an advantage in different study designs, depending on the relative importance of sensitivity and specificity for identifying the population of interest to the research question. PMID:27790047

  34. Validation of Carotid Artery Revascularization Coding in Ontario Health Administrative Databases.

    PubMed

    Hussain, Mohamad A; Mamdani, Muhammad; Saposnik, Gustavo; Tu, Jack V; Turkel-Parrella, David; Spears, Julian; Al-Omran, Mohammed

    2016-04-02

    The positive predictive value (PPV) of carotid endarterectomy (CEA) and carotid artery stenting (CAS) procedure and post-operative complication coding was assessed in Ontario health administrative databases. Between 1 April 2002 and 31 March 2014, a random sample of 428 patients was identified using Canadian Classification of Health Intervention (CCI) procedure codes and Ontario Health Insurance Plan (OHIP) billing codes from administrative data. A blinded chart review was conducted at two high-volume vascular centers to assess the level of agreement between the administrative records and the corresponding patients' hospital charts. PPV was calculated with 95% confidence intervals (CIs) to estimate the validity of CEA and CAS coding, utilizing hospital charts as the gold standard. Sensitivity of CEA and CAS coding was also assessed by linking two independent databases of 540 CEA-treated patients (Ontario Stroke Registry) and 140 CAS-treated patients (single-center CAS database) to administrative records. PPV for CEA ranged from 99% to 100% and sensitivity ranged from 81.5% to 89.6% using CCI and OHIP codes. A CCI code with a PPV of 87% (95% CI, 78.8-92.9) and sensitivity of 92.9% (95% CI, 87.4-96.1) in identifying CAS was also identified. PPV for post-admission complication diagnosis coding was 71.4% (95% CI, 53.7-85.4) for stroke/transient ischemic attack, and 82.4% (95% CI, 56.6-96.2) for myocardial infarction. Our analysis demonstrated that the codes used in administrative databases accurately identify CEA- and CAS-treated patients. Researchers can confidently use administrative data to conduct population-based studies of CEA and CAS.
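
    PPV estimates with 95% CIs like those above are binomial proportions; a minimal sketch using a Wilson interval, with invented counts:

    ```python
    # PPV = chart-confirmed positives / records carrying the code,
    # with a Wilson 95% confidence interval. Counts are fabricated.
    from statsmodels.stats.proportion import proportion_confint

    true_positives = 200   # chart-confirmed CEA among coded records
    coded_positive = 202   # records carrying a CEA procedure code

    ppv = true_positives / coded_positive
    lo, hi = proportion_confint(true_positives, coded_positive,
                                alpha=0.05, method="wilson")
    print(f"PPV = {ppv:.1%} (95% CI {lo:.1%}-{hi:.1%})")
    ```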

  35. Administrative database research has unique characteristics that can risk biased results.

    PubMed

    van Walraven, Carl; Austin, Peter

    2012-02-01

    The provision of health care frequently creates digitized data, such as physician service claims, medication prescription records, and hospitalization abstracts, that can be used to conduct studies termed "administrative database research." While most guidelines for assessing the validity of observational studies apply to administrative database research, the unique data source and analytical opportunities for these studies create risks that can make them uninterpretable or bias their results. Nonsystematic review. The risks of uninterpretable or biased results can be minimized by: providing a robust description of the data tables used, focusing on both why and how they were created; measuring and reporting the accuracy of the diagnostic and procedural codes used; distinguishing between clinical significance and statistical significance; properly accounting for any time-dependent nature of variables; and analyzing clustered data properly to explore the influence of clustering on study outcomes. This article reviewed these five issues as they pertain to administrative database research to help maximize the utility of these studies for both readers and writers.

  36. Using linked administrative and disease-specific databases to study end-of-life care on a population level.

    PubMed

    Maetens, Arno; De Schreye, Robrecht; Faes, Kristof; Houttekier, Dirk; Deliens, Luc; Gielen, Birgit; De Gendt, Cindy; Lusyne, Patrick; Annemans, Lieven; Cohen, Joachim

    2016-10-18

    The use of full-population databases to study the use, quality and costs of end-of-life care is under-explored. Using the case of Belgium, we explored: (1) which full-population databases provide valid information about end-of-life care, (2) what procedures exist for using these databases, and (3) what is needed to integrate separate databases. Technical and privacy-related aspects of linking and accessing Belgian administrative databases and disease registries were assessed in cooperation with the database administrators and privacy commission bodies. For all relevant databases, we followed procedures in cooperation with database administrators to link the databases and to access the data. We identified several databases as fitting for end-of-life care research in Belgium: the InterMutualistic Agency's national registry of health care claims data, the Belgian Cancer Registry including data on incidence of cancer, and databases administered by Statistics Belgium including data from the death certificate database, the socio-economic survey and fiscal data. To obtain access to the data, approval was required from all database administrators, supervisory bodies and two separate national privacy bodies. Two Trusted Third Parties linked the databases via a deterministic matching procedure using multiple encrypted social security numbers. In this article we describe how various routinely collected population-level databases and disease registries can be accessed and linked to study patterns in the use, quality and costs of end-of-life care in the full population and in specific diagnostic groups.
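
    Deterministic matching on encrypted identifiers, as performed by the Trusted Third Parties above, amounts to joining on a keyed digest of the identifier. A minimal sketch, with a fabricated key and records:

    ```python
    # Each source pseudonymizes the national identifier with a shared secret,
    # then records are linked on the digest, never on the raw number.
    import hmac, hashlib

    SECRET_KEY = b"shared-linkage-key"   # held by the trusted third party

    def pseudonym(national_id: str) -> str:
        return hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256).hexdigest()

    claims = {pseudonym("75031012345"): {"icd10": "C34"}}            # health claims
    deaths = {pseudonym("75031012345"): {"place_of_death": "home"}}  # death certificates

    linked = {k: {**claims[k], **deaths[k]} for k in claims.keys() & deaths.keys()}
    print(linked)
    ```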

  37. Data-based Organizational Change: The Use of Administrative Data To Improve Child Welfare Programs and Policy.

    ERIC Educational Resources Information Center

    English, Diana J.; Brandford, Carol C.; Coghlan, Laura

    2000-01-01

    Discusses the strengths and weaknesses of administrative databases, issues with their implementation and data analysis, and effective presentation of their data at different levels in child welfare organizations. Focuses on the development and implementation of Washington state's Children's Administration's administrative database, the Case and…

  38. Validity of breast, lung and colorectal cancer diagnoses in administrative databases: a systematic review protocol.

    PubMed

    Abraha, Iosief; Giovannini, Gianni; Serraino, Diego; Fusco, Mario; Montedori, Alessandro

    2016-03-18

    Breast, lung and colorectal cancers constitute the most common cancers worldwide and their epidemiology, related health outcomes and quality indicators can be studied using administrative healthcare databases. To constitute a reliable source for research, administrative healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases 9th and 10th revision codes to identify breast, lung and colorectal cancer diagnoses in administrative healthcare databases. This review protocol has been developed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. We will search the following databases: MEDLINE, EMBASE, Web of Science and the Cochrane Library, using appropriate search strategies. We will include validation studies that used administrative data to identify breast, lung and colorectal cancer diagnoses or studies that evaluated the validity of breast, lung and colorectal cancer codes in administrative data. The following inclusion criteria will be used: (1) the presence of a reference standard case definition for the disease of interest; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc) and (3) the use of data source from an administrative database. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. Ethics approval is not required. We will submit results of this study to a peer-reviewed journal for publication. The results will serve as a guide to identify appropriate case definitions and algorithms of breast, lung and colorectal cancers for researchers involved in validating administrative healthcare databases as well as for outcome research on these conditions that used administrative

  39. [Comparison between administrative and clinical databases in the evaluation of cardiac surgery performance].

    PubMed

    Rosato, Stefano; D'Errigo, Paola; Badoni, Gabriella; Fusco, Danilo; Perucci, Carlo A; Seccareccia, Fulvia

    2008-08-01

    The availability of two contemporary sources of information about coronary artery bypass graft (CABG) interventions allowed us 1) to verify the feasibility of performing outcome evaluation studies using administrative data sources, and 2) to compare hospital performance obtained using the CABG Project clinical database with hospital performance derived from the use of current administrative data. Interventions recorded in the CABG Project were linked to the hospital discharge record (HDR) administrative database. Only the linked records were considered for subsequent analyses (46% of the total CABG Project). A new selected population, "clinical card-HDR", was then defined. Two independent risk-adjustment models were applied, each of them using information derived from one of the two different sources. Then, HDR information was supplemented with some patient preoperative conditions from the CABG clinical database. The two models were compared in terms of their adaptability to the data. Hospital performances identified by the two different models as significantly different from the mean were compared. In only 4 of the 13 hospitals considered for analysis did the results obtained using the HDR model not completely overlap with those obtained by the CABG model. When comparing the statistical parameters of the HDR model and of the HDR model plus patient preoperative conditions, the latter showed the best adaptability to the data. In this "clinical card-HDR" population, hospital performance assessment obtained using information from the clinical database is similar to that derived from the use of current administrative data. However, when risk-adjustment models built on administrative databases are supplemented with a few clinical variables, their statistical parameters improve and hospital performance assessment becomes more accurate.

  40. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship: Use of Administrative and Surveillance Databases.

    PubMed

    Drees, Marci; Gerber, Jeffrey S; Morgan, Daniel J; Lee, Grace M

    2016-11-01

    Administrative and surveillance data are used frequently in healthcare epidemiology and antimicrobial stewardship (HE&AS) research because of their wide availability and efficiency. However, data quality issues exist, requiring careful consideration and potential validation of data. This methods paper presents key considerations for using administrative and surveillance data in HE&AS, including the types of data available and their potential uses, data limitations, and the importance of validation. After discussing these issues, we review examples of HE&AS research using administrative data, with a focus on scenarios in which their use may be advantageous. A checklist is provided to aid study development in HE&AS using administrative data.

  41. National Administrative Databases in Adult Spinal Deformity Surgery: A Cautionary Tale.

    PubMed

    Buckland, Aaron J; Poorman, Gregory; Freitag, Robert; Jalai, Cyrus; Klineberg, Eric O; Kelly, Michael; Passias, Peter G

    2017-08-15

    Comparison between national administrative databases and a prospective multicenter physician-managed database. This study aims to assess the applicability of National Administrative Databases (NADs) in adult spinal deformity (ASD). Our hypothesis is that NADs do not include patients comparable to those in a physician-managed database (PMD) for surgical outcomes in adult spinal deformity. NADs such as the National Inpatient Sample (NIS) and National Surgical Quality Improvement Program (NSQIP) yield large numbers of publications owing to ease of data access and the lack of an IRB approval requirement. These databases utilize billing codes, not clinical inclusion criteria, and have not been validated against PMDs in ASD surgery. The NIS was searched for years 2002 to 2012 and NSQIP for years 2006 to 2013 using validated spinal deformity diagnostic codes. Procedural codes (ICD-9 and CPT) were then applied to each database. A multicenter PMD including years 2008 to 2015 was used for comparison. Databases were assessed for levels fused, osteotomies, decompressed levels, and invasiveness. Database comparisons for surgical details were made for all patients, and also for patients with ≥5-level spinal fusions. Approximately 37,368 NIS, 1291 NSQIP, and 737 PMD patients were identified. NADs showed an increased use of deformity billing codes over the study period (NIS doubled, NSQIP increased 68-fold, P < 0.001), but ASD case volume remained stable in the PMD. Surgical invasiveness, levels fused, and use of 3-column osteotomy (3-CO) were significantly lower for all patients in the NIS (11.4-13.7) and NSQIP (6.4-12.7) databases compared with the PMD (27.5-32.3). When limited to patients with ≥5 levels fused, invasiveness, levels fused, and use of 3-CO remained significantly higher in the PMD compared with NADs (P < 0.001). The national databases NIS and NSQIP do not capture the same patient population as is captured in PMDs in ASD. Physicians should remain cautious in interpreting conclusions drawn from these databases.

  42. Administrative Databases in Orthopaedic Research: Pearls and Pitfalls of Big Data.

    PubMed

    Patel, Alpesh A; Singh, Kern; Nunley, Ryan M; Minhas, Shobhit V

    2016-03-01

    The drive for evidence-based decision-making has highlighted the shortcomings of traditional orthopaedic literature. Although high-quality, prospective, randomized studies in surgery are the benchmark in the orthopaedic literature, they are often limited by size, scope, cost, time, and ethical concerns, and may not be generalizable to larger populations. Given these restrictions, there is a growing trend toward the use of large administrative databases to investigate orthopaedic outcomes. These datasets afford the opportunity to identify a large number of patients across a broad spectrum of comorbidities, providing information regarding disparities in care and outcomes, preoperative risk stratification parameters for perioperative morbidity and mortality, and national epidemiologic rates and trends. Although there is power in these databases in terms of their impact, potential problems include administrative data that are at risk of clerical inaccuracies, recording bias secondary to financial incentives, temporal changes in billing codes, a lack of numerous clinically relevant variables and orthopaedic-specific outcomes, and the absolute requirement of an experienced epidemiologist and/or statistician when evaluating results and controlling for confounders. Despite these drawbacks, administrative database studies are fundamental and powerful tools in assessing outcomes on a national scale and will likely be of substantial assistance in the future of orthopaedic research.

  43. Market Pressure and Government Intervention in the Administration and Development of Molecular Databases.

    ERIC Educational Resources Information Center

    Sillince, J. A. A.; Sillince, M.

    1993-01-01

    Discusses molecular databases and the role that government and private companies play in their administration and development. Highlights include copyright and patent issues relating to public databases and the information contained in them; data quality; data structures and technological questions; the international organization of molecular…

  44. Reliability and validity assessment of administrative databases in measuring the quality of rectal cancer management.

    PubMed

    Corbellini, Carlo; Andreoni, Bruno; Ansaloni, Luca; Sgroi, Giovanni; Martinotti, Mario; Scandroglio, Ildo; Carzaniga, Pierluigi; Longoni, Mauro; Foschi, Diego; Dionigi, Paolo; Morandi, Eugenio; Agnello, Mauro

    2018-01-01

    Measurement and monitoring of the quality of care using a core set of quality measures are increasing in health services research. Although administrative databases include limited clinical data, they offer an attractive source for quality measurement. The purpose of this study, therefore, was to evaluate the completeness of different administrative data sources compared to a clinical survey in evaluating rectal cancer cases. Between May 2012 and November 2014, a clinical survey was done on 498 Lombardy patients who had rectal cancer and underwent surgical resection. These collected data were compared with the information extracted from administrative sources, including the Hospital Discharge Dataset, drug database, daycare activity data, fee-exemption database, and regional screening program database. The agreement evaluation was performed using a set of 12 quality indicators. Patient complexity was a difficult indicator to measure owing to the lack of clinical data. Preoperative staging was another suboptimal indicator because tests performed were frequently not registered in the administrative data. The agreement between the 2 data sources regarding chemoradiotherapy treatments was high. Screening detection, minimally invasive techniques, length of stay, and unpreventable readmissions were found to be reliable quality indicators. Postoperative morbidity could be a useful indicator, but its agreement was lower, as expected. Healthcare administrative databases are large, routinely updated repositories of data useful in measuring quality in a healthcare system. Our investigation reveals that reliability varies between indicators. Ideally, a combination of data from both sources could be used in order to improve the usefulness of the less reliable indicators.

  45. Using administrative databases in the surveillance of depressive disorders--case definitions.

    PubMed

    Alaghehbandan, Reza; Macdonald, Don; Barrett, Brendan; Collins, Kayla; Chen, Yue

    2012-12-01

    The objective of this study was to assess the usefulness of provincial administrative databases in carrying out surveillance on depressive disorders. Electronic medical records (EMRs) at 3 family practice clinics in St. John's, NL, Canada, were audited; 253 depressive disorder cases and 257 patients not diagnosed with a depressive disorder were selected. The EMR served as the "gold standard" against which the same patients, identified using various case definitions applied to the provincial hospital and physician administrative databases, were compared. Variables used in the development of the case definitions were depressive disorder diagnoses (either in hospital or physician claims data), date of diagnosis, and service provider type [general practitioner (GP) vs. psychiatrist]. Of the 120 case definitions investigated, 26 were found to have a kappa statistic greater than 0.6, of which 5 case definitions were considered the most appropriate for surveillance of depressive disorders. Of the 5 definitions, the following case definition, with 77.5% sensitivity and 93% specificity, was found to be the most valid: ([≥1 hospitalization OR ≥1 psychiatrist visit related to depressive disorders at any time] OR [≥2 GP visits related to depressive disorders within the first 2 years of diagnosis]). This study found that provincial administrative databases may be useful for carrying out surveillance on depressive disorders among the adult population. The approach used in this study was simple and resulted in reasonable sensitivity and specificity.
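
    The winning case definition above is a simple boolean rule; a minimal sketch of it as a predicate over per-patient counts (field names are assumptions):

    ```python
    # ([>=1 hospitalization OR >=1 psychiatrist visit at any time]
    #  OR [>=2 GP visits within the first 2 years of diagnosis])
    def depressive_disorder_case(hospitalizations: int,
                                 psychiatrist_visits: int,
                                 gp_visits_first_2y: int) -> bool:
        return (hospitalizations >= 1
                or psychiatrist_visits >= 1
                or gp_visits_first_2y >= 2)

    print(depressive_disorder_case(0, 0, 2))   # True: two GP visits in 2 years
    print(depressive_disorder_case(0, 0, 1))   # False: a single GP visit
    ```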

  46. A review of accessibility of administrative healthcare databases in the Asia-Pacific region.

    PubMed

    Milea, Dominique; Azmi, Soraya; Reginald, Praveen; Verpillat, Patrice; Francois, Clement

    2015-01-01

    We describe and compare the availability and accessibility of administrative healthcare databases (AHDB) in several Asia-Pacific countries: Australia, Japan, South Korea, Taiwan, Singapore, China, Thailand, and Malaysia. The study included hospital records, reimbursement databases, prescription databases, and data linkages. Databases were first identified through PubMed, Google Scholar, and the ISPOR database register. Database custodians were contacted. Six criteria were used to assess the databases and provided the basis for a tool to categorise databases into seven levels ranging from least accessible (Level 1) to most accessible (Level 7). We also categorised overall data accessibility for each country as high, medium, or low based on accessibility of databases as well as the number of academic articles published using the databases. Fifty-four administrative databases were identified. Only a limited number of databases allowed access to raw data and were at Level 7 [Medical Data Vision EBM Provider, Japan Medical Data Centre (JMDC) Claims database and Nihon-Chouzai Pharmacy Claims database in Japan, and Medicare, Pharmaceutical Benefits Scheme (PBS), Centre for Health Record Linkage (CHeReL), HealthLinQ, Victorian Data Linkages (VDL), SA-NT DataLink in Australia]. At Levels 3-6 were several databases from Japan [Hamamatsu Medical University Database, Medi-Trend, Nihon University School of Medicine Clinical Data Warehouse (NUSM)], Australia [Western Australia Data Linkage (WADL)], Taiwan [National Health Insurance Research Database (NHIRD)], South Korea [Health Insurance Review and Assessment Service (HIRA)], and Malaysia [United Nations University (UNU)-Casemix]. Countries were categorised as having a high level of data accessibility (Australia, Taiwan, and Japan), medium level of accessibility (South Korea), or a low level of accessibility (Thailand, China, Malaysia, and Singapore). In some countries, data may be available but accessibility was restricted

  47. A systematic review of administrative and clinical databases of infants admitted to neonatal units.

    PubMed

    Statnikov, Yevgeniy; Ibrahim, Buthaina; Modi, Neena

    2017-05-01

    High-quality information, increasingly captured in clinical databases, is a useful resource for evaluating and improving newborn care. We conducted a systematic review to identify neonatal databases and define their characteristics. We followed a preregistered protocol using MeSH terms to search MEDLINE, EMBASE, CINAHL, Web of Science and the OVID Maternity and Infant Care databases for articles identifying patient-level databases covering more than one neonatal unit. Full-text articles were reviewed and information extracted on geographical coverage, criteria for inclusion, data source, and maternal and infant characteristics. We identified 82 databases from 2037 publications. Of the country-specific databases, 39 were regional and 39 national. Sixty databases restricted entries to neonatal unit admissions by birth characteristic or insurance cover; 22 had no restrictions. Data were captured specifically for 53 databases; 21 drew on administrative sources and 8 on clinical sources. Two clinical databases hold the largest range of data on patient characteristics: the USA's Pediatrix BabySteps Clinical Data Warehouse and the UK's National Neonatal Research Database. A number of neonatal databases exist that have the potential to contribute to evaluating neonatal care. The majority are created by entering data specifically for the database, duplicating information likely already captured in other administrative and clinical patient records. This repetitive data entry represents an unnecessary burden in an environment where electronic patient records are increasingly used. Standardisation of data items is necessary to facilitate linkage within and between countries.

  48. Incidence of Appendicitis over Time: A Comparative Analysis of an Administrative Healthcare Database and a Pathology-Proven Appendicitis Registry

    PubMed Central

    Clement, Fiona; Zimmer, Scott; Dixon, Elijah; Ball, Chad G.; Heitman, Steven J.; Swain, Mark; Ghosh, Subrata

    2016-01-01

    Importance: At the turn of the 21st century, studies evaluating the change in incidence of appendicitis over time reported inconsistent findings. Objectives: We compared the differences in the incidence of appendicitis derived from a pathology registry versus an administrative database in order to validate coding in administrative databases and establish temporal trends in the incidence of appendicitis. Design: We conducted a population-based comparative cohort study to identify all individuals with appendicitis from 2000 to 2008. Setting & Participants: Two population-based data sources were used to identify cases of appendicitis: 1) a pathology registry (n = 8,822); and 2) a hospital discharge abstract database (n = 10,453). Intervention & Main Outcome: The administrative database was compared to the pathology registry for the following a priori analyses: 1) to calculate the positive predictive value (PPV) of administrative codes; 2) to compare the annual incidence of appendicitis; and 3) to assess differences in temporal trends. Temporal trends were assessed using a generalized linear model that assumed a Poisson distribution and are reported as an annual percent change (APC) with 95% confidence intervals (CI). Analyses were stratified by perforated and non-perforated appendicitis. Results: The administrative database (PPV = 83.0%) overestimated the incidence of appendicitis (100.3 per 100,000) when compared to the pathology registry (84.2 per 100,000). Codes for perforated appendicitis were not reliable (PPV = 52.4%), leading to overestimation of the incidence of perforated appendicitis in the administrative database (34.8 per 100,000) as compared to the pathology registry (19.4 per 100,000). The incidence of appendicitis significantly increased over time in both the administrative database (APC = 2.1%; 95% CI: 1.3, 2.8) and the pathology registry (APC = 4.1; 95% CI: 3.1, 5.0). Conclusion & Relevance: The administrative database overestimated the incidence of appendicitis
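
    The APC figures above come from a Poisson regression of counts on calendar year; APC = (exp(beta_year) - 1) x 100. A minimal sketch with fabricated counts:

    ```python
    # Annual percent change from a Poisson GLM with a log-population offset.
    import numpy as np
    import statsmodels.api as sm

    years = np.arange(2000, 2009)
    population = np.full(len(years), 1_000_000)
    cases = np.array([840, 860, 885, 900, 930, 955, 985, 1010, 1040])  # fabricated

    X = sm.add_constant(years - years.min())
    fit = sm.GLM(cases, X, family=sm.families.Poisson(),
                 offset=np.log(population)).fit()
    apc = (np.exp(fit.params[1]) - 1) * 100
    print(f"APC = {apc:.1f}% per year")
    ```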

  49. The burden of Clostridium difficile infection: estimates of the incidence of CDI from U.S. administrative databases.

    PubMed

    Olsen, Margaret A; Young-Xu, Yinong; Stwalley, Dustin; Kelly, Ciarán P; Gerding, Dale N; Saeed, Mohammed J; Mahé, Cedric; Dubberke, Erik R

    2016-04-22

    Many administrative data sources are available to study the epidemiology of infectious diseases, including Clostridium difficile infection (CDI), but few publications have compared CDI event rates across databases using similar methodology. We used comparable methods with multiple administrative databases to compare the incidence of CDI in older and younger persons in the United States. We performed a retrospective study using three longitudinal data sources (Medicare, OptumInsight LabRx, and the Healthcare Cost and Utilization Project State Inpatient Database (SID)) and two hospital encounter-level data sources (Nationwide Inpatient Sample (NIS) and Premier Perspective database) to identify CDI in adults aged 18 and older, with calculation of CDI incidence rates per 100,000 person-years of observation (pyo) and CDI categorization (onset and association). The incidence of CDI ranged from 66/100,000 in persons under 65 years (LabRx) to 383/100,000 (SID) and 677/100,000 (Medicare) in elderly persons. Ninety percent of CDI episodes in the LabRx population were characterized as community-onset, compared to 41 % in the Medicare population. The majority of CDI episodes in the Medicare and LabRx databases were identified based on only a CDI diagnosis, whereas almost ¾ of encounters coded for CDI in the Premier hospital data were confirmed with a positive test result plus treatment with metronidazole or oral vancomycin. Using only the Medicare inpatient data to calculate encounter-level CDI events resulted in 553 CDI events/100,000 persons, virtually the same as the encounter proportion calculated using the NIS (544/100,000 persons). We found that the incidence of CDI was 35 % higher in the Medicare data, and fewer episodes were attributed to hospital acquisition, when all medical claims were used to identify CDI, compared to only inpatient data lacking information on diagnosis and treatment in the outpatient setting. The incidence of CDI was 10-fold lower and
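
    The rates above are straightforward person-time arithmetic; a worked one-liner with fabricated counts that happen to reproduce the 677/100,000 figure:

    ```python
    # CDI episodes per 100,000 person-years of observation (pyo).
    episodes = 3_385        # fabricated episode count
    person_years = 500_000  # fabricated total follow-up

    print(f"{episodes / person_years * 100_000:.0f} per 100,000 pyo")  # -> 677
    ```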

  10. Planning the future of JPL's management and administrative support systems around an integrated database

    NASA Technical Reports Server (NTRS)

    Ebersole, M. M.

    1983-01-01

    JPL's management and administrative support systems have been developed piecemeal, without a consistent design approach, over the past twenty years. These systems are now proving to be inadequate to support effective management of tasks and administration of the Laboratory. New approaches are needed. Modern database management technology has the potential to provide the foundation for more effective administrative tools for JPL managers and administrators. Plans for upgrading JPL's management and administrative systems over a six-year period, centered on the development of an integrated management and administrative database, are discussed.

  11. A generic method for improving the spatial interoperability of medical and ecological databases.

    PubMed

    Ghenassia, A; Beuscart, J B; Ficheur, G; Occelli, F; Babykina, E; Chazard, E; Genin, M

    2017-10-03

    The availability of big data in healthcare and the intensive development of data reuse and georeferencing have opened up perspectives for health spatial analysis. However, fine-scale spatial studies of ecological and medical databases are limited by the change of support problem and thus a lack of spatial unit interoperability. The use of spatial disaggregation methods to solve this problem introduces errors into the spatial estimations. Here, we present a generic, two-step method for merging medical and ecological databases that avoids the use of spatial disaggregation methods, while maximizing the spatial resolution. Firstly, a mapping table is created after one or more transition matrices have been defined. The latter link the spatial units of the original databases to the spatial units of the final database. Secondly, the mapping table is validated by (1) comparing the covariates contained in the two original databases, and (2) checking the spatial validity with a spatial continuity criterion and a spatial resolution index. We used our novel method to merge a medical database (the French national diagnosis-related group database, containing 5644 spatial units) with an ecological database (produced by the French National Institute of Statistics and Economic Studies, and containing 36,594 spatial units). The mapping table yielded 5632 final spatial units. The mapping table's validity was evaluated by comparing the number of births in the medical database and the ecological database in each final spatial unit. The median [interquartile range] relative difference was 2.3% [0; 5.7]. The spatial continuity criterion was low (2.4%), and the spatial resolution index was greater than that of most French administrative areas. Our innovative approach improves interoperability between medical and ecological databases and facilitates fine-scale spatial analyses. We have shown that disaggregation models and large aggregation techniques are not necessarily the best ways to
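
    A minimal sketch of the two-step merge described above, under invented unit codes and counts: a transition mapping links each original database's spatial units to final units, and the mapping is then checked by comparing a covariate (births) aggregated from both databases per final unit.

```python
# Step 1: transition mappings (original unit -> final unit); hypothetical codes
medical_to_final = {"M001": "F01", "M002": "F01", "M003": "F02"}
ecological_to_final = {"E0001": "F01", "E0002": "F02", "E0003": "F02"}

# Covariate counted independently in each original database; invented numbers
births_medical = {"M001": 120, "M002": 80, "M003": 150}
births_ecological = {"E0001": 195, "E0002": 70, "E0003": 82}

def totals_by_final_unit(mapping: dict, counts: dict) -> dict:
    out: dict = {}
    for unit, final in mapping.items():
        out[final] = out.get(final, 0) + counts[unit]
    return out

# Step 2: validate the mapping by the relative difference per final unit
med = totals_by_final_unit(medical_to_final, births_medical)
eco = totals_by_final_unit(ecological_to_final, births_ecological)
for final in sorted(med):
    rel = abs(med[final] - eco[final]) / eco[final]
    print(f"{final}: medical={med[final]} ecological={eco[final]} diff={rel:.1%}")
```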

  12. Geospatial Database for Strata Objects Based on Land Administration Domain Model (LADM)

    NASA Astrophysics Data System (ADS)

    Nasorudin, N. N.; Hassan, M. I.; Zulkifli, N. A.; Rahman, A. Abdul

    2016-09-01

    Recently in Malaysia, building construction has become more complex, and a strata objects database is increasingly important for registering the real world, as people now own and use multiple levels of space. Strata titles are likewise increasingly important and need to be well managed. LADM, also known as ISO 19152, is a standard model for land administration that allows integrated 2D and 3D representation of spatial units. The aim of this paper is to develop a strata objects database using LADM. The paper discusses the current 2D geospatial database and the need for a 3D geospatial database in the future, develops a strata objects database using the LADM standard data model, and analyzes the resulting database against that model. The current cadastre system in Malaysia, including strata titles, is also discussed, the problems in the 2D geospatial database are listed, and the future need for a 3D geospatial database is examined. The database is designed in three stages: conceptual, logical, and physical database design. The strata objects database allows both non-spatial and spatial strata title information to be retrieved, thereby showing the location of each strata unit, and may help in handling strata titles and related information.

  13. Administrative Databases Can Yield False Conclusions-An Example of Obesity in Total Joint Arthroplasty.

    PubMed

    George, Jaiben; Newman, Jared M; Ramanathan, Deepak; Klika, Alison K; Higuera, Carlos A; Barsoum, Wael K

    2017-09-01

    Research using large administrative databases has substantially increased in recent years. The accuracy with which comorbidities are represented in these databases has been questioned. The purpose of this study was to evaluate the extent of errors in obesity coding and their impact on arthroplasty research. A total of 18,030 primary total knee arthroplasties (TKAs) and 10,475 total hip arthroplasties (THAs) performed at a single healthcare system from 2004 to 2014 were included. Patients were classified as obese or nonobese using 2 methods: (1) body mass index (BMI) ≥30 kg/m² and (2) International Classification of Diseases, 9th Revision (ICD-9) codes. Length of stay, operative time, and 90-day complications were collected. The effect of obesity on various outcomes was analyzed separately for both BMI- and coding-based obesity. From 2004 to 2014, the prevalence of BMI-based obesity increased from 54% to 63% and from 40% to 45% in TKA and THA, respectively. The prevalence of coding-based obesity increased from 15% to 28% and from 8% to 17% in TKA and THA, respectively. Coding overestimated the growth of obesity in TKA and THA by 5.6 and 8.4 times, respectively. When obesity was defined by coding, obesity was falsely shown to be a significant risk factor for deep vein thrombosis (TKA), pulmonary embolism (THA), and longer hospital stay (TKA and THA). The growth in obesity observed in administrative databases may be an artifact of improvements in coding over the years. Obesity defined by coding can overestimate the actual effect of obesity on complications after arthroplasty. Therefore, studies using large databases should be interpreted with caution, especially when variables prone to coding errors are involved. Copyright © 2017 Elsevier Inc. All rights reserved.
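
    A minimal sketch of the study's two obesity definitions applied to the same patients, showing how code-based prevalence can diverge from BMI-based prevalence; the patient records are invented, and the ICD-9 obesity codes are used here for illustration only.

```python
patients = [
    {"bmi": 33.1, "icd9": ["278.01"]},   # obese by BMI and coded as such
    {"bmi": 31.4, "icd9": []},           # obese by BMI but never coded
    {"bmi": 27.9, "icd9": []},           # nonobese by both definitions
]

OBESITY_CODES = {"278.00", "278.01"}     # ICD-9 obesity / morbid obesity

bmi_obese = sum(p["bmi"] >= 30 for p in patients)
code_obese = sum(bool(OBESITY_CODES & set(p["icd9"])) for p in patients)

print(f"BMI-based prevalence:    {bmi_obese / len(patients):.0%}")   # 67%
print(f"coding-based prevalence: {code_obese / len(patients):.0%}")  # 33%
```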

  14. Chronic disease prevalence from Italian administrative databases in the VALORE project: a validation through comparison of population estimates with general practice databases and national survey

    PubMed Central

    2013-01-01

    Background Administrative databases are widely available and have been extensively used to provide estimates of chronic disease prevalence for the purpose of surveillance of both geographical and temporal trends. There are, however, other sources of data available, such as medical records from primary care and national surveys. In this paper we compare disease prevalence estimates obtained from these three different data sources. Methods Data from general practitioners (GP) and administrative transactions for health services were collected from five Italian regions (Veneto, Emilia Romagna, Tuscany, Marche and Sicily) belonging to all the three macroareas of the country (North, Center, South). Crude prevalence estimates were calculated by data source and region for diabetes, ischaemic heart disease, heart failure and chronic obstructive pulmonary disease (COPD). For diabetes and COPD, prevalence estimates were also obtained from a national health survey. When necessary, estimates were adjusted for completeness of data ascertainment. Results Crude prevalence estimates of diabetes in administrative databases (range: from 4.8% to 7.1%) were lower than corresponding GP (6.2%-8.5%) and survey-based estimates (5.1%-7.5%). Geographical trends were similar in the three sources and estimates based on treatment were the same, while estimates adjusted for completeness of ascertainment (6.1%-8.8%) were slightly higher. For ischaemic heart disease administrative and GP data sources were fairly consistent, with prevalence ranging from 3.7% to 4.7% and from 3.3% to 4.9%, respectively. In the case of heart failure administrative estimates were consistently higher than GPs’ estimates in all five regions, the highest difference being 1.4% vs 1.1%. For COPD the estimates from administrative data, ranging from 3.1% to 5.2%, fell into the confidence interval of the Survey estimates in four regions, but failed to detect the higher prevalence in the most Southern region (4.0% in

  15. Validity of peptic ulcer disease and upper gastrointestinal bleeding diagnoses in administrative databases: a systematic review protocol.

    PubMed

    Montedori, Alessandro; Abraha, Iosief; Chiatti, Carlos; Cozzolino, Francesco; Orso, Massimiliano; Luchetta, Maria Laura; Rimland, Joseph M; Ambrosio, Giuseppe

    2016-09-15

    Administrative healthcare databases are useful to investigate the epidemiology, health outcomes, quality indicators and healthcare utilisation concerning peptic ulcers and gastrointestinal bleeding, but the databases need to be validated in order to be a reliable source for research. The aim of this protocol is to perform the first systematic review of studies reporting the validation of International Classification of Diseases, 9th and 10th Revision (ICD-9 and ICD-10) codes for peptic ulcer and upper gastrointestinal bleeding diagnoses. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched, using appropriate search strategies. We will include validation studies that used administrative data to identify peptic ulcer disease and upper gastrointestinal bleeding diagnoses or studies that evaluated the validity of peptic ulcer and upper gastrointestinal bleeding codes in administrative data. The following inclusion criteria will be used: (a) the presence of a reference standard case definition for the diseases of interest; (b) the presence of at least one test measure (e.g., sensitivity); and (c) the use of an administrative database as a source of data. Pairs of reviewers will independently abstract data using standardised forms and will evaluate quality using the checklist of the Standards for Reporting of Diagnostic Accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocol (PRISMA-P) 2015 statement. Ethics approval is not required given that this is a protocol for a systematic review. We will submit results of this study to a peer-reviewed journal for publication. The results will serve as a guide for researchers validating administrative healthcare databases to determine appropriate case definitions for peptic ulcer disease and upper gastrointestinal bleeding, as well as to perform outcome research using

  16. Comparing data mining methods on the VAERS database.

    PubMed

    Banks, David; Woo, Emily Jane; Burwen, Dale R; Perucci, Phil; Braun, M Miles; Ball, Robert

    2005-09-01

    Data mining may enhance traditional surveillance of vaccine adverse events by identifying events that are reported more commonly after administering one vaccine than other vaccines. Data mining methods find signals as the proportion of times a condition or group of conditions is reported soon after the administration of a vaccine; thus it is a relative proportion compared across vaccines, and not an absolute rate for the condition. The Vaccine Adverse Event Reporting System (VAERS) contains approximately 150,000 reports of adverse events that are possibly associated with vaccine administration. We studied four data mining techniques: empirical Bayes geometric mean (EBGM), lower-bound of the EBGM's 90% confidence interval (EB05), proportional reporting ratio (PRR), and screened PRR (SPRR). We applied these to the VAERS database and compared the agreement among methods and other performance properties, particularly focusing on the vaccine-event combinations with the highest numerical scores in the various methods. The vaccine-event combinations with the highest numerical scores varied substantially among the methods. Not all combinations representing known associations appeared in the top 100 vaccine-event pairs for all methods. The four methods differ in their ranking of vaccine-COSTART pairs. A given method may be superior in certain situations but inferior in others. This paper examines the statistical relationships among the four estimators. Determining which method is best for public health will require additional analysis that focuses on the true alarm and false alarm rates using known vaccine-event associations. Evaluating the properties of these data mining methods will help determine the value of such methods in vaccine safety surveillance. (c) 2005 John Wiley & Sons, Ltd.
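
    A minimal sketch of one of the four methods compared above, the proportional reporting ratio (PRR), computed from a 2x2 table of report counts; the counts are invented.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio.
    a: reports of the event after the vaccine of interest
    b: reports of other events after that vaccine
    c: reports of the event after all other vaccines
    d: reports of other events after all other vaccines
    """
    return (a / (a + b)) / (c / (c + d))

# e.g. 40 of 2,000 reports for vaccine V mention the event, versus
# 200 of 148,000 reports for all other vaccines
print(f"PRR = {prr(40, 1_960, 200, 147_800):.1f}")   # -> 14.8
```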

  17. Quality of data regarding diagnoses of spinal disorders in administrative databases. A multicenter study.

    PubMed

    Faciszewski, T; Broste, S K; Fardon, D

    1997-10-01

    The purpose of the present study was to evaluate the accuracy of data regarding diagnoses of spinal disorders in administrative databases at eight different institutions. The records of 189 patients who had been managed for a disorder of the lumbar spine were independently reviewed by a physician who assigned the appropriate diagnostic codes according to the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). The age range of the 189 patients was seventeen to eighty-four years. The six major diagnostic categories studied were herniation of a lumbar disc, a previous operation on the lumbar spine, spinal stenosis, cauda equina syndrome, acquired spondylolisthesis, and congenital spondylolisthesis. The diagnostic codes assigned by the physician were compared with the codes that had been assigned during the ordinary course of events by personnel in the medical records department of each of the eight hospitals. The accuracy of coding was also compared among the eight hospitals, and it was found to vary depending on the diagnosis. Although there were both false-negative and false-positive codes at each institution, most errors were related to the low sensitivity of coding for previous spinal operations: only seventeen (28 per cent) of sixty-one such diagnoses were coded correctly. Other errors in coding were less frequent, but their implications for conclusions drawn from the information in administrative databases depend on the frequency of a diagnosis and its importance in an analysis. This study demonstrated that the accuracy of a diagnosis of a spinal disorder recorded in an administrative database varies according to the specific condition being evaluated. It is necessary to document the relative accuracy of specific ICD-9-CM diagnostic codes in order to improve the ability to validate the conclusions derived from investigations based on administrative databases.

  18. Can use of an administrative database improve accuracy of hospital-reported readmission rates?

    PubMed

    Edgerton, James R; Herbert, Morley A; Hamman, Baron L; Ring, W Steves

    2018-05-01

    Readmission rates after cardiac surgery are being used as a quality indicator; they are also being collected by Medicare and are tied to reimbursement. Accurate knowledge of readmission rates may be difficult to achieve because patients may be readmitted to different hospitals. In our area, 81 hospitals share administrative claims data; 28 of these hospitals (from 5 different hospital systems) do cardiac surgery and share Society of Thoracic Surgeons (STS) clinical data. We used these 2 sources to compare the readmissions data for accuracy. A total of 45,539 STS records from January 2008 to December 2016 were matched with the hospital billing data records. Using the index visit as the start date, the billing records were queried for any subsequent in-patient visits for that patient. The billing records included date of readmission and hospital of readmission and were compared with the data captured in the STS record. We found 1153 (2.5%) patients whose STS records were marked "No" or "missing" but whose billing records showed a readmission. The reported STS readmission rate of 4796 (10.5%) underreported the readmission rate by 2.5 percentage points. The true rate should have been 13.0%. The actual readmission rate was 23.8% higher than reported by the clinical database. Approximately 36% of readmissions were to a hospital that was part of a different hospital system. It is important to know accurate readmission rates for quality improvement processes and institutional financial planning. Matching patient records to an administrative database showed that the clinical database may fail to capture many readmissions. Combining data with an administrative database can enhance accuracy of reporting. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
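
    A small worked version of the arithmetic above, using the paper's reported figures (minor differences from the quoted 13.0% and 23.8% are rounding):

```python
total_records = 45_539
reported = 4_796      # readmissions captured in the clinical (STS) database
missed = 1_153        # "No"/missing in STS but present in billing data

reported_rate = reported / total_records
true_rate = (reported + missed) / total_records
relative_gap = (true_rate - reported_rate) / reported_rate
print(f"reported {reported_rate:.1%}, true {true_rate:.1%}, "
      f"under-reported by {relative_gap:.1%}")
# -> reported 10.5%, true 13.1%, under-reported by 24.0%
```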

  19. [Technical improvement of cohort constitution in administrative health databases: Providing a tool for integration and standardization of data applicable in the French National Health Insurance Database (SNIIRAM)].

    PubMed

    Ferdynus, C; Huiart, L

    2016-09-01

    Administrative health databases such as the French National Health Insurance Database (SNIIRAM) are a major tool for answering numerous public health research questions. However, the use of such data requires complex and time-consuming data management. Our objective was to develop and make available a tool to optimize cohort constitution within administrative health databases. We developed a process to extract, transform and load (ETL) data from various heterogeneous sources into a standardized data warehouse, structured as an i2b2 star schema. We then evaluated the performance of this ETL using data from a pharmacoepidemiology research project conducted in the SNIIRAM database. The ETL we developed comprises a set of functionalities for creating SAS scripts. Data can be integrated into a standardized data warehouse. As part of the performance assessment of this ETL, we achieved integration of a dataset from the SNIIRAM comprising more than 900 million lines in less than three hours using a desktop computer. This enables patient selection from the standardized data warehouse within seconds of the request. The ETL described in this paper provides a tool which is effective and compatible with all administrative health databases, without requiring complex database servers. This tool should simplify cohort constitution in health databases; the standardization of warehouse data facilitates collaborative work between research teams. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
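
    A minimal sketch of the ETL idea described above (not the authors' SAS tooling): rows from heterogeneous sources are normalized into one i2b2-style star-schema fact table keyed by patient, concept, and date, after which cohort selection is a simple scan. The source layouts and codes are invented.

```python
from datetime import date

def to_fact(source: str, row: dict) -> dict:
    """Map one source-specific row onto a shared observation_fact layout."""
    if source == "drug_claims":                  # hypothetical source layout
        return {"patient_num": row["pid"], "concept_cd": "ATC:" + row["atc"],
                "start_date": row["dispensed_on"]}
    if source == "hospital_stays":               # hypothetical source layout
        return {"patient_num": row["pid"], "concept_cd": "ICD10:" + row["dx"],
                "start_date": row["admitted_on"]}
    raise ValueError(f"unknown source: {source}")

observation_fact = [
    to_fact("drug_claims",
            {"pid": 1, "atc": "C10AA05", "dispensed_on": date(2014, 3, 2)}),
    to_fact("hospital_stays",
            {"pid": 1, "dx": "I21.0", "admitted_on": date(2014, 5, 9)}),
]

# Cohort constitution becomes a scan over the standardized fact table,
# e.g. everyone dispensed a statin (ATC group C10):
cohort = {f["patient_num"] for f in observation_fact
          if f["concept_cd"].startswith("ATC:C10")}
print(cohort)   # -> {1}
```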

  20. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background More than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, so GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is still needed. Results The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and provides a user-friendly search service for detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integration, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to the database, and newly submitted information is released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier. PMID:18522755

  1. GMDD: a database of GMO detection methods.

    PubMed

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-06-04

    More than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, so GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is still needed. The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and provides a user-friendly search service for detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integration, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to the database, and newly submitted information is released soon after being checked. GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier.

  2. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis of...

  3. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis of...

  4. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis of...

  5. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis of...

  6. 28 CFR 36.204 - Administrative methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Administrative methods. 36.204 Section 36... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES General Requirements § 36.204 Administrative methods... standards or criteria or methods of administration that have the effect of discriminating on the basis of...

  7. The Effect of Relational Database Technology on Administrative Computing at Carnegie Mellon University.

    ERIC Educational Resources Information Center

    Golden, Cynthia; Eisenberger, Dorit

    1990-01-01

    Carnegie Mellon University's decision to standardize its administrative system development efforts on relational database technology and structured query language is discussed and its impact is examined in one of its larger, more widely used applications, the university information system. Advantages, new responsibilities, and challenges of the…

  8. Regulatory and ethical considerations for linking clinical and administrative databases.

    PubMed

    Dokholyan, Rachel S; Muhlbaier, Lawrence H; Falletta, John M; Jacobs, Jeffrey P; Shahian, David; Haan, Constance K; Peterson, Eric D

    2009-06-01

    Clinical data registries are valuable tools that support evidence development, performance assessment, comparative effectiveness studies, and the adoption of new treatments into routine clinical practice. Because these registries often lack information on long-term therapies and clinical events, administrative claims databases offer a potentially valuable complement. This article focuses on the regulatory and ethical considerations that arise from the use of registry data for research, including linkage of clinical and administrative data sets. (1) Are such activities primarily designed for quality assessment and improvement, research, or both, as this determines the appropriate ethical and regulatory standards? (2) Does the submission of data to a central registry, which may subsequently be linked to other data sources, require review by the institutional review board (IRB) of each participating organization? (3) What levels and mechanisms of IRB oversight are appropriate for the existence of a linked central data repository and the specific studies that may subsequently be developed using it? (4) Under what circumstances are waivers of informed consent and Health Insurance Portability and Accountability Act authorization required? (5) What are the requirements for a limited data set that would qualify a research activity as not involving human subjects and thus not subject to further IRB review? The approaches outlined in this article represent a local interpretation of the regulations in the context of several clinical data registry projects and focus on a specific case study of the Society of Thoracic Surgeons National Database.

  9. Nursing leadership succession planning in Veterans Health Administration: creating a useful database.

    PubMed

    Weiss, Lizabeth M; Drake, Audrey

    2007-01-01

    An electronic database was developed for succession planning and placement of nurses who are interested in, and ready, willing, and able to accept, a nursing leadership assignment. The tool is a 1-page form used to identify candidates for nursing leadership assignments. This tool has been deployed nationally, with access to the database restricted to nurse executives at every Veterans Health Administration facility for the purpose of entering the names of developed nurse leaders ready for a leadership assignment. The tool is easily accessed through the Veterans Health Administration Office of Nursing Service, and limiting access to the nurse executive group ensures that identified candidates are qualified. The survey tool captures the candidate's demographic information and certifications/credentials. The completed form is entered into a database from which a report can be generated, resulting in a listing of potential candidates to contact to supplement a local or Veterans Integrated Service Network-wide position announcement. The data forms can be sorted by position, area of clinical or functional experience, training programs completed, and geographic preference. The forms can be edited, updated, added, or deleted in the system as the need is identified. This tool gives facilities with limited internal candidates a resource of Department of Veterans Affairs-prepared staff from which to seek additional candidates. It also provides a way for interested candidates to be considered for positions outside of their local geographic area.

  10. A systematic review of validated methods to capture acute bronchospasm using administrative or claims data.

    PubMed

    Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L

    2013-12-30

    To identify and assess billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations, of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other disease-based diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study validated algorithms that used emergency department triage chief-complaint codes to diagnose acute asthma exacerbations, with ICD-9 786.07 (wheezing) showing the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
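
    A minimal sketch of the validation statistics quoted above, computed from a 2x2 table of algorithm result versus reference-standard diagnosis; the counts are invented and only roughly echo the ICD-9 786.07 figures.

```python
def validation_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2 diagnostic accuracy measures."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}

stats = validation_stats(tp=56, fp=4, fn=44, tn=126)   # hypothetical counts
print({k: f"{v:.0%}" for k, v in stats.items()})
# -> sensitivity 56%, specificity 97%, PPV 93%, NPV 74%
```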

  11. Use of employer administrative databases to identify systematic causes of injury in aluminum manufacturing.

    PubMed

    Pollack, Keshia M; Agnew, Jacqueline; Slade, Martin D; Cantley, Linda; Taiwo, Oyebode; Vegso, Sally; Sircar, Kanta; Cullen, Mark R

    2007-09-01

    Employer administrative files are an underutilized source of data in epidemiologic studies of occupational injuries. Personnel files, occupational health surveillance data, industrial hygiene data, and a real-time incident and injury management system from a large multi-site aluminum manufacturer were linked deterministically. An ecological-level measure of physical job demand was also linked. This method successfully created a database containing over 100 variables for 9,101 hourly employees from eight geographically dispersed U.S. plants. Between 2002 and 2004, there were 3,563 traumatic injuries to 2,495 employees. The most common injuries were sprain/strains (32%), contusions (24%), and lacerations (14%). A multivariable logistic regression model revealed that physical job demand was the strongest predictor of injury risk, in a dose dependent fashion. Other strong predictors of injury included female gender, young age, short company tenure and short time on current job. Employer administrative files are a useful source of data, as they permit the exploration of risk factors and potential confounders that are not included in many population-based surveys. The ability to link employer administrative files with injury surveillance data is a valuable analysis strategy for comprehensively studying workplace injuries, identifying salient risk factors, and targeting workforce populations disproportionately affected. (c) 2007 Wiley-Liss, Inc.

  12. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
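
    A minimal sketch of the query-feedback idea described above: observed selectivities from executed queries are regressed by least squares so the optimizer's estimate for future predicates reflects actual values rather than static off-line statistics. The feedback pairs are invented.

```python
import numpy as np

# (predicate constant, observed selectivity) pairs gathered as query feedback
feedback = np.array([(10, 0.02), (25, 0.07), (40, 0.12), (60, 0.21), (80, 0.26)])
x, y = feedback[:, 0], feedback[:, 1]

coeffs = np.polyfit(x, y, deg=1)       # least-squares fit: sel ~ a*value + b
estimate = np.polyval(coeffs, 50.0)    # refined estimate for "col < 50"
print(f"estimated selectivity at 50: {estimate:.3f}")
```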

  13. Contribution of the administrative database and the geographical information system to disaster preparedness and regionalization.

    PubMed

    Kuwabara, Kazuaki; Matsuda, Shinya; Fushimi, Kiyohide; Ishikawa, Koichi B; Horiguchi, Hiromasa; Fujimori, Kenji

    2012-01-01

    Public health emergencies like earthquakes and tsunamis underscore the need for an evidence-based approach to disaster preparedness. Using the Japanese administrative database and a geographical information system (GIS), the interruption of hospital-based mechanical ventilation by a hypothetical disaster in three areas of the southeastern mainland (Tokai, Tonankai, and Nankai) was simulated, and the repercussions for ventilator care in the prefectures adjacent to the damaged prefectures were estimated. Using the 2010 database, which included 3,181,847 hospitalized patients across 952 hospitals, the maximum daily ventilator capacity of each hospital was calculated and the number of patients who were administered ventilation on October xx was counted. Using GIS and patient zip codes, the straight-line distances among the damaged hospitals, the hospitals in the prefectures nearest to the damaged prefectures, and ventilated patients' zip codes were measured. The simulation assumed that ventilated patients were transferred to the closest hospitals outside the damaged prefectures. The increase in ventilator operating rates in the three areas was aggregated. One hundred twenty-four and 236 patients were administered ventilation in the damaged hospitals and in the closest hospitals outside the damaged prefectures of Tokai, 92 and 561 of Tonankai, and 35 and 85 of Nankai, respectively. The increases in ventilator operating rates among prefectures ranged from 1.04- to 26.33-fold in Tokai, 1.03- to 1.74-fold in Tonankai, and 1.00- to 2.67-fold in Nankai. Administrative databases and GIS can contribute to evidence-based disaster preparedness and the determination of appropriate receiving hospitals with available medical resources.
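
    A minimal sketch of the transfer simulation described above, using great-circle (haversine) distance as a stand-in for the paper's straight-line measure and assigning each patient's zip-code centroid to the closest receiving hospital; coordinates are invented.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

receiving = {"H1": (35.18, 136.91), "H2": (34.69, 135.50)}     # hypothetical
patients = [("P1", (34.97, 137.05)), ("P2", (34.70, 135.20))]  # zip centroids

for pid, (lat, lon) in patients:
    nearest = min(receiving, key=lambda h: haversine_km(lat, lon, *receiving[h]))
    print(pid, "->", nearest)
```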

  14. Connecting the Library's Patron Database to Campus Administrative Software: Simplifying the Library's Accounts Receivable Process

    ERIC Educational Resources Information Center

    Oliver, Astrid; Dahlquist, Janet; Tankersley, Jan; Emrich, Beth

    2010-01-01

    This article discusses the processes that occurred when the Library, Controller's Office, and Information Technology Department agreed to create an interface between the Library's Innovative Interfaces patron database and campus administrative software, Banner, using file transfer protocol, in an effort to streamline the Library's accounts…

  15. Evaluation of algorithms to identify incident cancer cases by using French health administrative databases.

    PubMed

    Ajrouche, Aya; Estellat, Candice; De Rycke, Yann; Tubach, Florence

    2017-08-01

    Administrative databases are increasingly being used in cancer observational studies. Identifying incident cancer in these databases is crucial. This study aimed to develop algorithms to estimate cancer incidence by using health administrative databases and to examine the accuracy of the algorithms in terms of national cancer incidence rates estimated from registries. We identified a cohort of 463,033 participants on 1 January 2012 in the Echantillon Généraliste des Bénéficiaires (EGB; a representative sample of the French healthcare insurance system). The EGB contains data on long-term chronic disease (LTD) status, reimbursed outpatient treatments and procedures, and hospitalizations (including discharge diagnoses, and costly medical procedures and drugs). After excluding cases of prevalent cancer, we applied 15 algorithms to estimate the cancer incidence rates separately for men and women in 2012 and compared them to the national cancer incidence rates estimated from French registries by indirect age and sex standardization. The most accurate algorithm for men combined information from LTD status, outpatient anticancer drugs, radiotherapy sessions and primary or related discharge diagnosis of cancer, although it underestimated the cancer incidence (standardized incidence ratio (SIR) 0.85 [0.80-0.90]). For women, the best algorithm used the same definition as the algorithm for men but restricted hospital discharges to only primary or related diagnoses with an additional inpatient procedure or drug reimbursement related to cancer, and gave estimates comparable to those from registries (SIR 1.00 [0.94-1.06]). The algorithms proposed could be used for cancer incidence monitoring and for future etiological cancer studies involving French healthcare databases. Copyright © 2017 John Wiley & Sons, Ltd.
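
    A small worked version of the indirect standardization behind the SIRs above: registry age/sex-specific rates are applied to the cohort's stratum sizes to get expected cases, and SIR = observed / expected. All numbers are invented.

```python
registry_rates = {("M", "60-69"): 0.012, ("M", "70-79"): 0.020,   # cases per
                  ("F", "60-69"): 0.009, ("F", "70-79"): 0.015}   # person-year
cohort_sizes = {("M", "60-69"): 30_000, ("M", "70-79"): 20_000,
                ("F", "60-69"): 32_000, ("F", "70-79"): 25_000}

expected = sum(registry_rates[s] * cohort_sizes[s] for s in cohort_sizes)
observed = 1_450        # incident cases flagged by the algorithm
print(f"SIR = {observed / expected:.2f}")   # <1 suggests under-ascertainment
```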

  16. 34 CFR 361.12 - Methods of administration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 34 Education 2 2012-07-01 2012-07-01 false Methods of administration. 361.12 Section 361.12... State Plan and Other Requirements for Vocational Rehabilitation Services Administration § 361.12 Methods... applicable, employs methods of administration found necessary by the Secretary for the proper and efficient...

  17. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 431.15 Methods of administration. A State plan must provide for methods of administration that are...

  18. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 431.15 Methods of administration. A State plan must provide for methods of administration that are...

  19. 42 CFR 431.15 - Methods of administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Methods of administration. 431.15 Section 431.15 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES... § 431.15 Methods of administration. A State plan must provide for methods of administration that are...

  20. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Methods of administration. 441.105 Section 441.105... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods of administration. The agency must have methods of administration to ensure that its responsibilities under this...

  1. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Methods of administration. 441.105 Section 441.105... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods of administration. The agency must have methods of administration to ensure that its responsibilities under this...

  2. 42 CFR 441.105 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Methods of administration. 441.105 Section 441.105... Medicaid for Individuals Age 65 or Over in Institutions for Mental Diseases § 441.105 Methods of administration. The agency must have methods of administration to ensure that its responsibilities under this...

  3. Influenza Vaccine Effectiveness in the Elderly Based on Administrative Databases: Change in Immunization Habit as a Marker for Bias

    PubMed Central

    Hottes, Travis S.; Skowronski, Danuta M.; Hiebert, Brett; Janjua, Naveed Z.; Roos, Leslie L.; Van Caeseele, Paul; Law, Barbara J.; De Serres, Gaston

    2011-01-01

    Background Administrative databases provide efficient methods to estimate influenza vaccine effectiveness (IVE) against severe outcomes in the elderly but are prone to intractable bias. This study returns to one of the linked population databases by which IVE against hospitalization and death in the elderly was first assessed. We explore IVE across six more recent influenza seasons, including periods before, during, and after peak activity, to identify potential markers for bias. Methods and Findings Acute respiratory hospitalization and all-cause mortality were compared between immunized/non-immunized community-dwelling seniors ≥65 years through administrative databases in Manitoba, Canada between 2000-01 and 2005-06. IVE was compared during pre-season/influenza/post-season periods through logistic regression with multivariable adjustment (age/sex/income/residence/prior influenza or pneumococcal immunization/medical visits/comorbidity), stratification based on prior influenza immunization history, and propensity scores. Analysis during pre-season periods assessed baseline differences between immunized and unimmunized groups. The study population included ∼140,000 seniors, of whom 50–60% were immunized annually. Adjustment for key covariates and use of propensity scores consistently increased IVE. Estimates were paradoxically higher pre-season and for all-cause mortality vs. acute respiratory hospitalization. Stratified analysis showed that those twice consecutively and currently immunized were always at significantly lower hospitalization/mortality risk, with odds ratios (OR) of 0.60 [95% CI 0.48–0.75] and 0.58 [0.53–0.64] pre-season and 0.77 [0.69–0.86] and 0.71 [0.66–0.77] during influenza circulation, relative to the consistently unimmunized. Conversely, those forgoing immunization when twice previously immunized were always at significantly higher hospitalization/mortality risk with OR of 1.41 [1.14–1.73] and 2.45 [2.21–2.72] pre-season and 1
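
    A minimal sketch of the stratified comparison described above: crude odds ratios for an outcome among currently immunized seniors versus the consistently unimmunized, computed within strata of prior immunization history. The 2x2 counts are invented and only loosely echo the quoted ORs.

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a,b: outcome yes/no among exposed; c,d: outcome yes/no among unexposed."""
    return (a / b) / (c / d)

# outcome yes/no for the stratum, then for the consistently unimmunized group
strata = {
    "twice prior, currently immunized": (120, 19_880, 260, 25_740),
    "twice prior, currently skipped":   (90, 5_910, 260, 25_740),
}
for name, counts in strata.items():
    print(f"{name}: OR = {odds_ratio(*counts):.2f}")
```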

  4. Inaccurate Ascertainment of Morbidity and Mortality due to Influenza in Administrative Databases: A Population-Based Record Linkage Study

    PubMed Central

    Muscatello, David J.; Amin, Janaki; MacIntyre, C. Raina; Newall, Anthony T.; Rawlinson, William D.; Sintchenko, Vitali; Gilmour, Robin; Thackway, Sarah

    2014-01-01

    Background Historically, counting influenza recorded in administrative health outcome databases has been considered insufficient to estimate influenza attributable morbidity and mortality in populations. We used database record linkage to evaluate whether modern databases have similar limitations. Methods Person-level records were linked across databases of laboratory notified influenza, emergency department (ED) presentations, hospital admissions and death registrations, from the population (∼6.9 million) of New South Wales (NSW), Australia, 2005 to 2008. Results There were 2568 virologically diagnosed influenza infections notified. Among those, 25% of 40 who died, 49% of 1451 with a hospital admission and 7% of 1742 with an ED presentation had influenza recorded on the respective database record. Compared with persons aged ≥65 years and residents of regional and remote areas, respectively, children and residents of major cities were more likely to have influenza coded on their admission record. Compared with older persons and admitted patients, respectively, working age persons and non-admitted persons were more likely to have influenza coded on their ED record. On both ED and admission records, persons with influenza type A infection were more likely than those with type B infection to have influenza coded. Among death registrations, hospital admissions and ED presentations with influenza recorded as a cause of illness, 15%, 28% and 1.4%, respectively, also had laboratory notified influenza. Time trends in counts of influenza recorded on the ED, admission and death databases reflected the trend in counts of virologically diagnosed influenza. Conclusions A minority of the death, hospital admission and ED records for persons with a virologically diagnosed influenza infection identified influenza as a cause of illness. Few database records with influenza recorded as a cause had laboratory confirmation. The databases have limited value for estimating incidence

  5. The development of the Project NetWork administrative records database for policy evaluation.

    PubMed

    Rupp, K; Driessen, D; Kornfeld, R; Wood, M

    1999-01-01

    This article describes the development of SSA's administrative records database for the Project NetWork return-to-work experiment targeting persons with disabilities. The article is part of a series of papers on the evaluation of the Project NetWork demonstration. In addition to 8,248 Project NetWork participants randomly assigned to receive case management services and a control group, the simulation identified 138,613 eligible nonparticipants in the demonstration areas. The output data files contain detailed monthly information on Supplemental Security Income (SSI) and Disability Insurance (DI) benefits, annual earnings, and a set of demographic and diagnostic variables. The data allow for the measurement of net outcomes and the analysis of factors affecting participation. The results suggest that it is feasible to simulate complex eligibility rules using administrative records, and create a clean and edited data file for a comprehensive and credible evaluation. The study shows that it is feasible to use administrative records data for selecting control or comparison groups in future demonstration evaluations.

  6. A unique linkage of administrative and clinical registry databases to expand analytic possibilities in pediatric heart transplantation research.

    PubMed

    Godown, Justin; Thurm, Cary; Dodd, Debra A; Soslow, Jonathan H; Feingold, Brian; Smith, Andrew H; Mettler, Bret A; Thompson, Bryn; Hall, Matt

    2017-12-01

    Large clinical, research, and administrative databases are increasingly utilized to facilitate pediatric heart transplant (HTx) research. Linking databases has proven to be a robust strategy across multiple disciplines to expand the possible analyses that can be performed while leveraging the strengths of each dataset. We describe a unique linkage of the Scientific Registry of Transplant Recipients (SRTR) database and the Pediatric Health Information System (PHIS) administrative database to provide a platform to assess resource utilization in pediatric HTx. All pediatric patients (1999-2016) who underwent HTx at a hospital enrolled in the PHIS database were identified. A linkage was performed between the SRTR and PHIS databases in a stepwise approach using indirect identifiers. To determine the feasibility of using these linked data to assess resource utilization, total and post-HTx hospital costs were assessed. A total of 3188 unique transplants were identified as being present in both databases and amenable to linkage. Linkage of SRTR and PHIS data was successful in 3057 (95.9%) patients, of whom 2896 (90.8%) had complete cost data. Median total and post-HTx hospital costs were $518,906 (IQR $324,199-$889,738), and $334,490 (IQR $235,506-$498,803) respectively with significant differences based on patient demographics and clinical characteristics at HTx. Linkage of the SRTR and PHIS databases is feasible and provides an invaluable tool to assess resource utilization. Our analysis provides contemporary cost data for pediatric HTx from the largest US sample reported to date. It also provides a platform for expanded analyses in the pediatric HTx population. Copyright © 2017 Elsevier Inc. All rights reserved.
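
    A minimal sketch of the stepwise indirect-identifier linkage described above: records are matched on a strict key combination first, with unmatched records retried on progressively looser combinations. Field names and records are hypothetical, not the SRTR/PHIS layouts, and a real linkage would also have to handle key collisions.

```python
srtr = [{"id": "S1", "dob": "2004-02-11", "tx_date": "2010-06-01", "center": "A"}]
phis = [{"id": "P9", "dob": "2004-02-11", "tx_date": "2010-06-01", "center": "A"}]

steps = [("dob", "tx_date", "center"), ("dob", "tx_date")]   # strict -> looser
links, unmatched = {}, list(srtr)
for keys in steps:
    index = {tuple(r[k] for k in keys): r["id"] for r in phis}
    still_unmatched = []
    for rec in unmatched:
        match = index.get(tuple(rec[k] for k in keys))
        if match:
            links[rec["id"]] = match
        else:
            still_unmatched.append(rec)
    unmatched = still_unmatched
print(links)   # -> {'S1': 'P9'}
```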

  7. Incidence and trends of central line associated pneumothorax using radiograph report text search versus administrative database codes.

    PubMed

    Reeson, Marc; Forster, Alan; van Walraven, Carl

    2018-05-25

    Central line associated pneumothorax (CLAP) could be a good quality-of-care indicator because such events are objectively measured, clearly undesirable, and possibly avoidable. We measured the incidence and trends of CLAP using radiograph report text search with manual review and compared them with measures using routinely collected health administrative data. For each hospitalisation at a tertiary care teaching hospital between 2002 and 2015, we searched all chest radiography reports for a central line with a sensitive computer algorithm. Screen-positive reports were manually reviewed to confirm central lines. The index and subsequent chest radiography reports were screened for pneumothorax, followed by manual confirmation. Diagnostic and procedural codes were used to identify CLAP in administrative data. In 685,044 hospitalisations, 10,819 underwent central line insertion (1.6%), with CLAP occurring 181 times (1.7%). CLAP risk did not change over time. Codes for CLAP were inaccurate (sensitivity 13.8%, positive predictive value 6.6%). However, overall code-based CLAP risk (1.8%) was almost identical to actual values, possibly because patient strata with inflated CLAP risk were balanced by more common strata having underestimated CLAP risk. Code-based methods inflated central line incidence 2.2 times and erroneously concluded that CLAP risk decreased significantly over time. Using valid methods, CLAP incidence was similar to that in the literature but has not changed over time. Although administrative database codes for CLAP were very inaccurate, they generated CLAP risks very similar to actual values because of offsetting errors. In contrast to those from radiograph report text search with manual review, CLAP trends decreased significantly using administrative data. Hospital CLAP risk should not be measured using administrative data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved.
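
    A minimal sketch of the sensitive text screen described above: reports are flagged broadly for central-line and pneumothorax terms so that misses are rare, and flagged reports (including negations such as "No pneumothorax") go to manual review. The patterns are illustrative, not the study's algorithm.

```python
import re

LINE_RE = re.compile(r"\b(central (venous )?line|PICC|catheter)\b", re.I)
PTX_RE = re.compile(r"\bpneumothora(x|ces)\b", re.I)

reports = [
    "Right IJ central line in place. No pneumothorax.",
    "New PICC line. Small apical pneumothorax noted.",
]
for text in reports:
    if LINE_RE.search(text):                  # screen positive: has a line term
        has_ptx_term = bool(PTX_RE.search(text))
        # negation ("No pneumothorax") is resolved at manual review, not here
        print(f"manual review (pneumothorax term: {has_ptx_term}) | {text}")
```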

  8. Validity of Heart Failure Diagnoses in Administrative Databases: A Systematic Review and Meta-Analysis

    PubMed Central

    McCormick, Natalie; Lacaille, Diane; Bhole, Vidula; Avina-Zubieta, J. Antonio

    2014-01-01

    Objective Heart failure (HF) is an important covariate and outcome in studies of elderly populations and cardiovascular disease cohorts, among others. Administrative data are increasingly being used for long-term clinical research in these populations. We aimed to conduct the first systematic review and meta-analysis of studies reporting on the validity of diagnostic codes for identifying HF in administrative data. Methods MEDLINE and EMBASE were searched (inception to November 2010) for studies: (a) using administrative data to identify HF; or (b) evaluating the validity of HF codes in administrative data; and (c) reporting validation statistics (sensitivity, specificity, positive predictive value [PPV], negative predictive value, or Kappa scores) for HF, or data sufficient for their calculation. Additional articles were located by hand search (up to February 2011) of original papers. Data were extracted by two independent reviewers; article quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. Using a random-effects model, pooled sensitivity and specificity values were produced, along with estimates of the positive (LR+) and negative (LR−) likelihood ratios, and diagnostic odds ratios (DOR = LR+/LR−) of HF codes. Results Nineteen studies published from 1999 to 2009 were included in the qualitative review. Specificity was ≥95% in all studies and PPV was ≥87% in the majority, but sensitivity was lower (≥69% in ≥50% of studies). In a meta-analysis of the 11 studies reporting sensitivity and specificity values, the pooled sensitivity was 75.3% (95% CI: 74.7–75.9) and specificity was 96.8% (95% CI: 96.8–96.9). The pooled LR+ was 51.9 (20.5–131.6), the LR− was 0.27 (0.20–0.37), and the DOR was 186.5 (96.8–359.2). Conclusions While most HF diagnoses in administrative databases do correspond to true HF cases, about one-quarter of HF cases are not captured. The use of broader search parameters, along with
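
    A small worked version of the likelihood-ratio arithmetic defined above, applied naively to the pooled sensitivity and specificity (the paper's LR+/LR−/DOR differ because they were pooled directly under a random-effects model):

```python
sens, spec = 0.753, 0.968        # pooled estimates quoted above

lr_pos = sens / (1 - spec)       # how much a positive code raises the odds
lr_neg = (1 - sens) / spec       # how much a negative code lowers the odds
dor = lr_pos / lr_neg            # diagnostic odds ratio, DOR = LR+/LR-
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, DOR = {dor:.1f}")
# -> LR+ = 23.5, LR- = 0.26, DOR = 92.2
```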

  9. Bootstrap imputation with a disease probability model minimized bias from misclassification due to administrative database codes.

    PubMed

    van Walraven, Carl

    2017-04-01

    Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures, including: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized by using bootstrap methods to impute condition status from multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
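
    A minimal sketch of the bootstrap-imputation idea described above: rather than dichotomizing each patient's model-derived disease probability, disease status is redrawn as a Bernoulli variable in each bootstrap replicate and the estimates are averaged. The probabilities are invented.

```python
import random

probs = [0.05, 0.10, 0.85, 0.40, 0.95, 0.02, 0.70]  # model-derived P(disease)
B = 1_000
random.seed(1)

prevalences = []
for _ in range(B):
    statuses = [random.random() < p for p in probs]  # impute status per patient
    prevalences.append(sum(statuses) / len(statuses))

print(f"bootstrap prevalence = {sum(prevalences) / B:.3f}")   # ~= mean(probs)
# a naive 0.5 threshold would instead call 3/7 = 0.429 "diseased"
```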

  10. Validity of administrative database code algorithms to identify vascular access placement, surgical revisions, and secondary patency.

    PubMed

    Al-Jaishi, Ahmed A; Moist, Louise M; Oliver, Matthew J; Nash, Danielle M; Fleet, Jamie L; Garg, Amit X; Lok, Charmaine E

    2018-03-01

    We assessed the validity of physician billing codes and hospital admission codes (International Classification of Diseases, 10th Revision) for identifying vascular access placement, secondary patency, and surgical revisions in administrative data. We included adults (≥18 years) with a vascular access placed between 1 April 2004 and 31 March 2013 at the University Health Network, Toronto. Our reference standard was a prospective vascular access database (VASPRO) that contains information on vascular access type and dates of placement, dates of failure, and any revisions. We used VASPRO to assess the validity of different administrative coding algorithms by calculating the sensitivity, specificity, and positive predictive values of vascular access events. The sensitivity (95% confidence interval) of the best-performing algorithm to identify arteriovenous access placement was 86% (83%, 89%) and the specificity was 92% (89%, 93%). The corresponding numbers to identify catheter insertion were 84% (82%, 86%) and 84% (80%, 87%), respectively. The sensitivity of the best-performing coding algorithm to identify arteriovenous access surgical revisions was 81% (67%, 90%) and the specificity was 89% (87%, 90%). The algorithms capturing arteriovenous access placement and catheter insertion had positive predictive values greater than 90%, whereas the arteriovenous access surgical revisions algorithm had a positive predictive value of 20%. The duration of arteriovenous access secondary patency was on average 578 (553, 603) days in VASPRO and 555 (530, 580) days in administrative databases. Administrative data algorithms have fair to good operating characteristics to identify vascular access placement and arteriovenous access secondary patency. The low positive predictive value of the surgical revisions algorithm suggests that administrative data should only be used to rule out the occurrence of an event.

  11. Construction of crystal structure prototype database: methods and applications.

    PubMed

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-26

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.
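
    The prototype-extraction step can be sketched with standard hierarchical clustering tools. The fingerprints below are random stand-ins for the paper's interatomic-distance-based similarity measure, so this only illustrates the clustering-and-representative pattern, not the CSPD method itself:

    ```python
    # Sketch: prototype extraction via agglomerative clustering on
    # structure-similarity distances.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)
    # Hypothetical fingerprints, e.g., interatomic-distance histograms.
    fingerprints = rng.random((30, 16))

    # Pairwise distances between fingerprints -> hierarchical clustering.
    d = pdist(fingerprints, metric="euclidean")
    tree = linkage(d, method="average")
    labels = fcluster(tree, t=0.9, criterion="distance")

    # Keep one representative ("prototype") per cluster.
    prototypes = {lab: int(np.where(labels == lab)[0][0]) for lab in set(labels)}
    print(f"{len(prototypes)} prototypes from 30 structures")
    ```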

  12. Construction of crystal structure prototype database: methods and applications

    NASA Astrophysics Data System (ADS)

    Su, Chuanxun; Lv, Jian; Li, Quan; Wang, Hui; Zhang, Lijun; Wang, Yanchao; Ma, Yanming

    2017-04-01

    Crystal structure prototype data have become a useful source of information for materials discovery in the fields of crystallography, chemistry, physics, and materials science. This work reports the development of a robust and efficient method for assessing the similarity of structures on the basis of their interatomic distances. Using this method, we proposed a simple and unambiguous definition of crystal structure prototype based on hierarchical clustering theory, and constructed the crystal structure prototype database (CSPD) by filtering the known crystallographic structures in a database. Using a similar method, a structure prototype analysis package (SPAP) program was developed to remove similar structures from CALYPSO prediction results and to extract predicted low-energy structures for a separate theoretical structure database. A series of statistics describing the distribution of crystal structure prototypes in the CSPD was compiled to provide important insights for structure prediction and high-throughput calculations. Illustrative examples of the application of the proposed database are given, including the generation of initial structures for structure prediction and the determination of the prototype structure in databases. These examples demonstrate the CSPD to be a generally applicable and useful tool for materials discovery.

  13. Users Manual for the Federal Aviation Administration Research and Development Electromagnetic Database (FRED) For Windows: Version 2.0

    DOT National Transportation Integrated Search

    1998-02-01

    This document provides instructional guidelines to users of the Federal Aviation Administration (FAA) Research and Development Electromagnetic Database (FRED) Version 2.0. Instructions are provided on how to access FRED from a compact disk (CD) and h...

  14. Linkage of the Canadian Study of Health and Aging to provincial administrative health care databases in Nova Scotia.

    PubMed

    Yip, A M; Kephart, G; Rockwood, K

    2001-01-01

    The Canadian Study of Health and Aging (CSHA) was a cohort study that included 528 Nova Scotian community-dwelling participants. Linkage of CSHA and provincial Medical Services Insurance (MSI) data enabled examination of health care utilization in this subsample. This article discusses methodological and ethical issues of database linkage and explores variation in the use of health services by demographic variables and health status. Utilization over 24 months following baseline was extracted from MSI's physician claims, hospital discharge abstracts, and Pharmacare claims databases. Twenty-nine subjects refused consent for access to their MSI file; health card numbers for three others could not be retrieved. A significant difference in healthcare use by age and self-rated health was revealed. Linkage of population-based data with provincial administrative health care databases has the potential to guide health care planning and resource allocation. This process must include steps to ensure protection of confidentiality. Standard practices for linkage consent and routine follow-up should be adopted. The Canadian Study of Health and Aging (CSHA) began in 1991-92 to explore dementia, frailty, and adverse health outcomes (Canadian Study of Health and Aging Working Group, 1994). The original CSHA proposal included linkage to provincial administrative health care databases by the individual CSHA study centers to enhance information on health care utilization and outcomes of study participants. In Nova Scotia, the Medical Services Insurance (MSI) administration, which drew the sampling frame for the original CSHA, did not retain the list of corresponding health card numbers. Furthermore, consent for this access was not asked of participants at the time of the first interview. The objectives of the study reported here were to examine the feasibility and ethical considerations of linking data from the CSHA to MSI utilization data, and to explore variation in health

  15. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1992-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.

  16. Random vs. systematic sampling from administrative databases involving human subjects.

    PubMed

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-squared tests and unpaired t tests were performed to determine whether any of the differences (descriptively greater than 7 percentage points or 7 yr) were also statistically significant. The strength of the agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient; it can also be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
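
    The two schemes compared in this study are easy to express in code. A minimal sketch on invented membership data; the interval-based selection for SS and the random draw for SRS follow the usual textbook definitions:

    ```python
    # Sketch: simple random sampling (SRS) vs. systematic sampling (SS)
    # from an ordered membership list.
    import random

    random.seed(42)
    population = [{"id": i, "age": random.gauss(45, 10)} for i in range(10_000)]

    def srs(pop, n):
        return random.sample(pop, n)

    def systematic(pop, n):
        k = len(pop) // n               # sampling interval
        start = random.randrange(k)     # random start within first interval
        return pop[start::k][:n]

    for n in (50, 200, 500):
        m_srs = sum(p["age"] for p in srs(population, n)) / n
        m_ss = sum(p["age"] for p in systematic(population, n)) / n
        print(f"n={n}: SRS mean age {m_srs:.1f}, SS mean age {m_ss:.1f}")
    ```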

  17. Usefulness of administrative databases for risk adjustment of adverse events in surgical patients.

    PubMed

    Rodrigo-Rincón, Isabel; Martin-Vizcaíno, Marta P; Tirapu-León, Belén; Zabalza-López, Pedro; Abad-Vicente, Francisco J; Merino-Peralta, Asunción; Oteiza-Martínez, Fabiola

    2016-03-01

    The aim of this study was to assess the usefulness of clinical-administrative databases for the development of risk adjustment in the assessment of adverse events in surgical patients. The study was conducted at the Hospital of Navarra, a tertiary teaching hospital in northern Spain. We studied 1602 hospitalizations of surgical patients from 2008 to 2010. We analysed 40 comorbidity variables included in the National Surgical Quality Improvement Program (NSQIP) of the American College of Surgeons using 2 sources of information: the clinical and administrative database (CADB) and the data extracted from the complete clinical records (CR), which were considered the gold standard. Variables were catalogued according to compliance with the established criteria: sensitivity, positive predictive value and kappa coefficient >0.6. The average number of comorbidities per study participant was 1.6 using the CR and 0.95 based on the CADB (p<.0001). Thirteen types of comorbidities (accounting for 8% of the comorbidities detected in the CR) were not identified when the CADB was the source of information. Five of the 27 remaining comorbidities complied with the 3 established criteria; 2 pathologies fulfilled 2 criteria, whereas 11 fulfilled 1, and 9 did not fulfil any criterion. The CADB detected prevalent comorbidities such as hypertension and diabetes. However, the CADB did not provide enough information to assess the variables needed to perform the risk adjustment proposed by the NSQIP for the assessment of adverse events in surgical patients. Copyright © 2015. Published by Elsevier España, S.L.U.

  18. Administrative database concerns: accuracy of International Classification of Diseases, Ninth Revision coding is poor for preoperative anemia in patients undergoing spinal fusion.

    PubMed

    Golinvaux, Nicholas S; Bohl, Daniel D; Basques, Bryce A; Grauer, Jonathan N

    2014-11-15

    Cross-sectional study. To objectively evaluate the ability of International Classification of Diseases, Ninth Revision (ICD-9) codes, which are used as the foundation for administratively coded national databases, to identify preoperative anemia in patients undergoing spinal fusion. National database research in spine surgery continues to rise. However, the validity of studies based on administratively coded data, such as the Nationwide Inpatient Sample, is dependent on the accuracy of ICD-9 coding. Such coding has previously been found to have poor sensitivity to conditions such as obesity and infection. A cross-sectional study was performed at an academic medical center. Hospital-reported anemia ICD-9 codes (those used for administratively coded databases) were directly compared with the chart-documented preoperative hematocrits (true laboratory values). A patient was deemed to have preoperative anemia if the preoperative hematocrit was less than the lower end of the normal range (36.0% for females and 41.0% for males). The study included 260 patients. Of these, 37 patients (14.2%) were anemic; however, only 10 patients (3.8%) received an "anemia" ICD-9 code. Of the 10 patients coded as anemic, 7 were anemic by definition, whereas 3 were not, and thus were miscoded. This equates to an ICD-9 code sensitivity of 0.19, with a specificity of 0.99, and positive and negative predictive values of 0.70 and 0.88, respectively. This study uses preoperative anemia to demonstrate the potential inaccuracies of ICD-9 coding. These results have implications for publications using databases that are compiled from ICD-9 coding data. Furthermore, the findings of the current investigation raise concerns regarding the accuracy of additional comorbidities. Although administrative databases are powerful resources that provide large sample sizes, it is crucial that we further consider the quality of the data source relative to its intended purpose.
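
    The reported operating characteristics follow directly from the counts in the abstract, which makes for a useful worked check:

    ```python
    # Worked check of the reported metrics (260 patients, 37 truly anemic,
    # 10 coded as anemic, 7 of the coded truly anemic).
    tp, coded, anemic, total = 7, 10, 37, 260
    fp = coded - tp          # 3 miscoded as anemic
    fn = anemic - tp         # 30 anemic patients never coded
    tn = total - tp - fp - fn

    print(f"sensitivity {tp/(tp+fn):.2f}")   # 7/37    = 0.19
    print(f"specificity {tn/(tn+fp):.2f}")   # 220/223 = 0.99
    print(f"PPV {tp/(tp+fp):.2f}")           # 7/10    = 0.70
    print(f"NPV {tn/(tn+fn):.2f}")           # 220/250 = 0.88
    ```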

  19. VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, N.; Sellis, Timos

    1993-01-01

    One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searching, performing inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.

  20. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 2 2014-10-01 2012-10-01 true Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC...

  1. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 2 2011-10-01 2011-10-01 false Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC...

  2. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 2 2012-10-01 2012-10-01 false Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC...

  3. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 2 2013-10-01 2012-10-01 true Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC...

  4. 45 CFR 205.30 - Methods of administration.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false Methods of administration. 205.30 Section 205.30 Public Welfare Regulations Relating to Public Welfare OFFICE OF FAMILY ASSISTANCE (ASSISTANCE PROGRAMS), ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION-PUBLIC...

  5. A practical approach for inexpensive searches of radiology report databases.

    PubMed

    Desjardins, Benoit; Hamilton, R Curtis

    2007-06-01

    We present a method to perform full-text searches of radiology reports for the large number of departments that do not have this ability as part of their radiology or hospital information system. A tool written in Microsoft Access (front end) has been designed to search a server (back end) containing an indexed weekly backup copy of the full relational database extracted from a radiology information system (RIS). This front-end/back-end approach has been implemented in a large academic radiology department, and is used for teaching, research and administrative purposes. The second weekly backup of the 80-GB, 4-million-record RIS database takes 2 hours. Further indexing of the exported radiology reports takes 6 hours. Individual searches typically take less than 1 minute on the indexed database and 30-60 minutes on the nonindexed database. Guidelines to properly address privacy and institutional review board issues are closely followed by all users. This method has the potential to improve teaching, research, and administrative programs within radiology departments that cannot afford more expensive technology.
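
    The pattern of exporting reports into an indexed store that supports fast full-text queries can be sketched with SQLite's FTS5 extension; this is an assumption chosen for illustration, since the study itself used a Microsoft Access front end over a RIS backup:

    ```python
    # Sketch: index report text once, then run fast full-text queries.
    # Requires an SQLite build with FTS5 (standard in most CPython builds).
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE VIRTUAL TABLE reports USING fts5(accession, body)")
    con.executemany(
        "INSERT INTO reports VALUES (?, ?)",
        [("RAD001", "No acute intracranial hemorrhage."),
         ("RAD002", "Small left pleural effusion, stable."),
         ("RAD003", "Findings concerning for pulmonary embolism.")],
    )

    # Indexed full-text query: sub-second on millions of rows once indexed.
    for row in con.execute(
            "SELECT accession FROM reports WHERE reports MATCH ?", ("effusion",)):
        print(row[0])
    ```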

  6. Database Administration: Concepts, Tools, Experiences, and Problems.

    ERIC Educational Resources Information Center

    Leong-Hong, Belkis; Marron, Beatrice

    The concepts of data base administration, the role of the data base administrator (DBA), and computer software tools useful in data base administration are described in order to assist data base technologists and managers. A study of DBA's in the Federal Government is detailed in terms of the functions they perform, the software tools they use,…

  7. A Relational Database System for Student Use.

    ERIC Educational Resources Information Center

    Fertuck, Len

    1982-01-01

    Describes an APL implementation of a relational database system suitable for use in a teaching environment in which database development and database administration are studied, and discusses the functions of the user and the database administrator. An appendix illustrating system operation and an eight-item reference list are attached. (Author/JL)

  8. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations.

    PubMed

    Kim, Seung Won; Kim, Bae-Hwan

    2016-07-01

    Animal testing was traditionally used in the cosmetics industry to confirm product safety, but it has begun to be banned; alternative methods to replace animal experiments are either in development or being validated worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team, the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials.

  9. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...

  10. 47 CFR 64.623 - Administrator requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... administrator of the TRS User Registration Database, the administrator of the VRS Access Technology Reference... parties with a vested interest in the outcome of TRS-related numbering administration and activities. (4) None of the administrator of the TRS User Registration Database, the administrator of the VRS Access...

  11. Usefulness of Canadian Public Health Insurance Administrative Databases to Assess Breast and Ovarian Cancer Screening Imaging Technologies for BRCA1/2 Mutation Carriers.

    PubMed

    Larouche, Geneviève; Chiquette, Jocelyne; Plante, Marie; Pelletier, Sylvie; Simard, Jacques; Dorval, Michel

    2016-11-01

    In Canada, recommendations for the clinical management of hereditary breast and ovarian cancer among individuals carrying a deleterious BRCA1 or BRCA2 mutation have been available since 2007. Eight years later, very little is known about the uptake of screening and risk-reduction measures in this population. Because Canada's public health care system falls under provincial jurisdictions, using provincial health care administrative databases appears to be a valuable option for assessing the management of BRCA1/2 mutation carriers. The objective was to explore the usefulness of public health insurance administrative databases in British Columbia, Ontario, and Quebec to assess management after BRCA1/2 genetic testing. Official public health insurance documents were considered potentially useful if they had specific procedure codes and pertained to procedures performed in the public and private health care systems. All 3 administrative databases have specific procedure codes for mammography and breast ultrasounds. Only Quebec and Ontario have a specific procedure code for breast magnetic resonance imaging. It is impossible to assess, on an individual basis, the frequency of other screening exams, with the exception of CA-125 testing in British Columbia. Screenings done in private practice are excluded from the administrative databases unless covered by special agreements for reimbursement, such as all breast imaging exams in Ontario and mammograms in British Columbia and Quebec. There are no specific procedure codes for risk-reduction surgeries for breast and ovarian cancer. Population-based assessment of breast and ovarian cancer risk management strategies other than mammographic screening, using only administrative data, is currently challenging in the 3 Canadian provinces studied. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  12. A Web-based Alternative Non-animal Method Database for Safety Cosmetic Evaluations

    PubMed Central

    Kim, Seung Won; Kim, Bae-Hwan

    2016-01-01

    Animal testing was traditionally used in the cosmetics industry to confirm product safety, but it has begun to be banned; alternative methods to replace animal experiments are either in development or being validated worldwide. Research data related to test substances are critical for developing novel alternative tests. Moreover, safety information on cosmetic materials has neither been collected in a database nor shared among researchers. Therefore, it is imperative to build and share a database of safety information on toxicological mechanisms and pathways collected through in vivo, in vitro, and in silico methods. We developed the CAMSEC database (named after the research team, the Consortium of Alternative Methods for Safety Evaluation of Cosmetics) to fulfill this purpose. On the same website, our aim is to provide updates on current alternative research methods in Korea. The database will not be used directly to conduct safety evaluations, but researchers or regulatory individuals can use it to facilitate their work in formulating safety evaluations for cosmetic materials. We hope this database will help establish new alternative research methods to conduct efficient safety evaluations of cosmetic materials. PMID:27437094

  13. Prehospital aspirin administration for acute coronary syndrome (ACS) in the USA: an EMS quality assessment using the NEMSIS 2011 database.

    PubMed

    Tataris, Katie L; Mercer, Mary P; Govindarajan, Prasanthi

    2015-11-01

    National practice guidelines recommend early aspirin administration to reduce mortality in acute coronary syndrome (ACS). Although timely administration of aspirin has been shown to reduce mortality in ACS by 23%, prior regional Emergency Medical Service (EMS) data have shown inadequate prehospital administration of aspirin in patients with suspected cardiac ischaemia. Using the National EMS Information System (NEMSIS) database, we sought to determine (1) the proportion of patients with suspected cardiac ischaemia who received aspirin and (2) patient and prehospital characteristics that independently predicted administration of aspirin. Analysis of the 2011 NEMSIS database targeted patients aged ≥40 years with a paramedic primary impression of 'chest pain'. To identify patients with chest pain of suspected cardiac aetiology, we included those for whom an ECG or cardiac monitoring had been performed. Trauma-related chest pain and basic life support transports were excluded. The primary outcome was presence of aspirin administration. Patient (age, sex, race/ethnicity and insurance status) and regional characteristics where the EMS transport occurred were also obtained. Multivariate logistic regression was used to assess the independent association of patient and regional factors with aspirin administration for suspected cardiac ischaemia. Of the total 14,371,941 EMS incidents in the 2011 database, 198,231 patients met our inclusion criteria (1.3%). Of those, 45.4% received aspirin from the EMS provider. When compared with non-Hispanic white patients, several groups had greater odds of aspirin administration by EMS: non-Hispanic black patients (OR 1.49, 95% CI 1.44 to 1.55), non-Hispanic Asians (OR 1.62, 95% CI 1.21 to 2.18), Hispanics (OR 1.71, 95% CI 1.54 to 1.91) and other non-Hispanics (OR 1.27, 95% CI 1.07 to 1.51). Patients living in the Southern region of the USA (OR 0.59, 95% CI 0.57 to 0.62) and patients with governmental (federally administered such as

  14. System, method and apparatus for generating phrases from a database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes one or more individual terms, one or more sequences of terms, or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
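
    A heavily simplified sketch of the idea: derive contextual relations (here just bigram adjacency, a toy stand-in for the patent's relational model) from a small corpus, then expand a query term into candidate phrases:

    ```python
    # Sketch: expand a query term into phrases via contextual relations.
    from collections import defaultdict

    corpus = ("engine failure on takeoff reported by crew "
              "engine fire warning on takeoff roll").split()

    # Contextual relation: which terms follow which (bigram adjacency).
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def phrases(term, length=3):
        out = [[term]]
        for _ in range(length - 1):
            out = [p + [nxt] for p in out for nxt in follows.get(p[-1], [])]
        return [" ".join(p) for p in out]

    print(phrases("engine"))  # e.g., 'engine failure on', 'engine fire warning'
    ```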

  15. Validity of ICD-9-CM codes for breast, lung and colorectal cancers in three Italian administrative healthcare databases: a diagnostic accuracy study protocol.

    PubMed

    Abraha, Iosief; Serraino, Diego; Giovannini, Gianni; Stracci, Fabrizio; Casucci, Paola; Alessandrini, Giuliana; Bidoli, Ettore; Chiari, Rita; Cirocchi, Roberto; De Giorgi, Marcello; Franchini, David; Vitale, Maria Francesca; Fusco, Mario; Montedori, Alessandro

    2016-03-25

    Administrative healthcare databases are useful tools to study healthcare outcomes and to monitor the health status of a population. Patients with cancer can be identified through disease-specific codes, prescriptions and physician claims, but prior validation is required to achieve an accurate case definition. The objective of this protocol is to assess the accuracy of International Classification of Diseases Ninth Revision-Clinical Modification (ICD-9-CM) codes for breast, lung and colorectal cancers in identifying patients diagnosed with the relative disease in three Italian administrative databases. Data from the administrative databases of Umbria Region (910,000 residents), Local Health Unit 3 of Napoli (1,170,000 residents) and Friuli-Venezia Giulia Region (1,227,000 residents) will be considered. In each administrative database, patients with the first occurrence of diagnosis of breast, lung or colorectal cancer between 2012 and 2014 will be identified using the following groups of ICD-9-CM codes in primary position: (1) 233.0 and (2) 174.x for breast cancer; (3) 162.x for lung cancer; (4) 153.x for colon cancer and (5) 154.0-154.1 and 154.8 for rectal cancer. Only incident cases will be considered, that is, excluding cases that have the same diagnosis in the 5 years (2007-2011) before the period of interest. A random sample of cases and non-cases will be selected from each administrative database and the corresponding medical charts will be assessed for validation by pairs of trained, independent reviewers. Case ascertainment within the medical charts will be based on (1) the presence of a primary nodular lesion in the breast, lung or colon-rectum, documented with imaging or endoscopy and (2) a cytological or histological documentation of cancer from a primary or metastatic site. Sensitivity and specificity with 95% CIs will be calculated. Study results will be disseminated widely through peer-reviewed publications and presentations at national and

  16. Detecting chronic kidney disease in population-based administrative databases using an algorithm of hospital encounter and physician claim codes

    PubMed Central

    2013-01-01

    Background: Large, population-based administrative healthcare databases can be used to identify patients with chronic kidney disease (CKD) when serum creatinine laboratory results are unavailable. We examined the validity of algorithms that used combined hospital encounter and physician claims database codes for the detection of CKD in Ontario, Canada. Methods: We accrued 123,499 patients over the age of 65 from 2007 to 2010. All patients had a baseline serum creatinine value to estimate glomerular filtration rate (eGFR). We developed an algorithm of physician claims and hospital encounter codes to search administrative databases for the presence of CKD. We determined the sensitivity, specificity, positive and negative predictive values of this algorithm to detect our primary threshold of CKD, an eGFR <45 mL/min per 1.73 m² (15.4% of patients). We also assessed serum creatinine and eGFR values in patients with and without CKD codes (algorithm positive and negative, respectively). Results: Our algorithm required evidence of at least one of eleven CKD codes, and 7.7% of patients were algorithm positive. The sensitivity was 32.7% [95% confidence interval (95% CI): 32.0 to 33.3%]. Sensitivity was lower in women compared to men (25.7 vs. 43.7%; p < 0.001) and in the oldest age category (over 80 vs. 66 to 80; 28.4 vs. 37.6%; p < 0.001). All specificities were over 94%. The positive and negative predictive values were 65.4% (95% CI: 64.4 to 66.3%) and 88.8% (95% CI: 88.6 to 89.0%), respectively. In algorithm-positive patients, the median [interquartile range (IQR)] baseline serum creatinine value was 135 μmol/L (106 to 179 μmol/L), compared to 82 μmol/L (69 to 98 μmol/L) for algorithm-negative patients. Corresponding eGFR values were 38 mL/min per 1.73 m² (26 to 51 mL/min per 1.73 m²) vs. 69 mL/min per 1.73 m² (56 to 82 mL/min per 1.73 m²), respectively. Conclusions: Patients with CKD as identified by our database algorithm had distinctly higher baseline serum
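
    The case-detection rule itself is simple to express: a patient is algorithm-positive when at least one code from the CKD code set appears in their hospital or physician claims. A minimal sketch with placeholder codes (not the study's actual list of eleven):

    ```python
    # Sketch: flag a patient as CKD algorithm-positive if any claim code
    # falls in the CKD code set. Code values here are hypothetical.
    CKD_CODES = {"585", "5859", "N18", "403", "404"}

    def algorithm_positive(claims):
        return any(code in CKD_CODES for code in claims)

    patients = {
        "A": ["401", "N18", "250"],   # flagged
        "B": ["401", "250"],          # not flagged
    }
    for pid, claims in patients.items():
        print(pid, algorithm_positive(claims))
    ```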

  17. Cost and cost-effectiveness studies in urologic oncology using large administrative databases.

    PubMed

    Wang, Ye; Mossanen, Matthew; Chang, Steven L

    2018-04-01

    Urologic cancers are not only among the most common types of cancers, but also among the most expensive cancers to treat in the United States. This study aimed to review the use of cost-effectiveness analyses (CEAs) and other cost analyses in urologic oncology using large databases, to better understand the value of management strategies for these cancers. We conducted a literature review of CEAs and other cost analyses in urologic oncology using large databases. The options for and costs of diagnosing, treating, and following patients with urologic cancers can be expected to rise in the coming years. There are numerous opportunities in each urologic cancer to use CEAs both to lower costs and to provide high-quality services. Improved cancer care must balance the integration of novelty with ensuring reasonable costs to patients and the health care system. With the increasing focus on cost containment, appreciating the value of competing strategies in caring for our patients is pivotal. Leveraging methods such as CEAs and harnessing large databases may help evaluate the merit of established or emerging strategies. Copyright © 2018 Elsevier Inc. All rights reserved.
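
    The core arithmetic of a CEA is the incremental cost-effectiveness ratio (ICER). A minimal sketch with invented numbers:

    ```python
    # Sketch: incremental cost-effectiveness ratio of a new strategy vs.
    # the standard of care.
    def icer(cost_new, cost_std, effect_new, effect_std):
        """Incremental cost per unit of effect (e.g., per QALY gained)."""
        return (cost_new - cost_std) / (effect_new - effect_std)

    # Hypothetical: new strategy costs $12,000 more and adds 0.3 QALYs.
    print(f"ICER = ${icer(52_000, 40_000, 2.1, 1.8):,.0f} per QALY")
    ```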

  18. The MAO NASU Plate Archive Database. Current Status and Perspectives

    NASA Astrophysics Data System (ADS)

    Pakuliak, L. K.; Sergeeva, T. P.

    2006-04-01

    The preliminary online version of the database of the MAO NASU plate archive is built on the relational database management system MySQL. It permits easy supplementing of the database with new collections of astronegatives, provides high flexibility in constructing SQL queries for data search optimization, PHP Basic Authorization-protected access to the administrative interface, and a wide range of search parameters. The current status of the database will be reported, and a brief description of the search engine and the means of supporting database integrity will be given. Methods and means of data verification and tasks for further development will be discussed.

  19. Effects of impairment in activities of daily living on predicting mortality following hip fracture surgery in studies using administrative healthcare databases

    PubMed Central

    2014-01-01

    Background: Impairment in activities of daily living (ADL) is an important predictor of outcomes, although many administrative databases lack information on ADL function. We evaluated the impact of ADL function on predicting postoperative mortality among older adults with hip fractures in Ontario, Canada. Methods: Sociodemographic and medical correlates of ADL impairment were first identified in a population of older adults with hip fractures who had ADL information available prior to hip fracture. A logistic regression model was developed to predict 360-day postoperative mortality, and the predictive ability of this model was compared when ADL impairment was included or omitted from the model. Results: The study sample (N = 1,329) had a mean age of 85.2 years, was 72.8% female, and the majority resided in long-term care (78.5%). Overall, 36.4% of individuals died within 360 days of surgery. After controlling for age, sex, medical comorbidity and medical conditions correlated with ADL impairment, the addition of ADL measures improved the logistic regression model for predicting 360-day mortality (AIC = 1706.9 vs. 1695.0; c-statistic = 0.65 vs. 0.67; difference in −2 log-likelihood: χ² = 16.9, p = 0.002). Conclusions: Direct measures of ADL impairment provide additional prognostic information on mortality for older adults with hip fractures, even after controlling for medical comorbidity. Observational studies using administrative databases without measures of ADLs may be prone to confounding and bias, and case-mix adjustment for hip fracture outcomes should include ADL measures where these are available. PMID:24472282
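
    The model comparison reported above is a likelihood-ratio test on nested logistic models. A sketch of the arithmetic, assuming four ADL terms were added (a degrees-of-freedom choice consistent with the reported χ² = 16.9 and p = 0.002; the −2 log-likelihood values are hypothetical):

    ```python
    # Sketch: likelihood-ratio test on the change in -2 log-likelihood when
    # ADL terms are added to a nested logistic model.
    from scipy.stats import chi2

    neg2ll_without_adl = 1700.0   # hypothetical -2LL, reduced model
    neg2ll_with_adl = 1683.1      # hypothetical -2LL, full model (+4 ADL terms)

    lr_stat = neg2ll_without_adl - neg2ll_with_adl   # 16.9
    p = chi2.sf(lr_stat, df=4)
    print(f"chi2 = {lr_stat:.1f}, p = {p:.3f}")      # ~0.002
    ```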

  20. Understanding Differences in Administrative and Audited Patient Data in Cardiac Surgery: Comparison of the University HealthSystem Consortium and Society of Thoracic Surgeons Databases.

    PubMed

    Prasad, Anjali; Helder, Meghana R; Brown, Dwight A; Schaff, Hartzell V

    2016-10-01

    The University HealthSystem Consortium (UHC) administrative database has been used increasingly as a quality indicator for hospitals and even individual surgeons. We aimed to determine the accuracy of cardiac surgical data in the administrative UHC database vs data in the clinical Society of Thoracic Surgeons database. We reviewed demographic and outcomes information of patients with aortic valve replacement (AVR), mitral valve replacement (MVR), and coronary artery bypass grafting (CABG) surgery between January 1, 2012, and December 31, 2013. Data collected in aggregate and compared across the databases included case volume, physician specialty coding, patient age and sex, comorbidities, mortality rate, and postoperative complications. In these 2 years, the UHC database recorded 1,270 AVRs, 355 MVRs, and 1,473 CABGs. The Society of Thoracic Surgeons database case volumes were less by 2% to 12% (1,219 AVRs; 316 MVRs; and 1,442 CABGs). Errors in physician specialty coding occurred in UHC data (AVR, 0.6%; MVR, 0.8%; and CABG, 0.7%). In matched patients from each database, demographic age and sex information was identical. Although definitions differed in the databases, percentages of patients with at least one comorbidity were similar. Hospital mortality rates were similar as well, but postoperative recorded complications differed greatly. In comparing the 2 databases, we found similarity in patient demographic information and percentage of patients with comorbidities. The small difference in volumes of each operation type and the larger disparity in postoperative complications between the databases were related to differences in data definition, data collection, and coding errors. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  1. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest Metropolitan...

  2. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest Metropolitan...

  3. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest Metropolitan...

  4. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest Metropolitan...

  5. 47 CFR 52.23 - Deployment of long-term database methods for number portability by LECs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment of long-term database methods for... database methods for number portability by LECs. (a) Subject to paragraphs (b) and (c) of this section, all... LECs must provide a long-term database method for number portability in the 100 largest Metropolitan...

  6. A two-step approach for mining patient treatment pathways in administrative healthcare databases.

    PubMed

    Najjar, Ahmed; Reinharz, Daniel; Girouard, Catherine; Gagné, Christian

    2018-05-01

    Clustering electronic medical records allows the discovery of information on healthcare practices. Entries in such medical records are usually composed of a succession of diagnostic or therapeutic steps. The corresponding processes are complex and heterogeneous, since they depend on medical knowledge integrating clinical guidelines, the physician's individual experience, and patient data and conditions. To analyze such data, we first propose to cluster medical visits, consultations, and hospital stays into homogeneous groups, and then to construct higher-level patient treatment pathways over these different groups. These pathways are then also clustered to distill typical pathways, enabling interpretation of the clusters by experts. This approach is evaluated on a real-world administrative database of elderly people in Québec suffering from heart failure. Copyright © 2018 Elsevier B.V. All rights reserved.
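
    The two-step structure can be sketched with off-the-shelf tools: cluster events, encode each patient as a sequence of event-cluster labels, then cluster those sequences. Everything below (features, distances, cluster counts) is an invented stand-in for the paper's pipeline:

    ```python
    # Sketch: (1) cluster healthcare events, (2) cluster patient pathways
    # expressed as sequences of event-cluster labels.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Step 1: cluster events (rows = visits/stays, cols = coded features).
    events = rng.random((200, 5))
    event_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(events)

    # Step 2: patients as label sequences; cluster by pairwise edit distance.
    pathways = [event_labels[i:i + 8].tolist() for i in range(0, 160, 8)]

    def edit_dist(a, b):
        dp = np.arange(len(b) + 1)
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
        return dp[-1]

    n = len(pathways)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = edit_dist(pathways[i], pathways[j])

    clusters = fcluster(linkage(squareform(D), "average"), t=3, criterion="maxclust")
    print("typical pathway groups:", sorted(set(clusters)))
    ```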

  7. GMOMETHODS: the European Union database of reference methods for GMO analysis.

    PubMed

    Bonfini, Laura; Van den Bulcke, Marc H; Mazzara, Marco; Ben, Enrico; Patak, Alexandre

    2012-01-01

    In order to provide reliable and harmonized information on methods for GMO (genetically modified organism) analysis, we have published a database called "GMOMETHODS" that supplies information on PCR assays validated according to the principles and requirements of ISO 5725 and/or the International Union of Pure and Applied Chemistry protocol. In addition, the database contains methods that have been verified by the European Union Reference Laboratory for Genetically Modified Food and Feed in the context of compliance with a European Union legislative act. The web application provides search capabilities to retrieve primer and probe sequence information on the available methods. It further supplies core data required by analytical labs to carry out GM tests and comprises information on the applied reference material and plasmid standards. The GMOMETHODS database currently contains 118 different PCR methods, allowing identification of 51 single GM events and 18 taxon-specific genes in a sample. It also provides screening assays for the detection of eight different genetic elements commonly used for the development of GMOs. The application is referred to by the Biosafety Clearing House, a global mechanism set up by the Cartagena Protocol on Biosafety to facilitate the exchange of information on Living Modified Organisms. The publication of the GMOMETHODS database can be considered an important step toward worldwide standardization and harmonization in GMO analysis.

  8. A Quality System Database

    NASA Technical Reports Server (NTRS)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  9. Mobile tablet-based therapies following stroke: A systematic scoping review of administrative methods and patient experiences

    PubMed Central

    Ramsay, Tim; Johnson, Dylan; Dowlatshahi, Dar

    2018-01-01

    Background and purpose: Stroke survivors are often left with deficits requiring rehabilitation to recover function, and yet many are unable to access rehabilitative therapies. Mobile tablet-based therapies (MTBTs) may be a resource-efficient means of improving access to timely rehabilitation. It is unclear what MTBTs have been attempted following stroke, how they were administered, and how patients experienced the therapies. This review summarizes studies of MTBTs following stroke in terms of administrative methods and patient experiences to inform treatment feasibility. Methods: Articles were eligible if they reported the results of an MTBT attempted with stroke participants. Six research databases were searched along with grey literature sources, trial registries, and article references. Intervention administration details and patient experiences were summarized. Results: The search returned 903 articles, of which 23 were eligible for inclusion. Most studies were small, observational, and enrolled chronic stroke patients. Interventions commonly targeted communication, cognition, or fine-motor skills. Therapies tended to be personalized based on patient deficits using commercially available applications. The complexity of therapy instructions, fine-motor requirements, and unreliability of internet or cellular connections were identified as common barriers to tablet-based care. Conclusions: Stroke patients responded positively to MTBTs in both the inpatient and home settings. However, some support from therapists or caregivers may be required for patients to overcome barriers to care. Feasibility studies should continue to identify the administrative methods that minimize barriers to care and maximize patient adherence to prescribed therapy regimens. PMID:29360872

  10. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the centralised Oracle-based database services. Database on Demand (DBoD) empowers the user to perform certain actions traditionally performed by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, presently the open-community version of MySQL and single-instance Oracle database servers. This article describes the technology approach taken to face this challenge, the service level agreement (SLA) that the project provides, and possible evolution scenarios.

  11. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2006-08-08

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
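
    The pane-partitioning idea at the heart of this method can be sketched as a group-by over the retrieved tuples, with the specification's row and column fields as keys (field names below are invented for illustration):

    ```python
    # Sketch: route query-result tuples to panes of a visual table keyed by
    # the categorical fields named in a specification.
    import pandas as pd

    tuples = pd.DataFrame({
        "region":  ["East", "East", "West", "West"],
        "product": ["Tea", "Coffee", "Tea", "Coffee"],
        "sales":   [120, 90, 150, 80],
    })

    # Specification: rows partitioned by region, columns by product.
    panes = {key: group for key, group in tuples.groupby(["region", "product"])}
    for (row, col), data in panes.items():
        print(f"pane({row}, {col}): {data['sales'].tolist()}")
    ```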

  12. Computer systems and methods for the query and visualization of multidimensional database

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2010-05-11

    A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.

  13. Evaluating the Validity of a Two-stage Sample in a Birth Cohort Established from Administrative Databases.

    PubMed

    El-Zein, Mariam; Conus, Florence; Benedetti, Andrea; Parent, Marie-Elise; Rousseau, Marie-Claude

    2016-01-01

    When using administrative databases for epidemiologic research, a subsample of subjects can be interviewed, eliciting information on undocumented confounders. This article presents a thorough investigation of the validity of a two-stage sample encompassing an assessment of nonparticipation and quantification of the extent of bias. Established through record linkage of administrative databases, the Québec Birth Cohort on Immunity and Health (n = 81,496) aims to study the association between Bacillus Calmette-Guérin vaccination and asthma. Among 76,623 subjects classified in four Bacillus Calmette-Guérin-asthma strata, a two-stage sampling strategy with a balanced design was used to randomly select individuals for interviews. We compared stratum-specific sociodemographic characteristics and healthcare utilization of stage 2 participants (n = 1,643) with those of eligible nonparticipants (n = 74,980) and nonrespondents (n = 3,157). We used logistic regression to determine whether participation varied across strata according to these characteristics. The effect of nonparticipation was described by the relative odds ratio (ROR = OR_participants / OR_source population) for the association between sociodemographic characteristics and asthma. Parental age at childbirth, area of residence, family income, and healthcare utilization were comparable between groups. Participants were slightly more likely to be women and have a mother born in Québec. Participation did not vary across strata by sex, parental birthplace, or material and social deprivation. Estimates were not biased by nonparticipation; most RORs were below one and bias never exceeded 20%. Our analyses evaluate and provide a detailed demonstration of the validity of a two-stage sample for researchers assembling similar research infrastructures.
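
    The bias measure used here, the relative odds ratio, is a one-line computation once the two 2x2 tables are in hand. A sketch with made-up counts:

    ```python
    # Sketch: relative odds ratio comparing the exposure-outcome OR among
    # stage 2 participants with the same OR in the full source population.
    def odds_ratio(a, b, c, d):
        """2x2 table: a = exposed cases, b = exposed non-cases; c, d = unexposed."""
        return (a * d) / (b * c)

    or_participants = odds_ratio(60, 340, 110, 1133)        # hypothetical counts
    or_source = odds_ratio(2400, 14000, 4400, 45700)        # hypothetical counts

    ror = or_participants / or_source
    print(f"ROR = {ror:.2f}")   # values near 1 suggest little participation bias
    ```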

  14. Glycemic control and diabetes-related health care costs in type 2 diabetes; retrospective analysis based on clinical and administrative databases.

    PubMed

    Degli Esposti, Luca; Saragoni, Stefania; Buda, Stefano; Sturani, Alessandra; Degli Esposti, Ezio

    2013-01-01

    Diabetes is one of the most prevalent chronic diseases, and its prevalence is predicted to increase in the next two decades. Diabetes imposes a staggering financial burden on the health care system, so information about the costs and experiences of collecting and reporting quality measures of data is vital for practices deciding whether to adopt quality improvements or monitor existing initiatives. The aim of this study was to quantify the association between health care costs and level of glycemic control in patients with type 2 diabetes using clinical and administrative databases. A retrospective analysis using a large administrative database and a clinical registry containing laboratory results was performed. Patients were subdivided according to their glycated hemoglobin level. Multivariate analyses were used to control for differences in potential confounding factors, including age, gender, Charlson comorbidity index, presence of dyslipidemia, hypertension, or cardiovascular disease, and degree of adherence with antidiabetic drugs among the study groups. Of the total population of 700,000 subjects, 31,022 were identified as being diabetic (4.4% of the entire population). Of these, 21,586 met the study inclusion criteria. In total, 31.5% of patients had very poor glycemic control and 25.7% had excellent control. Over 2 years, the mean diabetes-related cost per person was: €1291.56 in patients with excellent control; €1545.99 in those with good control; €1584.07 in those with fair control; €1839.42 in those with poor control; and €1894.80 in those with very poor control. After adjustment, compared with the group having excellent control, the estimated excess cost per person associated with the groups with good control, fair control, poor control, and very poor control was €219.28, €264.65, €513.18, and €564.79, respectively. Many patients showed suboptimal glycemic control. Lower levels of glycated hemoglobin were associated with lower diabetes

  15. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including the...

  16. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including the...

  17. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including the...

  18. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including the...

  19. 47 CFR 52.31 - Deployment of long-term database methods for number portability by CMRS providers.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment of long-term database methods for... long-term database methods for number portability by CMRS providers. (a) By November 24, 2003, all covered CMRS providers must provide a long-term database method for number portability, including the...

  20. Administrative database code accuracy did not vary notably with changes in disease prevalence.

    PubMed

    van Walraven, Carl; English, Shane; Austin, Peter C

    2016-11-01

    Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly with disease prevalence. This study determined whether the accuracy of diagnostic codes varied with disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples with varying disease prevalence drawn from the two previous studies, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive values. Diagnostic code sensitivity and specificity did not change notably within clinically sensible ranges of disease prevalence. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
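
    The contrast the authors observe follows from Bayes' theorem: with sensitivity and specificity held fixed, predictive values must shift as prevalence changes. A minimal sketch:

    ```python
    # Sketch: predictive values as a function of prevalence for a code with
    # fixed sensitivity and specificity (illustrative values).
    def predictive_values(sens, spec, prev):
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    for prev in (0.01, 0.05, 0.20):
        ppv, npv = predictive_values(sens=0.80, spec=0.95, prev=prev)
        print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
    ```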

  1. The Québec BCG Vaccination Registry (1956–1992): assessing data quality and linkage with administrative health databases

    PubMed Central

    2014-01-01

    Background Vaccination registries have undoubtedly proven useful for estimating vaccination coverage as well as examining vaccine safety and effectiveness. However, their use for population health research is often limited. The Bacillus Calmette-Guérin (BCG) Vaccination Registry for the Canadian province of Québec comprises some 4 million vaccination records (1926-1992). This registry represents a unique opportunity to study potential associations between BCG vaccination and various health outcomes. So far, such studies have been hampered by the absence of a computerized version of the registry. We determined the completeness and accuracy of the recently computerized BCG Vaccination Registry, as well as examined its linkability with demographic and administrative medical databases. Methods Two systematically selected verification samples, each representing ~0.1% of the registry, were used to ascertain accuracy and completeness of the electronic BCG Vaccination Registry. Agreement between the paper formats [listings (n = 4,987 records) and vaccination certificates (n = 4,709 records)] and the electronic format was determined along several nominal and BCG-related variables. Linkage feasibility with the Birth Registry (probabilistic approach) and provincial Healthcare Registration File (deterministic approach) was examined using nominal identifiers for a random sample of 3,500 individuals born from 1961 to 1974 and BCG vaccinated between 1970 and 1974. Results Exact agreement was observed for 99.6% and 81.5% of records upon comparing, respectively, the paper listings and vaccination certificates to their corresponding computerized records. The proportion of successful linkage was 77% with the Birth Registry, 70% with the Healthcare Registration File, 57% with both, and varied by birth year. Conclusions Computerization of this Registry yielded excellent results. The registry was complete and accurate, and linkage with administrative databases was highly feasible.
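
    The two linkage strategies mentioned above can be sketched generically: a deterministic pass requires exact agreement on key identifiers, and a probabilistic pass tolerates small discrepancies such as spelling variants. The record fields and the 0.85 similarity threshold below are invented for illustration; the actual registry linkage used richer nominal identifiers and formal match weights.

        from difflib import SequenceMatcher

        # Hypothetical records: the field names (name, dob) are illustrative,
        # not the registry's actual schema.
        registry = [{"id": 1, "name": "Marie Tremblay", "dob": "1963-04-02"}]
        births   = [{"id": "B9", "name": "Marie Tremblai", "dob": "1963-04-02"}]

        def deterministic(a, b):
            # Pass 1: exact match on full name and date of birth.
            return a["name"] == b["name"] and a["dob"] == b["dob"]

        def probabilistic(a, b, threshold=0.85):
            # Fallback: date of birth must agree; names may differ slightly
            # (spelling variants), scored with a string-similarity ratio.
            if a["dob"] != b["dob"]:
                return False
            return SequenceMatcher(None, a["name"], b["name"]).ratio() >= threshold

        for r in registry:
            for b in births:
                if deterministic(r, b) or probabilistic(r, b):
                    print("linked:", r["id"], "<->", b["id"])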

  2. A Database Practicum for Teaching Database Administration and Software Development at Regis University

    ERIC Educational Resources Information Center

    Mason, Robert T.

    2013-01-01

    This research paper compares a database practicum at the Regis University College for Professional Studies (CPS) with technology-oriented practicums at other universities. Successful andragogy for technology courses can motivate students to develop a genuine interest in the subject, share their knowledge with peers, and inspire students to…

  3. A two-step database search method improves sensitivity in peptide sequence matches for metaproteomics and proteogenomics studies.

    PubMed

    Jagtap, Pratik; Goslinga, Jill; Kooren, Joel A; McGowan, Thomas; Wroblewski, Matthew S; Seymour, Sean L; Griffin, Timothy J

    2013-04-01

    Large databases (>10⁶ sequences) used in metaproteomic and proteogenomic studies present challenges in matching peptide sequences to MS/MS data using database-search programs. Most notably, strict filtering to avoid false-positive matches leads to more false negatives, thus constraining the number of peptide matches. To address this challenge, we developed a two-step method wherein matches derived from a primary search against a large database were used to create a smaller subset database. The second search was performed against a target-decoy version of this subset database merged with a host database. High confidence peptide sequence matches were then used to infer protein identities. Applying our two-step method for both metaproteomic and proteogenomic analysis resulted in twice the number of high confidence peptide sequence matches in each case, as compared to the conventional one-step method. The two-step method captured almost all of the same peptides matched by the one-step method, with a majority of the additional matches being false negatives from the one-step method. Furthermore, the two-step method improved results regardless of the database search program used. Our results show that our two-step method maximizes the peptide matching sensitivity for applications requiring large databases, especially valuable for proteogenomics and metaproteomics studies. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
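
    The database manipulation at the core of the two-step method is mechanical: restrict the large database to proteins matched in the permissive first pass, then build a target-decoy database from that subset merged with a host proteome. A toy sketch under assumed conventions (invented protein IDs and sequences; sequence reversal as one common decoy scheme); the actual searches are run by external engines such as those named in the abstract:

        # Minimal sketch of the two-step strategy; all values are toys.
        large_db = {"P1": "MKTAYIAKQR", "P2": "GGLNDIFEAQ", "P3": "VLSPADKTNV"}
        first_pass_hits = {"P1", "P3"}   # proteins matched in a permissive search
        host_db = {"H1": "MVLSGEDKSN"}

        # Step 1 output -> subset database restricted to first-pass matches.
        subset = {pid: seq for pid, seq in large_db.items() if pid in first_pass_hits}

        # Step 2 input: subset merged with the host proteome, plus reversed
        # decoys so that false-discovery rates can be estimated.
        target = {**subset, **host_db}
        decoy = {"DECOY_" + pid: seq[::-1] for pid, seq in target.items()}
        second_pass_db = {**target, **decoy}
        print(sorted(second_pass_db))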

  4. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A Completed Reference Database of Lung Nodules on CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2011-02-15

    Purpose: The development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process. Methods: Seven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories ("nodule ≥3 mm," "nodule <3 mm," and "non-nodule ≥3 mm"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus. Results: The Database contains 7371 lesions marked "nodule" by at least one radiologist. 2669 of these lesions were marked "nodule ≥3 mm" by at least one radiologist, of which 928 (34.7%) received such marks

  5. The intelligent database machine

    NASA Technical Reports Server (NTRS)

    Yancey, K. E.

    1985-01-01

    The IDM 500 database machine was compared with the Oracle database to determine whether it would better serve the needs of the MSFC database management system than Oracle. The two were compared and the performance of the IDM was studied. Implementations that work best on each database are indicated. The choice is left to the database administrator.

  6. Comparison of scientific and administrative database management systems

    NASA Technical Reports Server (NTRS)

    Stoltzfus, J. C.

    1983-01-01

    Some characteristics found to be different for scientific and administrative data bases are identified and some of the corresponding generic requirements for data base management systems (DBMS) are discussed. The requirements discussed are especially stringent for either the scientific or administrative data bases. For some, no commercial DBMS is fully satisfactory, and the data base designer must invent a suitable approach. For others, commercial systems are available with elegant solutions, and a wrong choice would mean an expensive work-around to provide the missing features. It is concluded that selection of a DBMS must be based on the requirements for the information system. There is no unique distinction between scientific and administrative data bases or DBMS. The distinction comes from the logical structure of the data, and understanding the data and their relationships is the key to defining the requirements and selecting an appropriate DBMS for a given set of applications.

  7. Brief report: Comparison of methods to identify Iraq and Afghanistan war veterans using Department of Veterans Affairs administrative data.

    PubMed

    Bangerter, Ann; Gravely, Amy; Cutting, Andrea; Clothier, Barb; Spoont, Michele; Sayer, Nina

    2010-01-01

    The Department of Veterans Affairs (VA) has made treatment and care of Operation Iraqi Freedom/Operation Enduring Freedom (OIF/OEF) veterans a priority. Researchers face challenges identifying the OIF/OEF population because until fiscal year 2008, no indicator of OIF/OEF service was present in the Veterans Health Administration (VHA) administrative databases typically used for research. In this article, we compare an algorithm we developed to identify OIF/OEF veterans using the Austin Information Technology Center administrative data with the VHA Support Service Center OIF/OEF Roster and veterans' self-report of military service. We drew data from two different institutional review board-approved funded studies. The positive predictive value of our algorithm compared with the VHA Support Service Center OIF/OEF Roster and self-report was 92% and 98%, respectively. However, this method of identifying OIF/OEF veterans failed to identify a large proportion of OIF/OEF veterans listed in the VHA Support Service Center OIF/OEF Roster. Demographic, diagnostic, and VA service use differences were found between veterans identified using our method and those we failed to identify but who were in the VHA Support Service Center OIF/OEF Roster. Therefore, depending on the research objective, this method may not be a viable alternative to the VHA Support Service Center OIF/OEF Roster for identifying OIF/OEF veterans.

  8. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between feature vectors of an incoming image being considered for entry into the database and feature vectors associated with a most similar of the stored images. Based on said visual similarity parameter value, it is determined whether to store or how long to store the feature vectors associated with the incoming image in the database.
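
    The decision rule in the patent abstract, comparing an incoming image's feature vectors against the most similar stored image and keeping only sufficiently novel ones, can be sketched as follows. The 2-D vectors, the cosine measure, and the 0.9 threshold are all invented for illustration; the patent does not specify these choices.

        import math

        # Sketch of the store/discard decision; values are made up.
        def cosine_similarity(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
            return dot / norm

        database = [[0.82, 0.41], [0.10, 0.95]]   # stored feature vectors
        incoming = [0.80, 0.44]

        most_similar = max(cosine_similarity(incoming, f) for f in database)
        if most_similar < 0.9:                    # only store sufficiently novel images
            database.append(incoming)
        print(f"similarity={most_similar:.3f}, stored={len(database)} vectors")

    Here the incoming vector is nearly identical to a stored one, so it is discarded, which is exactly the redundancy reduction the method aims at.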

  9. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  10. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  11. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  12. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  13. 47 CFR 68.610 - Database of terminal equipment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Database of terminal equipment. 68.610 Section... Attachments § 68.610 Database of terminal equipment. (a) The Administrative Council for Terminal Attachments shall operate and maintain a database of all approved terminal equipment. The database shall meet the...

  14. [Assessment of an algorithm to identify paediatric-onset celiac disease cases through administrative healthcare databases].

    PubMed

    Pitter, Gisella; Gnavi, Roberto; Romor, Pierantonio; Zanotti, Renzo; Simonato, Lorenzo; Canova, Cristina

    2017-01-01

    OBJECTIVES: to assess the role of four administrative healthcare databases (pathology reports, copayment exemptions, hospital discharge records, gluten-free food prescriptions) for the identification of possible paediatric cases of celiac disease. DESIGN: population-based observational study with record linkage of administrative healthcare databases. SETTING AND PARTICIPANTS: children born alive in the Friuli Venezia Giulia Region (Northern Italy) to resident mothers in the years 1989-2012, identified using the regional Medical Birth Register. We defined possible celiac disease as having at least one of the following, from 2002 onward: 1. a pathology report of intestinal villous atrophy; 2. a copayment exemption for celiac disease; 3. a hospital discharge record with ICD-9-CM code of celiac disease; 4. a gluten-free food prescription. We evaluated the proportion of subjects identified by each archive and by combinations of archives, and examined the temporal relationship of the different sources in cases identified by more than one source. RESULTS: out of 962 possible cases of celiac disease, 660 (68.6%) had a pathology report, 714 (74.2%) a copayment exemption, 667 (69.3%) a hospital discharge record, and 636 (66.1%) a gluten-free food prescription. The four sources coexisted in 42.2% of subjects, whereas 30.2% were identified by two or three sources and 27.6% by a single source (16.9% by pathology reports, 4.2% by hospital discharge records, 3.9% by copayment exemptions, and 2.6% by gluten-free food prescriptions). Excluding pathology reports, 70.6% of cases were identified by at least two sources. A definition based on copayment exemptions and discharge records traced 80.5% of the 962 possible cases of celiac disease, whereas a definition based on copayment exemptions, discharge records, and gluten-free food prescriptions traced 83.1% of those cases. The temporal relationship of the different sources was compatible with the typical diagnostic pathway of subjects with celiac disease.
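
    The combination logic behind these percentages is plain set arithmetic over the four source archives. A toy sketch with invented patient IDs; the counts in the abstract come from the real linked regional data, and these sets only illustrate the mechanics:

        # Toy patient IDs per source (invented).
        pathology     = {1, 2, 3, 4}
        exemptions    = {2, 3, 4, 5}
        discharges    = {3, 4, 5, 6}
        prescriptions = {4, 5, 6, 7}

        sources = [pathology, exemptions, discharges, prescriptions]
        possible_cases = set().union(*sources)

        for patient in sorted(possible_cases):
            n = sum(patient in s for s in sources)
            print(f"patient {patient}: identified by {n} source(s)")

        # e.g., cases traced by the exemption + discharge definition:
        print("exemption or discharge:", len(exemptions | discharges))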

  15. Computer systems and methods for the query and visualization of multidimensional databases

    DOEpatents

    Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick

    2017-04-25

    A method of generating a data visualization is performed at a computer having a display, one or more processors, and memory. The memory stores one or more programs for execution by the one or more processors. The process receives user specification of a plurality of characteristics of a data visualization. The data visualization is based on data from a multidimensional database. The characteristics specify at least x-position and y-position of data marks corresponding to tuples of data retrieved from the database. The process generates a data visualization according to the specified plurality of characteristics. The data visualization has an x-axis defined based on data for one or more first fields from the database that specify x-position of the data marks and the data visualization has a y-axis defined based on data for one or more second fields from the database that specify y-position of the data marks.
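
    As a rough illustration of the claimed mapping, user-specified fields driving the x- and y-position of one mark per retrieved tuple, the following sketch uses an invented SQLite table and matplotlib; the patent covers the general method, not any particular toolkit or schema.

        import sqlite3
        import matplotlib
        matplotlib.use("Agg")                     # render without a display
        import matplotlib.pyplot as plt

        # The table and field names are invented for illustration.
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE sales (region TEXT, units INT, revenue REAL)")
        con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                        [("N", 10, 1200.0), ("S", 7, 800.5), ("E", 13, 1500.2)])

        x_field, y_field = "units", "revenue"     # the user's specification
        rows = con.execute(f"SELECT {x_field}, {y_field} FROM sales").fetchall()
        xs, ys = zip(*rows)

        plt.scatter(xs, ys)                       # one mark per tuple
        plt.xlabel(x_field)
        plt.ylabel(y_field)
        plt.savefig("marks.png")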

  16. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  17. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  18. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  19. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  20. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  1. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  2. 42 CFR 455.436 - Federal database checks.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Federal database checks. 455.436 Section 455.436....436 Federal database checks. The State Medicaid agency must do all of the following: (a) Confirm the... databases. (b) Check the Social Security Administration's Death Master File, the National Plan and Provider...

  3. 40 CFR 1400.13 - Read-only database.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Read-only database. 1400.13 Section... INFORMATION Other Provisions § 1400.13 Read-only database. The Administrator is authorized to establish... public off-site consequence analysis information by means of a central database under the control of the...

  4. The Socratic Method: analyzing ethical issues in health administration.

    PubMed

    Gac, E J; Boerstler, H; Ruhnka, J C

    1998-01-01

    The Socratic Method has long been recognized by the legal profession as an effective tool for promoting critical thinking and analysis in the law. This article describes ways the technique can be used in health administration education to help future administrators develop the "ethical rudder" they will need for effective leadership. An illustrative dialogue is provided.

  5. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Bertino, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
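
    A heavily simplified flavor of the anomaly-detection idea described above: profile what a database user normally issues, then flag queries outside the profile for a response. The single-feature profile (first SQL keyword) and the 0.05 threshold are invented; real detectors profile much richer features such as tables touched, columns, and access times.

        from collections import Counter

        # Toy profile-based detector; training queries and threshold are invented.
        def profile(queries):
            counts = Counter(q.split()[0].upper() for q in queries)
            total = sum(counts.values())
            return {cmd: n / total for cmd, n in counts.items()}

        history = ["SELECT * FROM orders", "SELECT id FROM users",
                   "SELECT name FROM users", "INSERT INTO log VALUES (1)"]
        baseline = profile(history)

        def is_anomalous(query, baseline, floor=0.05):
            cmd = query.split()[0].upper()
            # Commands this user (almost) never issues are flagged for response.
            return baseline.get(cmd, 0.0) < floor

        print(is_anomalous("DROP TABLE users", baseline))   # True: never seen
        print(is_anomalous("SELECT 1", baseline))           # False: typical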

  6. Measuring hospital performance in congenital heart surgery: Administrative vs. clinical registry data

    PubMed Central

    Pasquali, Sara K.; He, Xia; Jacobs, Jeffrey P.; Jacobs, Marshall L.; Gaies, Michael G.; Shah, Samir S.; Hall, Matthew; Gaynor, J. William; Peterson, Eric D.; Mayer, John E.; Hirsch-Romano, Jennifer C.

    2015-01-01

    Background In congenital heart surgery, hospital performance has historically been assessed using widely available administrative datasets. Recent studies have demonstrated inaccuracies in case ascertainment (coding and inclusion of eligible cases) in administrative vs. clinical registry data; however, it is unclear whether this impacts assessment of performance at the hospital level. Methods Merged data from the Society of Thoracic Surgeons (STS) Database (clinical registry), and Pediatric Health Information Systems Database (administrative dataset) on 46,056 children undergoing heart surgery (2006–2010) were utilized to evaluate in-hospital mortality for 33 hospitals based on their administrative vs. registry data. Standard methods to identify/classify cases were used: Risk Adjustment in Congenital Heart Surgery (RACHS-1) in the administrative data, and STS–European Association for Cardiothoracic Surgery (STAT) methodology in the registry. Results Median hospital surgical volume based on the registry data was 269 cases/yr; mortality was 2.9%. Hospital volumes and mortality rates based on the administrative data were on average 10.7% and 4.7% lower, respectively, although this varied widely across hospitals. Hospital rankings for mortality based on the administrative vs. registry data differed by ≥ 5 rank-positions for 24% of hospitals, with a change in mortality tertile classification (high, middle, or low mortality) for 18%, and change in statistical outlier classification for 12%. Higher volume/complexity hospitals were most impacted. Agency for Healthcare Research and Quality methods in the administrative data yielded similar results. Conclusions Inaccuracies in case ascertainment in administrative vs. clinical registry data can lead to important differences in assessment of hospital mortality rates for congenital heart surgery. PMID:25624057

  7. DOMe: A deduplication optimization method for the NewSQL database backups

    PubMed Central

    Wang, Longxiang; Zhu, Zhengdong; Zhang, Xingjun; Wang, Yinfeng

    2017-01-01

    Reducing duplicated data of database backups is an important application scenario for data deduplication technology. NewSQL is an emerging database system and is now being used more and more widely. NewSQL systems need to improve data reliability by periodically backing up in-memory data, resulting in a lot of duplicated data. The traditional deduplication method is not optimized for the NewSQL server system and cannot take full advantage of hardware resources to optimize deduplication performance. Recent research has pointed out that the future NewSQL server will have thousands of CPU cores, large DRAM, and huge NVRAM. Therefore, how to utilize these hardware resources to optimize the performance of data deduplication is an important issue. To solve this problem, we propose a deduplication optimization method (DOMe) for NewSQL system backup. To take advantage of the large number of CPU cores in the NewSQL server to optimize deduplication performance, DOMe parallelizes the deduplication method based on the fork-join framework. The fingerprint index, which is the key data structure in the deduplication process, is implemented as a pure in-memory hash table, which makes full use of the large DRAM in the NewSQL system, eliminating the performance bottleneck of the fingerprint index in traditional deduplication methods. H-store is used as a typical NewSQL database system to implement the DOMe method. DOMe is experimentally analyzed using two representative backup datasets. The experimental results show that: 1) DOMe can reduce the duplicated NewSQL backup data; 2) DOMe significantly improves deduplication performance by parallelizing CDC algorithms: when the theoretical speedup ratio of the server is 20.8, DOMe achieves a speedup ratio of up to 18; 3) DOMe improves deduplication throughput by 1.5 times through the pure in-memory index optimization. PMID:29049307
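
    The two ingredients DOMe combines, content-defined chunking (CDC) and a pure in-memory fingerprint index, can be sketched as below. The boundary condition is a toy stand-in for a real Rabin-style rolling hash, chunk sizes are unrealistically small, and the fork-join parallelism is omitted; this only illustrates why an identical second backup stores almost nothing new.

        import hashlib

        # Toy content-defined chunking: a boundary wherever the sum of the last
        # 4 bytes is divisible by 16 (a stand-in for a rolling hash).
        def chunks(data, window=4, min_size=8):
            start = 0
            for i in range(min_size, len(data)):
                if sum(data[i - window:i]) % 16 == 0:
                    yield data[start:i]
                    start = i
            yield data[start:]

        index = {}                                # in-memory fingerprint index
        def deduplicate(backup):
            stored = 0
            for chunk in chunks(backup):
                fp = hashlib.sha1(chunk).hexdigest()
                if fp not in index:               # store new data only
                    index[fp] = chunk
                    stored += len(chunk)
            return stored

        b1 = bytes(range(64)) * 4
        print("first backup, new bytes stored:", deduplicate(b1))
        print("identical second backup, new bytes stored:", deduplicate(b1))  # 0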

  8. Methods to Secure Databases Against Vulnerabilities

    DTIC Science & Technology

    2015-12-01

    ... for several languages such as C, C++, PHP, Java and Python [16]. MySQL will work well with very large databases. The documentation references ... using Eclipse and connected to each database management system using Python and Java drivers provided by MySQL, MongoDB, and Datastax (for Cassandra) ... tiers in Python and Java.

        Problem              MySQL        MongoDB      Cassandra
        1. Injection
           a. Tautologies    Vulnerable   Vulnerable   Not Vulnerable
           b. Illegal query  ...

  9. Applications of GIS and database technologies to manage a Karst Feature Database

    USGS Publications Warehouse

    Gao, Y.; Tipping, R.G.; Alexander, E.C.

    2006-01-01

    This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications in both GIS and Database Management System (DBMS) have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to develop GIS-based databases to analyze and manage geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst and the long-term goal is to expand this database to manage and study karst features at national and global scales.

  10. 76 FR 30997 - National Transit Database: Amendments to Urbanized Area Annual Reporting Manual

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-27

    ... Transit Database: Amendments to Urbanized Area Annual Reporting Manual AGENCY: Federal Transit Administration (FTA), DOT. ACTION: Notice of Amendments to 2011 National Transit Database Urbanized Area Annual... Administration's (FTA) 2011 National Transit Database (NTD) Urbanized Area Annual Reporting Manual (Annual Manual...

  11. EMEN2: An Object Oriented Database and Electronic Lab Notebook

    PubMed Central

    Rees, Ian; Langley, Ed; Chiu, Wah; Ludtke, Steven J.

    2013-01-01

    Transmission electron microscopy and associated methods such as single particle analysis, 2-D crystallography, helical reconstruction and tomography, are highly data-intensive experimental sciences, which also have substantial variability in experimental technique. Object-oriented databases present an attractive alternative to traditional relational databases for situations where the experiments themselves are continually evolving. We present EMEN2, an easy to use object-oriented database with a highly flexible infrastructure originally targeted for transmission electron microscopy and tomography, which has been extended to be adaptable for use in virtually any experimental science. It is a pure object-oriented database designed for easy adoption in diverse laboratory environments, and does not require professional database administration. It includes a full featured, dynamic web interface in addition to APIs for programmatic access. EMEN2 installations currently support roughly 800 scientists worldwide with over 1/2 million experimental records and over 20 TB of experimental data. The software is freely available with complete source. PMID:23360752

  12. Detection of co-eluted peptides using database search methods

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y; Kwok, Siwei; Wu, Wells W; Wang, Guanghui; Shen, Rong-Fong; Yu, Yi-Kuo

    2008-01-01

    Background Current experimental techniques, especially those applying liquid chromatography mass spectrometry, have made high-throughput proteomic studies possible. The increase in throughput however also raises concerns on the accuracy of identification or quantification. Most experimental procedures select in a given MS scan only a few relatively most intense parent ions, each to be fragmented (MS2) separately, and most other minor co-eluted peptides that have similar chromatographic retention times are ignored and their information lost. Results We have computationally investigated the possibility of enhancing the information retrieval during a given LC/MS experiment by selecting the two or three most intense parent ions for simultaneous fragmentation. A set of spectra is created via superimposing a number of MS2 spectra, each of which can be identified by all search methods tested with high confidence, to mimic the spectra of co-eluted peptides. The generated convoluted spectra were used to evaluate the capability of several database search methods – SEQUEST, Mascot, X!Tandem, OMSSA, and RAId_DbS – in identifying true peptides from superimposed spectra of co-eluted peptides. We show that using these simulated spectra, all the database search methods eventually gain in the number of true peptides identified by using the compound spectra of co-eluted peptides. Open peer review Reviewed by Vlad Petyuk (nominated by Arcady Mushegian), King Jordan and Shamil Sunyaev. For the full reviews, please go to the Reviewers' comments section. PMID:18597684
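
    Generating the test spectra amounts to superimposing peak lists: peaks that coincide in m/z add their intensities. A sketch with invented (m/z, intensity) pairs, binning m/z to two decimals as an assumed merging convention:

        # Build a simulated "co-eluted" spectrum from two MS2 peak lists.
        def superimpose(spectrum_a, spectrum_b, mz_decimals=2):
            merged = {}
            for mz, intensity in spectrum_a + spectrum_b:
                key = round(mz, mz_decimals)
                merged[key] = merged.get(key, 0.0) + intensity  # coincident peaks add
            return sorted(merged.items())

        peptide1 = [(175.12, 300.0), (401.28, 120.0), (514.33, 95.0)]  # invented
        peptide2 = [(175.12, 180.0), (333.19, 240.0)]                  # invented

        for mz, inten in superimpose(peptide1, peptide2):
            print(f"{mz:8.2f}  {inten:7.1f}")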

  13. Detecting chronic kidney disease in population-based administrative databases using an algorithm of hospital encounter and physician claim codes.

    PubMed

    Fleet, Jamie L; Dixon, Stephanie N; Shariff, Salimah Z; Quinn, Robert R; Nash, Danielle M; Harel, Ziv; Garg, Amit X

    2013-04-05

    Large, population-based administrative healthcare databases can be used to identify patients with chronic kidney disease (CKD) when serum creatinine laboratory results are unavailable. We examined the validity of algorithms that used combined hospital encounter and physician claims database codes for the detection of CKD in Ontario, Canada. We accrued 123,499 patients over the age of 65 from 2007 to 2010. All patients had a baseline serum creatinine value to estimate glomerular filtration rate (eGFR). We developed an algorithm of physician claims and hospital encounter codes to search administrative databases for the presence of CKD. We determined the sensitivity, specificity, positive and negative predictive values of this algorithm to detect our primary threshold of CKD, an eGFR <45 mL/min per 1.73 m² (15.4% of patients). We also assessed serum creatinine and eGFR values in patients with and without CKD codes (algorithm positive and negative, respectively). Our algorithm required evidence of at least one of eleven CKD codes and 7.7% of patients were algorithm positive. The sensitivity was 32.7% [95% confidence interval (95% CI): 32.0 to 33.3%]. Sensitivity was lower in women compared to men (25.7 vs. 43.7%; p < 0.001) and in the oldest age category (over 80 vs. 66 to 80; 28.4 vs. 37.6%; p < 0.001). All specificities were over 94%. The positive and negative predictive values were 65.4% (95% CI: 64.4 to 66.3%) and 88.8% (95% CI: 88.6 to 89.0%), respectively. In algorithm positive patients, the median [interquartile range (IQR)] baseline serum creatinine value was 135 μmol/L (106 to 179 μmol/L) compared to 82 μmol/L (69 to 98 μmol/L) for algorithm negative patients. Corresponding eGFR values were 38 mL/min per 1.73 m² (26 to 51 mL/min per 1.73 m²) vs. 69 mL/min per 1.73 m² (56 to 82 mL/min per 1.73 m²), respectively. Patients with CKD as identified by our database algorithm had distinctly higher baseline serum creatinine values and lower eGFR values.

  14. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research, an evaluation has been made of different methods for content-based image retrieval of logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we have compared different retrieval methods. Two of these methods were available from commercial packages, QBIC and Imatch, for which the exact implementation of the content-based image retrieval methods is not known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding box, and region-based shape methods. In addition, we have tested the log polar method that is available from our own research.

  15. Evaluating and improving pressure ulcer care: the VA experience with administrative data.

    PubMed

    Berlowitz, D R; Halpern, J

    1997-08-01

    A number of state initiatives are using databases originally developed for nursing home reimbursements to assess the quality of care. Since 1991 the Department of Veterans Affairs (VA; Washington, DC) has been using a long term care administrative database to calculate facility-specific rates of pressure ulcer development. This information is disseminated to all 140 long term care facilities as part of a quality assessment and improvement program. Assessments are performed on all long term care residents on April 1 and October 1, as well as at the time of admission or transfer to a long term care unit. Approximately 18,000 long term care residents are evaluated in each six-month period; the VA rate of pressure ulcer development is approximately 3.5%. Reports of the rates of pressure ulcer development are then disseminated to all facilities, generally within two months of the assessment date. The VA's more than five years' experience in using administrative data to assess outcomes for long term care highlights several important issues that should be considered when using outcome measures based on administrative data. These include the importance of carefully selecting the outcome measure, the need to consider the structure of the database, the role of case-mix adjustment, strategies for reporting rates to small facilities, and methods for information dissemination. Attention to these issues will help ensure that results from administrative databases lead to improvements in the quality of care.

  16. Comparing methods for estimation of heterogeneous treatment effects using observational data from health care databases.

    PubMed

    Wendling, T; Jung, K; Callahan, A; Schuler, A; Shah, N H; Gallego, B

    2018-06-03

    There is growing interest in using routinely collected data from health care databases to study the safety and effectiveness of therapies in "real-world" conditions, as it can provide complementary evidence to that of randomized controlled trials. Causal inference from health care databases is challenging because the data are typically noisy, high dimensional, and most importantly, observational. It requires methods that can estimate heterogeneous treatment effects while controlling for confounding in high dimensions. Bayesian additive regression trees, causal forests, causal boosting, and causal multivariate adaptive regression splines are off-the-shelf methods that have shown good performance for estimation of heterogeneous treatment effects in observational studies of continuous outcomes. However, it is not clear how these methods would perform in health care database studies where outcomes are often binary and rare and data structures are complex. In this study, we evaluate these methods in simulation studies that recapitulate key characteristics of comparative effectiveness studies. We focus on the conditional average effect of a binary treatment on a binary outcome using the conditional risk difference as an estimand. To emulate health care database studies, we propose a simulation design where real covariate and treatment assignment data are used and only outcomes are simulated based on nonparametric models of the real outcomes. We apply this design to 4 published observational studies that used records from 2 major health care databases in the United States. Our results suggest that Bayesian additive regression trees and causal boosting consistently provide low bias in conditional risk difference estimates in the context of health care database studies. Copyright © 2018 John Wiley & Sons, Ltd.
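
    As a flavor of estimating the estimand described above, the sketch below uses a simple T-learner (separate outcome models per treatment arm) with gradient boosting on simulated data. This is not one of the four methods evaluated in the paper, and the simulation randomizes treatment for brevity, whereas the paper's setting is observational with confounding.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        # Simulated data; gradient boosting stands in for BART / causal forests /
        # causal boosting / causal MARS from the paper.
        rng = np.random.default_rng(0)
        n = 5000
        X = rng.normal(size=(n, 3))                # covariates
        T = rng.binomial(1, 0.5, size=n)           # treatment (randomized here)
        p = 1 / (1 + np.exp(-(-2 + 0.5 * X[:, 0] - 0.8 * T * (X[:, 1] > 0))))
        Y = rng.binomial(1, p)                     # rare-ish binary outcome

        m1 = GradientBoostingClassifier().fit(X[T == 1], Y[T == 1])
        m0 = GradientBoostingClassifier().fit(X[T == 0], Y[T == 0])

        # Conditional risk difference: P(Y=1 | X, T=1) - P(Y=1 | X, T=0)
        crd = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]
        print("mean conditional risk difference:", crd.mean().round(4))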

  17. Fluoxetine Dose and Administration Method Differentially Affect Hippocampal Plasticity in Adult Female Rats

    PubMed Central

    Pawluski, Jodi L.; van Donkelaar, Eva; Abrams, Zipporah; Steinbusch, Harry W. M.; Charlier, Thierry D.

    2014-01-01

    Selective serotonin reuptake inhibitor medications are one of the most common treatments for mood disorders. In humans, these medications are taken orally, usually once per day. Unfortunately, administration of antidepressant medications in rodent models is often through injection, oral gavage, or minipump implant, all relatively stressful procedures. The aim of the present study was to investigate how administration of the commonly used SSRI, fluoxetine, via a wafer cookie, compares to fluoxetine administration using an osmotic minipump, with regards to serum drug levels and hippocampal plasticity. For this experiment, adult female Sprague-Dawley rats were divided over the two administration methods: (1) cookie and (2) osmotic minipump and three fluoxetine treatment doses: 0, 5, or 10 mg/kg/day. Results show that a fluoxetine dose of 5 mg/kg/day, but not 10 mg/kg/day, results in comparable serum levels of fluoxetine and its active metabolite norfluoxetine between the two administration methods. Furthermore, minipump administration of fluoxetine resulted in higher levels of cell proliferation in the granule cell layer (GCL) at a 5 mg dose compared to a 10 mg dose. Synaptophysin expression in the GCL, but not CA3, was significantly lower after fluoxetine treatment, regardless of administration method. These data suggest that the administration method and dose of fluoxetine can differentially affect hippocampal plasticity in the adult female rat. PMID:24757568

  18. Three insulation methods to minimize intravenous fluid administration set heat loss.

    PubMed

    Piek, Richardt; Stein, Christopher

    2013-01-01

    To assess the effect of three methods for insulating an intravenous (IV) fluid administration set on the temperature of warmed fluid delivered rapidly in a cold environment. The three chosen techniques for insulation of the IV fluid administration set involved enclosing the tubing of the set in 1) a cotton conforming bandage, 2) a reflective emergency blanket, and 3) a combination of technique 2 followed by technique 1. Intravenous fluid warmed to 44°C was infused through a 20-drop/mL 180-cm-long fluid administration set in a controlled environmental temperature of 5°C. Temperatures in the IV fluid bag, the distal end of the fluid administration set, and the environment were continuously measured with resistance thermosensors. Twenty repetitions were performed in four conditions, namely, a control condition (with no insulation) and the three different insulation methods described above. One-way analysis of variance was used to assess the mean difference in temperature between the IV fluid bag and the distal fluid administration set under the four conditions. In the control condition, a mean of 5.28°C was lost between the IV fluid bag and the distal end of the fluid administration set. There was a significant difference found between the four conditions (p < 0.001). A mean of 3.53°C was lost between the IV fluid bag and the distal end of the fluid administration set for both the bandage and reflective emergency blanket, and a mean of 3.06°C was lost when the two methods were combined. Using inexpensive and readily available materials to insulate a fluid administration set can result in a reduction of heat loss in rapidly infused, warmed IV fluid in a cold environment.
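
    The study's statistical comparison reduces to a one-way ANOVA across the four conditions. A sketch with invented temperature-drop readings (°C) chosen only to echo the reported means, not the study's actual measurements:

        from scipy.stats import f_oneway

        # Hypothetical per-repetition temperature drops; values are invented.
        control  = [5.1, 5.4, 5.3, 5.2, 5.5]
        bandage  = [3.6, 3.4, 3.5, 3.7, 3.4]
        blanket  = [3.5, 3.6, 3.4, 3.6, 3.5]
        combined = [3.0, 3.1, 2.9, 3.2, 3.1]

        stat, p = f_oneway(control, bandage, blanket, combined)
        print(f"F = {stat:.1f}, p = {p:.2g}")   # small p: conditions differ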

  19. Data-Based Decision-Making: Developing a Method for Capturing Teachers' Understanding of CBM Graphs

    ERIC Educational Resources Information Center

    Espin, Christine A.; Wayman, Miya Miura; Deno, Stanley L.; McMaster, Kristen L.; de Rooij, Mark

    2017-01-01

    In this special issue, we explore the decision-making aspect of "data-based decision-making". The articles in the issue address a wide range of research questions, designs, methods, and analyses, but all focus on data-based decision-making for students with learning difficulties. In this first article, we introduce the topic of…

  20. Distance correlation methods for discovering associations in large astrophysical databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P., E-mail: elizabeth.martinez@itam.mx, E-mail: mrichards@astro.psu.edu, E-mail: richards@stat.psu.edu

    2014-01-20

    High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
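
    The sample distance correlation is short to compute from double-centered pairwise distance matrices. A sketch for 1-D variables on simulated data, illustrating the abstract's key point: a nonlinear dependence invisible to Pearson's r is clearly detected by distance correlation.

        import numpy as np

        def distance_correlation(x, y):
            """Sample distance correlation for 1-D variables (Szekely et al.)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            a = np.abs(x[:, None] - x[None, :])   # pairwise distance matrices
            b = np.abs(y[:, None] - y[None, :])
            # Double-center each matrix.
            A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
            B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
            dcov2 = (A * B).mean()
            dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
            return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

        rng = np.random.default_rng(1)
        x = rng.normal(size=500)
        y = x ** 2 + 0.1 * rng.normal(size=500)   # nonlinear dependence
        print("Pearson :", abs(np.corrcoef(x, y)[0, 1]).round(3))  # near zero
        print("Distance:", distance_correlation(x, y).round(3))    # clearly > 0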

  1. National Transportation Atlas Databases : 2014

    DOT National Transportation Integrated Search

    2014-01-01

    The National Transportation Atlas Databases 2014 : (NTAD2014) is a set of nationwide geographic datasets of : transportation facilities, transportation networks, associated : infrastructure, and other political and administrative entities. : These da...

  2. National Transportation Atlas Databases : 2015

    DOT National Transportation Integrated Search

    2015-01-01

    The National Transportation Atlas Databases 2015 : (NTAD2015) is a set of nationwide geographic datasets of : transportation facilities, transportation networks, associated : infrastructure, and other political and administrative entities. : These da...

  3. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  4. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  5. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  6. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  7. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  8. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  9. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  10. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  11. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  12. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  13. National Transportation Atlas Databases : 2013

    DOT National Transportation Integrated Search

    2013-01-01

    The National Transportation Atlas Databases 2013 (NTAD2013) is a set of nationwide geographic datasets of transportation facilities, transportation networks, associated infrastructure, and other political and administrative entities. These datasets i...

  14. Non-animal methods to predict skin sensitization (I): the Cosmetics Europe database.

    PubMed

    Hoffmann, Sebastian; Kleinstreuer, Nicole; Alépée, Nathalie; Allen, David; Api, Anne Marie; Ashikaga, Takao; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Goebel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Lalko, Jon F; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Parakhia, Rahul; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk

    2018-05-01

    Cosmetics Europe, the European Trade Association for the cosmetics and personal care industry, is conducting a multi-phase program to develop regulatory accepted, animal-free testing strategies enabling the cosmetics industry to conduct safety assessments. Based on a systematic evaluation of test methods for skin sensitization, five non-animal test methods (DPRA (Direct Peptide Reactivity Assay), KeratinoSens™, h-CLAT (human cell line activation test), U-SENS™, SENS-IS) were selected for inclusion in a comprehensive database of 128 substances. Existing data were compiled and completed with newly generated data, the latter amounting to one-third of all data. The database was complemented with human and local lymph node assay (LLNA) reference data, physicochemical properties and use categories, and thoroughly curated. Although focused on the availability of human data, the substance selection nevertheless resulted in a high diversity of chemistries in terms of physico-chemical property ranges and use categories. Predictivities of skin sensitization potential and potency, where applicable, were calculated for the LLNA as compared to human data and for the individual test methods compared to both human and LLNA reference data. In addition, various aspects of applicability of the test methods were analyzed. Due to its high level of curation, comprehensiveness, and completeness, we propose our database as a point of reference for the evaluation and development of testing strategies, as done for example in the associated work of Kleinstreuer et al. We encourage the community to use it to meet the challenge of conducting skin sensitization safety assessment without generating new animal data.

  15. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  16. Describing the linkages of the immigration, refugees and citizenship Canada permanent resident data and vital statistics death registry to Ontario's administrative health database.

    PubMed

    Chiu, Maria; Lebenbaum, Michael; Lam, Kelvin; Chong, Nelson; Azimaee, Mahmoud; Iron, Karey; Manuel, Doug; Guttmann, Astrid

    2016-10-21

    Ontario, the most populous province in Canada, has a universal healthcare system that routinely collects health administrative data on its 13 million legal residents that is used for health research. Record linkage has become a vital tool for this research by enriching this data with the Immigration, Refugees and Citizenship Canada Permanent Resident (IRCC-PR) database and the Office of the Registrar General's Vital Statistics-Death (ORG-VSD) registry. Our objectives were to estimate linkage rates and compare characteristics of individuals in the linked versus unlinked files. We used both deterministic and probabilistic linkage methods to link the IRCC-PR database (1985-2012) and ORG-VSD registry (1990-2012) to Ontario's Registered Persons Database. Linkage rates were estimated and standardized differences were used to assess differences in socio-demographic and other characteristics between the linked and unlinked records. The overall linkage rates for the IRCC-PR database and ORG-VSD registry were 86.4 and 96.2%, respectively. The majority (68.2%) of the record linkages in IRCC-PR were achieved after three deterministic passes, 18.2% were linked probabilistically, and 13.6% were unlinked. Similarly, the majority (79.8%) of the record linkages in the ORG-VSD were linked using deterministic record linkage, 16.3% were linked after probabilistic and manual review, and 3.9% were unlinked. Unlinked and linked files were similar for most characteristics, such as age and marital status for IRCC-PR and sex and most causes of death for ORG-VSD. However, lower linkage rates were observed among people born in East Asia (78%) in the IRCC-PR database and certain causes of death in the ORG-VSD registry, namely perinatal conditions (61.3%) and congenital anomalies (81.3%). The linkages of immigration and vital statistics data to existing population-based healthcare data in Ontario, Canada will enable many novel cross-sectional and longitudinal studies.

  17. Predicting missing values in a home care database using an adaptive uncertainty rule method.

    PubMed

    Konias, S; Gogou, G; Bamidis, P D; Vlahavas, I; Maglaveras, N

    2005-01-01

    Contemporary literature illustrates an abundance of adaptive algorithms for mining association rules. However, most literature is unable to deal with the peculiarities, such as missing values and dynamic data creation, that are frequently encountered in fields like medicine. This paper proposes an uncertainty rule method that uses an adaptive threshold for filling missing values in newly added records. A new approach for mining uncertainty rules and filling missing values is proposed, which is in turn particularly suitable for dynamic databases, like the ones used in home care systems. In this study, a new data mining method named FiMV (Filling Missing Values) is illustrated based on the mined uncertainty rules. Uncertainty rules have quite a similar structure to association rules and are extracted by an algorithm proposed in previous work, namely AURG (Adaptive Uncertainty Rule Generation). The main target was to implement an appropriate method for recovering missing values in a dynamic database, where new records are continuously added, without needing to specify any kind of thresholds beforehand. The method was applied to a home care monitoring system database. Randomly, multiple missing values for each record's attributes (rate 5-20% by 5% increments) were introduced in the initial dataset. FiMV demonstrated 100% completion rates with over 90% success in each case, while usual approaches, where all records with missing values are ignored or thresholds are required, experienced significantly reduced completion and success rates. It is concluded that the proposed method is appropriate for the data-cleaning step of the Knowledge Discovery process in databases. The latter, containing much significance for the output efficiency of any data mining technique, can improve the quality of the mined information.
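
    Without reproducing AURG/FiMV, the general shape of rule-based filling can be sketched: match mined rules against a new record and fill a missing attribute from the highest-confidence applicable rule. The rule structure, field names, and the fixed 0.8 threshold below are invented; the paper's method adapts its threshold rather than fixing it in advance.

        # Each "rule" is modeled as (antecedent dict, attribute, value, confidence);
        # all fields and values are hypothetical home-care examples.
        rules = [
            ({"mobility": "low"}, "pulse_alarm", "on", 0.92),
            ({"age_band": "80+"}, "pulse_alarm", "on", 0.71),
        ]

        def fill_missing(record, rules, threshold=0.8):
            filled = dict(record)
            for antecedent, attr, value, conf in sorted(rules, key=lambda r: -r[3]):
                if (filled.get(attr) is None and conf >= threshold
                        and all(filled.get(k) == v for k, v in antecedent.items())):
                    filled[attr] = value          # best matching rule wins
            return filled

        new_record = {"mobility": "low", "age_band": "80+", "pulse_alarm": None}
        print(fill_missing(new_record, rules))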

  18. Effect of dementia on outcomes of elderly patients with hemorrhagic peptic ulcer disease based on a national administrative database.

    PubMed

    Murata, Atsuhiko; Mayumi, Toshihiko; Muramatsu, Keiji; Ohtani, Makoto; Matsuda, Shinya

    2015-10-01

    Little information is available on the effect of dementia on outcomes of elderly patients with hemorrhagic peptic ulcer disease at the population level. This study aimed to investigate the effect of dementia on outcomes of elderly patients with hemorrhagic peptic ulcer based on a national administrative database. A total of 14,569 elderly patients (≥80 years) treated by endoscopic hemostasis for hemorrhagic peptic ulcer at 1073 hospitals between 2010 and 2012 in Japan were included. We collected patients' data from the administrative database to compare clinical and medical economic outcomes of elderly patients with hemorrhagic peptic ulcers. Patients were divided into two groups according to the presence of dementia: patients with dementia (n = 695) and those without dementia (n = 13,874). There were no significant differences between the groups in in-hospital mortality within 30 days or in overall mortality (odds ratio [OR] 1.00, 95% confidence interval [CI] 0.68-1.46, p = 0.986; and OR 1.02, 95% CI 0.74-1.41, p = 0.877, respectively). However, length of stay (LOS) and medical costs during hospitalization were significantly higher in patients with dementia than in those without. The unstandardized coefficient for LOS was 3.12 days (95% CI 1.58-4.67 days, p < 0.001), whereas that for medical costs was 1171.7 US dollars (95% CI 533.8-1809.5 US dollars, p < 0.001). Length of stay and medical costs during hospitalization are significantly increased in elderly patients with dementia undergoing endoscopic hemostasis for hemorrhagic peptic ulcer disease.

  19. An Efficient Method for the Retrieval of Objects by Topological Relations in Spatial Database Systems.

    ERIC Educational Resources Information Center

    Lin, P. L.; Tan, W. H.

    2003-01-01

    Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)

  20. Instrument Failures for the da Vinci Surgical System: a Food and Drug Administration MAUDE Database Study.

    PubMed

    Friedman, Diana C W; Lendvay, Thomas S; Hannaford, Blake

    2013-05-01

    Our goal was to analyze reported instances of the da Vinci robotic surgical system instrument failures using the FDA's MAUDE (Manufacturer and User Facility Device Experience) database. From these data we identified some root causes of failures as well as trends that may assist surgeons and users of the robotic technology. We conducted a survey of the MAUDE database and tallied robotic instrument failures that occurred between January 2009 and December 2010. We categorized failures into five main groups (cautery, shaft, wrist or tool tip, cable, and control housing) based on technical differences in instrument design and function. A total of 565 instrument failures were documented through 528 reports. The majority of failures (285) were of the instrument's wrist or tool tip. Cautery problems comprised 174 failures, 76 were shaft failures, 29 were cable failures, and 7 were control housing failures. Of the reports, 10 had no discernible failure mode and 49 exhibited multiple failures. The data show that a number of robotic instrument failures occurred in a short period of time. In reality, many instrument failures may go unreported, thus a true failure rate cannot be determined from these data. However, education of hospital administrators, operating room staff, surgeons, and patients should be incorporated into discussions regarding the introduction and utilization of robotic technology. We recommend institutions incorporate standard failure reporting policies so that the community of robotic surgery companies and surgeons can improve on existing technologies for optimal patient safety and outcomes.

  1. Measuring hospital performance in congenital heart surgery: administrative versus clinical registry data.

    PubMed

    Pasquali, Sara K; He, Xia; Jacobs, Jeffrey P; Jacobs, Marshall L; Gaies, Michael G; Shah, Samir S; Hall, Matthew; Gaynor, J William; Peterson, Eric D; Mayer, John E; Hirsch-Romano, Jennifer C

    2015-03-01

    In congenital heart surgery, hospital performance has historically been assessed using widely available administrative data sets. Recent studies have demonstrated inaccuracies in case ascertainment (coding and inclusion of eligible cases) in administrative versus clinical registry data; however, it is unclear whether this impacts assessment of performance on a hospital level. Merged data from The Society of Thoracic Surgeons (STS) database (clinical registry) and the Pediatric Health Information Systems (PHIS) database (administrative data set) for 46,056 children undergoing cardiac operations (2006-2010) were used to evaluate in-hospital mortality for 33 hospitals based on their administrative versus registry data. Standard methods to identify/classify cases were used: Risk Adjustment in Congenital Heart Surgery, version 1 (RACHS-1) in the administrative data and STS-European Association for Cardiothoracic Surgery (STAT) methodology in the registry. Median hospital surgical volume based on the registry data was 269 cases per year; mortality was 2.9%. Hospital volumes and mortality rates based on the administrative data were on average 10.7% and 4.7% lower, respectively, although this varied widely across hospitals. Hospital rankings for mortality based on the administrative versus registry data differed by 5 or more rank positions for 24% of hospitals, with a change in mortality tertile classification (high, middle, or low mortality) for 18% and a change in statistical outlier classification for 12%. Higher volume/complexity hospitals were most impacted. Agency for Healthcare Quality and Research (AHRQ) methods in the administrative data yielded similar results. Inaccuracies in case ascertainment in administrative versus clinical registry data can lead to important differences in assessment of hospital mortality rates for congenital heart surgery. Copyright © 2015 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
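
    A toy illustration of the kind of comparison reported here, using invented mortality figures to show how rank shifts and tertile reclassification between the two data sources can be tabulated (pandas assumed; only the thresholds echo the study):

        import pandas as pd

        # Hypothetical per-hospital mortality estimates from the two sources.
        df = pd.DataFrame({
            "hospital": ["A", "B", "C", "D", "E", "F"],
            "mort_registry": [2.1, 3.5, 2.9, 4.8, 1.7, 3.0],
            "mort_admin":    [1.8, 4.1, 2.2, 4.9, 2.6, 2.4],
        })
        for src in ("registry", "admin"):
            df[f"rank_{src}"] = df[f"mort_{src}"].rank(method="min")
            df[f"tertile_{src}"] = pd.qcut(df[f"mort_{src}"], 3,
                                           labels=["low", "mid", "high"])

        df["rank_shift"] = (df["rank_registry"] - df["rank_admin"]).abs()
        big_shift = (df["rank_shift"] >= 5).mean()       # share moved >= 5 positions
        tertile_change = (df["tertile_registry"] != df["tertile_admin"]).mean()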

  2. Using administrative data to track fall-related ambulatory care services in the Veterans Administration Healthcare system.

    PubMed

    Luther, Stephen L; French, Dustin D; Powell-Cope, Gail; Rubenstein, Laurence Z; Campbell, Robert

    2005-10-01

    The Veterans Administration (VA) Healthcare system, containing hospital and community-based outpatient clinics, provides the setting for the study. Summary data was obtained from the VA Ambulatory Events Database for fiscal years (FY) 1997-2001 and in-depth data for FY 2001. In FY 2001, the database included approximately 4 million unique patients with 60 million encounters. The purpose of this study was: 1) to quantify injuries and use of services associated with falls among the elderly treated in Veterans Administration (VA) ambulatory care settings using administrative data; 2) to compare fall-related services provided to elderly veterans with those provided to younger veterans. Retrospective analysis of administrative data. This study describes the trends (FY 1997-2001) and patterns of fall-related ambulatory care encounters (FY 2001) in the VA Healthcare System. An approximately four-fold increase in both encounters and patients seen was observed in FY 1997-2001, largely paralleling the growth of VA ambulatory care services. More than two-thirds of the patients treated were found to be over the age of 65. Veterans over the age of 65 were found to be more likely to receive care in the non-urgent setting and had higher numbers of co-morbid conditions than younger veterans. While nearly half of the encounters occurred in the Emergency/Urgent Care setting, fall-related injuries led to services across a wide spectrum of medical and surgical providers/departments. This study represents the first attempt to use the VA Ambulatory Events Database to study fall-related services provided to elderly veterans. In view of the aging population served by the VA and the movement to provide increased services in the outpatient setting, this database provides an important resource for researchers and administrators interested in the prevention and treatment of fall-related injuries.

  3. Influence of postnatal glucocorticoids on hippocampal-dependent learning varies with elevation patterns and administration methods

    DTIC Science & Technology

    2017-05-22

    The magnitude of these effects varies with the elevation patterns (level, duration, temporal fluctuation) achieved by different administration methods. Authors: Dragana I. Claflin, Kevin D. Schmidt, Zachary D. Vallandingham, Michal …

  4. Electronic surveillance and using administrative data to identify healthcare associated infections.

    PubMed

    Gastmeier, Petra; Behnke, Michael

    2016-08-01

    Traditional surveillance of healthcare associated infections (HCAI) is time consuming and error-prone. We have analysed the literature of the past year to look at new developments in this field. It is divided into three parts: new algorithms for electronic surveillance, the use of administrative data for surveillance of HCAI, and the definition of new endpoints of surveillance in accordance with an automatic surveillance approach. Most studies investigating electronic surveillance of HCAI have concentrated on bloodstream infection or surgical site infection. However, the lack of important parameters in hospital databases can lead to misleading results. The accuracy of administrative coding data was poor at identifying HCAI. New endpoints should be defined for automatic detection, with the most crucial step being to win clinicians' acceptance. Electronic surveillance with conventional endpoints is a successful method when hospital information systems implement key changes and enhancements. One requirement is access to hospital administration systems and clinical databases. Although administrative coding data are not the primary source of data for HCAI surveillance, they are an important component of a hospital-wide programme of automated surveillance. The implementation of new endpoints for surveillance is an approach that needs to be discussed further.

  5. The Protein Disease Database of human body fluids: II. Computer methods and data issues.

    PubMed

    Lemkin, P F; Orr, G A; Goldstein, M P; Creed, G J; Myrick, J E; Merril, C R

    1995-01-01

    The Protein Disease Database (PDD) is a relational database of proteins and diseases. With this database it is possible to screen for quantitative protein abnormalities associated with disease states. These quantitative relationships use data drawn from the peer-reviewed biomedical literature. Assays may also include those observed in high-resolution electrophoretic gels that offer the potential to quantitate many proteins in a single test as well as data gathered by enzymatic or immunologic assays. We are using the Internet World Wide Web (WWW) and the Web browser paradigm as an access method for wide distribution and querying of the Protein Disease Database. The WWW hypertext transfer protocol and its Common Gateway Interface make it possible to build powerful graphical user interfaces that can support easy-to-use data retrieval using query specification forms or images. The details of these interactions are totally transparent to the users of these forms. Using a client-server SQL relational database, user query access, initial data entry and database maintenance are all performed over the Internet with a Web browser. We discuss the underlying design issues, mapping mechanisms and assumptions that we used in constructing the system, data entry, access to the database server, security, and synthesis of derived two-dimensional gel image maps and hypertext documents resulting from SQL database searches.

  6. Software Application for Supporting the Education of Database Systems

    ERIC Educational Resources Information Center

    Vágner, Anikó

    2015-01-01

    The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…

  7. EasyKSORD: A Platform of Keyword Search Over Relational Databases

    NASA Astrophysics Data System (ADS)

    Peng, Zhaohui; Li, Jing; Wang, Shan

    Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need to write SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD that lets users and system administrators use and manage different KSORD systems in a simple manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems more effectively.

  8. 40 CFR 53.7 - Testing of methods at the initiative of the Administrator.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Testing of methods at the initiative of the Administrator. 53.7 Section 53.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Provisions § 53.7 Testing of methods at the initiative of the Administrator. (a) In the absence of an...

  9. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    PubMed

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes with single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches that maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging, or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier versus agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy of the final machine-human coded dataset (overall sensitivity/positive predictive value of 0.89). The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB(single-word) = NB(bi-gram) = SVM (i.e., narratives on which all three classifiers agreed) had very high performance (0.93 overall sensitivity/positive predictive value, with high sensitivity and positive predictive values across both large and small categories), leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as
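
    A minimal scikit-learn sketch of the agreement-filtering idea, on invented narratives (the study's actual features, training corpus, and tuning are not reproduced here): a narrative is auto-coded only when the single-word Naïve Bayes, bi-gram Naïve Bayes, and SVM classifiers agree, and routed to human review otherwise.

        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Invented training narratives and event codes.
        X = ["fell from ladder while painting", "struck by falling box in warehouse",
             "slipped on wet floor and fell", "cut hand on sharp blade"] * 25
        y = ["fall", "struck_by", "fall", "cut"] * 25

        nb_sw = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
        nb_bi = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
        svm = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        for clf in (nb_sw, nb_bi, svm):
            clf.fit(X, y)

        new = ["tripped over cable and fell", "finger caught in machine"]
        preds = np.array([clf.predict(new) for clf in (nb_sw, nb_bi, svm)])
        agree = (preds[0] == preds[1]) & (preds[1] == preds[2])

        auto_coded = {doc: preds[0][i] for i, doc in enumerate(new) if agree[i]}
        for_review = [doc for i, doc in enumerate(new) if not agree[i]]

    The alternative strategy described above, filtering on prediction strength, would instead sort narratives by the classifier's predicted probability and send the weakest fraction to review.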

  10. Online drug databases: a new method to assess and compare inclusion of clinically relevant information.

    PubMed

    Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro

    2013-08-01

    Evidence-Based Practice requires health care decisions to be based on the best available evidence. The "Information Mastery" model proposes that clinicians should use sources of information whose relevance and validity have previously been evaluated, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. The aim was to develop and test a method to evaluate the relevancy of DB contents by assessing the inclusion of information items deemed relevant for effective and safe drug use. The method comprised: hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; and a pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in the literature with morbidity-mortality and also widely consumed in Portugal. The main outcome measure was the calculation of individual and global scores for clinically relevant information items of drug monographs in databases, using the categorisation and quantification system created. A--Method development: selection of sections, subsections, relevant information items and corresponding requisites; a system to categorise and quantify their inclusion; and the score and RD calculation procedure. B--Pilot test: scores were calculated for the 9 databases; globally, all databases evaluated differed significantly from the "ideal" database; some DB performed
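
    The relative-difference calculation can be pictured with a small sketch (the item list and point values are invented; the article's actual categorisation system is more detailed):

        # Hypothetical inclusion scores per information item
        # (0 = absent, 1 = partial, 2 = complete).
        ideal = {"indications": 2, "dosing": 2, "interactions": 2, "pregnancy": 2}

        def relative_difference(db_scores, ideal):
            """Fractional shortfall of a database's total score from the ideal total."""
            total = sum(db_scores.get(item, 0) for item in ideal)
            best = sum(ideal.values())
            return (best - total) / best

        db_a = {"indications": 2, "dosing": 2, "interactions": 1, "pregnancy": 0}
        rd_a = relative_difference(db_a, ideal)   # 3/8 = 0.375 short of the ideal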

  11. Rhabdomyolysis after co-administration of a statin and fusidic acid: an analysis of the literature and of the WHO database of adverse drug reactions.

    PubMed

    Deljehier, Thomas; Pariente, Antoine; Miremont-Salamé, Ghada; Haramburu, Françoise; Nguyen, Linh; Rubin, Sébastien; Rigothier, Claire; Théophile, Hélène

    2018-05-01

    Following a severe case of rhabdomyolysis in our University Hospital after co-administration of atorvastatin and fusidic acid, we describe this interaction, as this combination is not clearly contraindicated in some countries, particularly for long-term treatment with fusidic acid. All cases of rhabdomyolysis during co-administration of a statin and fusidic acid were identified in the literature and in the World Health Organization adverse drug reaction database, VigiBase®. In the literature, 29 cases of rhabdomyolysis were identified; mean age was 66 years, median duration of co-administration before rhabdomyolysis occurrence was 21 days, and 28% of cases were fatal. In VigiBase®, 182 cases were retrieved; mean age was 68 years, median duration of co-administration before rhabdomyolysis was 31 days, and 24% of cases were fatal. Owing to the high fatality associated with this co-administration and the long duration of treatment before rhabdomyolysis occurrence, fusidic acid should be used only if there is no appropriate alternative, and statin therapy should be interrupted for the duration of fusidic acid therapy, and perhaps a week longer. Rarely will interruption of this sort have adverse consequences for the patient. © 2018 The British Pharmacological Society.

  12. Practical Applications of a Building Method to Construct Aerodynamic Database of Guided Missile Using Wind Tunnel Test Data

    NASA Astrophysics Data System (ADS)

    Kim, Duk-hyun; Lee, Hyoung-Jin

    2018-04-01

    A study of an efficient aerodynamic database modeling method was conducted. The creation of a database using the periodicity and symmetry characteristics of missile aerodynamic coefficients was investigated to minimize the number of wind tunnel test cases. In addition, we studied how to generate the aerodynamic database when the periodicity changes due to the installation of a protuberance, and how to conduct a zero calibration. Depending on the missile configuration, the required number of test cases changes, and some tests can be omitted. A database of aerodynamic coefficients over control surface deflection angles can be constructed using a phase shift. The validity of the modeling method was demonstrated by confirming that aerodynamic coefficients calculated using the modeling method agreed with wind tunnel test results.
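
    As a hedged illustration of the periodicity idea (not the authors' method): suppose, purely for illustration, a coefficient that is periodic in aerodynamic roll angle with a 90-degree period, as for a cruciform configuration, so tests covering 0-90 degrees determine every other angle.

        import numpy as np

        # Hypothetical wind tunnel results over one 90-degree period.
        tested_phi = np.array([0.0, 22.5, 45.0, 67.5, 90.0])      # roll angle, deg
        tested_coef = np.array([0.00, 0.05, 0.00, -0.05, 0.00])   # invented values

        def coef_at(phi_deg):
            """Fold any roll angle into the tested 0-90 deg interval, then interpolate."""
            folded = phi_deg % 90.0
            return np.interp(folded, tested_phi, tested_coef)

        value = coef_at(112.5)   # equals the tested value at 22.5 deg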

  13. In an occupational health surveillance study, auxiliary data from administrative health and occupational databases effectively corrected for nonresponse.

    PubMed

    Santin, Gaëlle; Geoffroy, Béatrice; Bénézet, Laetitia; Delézire, Pauline; Chatelot, Juliette; Sitta, Rémi; Bouyer, Jean; Gueguen, Alice

    2014-06-01

    To show how reweighting can correct for unit nonresponse bias in an occupational health surveillance survey by using data from administrative databases in addition to classic sociodemographic data. In 2010, about 10,000 workers covered by a French health insurance fund were randomly selected and were sent a postal questionnaire. Simultaneously, auxiliary data from routine health insurance and occupational databases were collected for all these workers. To model the probability of response to the questionnaire, logistic regressions were performed with these auxiliary data to compute weights for correcting unit nonresponse. Corrected prevalences of questionnaire variables were estimated under several assumptions regarding the missing data process. The impact of reweighting was evaluated by a sensitivity analysis. Respondents had more reimbursement claims for medical services than nonrespondents but fewer reimbursements for medical prescriptions or hospitalizations. Salaried workers, workers in service companies, or who had held their job longer than 6 months were more likely to respond. Corrected prevalences after reweighting were slightly different from crude prevalences for some variables but meaningfully different for others. Linking health insurance and occupational data effectively corrects for nonresponse bias using reweighting techniques. Sociodemographic variables may be not sufficient to correct for nonresponse. Copyright © 2014 Elsevier Inc. All rights reserved.
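
    A compact sketch of this reweighting logic on simulated data (variable names invented; the study's response model used many more auxiliary variables): a logistic model predicts each worker's response propensity from data available for everyone, and respondents are weighted by the inverse of that propensity.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        aux = pd.DataFrame({            # auxiliary data known for all sampled workers
            "n_claims": rng.poisson(3, n),
            "salaried": rng.integers(0, 2, n),
        })
        p_true = np.clip(0.3 + 0.1 * aux["salaried"] + 0.02 * aux["n_claims"], 0, 1)
        responded = (rng.random(n) < p_true).to_numpy()

        model = LogisticRegression().fit(aux, responded)
        propensity = model.predict_proba(aux)[:, 1]
        weights = 1.0 / propensity[responded]            # inverse-propensity weights

        answers = rng.integers(0, 2, responded.sum())    # questionnaire variable (respondents only)
        corrected_prevalence = np.average(answers, weights=weights)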

  14. Accuracy of Canadian health administrative databases in identifying patients with rheumatoid arthritis: a validation study using the medical records of rheumatologists.

    PubMed

    Widdifield, Jessica; Bernatsky, Sasha; Paterson, J Michael; Tu, Karen; Ng, Ryan; Thorne, J Carter; Pope, Janet E; Bombardier, Claire

    2013-10-01

    Health administrative data can be a valuable tool for disease surveillance and research. Few studies have rigorously evaluated the accuracy of administrative databases for identifying rheumatoid arthritis (RA) patients. Our aim was to validate administrative data algorithms to identify RA patients in Ontario, Canada. We performed a retrospective review of a random sample of 450 patients from 18 rheumatology clinics. Using rheumatologist-reported diagnosis as the reference standard, we tested and validated different combinations of physician billing, hospitalization, and pharmacy data. One hundred forty-nine rheumatology patients were classified as having RA and 301 were classified as not having RA based on our reference standard definition (study RA prevalence 33%). Overall, algorithms that included physician billings had excellent sensitivity (range 94-100%). Specificity and positive predictive value (PPV) were modest to excellent and increased when algorithms included multiple physician claims or specialist claims. The addition of RA medications did not significantly improve algorithm performance. The algorithm of "(1 hospitalization RA code ever) OR (3 physician RA diagnosis codes [claims] with ≥1 by a specialist in a 2-year period)" had a sensitivity of 97%, specificity of 85%, PPV of 76%, and negative predictive value of 98%. Most RA patients (84%) had an RA diagnosis code present in the administrative data within ±1 year of a rheumatologist's documented diagnosis date. We demonstrated that administrative data can be used to identify RA patients with a high degree of accuracy. RA diagnosis date and disease duration are fairly well estimated from administrative data in jurisdictions of universal health care insurance. Copyright © 2013 by the American College of Rheumatology.
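
    The winning algorithm quoted above is concrete enough to state in code; the sketch below (claim dates as ordinal day numbers, all data invented) also shows the standard accuracy calculation against a chart-review reference standard.

        def meets_ra_definition(has_hosp_ra_code, claims):
            """'1 hospitalization RA code ever' OR '3 physician RA claims with >=1
            by a specialist within a 2-year period'. claims: (day, is_specialist)."""
            if has_hosp_ra_code:
                return True
            claims = sorted(claims)
            for day, _ in claims:
                window = [c for c in claims if 0 <= c[0] - day <= 730]
                if len(window) >= 3 and any(spec for _, spec in window):
                    return True
            return False

        def diagnostic_accuracy(algorithm, reference):
            """Sensitivity, specificity, PPV, and NPV from paired boolean labels."""
            tp = sum(a and r for a, r in zip(algorithm, reference))
            fp = sum(a and not r for a, r in zip(algorithm, reference))
            fn = sum(not a and r for a, r in zip(algorithm, reference))
            tn = sum(not a and not r for a, r in zip(algorithm, reference))
            return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
                    "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}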

  15. Treatment patterns in hyperlipidaemia patients based on administrative claim databases in Japan.

    PubMed

    Wake, Mayumi; Onishi, Yoshie; Guelfucci, Florent; Oh, Akinori; Hiroi, Shinzo; Shimasaki, Yukio; Teramoto, Tamio

    2018-05-01

    Real-world evidence on treatment of hyperlipidaemia (HLD) in Japan is limited. We aimed to describe treatment patterns, persistence with, and adherence to treatment in Japanese patients with HLD. Retrospective analyses of adult HLD patients receiving drug therapy in 2014-2015 were conducted using the Japan Medical Data Center (JMDC) and Medical Data Vision (MDV) databases. Depending on their HLD treatment history, individuals were categorised as untreated (UT) or previously treated (PT), and were followed for at least 12 months. Outcomes of interest included prescribing patterns of HLD drug classes, persistence with treatment at 12 months, and adherence to treatment. Data for 49,582 and 53,865 patients from the JMDC and MDV databases, respectively, were analysed. First-line HLD prescriptions for UT patients were predominantly for moderate statins (JMDC: 75.9%, MDV: 77.0%). PT patients most commonly received combination therapy (JMDC: 43.9%, MDV: 52.6%). Approximately half of the UT patients discontinued treatment during observation. Within each cohort, persistence rates were lower in UT patients than in PT patients (JMDC: 45.0% vs. 77.5%; MDV: 51.9% vs. 85.3%). Adherence was ≥80% across almost all HLD drug classes, and was slightly lower in the JMDC cohort than MDV cohort. Most common prescriptions were moderate statins in UT patients and combination therapy in PT patients. The high discontinuation rate of HLD therapy in UT patients warrants further investigation and identification of methods to encourage and support long-term persistence. Copyright © 2018. Published by Elsevier B.V.

  16. Attenuation relation for strong motion in Eastern Java based on appropriate database and method

    NASA Astrophysics Data System (ADS)

    Mahendra, Rian; Rohadi, Supriyanto; Rudyanto, Ariska

    2017-07-01

    The selection and determination of an attenuation relation has become important for seismic hazard assessment in active seismic regions. This research initially constructs an appropriate strong motion database, including site condition and earthquake type. The data set consists of a large number of earthquakes of 5 ≤ Mw ≤ 9 at distances less than 500 km that occurred around Java from 2009 until 2016. The locations and depths of the earthquakes were relocated using the double difference method to improve the quality of the database. Strong motion data from twelve BMKG accelerographs located in East Java were used. Site condition was characterized using the dominant period and Vs30. Earthquakes were classified into crustal, interface, and intraslab events based on slab geometry analysis. A total of 10 Ground Motion Prediction Equations (GMPEs) were tested against the database using the likelihood method (Scherbaum et al., 2004) and the Euclidean Distance Ranking method (Kale and Akkar, 2012). The evaluation led to a set of GMPEs that can be applied for seismic hazard in East Java, where the strong motion data were collected. Because the deviations of the candidate GMPEs remained high, some GMPEs were modified using an inversion method. Validation was performed by analysing the attenuation curves of the selected GMPE against observation data from 2015 to 2016. The results show that the selected GMPE is suitable for estimating PGA values in East Java.
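
    As a simplified stand-in for GMPE goodness-of-fit ranking (deliberately not the actual LH or EDR statistics cited above), candidate models can be ranked by the median absolute normalized residual against observed PGA:

        import numpy as np

        obs_ln_pga = np.log(np.array([0.12, 0.05, 0.30, 0.08]))   # hypothetical obs (g)

        gmpes = {
            # each entry: (median ln-PGA prediction per record, ln-space sigma)
            "model_A": (np.log(np.array([0.10, 0.06, 0.25, 0.07])), 0.6),
            "model_B": (np.log(np.array([0.20, 0.02, 0.50, 0.15])), 0.7),
        }

        def rank_score(pred_ln, sigma):
            z = (obs_ln_pga - pred_ln) / sigma      # normalized residuals
            return np.median(np.abs(z))

        ranking = sorted(gmpes, key=lambda m: rank_score(*gmpes[m]))  # best first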

  17. Aptamer Database

    PubMed Central

    Lee, Jennifer F.; Hesselberth, Jay R.; Meyers, Lauren Ancel; Ellington, Andrew D.

    2004-01-01

    The aptamer database is designed to contain comprehensive sequence information on aptamers and unnatural ribozymes that have been generated by in vitro selection methods. Such data are not normally collected in ‘natural’ sequence databases, such as GenBank. Besides serving as a storehouse of sequences that may have diagnostic or therapeutic utility, the database serves as a valuable resource for theoretical biologists who describe and explore fitness landscapes. The database is updated monthly and is publicly available at http://aptamer.icmb.utexas.edu/. PMID:14681367

  18. Development and validation of an administrative case definition for inflammatory bowel diseases

    PubMed Central

    Rezaie, Ali; Quan, Hude; Fedorak, Richard N; Panaccione, Remo; Hilsden, Robert J

    2012-01-01

    BACKGROUND: A population-based database of inflammatory bowel disease (IBD) patients is invaluable to explore and monitor the epidemiology and outcome of the disease. In this context, an accurate and validated population-based case definition for IBD becomes critical for researchers and health care providers. METHODS: IBD and non-IBD individuals were identified through an endoscopy database in a western Canadian health region (Calgary Health Region, Calgary, Alberta). Subsequently, using a novel algorithm, a series of case definitions were developed to capture IBD cases in the administrative databases. In the second stage of the study, the criteria were validated in the Capital Health Region (Edmonton, Alberta). RESULTS: A total of 150 IBD case definitions were developed using 1399 IBD patients and 15,439 controls in the development phase. In the validation phase, 318,382 endoscopic procedures were searched and 5201 IBD patients were identified. After consideration of sensitivity, specificity and temporal stability of each validated case definition, a diagnosis of IBD was assigned to individuals who experienced at least two hospitalizations or had four physician claims, or two medical contacts in the Ambulatory Care Classification System database with an IBD diagnostic code within a two-year period (specificity 99.8%; sensitivity 83.4%; positive predictive value 97.4%; negative predictive value 98.5%). An alternative case definition was developed for regions without access to the Ambulatory Care Classification System database. A novel scoring system was developed that detected Crohn disease and ulcerative colitis patients with a specificity of >99% and a sensitivity of 99.1% and 86.3%, respectively. CONCLUSION: Through a robust methodology, a reproducible set of criteria to capture IBD patients through administrative databases was developed. The methodology may be used to develop similar administrative definitions for chronic diseases. PMID:23061064
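
    The final case definition is concrete enough to express directly; a sketch with event sources reduced to three labels and dates handled with the standard library (illustrative, not the authors' code):

        from datetime import date, timedelta

        def meets_ibd_definition(events):
            """events: list of (date, source), source in {'hospital', 'claim', 'accs'},
            each carrying an IBD diagnostic code. Case if, within any 2-year window:
            >=2 hospitalizations, or >=4 physician claims, or >=2 ACCS contacts."""
            thresholds = {"hospital": 2, "claim": 4, "accs": 2}
            events = sorted(events)
            for start, _ in events:
                window = [s for d, s in events
                          if timedelta(0) <= d - start <= timedelta(days=730)]
                for source, k in thresholds.items():
                    if window.count(source) >= k:
                        return True
            return False

        pt = [(date(2010, 1, 5), "claim"), (date(2010, 6, 1), "claim"),
              (date(2011, 2, 3), "claim"), (date(2011, 11, 20), "claim")]
        assert meets_ibd_definition(pt)   # four claims within two years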

  19. 47 CFR 52.12 - North American Numbering Plan Administrator and B&C Agent.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false North American Numbering Plan Administrator and... Administrator and B&C Agent. The North American Numbering Plan Administrator (“NANPA”) and the associated “B&C... rating information, into the industry-approved database(s) for dissemination of such information. This...

  20. 47 CFR 52.12 - North American Numbering Plan Administrator and B&C Agent.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false North American Numbering Plan Administrator and... Administrator and B&C Agent. The North American Numbering Plan Administrator (“NANPA”) and the associated “B&C... rating information, into the industry-approved database(s) for dissemination of such information. This...

  1. 47 CFR 52.12 - North American Numbering Plan Administrator and B&C Agent.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false North American Numbering Plan Administrator and... Administrator and B&C Agent. The North American Numbering Plan Administrator (“NANPA”) and the associated “B&C... rating information, into the industry-approved database(s) for dissemination of such information. This...

  2. 47 CFR 52.12 - North American Numbering Plan Administrator and B&C Agent.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false North American Numbering Plan Administrator and... Administrator and B&C Agent. The North American Numbering Plan Administrator (“NANPA”) and the associated “B&C... rating information, into the industry-approved database(s) for dissemination of such information. This...

  3. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research

    PubMed Central

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Background Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Method Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g., name, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Results Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Conclusion Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA among the Asia-Pacific region is needed. PMID:26560127

  4. The Microcomputer in the Administrative Office.

    ERIC Educational Resources Information Center

    Huntington, Fred

    1983-01-01

    Discusses microcomputer uses for administrative computing in education at site level and central office and recommends that administrators start with a word processing program for time management, an electronic spreadsheet for financial accounting, a database management system for inventories, and self-written programs to alleviate paper…

  5. Staradmin -- Starlink User Database Maintainer

    NASA Astrophysics Data System (ADS)

    Fish, Adrian

    The subject of this SSN is a utility called STARADMIN. This utility allows the system administrator to build and maintain a Starlink User Database (UDB). The principal source of information for each user is a text file, named after their username. The content of each file is a list consisting of one keyword followed by the relevant user data per line. These user database files reside in a single directory. The STARADMIN program is used to manipulate these user data files and automatically generate user summary lists.

  6. Mapping the literature of nursing administration.

    PubMed

    Galganski, Carol J

    2006-04-01

    As part of Phase I of a project to map the literature of nursing, sponsored by the Nursing and Allied Health Resources Section of the Medical Library Association, this study identifies the core literature cited in nursing administration and the indexing services that provide access to the core journals. The results of this study will assist librarians and end users searching for information related to this nursing discipline, as well as database producers who might consider adding specific titles to their indexing services. Using the common methodology described in the overview article, five source journals for nursing administration were identified and selected for citation analysis over a three-year period, 1996 to 1998, to identify the most frequently cited titles according to Bradford's Law of Scattering. From this core of most productive journal titles, the bibliographic databases that provide the best access to these titles were identified. Results reveal that nursing administration literature relies most heavily on journal articles and on those titles identified as core nursing administrative titles. When the indexing coverage of nine services is compared, PubMed/MEDLINE and CINAHL provide the most comprehensive coverage of this nursing discipline. No one indexing service adequately covers this nursing discipline. Researchers needing comprehensive coverage in this area must search more than one database to effectively research their projects. While PubMed/MEDLINE and CINAHL provide more coverage for this discipline than the other indexing services, none is sufficiently broad in scope to provide indexing of nursing, health care management, and medical literature in a single file. Nurse administrators using the literature to research current work issues need to review not only the nursing titles covered by CINAHL but should also include the major weekly medical titles, core titles in health care administration, and general business sources if they wish to

  7. The Québec BCG Vaccination Registry (1956-1992): assessing data quality and linkage with administrative health databases.

    PubMed

    Rousseau, Marie-Claude; Conus, Florence; Li, Jun; Parent, Marie-Élise; El-Zein, Mariam

    2014-01-09

    Vaccination registries have undoubtedly proven useful for estimating vaccination coverage as well as examining vaccine safety and effectiveness. However, their use for population health research is often limited. The Bacillus Calmette-Guérin (BCG) Vaccination Registry for the Canadian province of Québec comprises some 4 million vaccination records (1926-1992). This registry represents a unique opportunity to study potential associations between BCG vaccination and various health outcomes. So far, such studies have been hampered by the absence of a computerized version of the registry. We determined the completeness and accuracy of the recently computerized BCG Vaccination Registry, as well as examined its linkability with demographic and administrative medical databases. Two systematically selected verification samples, each representing ~0.1% of the registry, were used to ascertain accuracy and completeness of the electronic BCG Vaccination Registry. Agreement between the paper [listings (n = 4,987 records) and vaccination certificates (n = 4,709 records)] and electronic formats was determined along several nominal and BCG-related variables. Linkage feasibility with the Birth Registry (probabilistic approach) and provincial Healthcare Registration File (deterministic approach) was examined using nominal identifiers for a random sample of 3,500 individuals born from 1961 to 1974 and BCG vaccinated between 1970 and 1974. Exact agreement was observed for 99.6% and 81.5% of records upon comparing, respectively, the paper listings and vaccination certificates to their corresponding computerized records. The proportion of successful linkage was 77% with the Birth Registry, 70% with the Healthcare Registration File, 57% with both, and varied by birth year. Computerization of this Registry yielded excellent results. The registry was complete and accurate, and linkage with administrative databases was highly feasible. This study represents the first step towards

  8. Video Databases: An Emerging Tool in Business Education

    ERIC Educational Resources Information Center

    MacKinnon, Gregory; Vibert, Conor

    2014-01-01

    A video database of business-leader interviews has been implemented in the assignment work of students in a Bachelor of Business Administration program at a primarily-undergraduate liberal arts university. This action research study was designed to determine the most suitable assignment work to associate with the database in a Business Strategy…

  9. A Web-based database for pathology faculty effort reporting.

    PubMed

    Dee, Fred R; Haugen, Thomas H; Wynn, Philip A; Leaven, Timothy C; Kemp, John D; Cohen, Michael B

    2008-04-01

    To ensure appropriate mission-based budgeting and equitable distribution of funds for faculty salaries, our compensation committee developed a pathology-specific effort reporting database. Principles included the following: (1) measurement should be done by web-based databases; (2) most entry should be done by departmental administration or be relational to other databases; (3) data entry categories should be aligned with funding streams; and (4) units of effort should be equal across categories of effort (service, teaching, research). MySQL was used for all data transactions (http://dev.mysql.com/downloads), and scripts were constructed using PERL (http://www.perl.org). Data are accessed with forms that correspond to fields in the database. The committee's work resulted in a novel database using pathology value units (PVUs) as a standard quantitative measure of effort for activities in an academic pathology department. The most common calculation was to estimate the number of hours required for a specific task, divide by 2080 hours (a Medicare year) and then multiply by 100. Other methods included assigning a baseline PVU for program, laboratory, or course directorship with an increment for each student or staff in that unit. With these methods, a faculty member should acquire approximately 100 PVUs. Some outcomes include (1) plotting PVUs versus salary to identify outliers for salary correction, (2) quantifying effort in activities outside the department, (3) documenting salary expenditure for unfunded research, (4) evaluating salary equity by plotting PVUs versus salary by sex, and (5) aggregating data by category of effort for mission-based budgeting and long-term planning.
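
    The core arithmetic is easy to make concrete. In the sketch below, the 2080-hour Medicare year and the hours-based formula come from the article, while the directorship baseline and increment values are invented:

        MEDICARE_YEAR_HOURS = 2080  # hours in a Medicare year, per the article

        def pvu_from_hours(task_hours):
            """Most common calculation: estimated hours / 2080 * 100."""
            return task_hours / MEDICARE_YEAR_HOURS * 100

        def pvu_directorship(baseline, increment, n_people):
            """Alternative: baseline PVU for a program/lab/course directorship plus
            a per-student or per-staff increment (values here are illustrative)."""
            return baseline + increment * n_people

        # A faculty member spending 520 hours/year on clinical service earns 25 PVUs;
        # a course director with 40 students might add a directorship-based share.
        service = pvu_from_hours(520)               # 25.0
        teaching = pvu_directorship(5.0, 0.1, 40)   # 9.0 (illustrative numbers)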

  10. New tools and methods for direct programmatic access to the dbSNP relational database.

    PubMed

    Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
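
    A minimal sketch of programmatic access to such a local MySQL mirror (the connection details and the table/column names below are placeholders, not the actual dbSNP schema; the mysql-connector-python package is assumed):

        import mysql.connector  # assumes the mysql-connector-python package

        # Connect to a locally installed dbSNP MySQL instance.
        conn = mysql.connector.connect(
            host="localhost", user="dbsnp", password="secret", database="dbsnp_human"
        )
        cur = conn.cursor()
        cur.execute(
            "SELECT snp_id, chr, pos FROM snp_positions "
            "WHERE chr = %s AND pos BETWEEN %s AND %s",
            ("7", 117_000_000, 117_500_000),
        )
        for snp_id, chrom, pos in cur.fetchall():
            print(f"rs{snp_id}\t{chrom}:{pos}")
        cur.close()
        conn.close()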

  11. New tools and methods for direct programmatic access to the dbSNP relational database

    PubMed Central

    Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.

    2011-01-01

    Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260

  12. Accuracy of ICD-10 Coding System for Identifying Comorbidities and Infectious Conditions Using Data from a Thai University Hospital Administrative Database.

    PubMed

    Rattanaumpawan, Pinyo; Wongkamhla, Thanyarak; Thamlikitkul, Visanu

    2016-04-01

    To determine the accuracy of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) coding system in identifying comorbidities and infectious conditions using data from a Thai university hospital administrative database. A retrospective cross-sectional study was conducted among patients hospitalized in six general medicine wards at Siriraj Hospital. ICD-10 code data were identified and retrieved directly from the hospital administrative database. Patient comorbidities were captured using the ICD-10 coding algorithm for the Charlson comorbidity index. Infectious conditions were captured using groups of ICD-10 diagnostic codes that were carefully prepared by two independent infectious disease specialists. The accuracy of ICD-10 codes combined with microbiological data for diagnosis of urinary tract infection (UTI) and bloodstream infection (BSI) was evaluated. Clinical data gathered from chart review were considered the gold standard in this study. Between February 1 and May 31, 2013, a chart review of 546 hospitalization records was conducted. The mean age of hospitalized patients was 62.8 ± 17.8 years and 65.9% of patients were female. Median length of stay [range] was 10.0 [1.0-353.0] days and hospital mortality was 21.8%. Conditions with ICD-10 codes that had good sensitivity (90% or higher) were diabetes mellitus and HIV infection. Conditions with ICD-10 codes that had good specificity (90% or higher) were cerebrovascular disease, chronic lung disease, diabetes mellitus, cancer, HIV infection, and all infectious conditions. By combining ICD-10 codes with microbiological results, sensitivity increased from 49.5% to 66% for UTI and from 78.3% to 92.8% for BSI. The ICD-10 coding algorithm is reliable only in some selected conditions, including underlying diabetes mellitus and HIV infection. Combining microbiological results with ICD-10 codes increased the sensitivity of ICD-10 codes for identifying BSI. Future research is

  13. The Education of Librarians for Data Administration.

    ERIC Educational Resources Information Center

    Koenig, Michael E. D.; Kochoff, Stephen T.

    1983-01-01

    Argues that the increasing importance of database management systems (DBMS) and recognition of the information dependency of business planning are creating new job opportunities for librarians/information technicians. Highlights include development and functions of DBMSs, data and database administration, potential for librarians, and implications…

  14. [Innovative therapeutic strategies for intravesical drug administration].

    PubMed

    Moch, C; Salmon, D; Rome, P; Marginean, R; Pivot, C; Colombel, M; Pirot, F

    2013-05-01

    Perspectives on innovative pharmaceutical molecules and intravesical administration of pharmacological agents are presented in this review, carried out from the recent literature. The review was built using the PubMed and ScienceDirect databases with 20 keywords, yielding 34 publications between 1983 and 2012. The number of referenced articles on ScienceDirect has increased in recent years, highlighting the interest of scientists in intravesical drug administration and the relevance of innovative drug delivery systems. Different modalities of intravesical administration using physical (e.g., iontophoresis, electroporation) or chemical techniques (e.g., enzymes, solvents, nanoparticles, liposomes, hydrogels) based on novel formulation methods are reported. Finally, the development of biopharmaceuticals (e.g., bacillus Calmette-Guérin, interferon α) and gene therapies is also presented and analyzed in this review. The present review highlights new developments in the pipeline of emerging intravesical drug administration strategies. Knowledge of all these therapies allows practitioners to propose a specific and tailored treatment to each patient while limiting systemic side effects. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  15. Leadership Styles of Nursing Home Administrators and Their Association with Staff Turnover

    ERIC Educational Resources Information Center

    Donoghue, Christopher; Castle, Nicholas G.

    2009-01-01

    Purpose: The purpose of this study was to examine the associations between nursing home administrator (NHA) leadership style and staff turnover. Design and Methods: We analyzed primary data from a survey of 2,900 NHAs conducted in 2005. The Online Survey Certification and Reporting database and the Area Resource File were utilized to extract…

  16. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is "smart": it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of the search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.

  17. A systematic review of validated methods to capture stillbirth and spontaneous abortion using administrative or claims data.

    PubMed

    Likis, Frances E; Sathe, Nila A; Carnahan, Ryan; McPheeters, Melissa L

    2013-12-30

    To identify and assess diagnosis, procedure, and pharmacy dispensing codes used to identify stillbirths and spontaneous abortions in administrative and claims databases from the United States or Canada. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to stillbirth or spontaneous abortion. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics and assessed each study's methodological rigor using a pre-defined approach. Ten publications addressing stillbirth and four addressing spontaneous abortion met our inclusion criteria. The International Classification of Diseases, Ninth Revision (ICD-9) codes most commonly used in algorithms for stillbirth were those for intrauterine death (656.4) and stillborn outcomes of delivery (V27.1, V27.3-V27.4, and V27.6-V27.7). Papers identifying spontaneous abortion used codes for missed abortion and spontaneous abortion (632 and 634.x), as well as V27.0-V27.7. Only two studies identifying stillbirth reported validation of algorithms. The overall positive predictive value of the algorithms was high (99%-100%), and one study reported an algorithm with 86% sensitivity. However, the predictive value of individual codes was not assessed and study populations were limited to specific geographic areas. Additional validation studies with a nationally representative sample are needed to confirm the optimal algorithm to identify stillbirths or spontaneous abortions in administrative and claims databases. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Appending Limited Clinical Data to an Administrative Database for Acute Myocardial Infarction Patients: The Impact on the Assessment of Hospital Quality.

    PubMed

    Hannan, Edward L; Samadashvili, Zaza; Cozzens, Kimberly; Jacobs, Alice K; Venditti, Ferdinand J; Holmes, David R; Berger, Peter B; Stamato, Nicholas J; Hughes, Suzanne; Walford, Gary

    2016-05-01

    Hospitals' risk-standardized mortality rates and outlier status (significantly higher/lower rates) are reported by the Centers for Medicare and Medicaid Services (CMS) for acute myocardial infarction (AMI) patients using Medicare claims data. New York now has AMI claims data with blood pressure and heart rate added. The objective of this study was to see whether the appended database yields different hospital assessments than standard claims data. New York State clinically appended claims data for AMI were used to create 2 different risk models based on CMS methods: 1 with and 1 without the added clinical data. Model discrimination was compared, and differences between the models in hospital outlier status and tertile status were examined. Mean arterial pressure and heart rate were both significant predictors of mortality in the clinically appended model. The C statistic for the model with the clinical variables added was significantly higher (0.803 vs. 0.773, P<0.001). The model without clinical variables identified 10 low outliers and all of them were percutaneous coronary intervention hospitals. When clinical variables were included in the model, only 6 of those 10 hospitals were low outliers, but there were 2 new low outliers. The model without clinical variables had only 3 high outliers, and the model with clinical variables included identified 2 new high outliers. Appending even a small number of clinical data elements to administrative data resulted in a difference in the assessment of hospital mortality outliers for AMI. The strategy of adding limited but important clinical data elements to administrative datasets should be considered when evaluating hospital quality for procedures and other medical conditions.
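
    The effect of appending clinical variables on model discrimination can be illustrated on simulated data (a plain logistic model with in-sample AUC; the CMS-style risk models are hierarchical and validated differently, so this is only a sketch):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 5000
        age = rng.normal(70, 10, n)
        map_ = rng.normal(90, 15, n)   # mean arterial pressure (clinical add-on)
        hr = rng.normal(85, 20, n)     # heart rate (clinical add-on)
        logit = -6 + 0.05 * age - 0.03 * (map_ - 90) + 0.02 * (hr - 85)
        died = rng.random(n) < 1 / (1 + np.exp(-logit))

        X_admin = np.column_stack([age])             # claims-only predictors
        X_plus = np.column_stack([age, map_, hr])    # claims + clinical add-ons

        for label, X in [("admin only", X_admin), ("with clinical", X_plus)]:
            p = LogisticRegression(max_iter=1000).fit(X, died).predict_proba(X)[:, 1]
            print(label, round(roc_auc_score(died, p), 3))  # C statistic per model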

  19. Organizations challenged by global database development

    USGS Publications Warehouse

    Sturdevant, J.A.; Eidenshink, J.C.; Loveland, Thomas R.

    1991-01-01

    Several international programs have identified the need for a global 1-kilometer spatial database for land cover and land characterization studies. In 1992, the US Geological Survey (USGS) EROS Data Center (EDC), the European Space Agency (ESA), the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) will collect and archive all 1-kilometer Advanced Very High Resolution Radiometer (AVHRR) data acquired during afternoon orbital passes over land.

  20. Surveillance of systemic autoimmune rheumatic diseases using administrative data.

    PubMed

    Bernatsky, S; Lix, L; Hanly, J G; Hudson, M; Badley, E; Peschken, C; Pineau, C A; Clarke, A E; Fortin, P R; Smith, M; Bélisle, P; Lagace, C; Bergeron, L; Joseph, L

    2011-04-01

    There is growing interest in developing tools and methods for the surveillance of chronic rheumatic diseases, using existing resources such as administrative health databases. To illustrate how this might work, we used population-based administrative data to estimate and compare the prevalence of systemic autoimmune rheumatic diseases (SARDs) across three Canadian provinces, assessing for regional differences and the effects of demographic factors. Cases of SARDs (systemic lupus erythematosus, scleroderma, primary Sjogren's, polymyositis/dermatomyositis) were ascertained from provincial physician billing and hospitalization data. We combined information from three case definitions, using hierarchical Bayesian latent class regression models that account for the imperfect nature of each case definition. Using methods that account for the imperfect nature of both billing and hospitalization databases, we estimated the overall prevalence of SARDs to be approximately 2-3 cases per 1,000 residents. Stratified prevalence estimates suggested similar demographic trends across provinces (i.e. greater prevalence in females versus males, and in persons of older age). The prevalence in older females approached or exceeded 1 in 100, which may reflect the high burden of primary Sjogren's syndrome in this group. Adjusting for demographics, there was a greater prevalence in urban versus rural settings. In our work, prevalence estimates had good face validity and provided useful information about potential regional and demographic variations. Our results suggest that surveillance of some rheumatic diseases using administrative data may indeed be feasible. Our work highlights the usefulness of using multiple data sources, adjusting for the error in each.
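
    The authors' hierarchical Bayesian latent class regression models are beyond a short sketch, but the core idea of correcting an apparent prevalence for an imperfect case definition can be illustrated with the simpler Rogan-Gladen estimator; note this is a different, non-Bayesian technique shown only for intuition, and the numbers are invented.

        def rogan_gladen(apparent: float, sens: float, spec: float) -> float:
            """Correct an apparent prevalence for an imperfect case definition."""
            return (apparent + spec - 1.0) / (sens + spec - 1.0)

        # e.g. 0.4% of residents meet a billing-code definition whose estimated
        # sensitivity is 80% and specificity is 99.8% (illustrative values):
        print(rogan_gladen(0.004, 0.80, 0.998))  # ~0.0025, i.e. 2-3 per 1,000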

  1. The Development of a Korean Drug Dosing Database

    PubMed Central

    Kim, Sun Ah; Kim, Jung Hoon; Jang, Yoo Jin; Jeon, Man Ho; Hwang, Joong Un; Jeong, Young Mi; Choi, Kyung Suk; Lee, Iyn Hyang; Jeon, Jin Ok; Lee, Eun Sook; Lee, Eun Kyung; Kim, Hong Bin; Chin, Ho Jun; Ha, Ji Hye; Kim, Young Hoon

    2011-01-01

    Objectives This report describes the development process of a drug dosing database for ethical drugs approved by the Korea Food & Drug Administration (KFDA). The goal of this study was to develop a computerized system that supports physicians' prescribing decisions, particularly in regards to medication dosing. Methods The advisory committee, composed of doctors, pharmacists, and nurses from the Seoul National University Bundang Hospital, pharmacists familiar with drug databases, KFDA officials, and software developers from BIT Computer Co. Ltd., analyzed approved KFDA drug dosing information, defined the fields and properties of the information structure, and designed a management program used to enter dosing information. The management program was developed as a web-based system that allows multiple researchers to input drug dosing information in an organized manner. The whole process was improved by adding additional input fields and eliminating unnecessary existing fields as the dosing information was entered, resulting in an improved field structure. Results Usage and dosing information for a total of 16,994 drugs sold in the Korean market in July 2009, excluding drugs meeting the exclusion criteria (e.g., radioactive drugs, X-ray contrast media), were compiled into a database. Conclusions The drug dosing database was successfully developed, and the dosing information for new drugs can be continually maintained through the management mode. This database will be used to develop drug utilization review standards and to provide appropriate dosing information. PMID:22259729

  2. Accuracy of lung cancer ICD-9-CM codes in Umbria, Napoli 3 Sud and Friuli Venezia Giulia administrative healthcare databases: a diagnostic accuracy study

    PubMed Central

    Montedori, Alessandro; Bidoli, Ettore; Serraino, Diego; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Franchini, David; Granata, Annalisa; Ciullo, Valerio; Vitale, Maria Francesca; Gobbato, Michele; Chiari, Rita; Cozzolino, Francesco; Orso, Massimiliano; Orlandi, Walter

    2018-01-01

    Objectives To assess the accuracy of International Classification of Diseases 9th Revision–Clinical Modification (ICD-9-CM) codes in identifying subjects with lung cancer. Design A cross-sectional diagnostic accuracy study comparing ICD-9-CM 162.x code (index test) in primary position with medical chart (reference standard). Case ascertainment was based on the presence of a primary nodular lesion in the lung and cytological or histological documentation of cancer from a primary or metastatic site. Setting Three operative units: administrative databases from Umbria Region (890 000 residents), ASL Napoli 3 Sud (NA) (1 170 000 residents) and Friuli Venezia Giulia (FVG) Region (1 227 000 residents). Participants Incident subjects with lung cancer (n=386) diagnosed in primary position between 2012 and 2014 and a population of non-cases (n=280). Outcome measures Sensitivity, specificity and positive predictive value (PPV) for 162.x code. Results 130 cases and 94 non-cases were randomly selected from each database and the corresponding medical charts were reviewed. Most of the diagnoses for lung cancer were performed in medical departments. True positive rates were high for all the three units. Sensitivity was 99% (95% CI 95% to 100%) for Umbria, 97% (95% CI 91% to 100%) for NA, and 99% (95% CI 95% to 100%) for FVG. The false positive rates were 24%, 37% and 23% for Umbria, NA and FVG, respectively. PPVs were 79% (73% to 83%) for Umbria, 58% (53% to 63%) for NA and 79% (73% to 84%) for FVG. Conclusions Case ascertainment for lung cancer based on imaging or endoscopy associated with histological examination yielded an excellent sensitivity in all the three administrative databases. PPV was moderate for Umbria and FVG but lower for NA. PMID:29773701
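
    A minimal sketch of the validation statistics reported in studies like this one, computed from a chart-review two-by-two table; the cell counts below are illustrative, not the study's actual data.

        def validation_stats(tp: int, fp: int, fn: int, tn: int) -> dict:
            """Diagnostic accuracy measures from a chart-review 2x2 table."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Illustrative counts only: 129 true positives among 159 code-positive
        # charts gives a PPV near 0.81 with sensitivity near 0.99.
        print(validation_stats(tp=129, fp=30, fn=1, tn=64))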

  3. Use of Patient Registries and Administrative Datasets for the Study of Pediatric Cancer

    PubMed Central

    Rice, Henry E.; Englum, Brian R.; Gulack, Brian C.; Adibe, Obinna O.; Tracy, Elizabeth T.; Kreissman, Susan G.; Routh, Jonathan C.

    2015-01-01

    Analysis of data from large administrative databases and patient registries is increasingly being used to study childhood cancer care, although the value of these data sources remains unclear to many clinicians. Interpretation of large databases requires a thorough understanding of how the dataset was designed, how data were collected, and how to assess data quality. This review will detail the role of administrative databases and registry databases for the study of childhood cancer, tools to maximize information from these datasets, and recommendations to improve the use of these databases for the study of pediatric oncology. PMID:25807938

  4. 75 FR 18255 - Passenger Facility Charge Database System for Air Carrier Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... Facility Charge Database System for Air Carrier Reporting AGENCY: Federal Aviation Administration (FAA... the Passenger Facility Charge (PFC) database system to report PFC quarterly report information. In... developed a national PFC database system in order to more easily track the PFC program on a nationwide basis...

  5. Drug-Associated Acute Kidney Injury Identified in the United States Food and Drug Administration Adverse Event Reporting System Database.

    PubMed

    Welch, Hanna K; Kellum, John A; Kane-Gill, Sandra L

    2018-06-08

    Acute kidney injury (AKI) is a common condition associated with both short-term and long-term consequences including dialysis, chronic kidney disease, and mortality. Although the United States Food and Drug Administration Adverse Event Reporting System (FAERS) database is a powerful tool to examine drug-associated events, to our knowledge, no study has analyzed this database to identify the most common drugs reported with AKI. The objective of this study was to analyze AKI reports and associated medications in the FAERS database. Retrospective pharmacovigilance disproportionality analysis. FAERS database. We queried the FAERS database for reports of AKI from 2004 quarter 1 through 2015 quarter 3. Extracted drugs were assessed using published references and categorized as known, possible, or new potential nephrotoxins. The reporting odds ratio (ROR), a measure of reporting disproportionality, was calculated for the 20 most frequently reported drugs in each category. We retrieved 7,241,385 adverse event reports, of which 193,996 (2.7%) included a report of AKI. Of the AKI reports, 16.5% were known nephrotoxins, 18.6% were possible nephrotoxins, and 64.8% were new potential nephrotoxins. Among the most commonly reported drugs, those with the highest AKI ROR were aprotinin (7,614 reports; ROR 115.70, 95% confidence interval [CI] 110.63-121.01), sodium phosphate (1,687 reports; ROR 55.81, 95% CI 51.78-60.17), furosemide (1,743 reports; ROR 12.61, 95% CI 11.94-13.32), vancomycin (1,270 reports; ROR 12.19, 95% CI 11.45-12.99), and metformin (4,701 reports; ROR 10.65, 95% CI 10.31-11.00). The combined RORs for the 20 most frequently reported drugs with each nephrotoxin classification were 3.71 (95% CI 3.66-3.76) for known nephrotoxins, 2.09 (95% CI 2.06-2.12) for possible nephrotoxins, and 1.55 (95% CI 1.53-1.57) for new potential nephrotoxins. AKI was a common reason for adverse event reporting in the FAERS. Most AKI reports were generated for medications not recognized as nephrotoxins.
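
    A minimal sketch of the reporting odds ratio and its 95% confidence interval, computed from a two-by-two table of reports; the example counts are toy numbers, not FAERS values.

        import math

        def reporting_odds_ratio(a: int, b: int, c: int, d: int):
            """a = drug & AKI reports, b = drug & other events,
            c = other drugs & AKI, d = other drugs & other events.
            Returns (ROR, 95% CI lower, 95% CI upper), log-normal approximation."""
            ror = (a * d) / (b * c)
            se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            return (ror,
                    math.exp(math.log(ror) - 1.96 * se),
                    math.exp(math.log(ror) + 1.96 * se))

        print(reporting_odds_ratio(200, 9800, 1000, 989000))  # toy counts, ROR ~ 20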

  6. Geometric methods for estimating representative sidewalk widths applied to Vienna's streetscape surfaces database

    NASA Astrophysics Data System (ADS)

    Brezina, Tadej; Graser, Anita; Leth, Ulrich

    2017-04-01

    Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
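
    A hedged sketch of the first two per-polygon measures, assuming Shapely >= 2.0: the pole of inaccessibility (shapely.ops.polylabel) approximates the centre of the maximum inscribing circle, and shapely.minimum_bounding_radius gives the minimum circumscribing radius. This approximates the paper's idea; it is not the authors' implementation.

        import shapely
        from shapely.geometry import Polygon
        from shapely.ops import polylabel

        def representative_widths(poly: Polygon, tol: float = 0.05):
            """(max inscribing diameter, min circumscribing diameter) in map units."""
            centre = polylabel(poly, tolerance=tol)        # pole of inaccessibility
            r_in = poly.exterior.distance(centre)          # clearance to boundary
            r_out = shapely.minimum_bounding_radius(poly)  # Shapely >= 2.0
            return 2.0 * r_in, 2.0 * r_out

        sidewalk = Polygon([(0, 0), (10, 0), (10, 2), (0, 2)])
        print(representative_widths(sidewalk))  # roughly (2.0, 10.2)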

  7. Attitudes toward Public Administration Education, Professional Role Perceptions and Political Values among the Public Administrators in an American State--Kentucky. Research Report.

    ERIC Educational Resources Information Center

    Mohapatra, Manindra K.; Rose, Bruce; Woods, Don A.; Lake, Gashaw

    The analyses reported are based on a computerized set of survey research data from an archived database containing responses of 1,456 state public administrators in Kentucky to a mail survey conducted in 1987-1989. Using this data, researchers analyzed attitudes toward public administration among these public administrators, the professional role…

  8. 47 CFR 52.15 - Central office code administration.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... assignment databases; (3) Conducting the Numbering Resource Utilization and Forecast (NRUF) data collection... telecommunications carrier that receives numbering resources from the NANPA, a Pooling Administrator or another... Administrator. (2) State commissions may investigate and determine whether service providers have activated...

  9. 47 CFR 52.15 - Central office code administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... assignment databases; (3) Conducting the Numbering Resource Utilization and Forecast (NRUF) data collection... telecommunications carrier that receives numbering resources from the NANPA, a Pooling Administrator or another... Administrator. (2) State commissions may investigate and determine whether service providers have activated...

  10. 47 CFR 52.15 - Central office code administration.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... assignment databases; (3) Conducting the Numbering Resource Utilization and Forecast (NRUF) data collection... telecommunications carrier that receives numbering resources from the NANPA, a Pooling Administrator or another... Administrator. (2) State commissions may investigate and determine whether service providers have activated...

  11. A Study of the Efficiency of Spatial Indexing Methods Applied to Large Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Donaldson, Tom; Berriman, G. Bruce; Good, John; Shiao, Bernie

    2018-01-01

    Spatial indexing of astronomical databases generally uses quadrature methods, which partition the sky into cells used to create an index (usually a B-tree) written as a database column. We report the results of a study to compare the performance of two common indexing methods, HTM and HEALPix, on Solaris and Windows database servers installed with a PostgreSQL database, and a Windows Server installed with MS SQL Server. The indexing was applied to the 2MASS All-Sky Catalog and to the Hubble Source catalog. On each server, the study compared indexing performance by submitting 1 million queries at each index level with random sky positions and random cone search radius, which was computed on a logarithmic scale between 1 arcsec and 1 degree, and measuring the time to complete the query and write the output. These simulated queries, intended to model realistic use patterns, were run in a uniform way on many combinations of indexing method and indexing level. The query times in all simulations are strongly I/O-bound and are linear with number of records returned for large numbers of sources. There are, however, considerable differences between simulations, which reveal that hardware I/O throughput is a more important factor in managing the performance of a DBMS than the choice of indexing scheme. The choice of index itself is relatively unimportant: for comparable index levels, the performance is consistent within the scatter of the timings. At small index levels (large cells; e.g. level 4; cell size 3.7 deg), there is large scatter in the timings because of wide variations in the number of sources found in the cells. At larger index levels, performance improves and scatter decreases, but the improvement at level 8 (cell size 14 arcmin) and higher is masked to some extent in the timing scatter caused by the range of query sizes. At very high levels (level 20; cell size 0.0004 arcsec), the granularity of the cells becomes so high that a large number of extraneous empty cells begin to degrade performance.
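
    For example, a nested HEALPix cell number can be computed per catalogue row with the healpy package and stored as an integer column for the DBMS to B-tree index; a minimal sketch, assuming the study's "level" maps to nside = 2**level as in standard HEALPix usage.

        import numpy as np
        import healpy as hp

        def healpix_index(ra_deg, dec_deg, level=8):
            """Nested HEALPix cell number for sky coordinates; level 8 gives
            nside 256, i.e. cells of roughly 14 arcmin on a side."""
            nside = 2 ** level
            theta = np.radians(90.0 - np.asarray(dec_deg))  # colatitude
            phi = np.radians(np.asarray(ra_deg))
            return hp.ang2pix(nside, theta, phi, nest=True)

        print(healpix_index(10.684, 41.269))  # single position (M31)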

  12. Research on keyword retrieval method of HBase database based on index structure

    NASA Astrophysics Data System (ADS)

    Gong, Pijin; Lv, Congmin; Gong, Yongsheng; Ma, Haozhi; Sun, Yang; Wang, Lu

    2017-10-01

    With the rapid development of manned spaceflight engineering, the volume of scientific experimental data in the space application system is increasing rapidly. Efficiently querying specific data within this mass of data has become a problem. In this paper, a method is proposed for retrieving object data using object attributes as keywords. The HBase database is used to store the object data and object attributes, and a secondary index is constructed. The research shows that this method is an effective way to retrieve specified data based on object attributes.
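
    A hedged sketch of one common way to maintain such a secondary index by hand, using the happybase Thrift client; the table and column names are hypothetical, and the paper's exact index layout may differ.

        import happybase  # assumes an HBase Thrift server is reachable

        def index_object(conn, object_id: str, attr: str, value: str) -> None:
            data = conn.table('objects')       # hypothetical data table
            idx = conn.table('objects_idx')    # hypothetical index table
            data.put(object_id.encode(), {f'd:{attr}'.encode(): value.encode()})
            # Index row key = value|object_id, so a prefix scan on b"value|"
            # returns every object sharing that attribute value.
            idx.put(f'{value}|{object_id}'.encode(), {b'i:ref': object_id.encode()})

        def lookup(conn, value: str) -> list:
            idx = conn.table('objects_idx')
            return [d[b'i:ref'] for _, d in idx.scan(row_prefix=f'{value}|'.encode())]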

  13. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    PubMed

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Database characteristics, e.g., name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases could potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings show that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA across the Asia-Pacific region is needed.

  14. Application GIS on university planning: building a spatial database aided spatial decision

    NASA Astrophysics Data System (ADS)

    Miao, Lei; Wu, Xiaofang; Wang, Kun; Nong, Yu

    2007-06-01

    As universities develop and grow in size, their many kinds of resources urgently need effective management. A spatial database is the right tool to assist administrators' spatial decision making, and it paves the way for a digital campus by integrating with existing office management systems (OMS). The paper first examines campus planning in detail. Then, using South China Agricultural University as a case study, it demonstrates how to build a geographic database of campus buildings and housing to support university administrators' spatial decisions.

  15. Big Data and Total Hip Arthroplasty: How Do Large Databases Compare?

    PubMed

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael A; Lux, Nathan R; Bozic, Kevin J; Callaghan, John J

    2018-01-01

    Use of large databases for orthopedic research has become extremely popular in recent years. Each database varies in the methods used to capture data and the population it represents. The purpose of this study was to evaluate how these databases differed in reported demographics, comorbidities, and postoperative complications for primary total hip arthroplasty (THA) patients. Primary THA patients were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), Medicare Standard Analytic Files (MED), and the Humana administrative claims database (HAC). NSQIP definitions for comorbidities and complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes to query the other databases. Demographics, comorbidities, and postoperative complications were compared. The number of patients from each database was 22,644 in HAC, 371,715 in MED, 188,779 in NIS, and 27,818 in NSQIP. Age and gender distribution were clinically similar. Overall, there was variation in prevalence of comorbidities and rates of postoperative complications between databases. As an example, NSQIP recorded more than twice as much obesity as NIS, and HAC and MED recorded more than twice as many patients with diabetes as NSQIP. Rates of deep infection and stroke within 30 days of THA differed by more than 2-fold across the databases. Among databases commonly used in orthopedic research, there is considerable variation in complication rates following THA depending upon the database used for analysis. It is important to consider these differences when critically evaluating database research. Additionally, with the advent of bundled payments, these differences must be considered in risk adjustment models. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Implementing a Microcomputer Database Management System.

    ERIC Educational Resources Information Center

    Manock, John J.; Crater, K. Lynne

    1985-01-01

    Current issues in selecting, structuring, and implementing microcomputer database management systems in research administration offices are discussed, and their capabilities are illustrated with the system used by the University of North Carolina at Wilmington. Trends in microcomputer technology and their likely impact on research administration…

  17. NREL: U.S. Life Cycle Inventory Database - About the LCI Database Project

    Science.gov Websites

    About the LCI Database Project: The U.S. Life Cycle Inventory (LCI) Database is a publicly available source of consistent and transparent LCI data, supporting common data collection and analysis methods for life cycle assessment. NREL and its partners develop and maintain the database; the 2009 U.S. LCI Data Stakeholder meeting addressed its continued development.

  18. Report: EPA Needs to Strengthen Financial Database Security Oversight and Monitor Compliance

    EPA Pesticide Factsheets

    Report #2007-P-00017, March 29, 2007. The report identifies weaknesses in how EPA offices monitor databases for known security vulnerabilities, communicate the status of critical system patches, and monitor access to database administrator accounts and privileges.

  19. Impact of an electronic medication administration record on medication administration efficiency and errors.

    PubMed

    McComas, Jeffery; Riingen, Michelle; Chae Kim, Son

    2014-12-01

    The study aims were to evaluate the impact of electronic medication administration record implementation on medication administration efficiency and the occurrence of medication errors, as well as to identify the predictors of medication administration efficiency in an acute care setting. A prospective, observational study utilizing the time-and-motion technique was conducted before and after electronic medication administration record implementation in November 2011. A total of 156 cases of medication administration activities (78 pre- and 78 post-electronic medication administration record) involving 38 nurses were observed at the point of care. A separate retrospective review of the hospital Midas+ medication error database was also performed to collect the rates and origin of medication errors for 6 months before and after electronic medication administration record implementation. The mean medication administration time actually increased from 11.3 to 14.4 minutes post-electronic medication administration record (P = .039). In a multivariate analysis, electronic medication administration record was not a predictor of medication administration time, but distractions/interruptions during the medication administration process were significant predictors. The mean hospital-wide medication errors significantly decreased from 11.0 to 5.3 events per month post-electronic medication administration record (P = .034). Although no improvement in medication administration efficiency was observed, electronic medication administration record improved the quality of care with a significant decrease in medication errors.

  20. Creative Classroom Assignment Through Database Management.

    ERIC Educational Resources Information Center

    Shah, Vivek; Bryant, Milton

    1987-01-01

    The Faculty Scheduling System (FSS), a database management system designed to give administrators the ability to schedule faculty in a fast and efficient manner is described. The FSS, developed using dBASE III, requires an IBM compatible microcomputer with a minimum of 256K memory. (MLW)

  1. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG, and we provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation, and later upgrades, for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  2. A curated gluten protein sequence database to support development of proteomics methods for determination of gluten in gluten-free foods.

    PubMed

    Bromilow, Sophie; Gethings, Lee A; Buckley, Mike; Bromley, Mike; Shewry, Peter R; Langridge, James I; Clare Mills, E N

    2017-06-23

    The unique physiochemical properties of wheat gluten enable a diverse range of food products to be manufactured. However, gluten triggers coeliac disease, a condition which is treated using a gluten-free diet. Analytical methods are required to confirm if foods are gluten-free, but current immunoassay-based methods can be unreliable; proteomic methods offer an alternative but require comprehensive and well-annotated sequence databases, which are lacking for gluten. A manually curated database (GluPro V1.0) of gluten proteins, comprising 630 discrete unique full-length protein sequences, has been compiled. It is representative of the different types of gliadin and glutenin components found in gluten. An in silico comparison of their coeliac toxicity was undertaken by analysing the distribution of coeliac toxic motifs. This demonstrated that whilst the α-gliadin proteins contained more toxic motifs, these were distributed across all gluten protein sub-types. Comparison of annotations observed using a discovery proteomics dataset acquired using ion mobility MS/MS showed that more reliable identifications were obtained using the GluPro V1.0 database compared to the complete reviewed Viridiplantae database. This highlights the value of a curated sequence database specifically designed to support proteomic workflows and the development of methods to detect and quantify gluten. We have constructed the first manually curated open-source wheat gluten protein sequence database (GluPro V1.0) in a FASTA format to support the application of proteomic methods for gluten protein detection and quantification. We have also analysed the manually verified sequences to give the first comprehensive overview of the distribution of sequences able to elicit a reaction in coeliac disease, the prevalent form of gluten intolerance. Provision of this database will improve the reliability of gluten protein identification by proteomic analysis, and aid the development of targeted mass spectrometry methods.
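
    An in silico motif scan of the kind described can be sketched with Biopython; the motif list below is a small illustrative subset of known coeliac-active sequences, not the authors' full set, and nested motifs are counted once per motif string.

        from Bio import SeqIO  # Biopython

        # Illustrative subset only, not the authors' motif list.
        TOXIC_MOTIFS = ["PQPQLPY", "PQQPFP", "QQPFP"]

        def motif_counts(fasta_path: str) -> dict:
            """Non-overlapping motif occurrences per sequence in a FASTA file."""
            counts = {}
            for record in SeqIO.parse(fasta_path, "fasta"):
                seq = str(record.seq)
                counts[record.id] = sum(seq.count(m) for m in TOXIC_MOTIFS)
            return counts

        # e.g. motif_counts("glupro_v1.fasta") over a local copy of the database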

  3. Information Management Tools for Classrooms: Exploring Database Management Systems. Technical Report No. 28.

    ERIC Educational Resources Information Center

    Freeman, Carla; And Others

    In order to understand how the database software or online database functioned in the overall curricula, the use of database management (DBMs) systems was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…

  4. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

    Several geochemical databases are now available on the Internet. One of the main peculiarities of the geochemical information they store is that each sample carries geographical coordinates, yet as a rule the database software uses this spatial information only in user-interface search procedures. GIS software (Geographical Information System software, for example ARC/INFO), which is used for creating and analysing specialised geological, geochemical and geophysical e-maps, is by contrast deeply bound to the geographical coordinates of samples. We have joined the strengths of GIS systems and a relational geochemical database through special software. Our geochemical information system was created at the Vernadsky State Geological Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have now tested the system with geochemical data for oceanic rocks from the Atlantic and Pacific oceans, about 10,000 chemical analyses. The GIS content consists of e-map covers of the globe; the Atlantic Ocean covers include a gravity map (with a 2'' grid), ocean-bottom heat flow, altimetric maps, seismic activity, a tectonic map and a geological map. Combining these layers makes it possible to create new geochemical maps and to couple spatial analysis with numerical geochemical modeling of volcanic processes in an ocean segment. We tested the information system on thick-client technology: the interface between the GIS (ArcView) and the database resides in a special sequence of multiple SQL queries, whose result is a simple DBF file with geographical coordinates. This file serves as the basis for creating geochemical and other special e-maps of the oceanic region. A more complex method was used for geophysical data: from ArcView we created a grid cover for the polygonal spatial geophysical information.

  5. Naval Ship Database: Database Design, Implementation, and Schema

    DTIC Science & Technology

    2013-09-01

    incoming data. The solution allows database users to store and analyze data collected by navy ships in the Royal Canadian Navy (RCN). The data...understanding RCN jargon and common practices on a typical RCN vessel. This experience led to the development of several error detection methods to...data to be stored in the database. Mr. Massel has also collected data pertaining to day-to-day activities on RCN vessels that has been imported into

  6. Administrative decision making: a stepwise method.

    PubMed

    Oetjen, Reid M; Oetjen, Dawn M; Rotarius, Timothy

    2008-01-01

    Today's health care organizations face tremendous challenges and fierce competition. These pressures impact the decisions that managers must execute on any given day, not to mention the ever-present constraints of time, personnel, competencies, and finances. The importance of making quality and informed decisions cannot be overstated. Traditional decision making methods are inadequate for today's larger, more complex health care organizations and the rapidly changing health care environment. As a result, today's health care managers and their teams need new approaches to making decisions for their organizations. This article examines the managerial decision making process and offers a model that can be used as a decision making template to help managers successfully navigate the choppy health care seas. The administrative decision making model will enable health care managers and other key decision makers to avoid the common pitfalls of poor decision making and guide their organizations to success.

  7. Creating a sampling frame for population-based veteran research: representativeness and overlap of VA and Department of Defense databases.

    PubMed

    Washington, Donna L; Sun, Su; Canning, Mark

    2010-01-01

    Most veteran research is conducted in Department of Veterans Affairs (VA) healthcare settings, although most veterans obtain healthcare outside the VA. Our objective was to determine the adequacy and relative contributions of Veterans Health Administration (VHA), Veterans Benefits Administration (VBA), and Department of Defense (DOD) administrative databases for representing the U.S. veteran population, using as an example the creation of a sampling frame for the National Survey of Women Veterans. In 2008, we merged the VHA, VBA, and DOD databases. We identified the number of unique records both overall and from each database. The combined databases yielded 925,946 unique records, representing 51% of the 1,802,000 U.S. women veteran population. The DOD database included 30% of the population (with 8% overlap with other databases). The VHA enrollment database contributed an additional 20% unique women veterans (with 6% overlap with VBA databases). VBA databases contributed an additional 2% unique women veterans (beyond 10% overlap with other databases). Use of VBA and DOD databases substantially expands access to the population of veterans beyond those in VHA databases, regardless of VA use. Adoption of these additional databases would enhance the value and generalizability of a wide range of studies of both male and female veterans.
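
    A minimal sketch of the record-merging arithmetic: concatenate the source databases, deduplicate on a person identifier, and count each source's non-overlapping contribution. Identifiers and values are toy data, not the study's.

        import pandas as pd

        vha = pd.DataFrame({"person_id": [1, 2, 3, 4]})
        vba = pd.DataFrame({"person_id": [3, 4, 5]})
        dod = pd.DataFrame({"person_id": [4, 5, 6, 7]})

        frames = {"VHA": vha, "VBA": vba, "DOD": dod}
        all_ids = pd.concat(frames.values())["person_id"].drop_duplicates()
        print("unique records:", len(all_ids))
        for name, df in frames.items():
            others = pd.concat(f["person_id"] for n, f in frames.items() if n != name)
            unique_to = set(df["person_id"]) - set(others)
            print(name, "contributes uniquely:", len(unique_to))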

  8. Challenges in converting an interviewer-administered food probe database to self-administration in the National Cancer Institute Automated Self-administered 24-Hour Recall (ASA24).

    PubMed

    Zimmerman, Thea Palmer; Hull, Stephen G; McNutt, Suzanne; Mittl, Beth; Islam, Noemi; Guenther, Patricia M; Thompson, Frances E; Potischman, Nancy A; Subar, Amy F

    2009-12-01

    The National Cancer Institute (NCI) is developing an automated, self-administered 24-hour dietary recall (ASA24) application to collect and code dietary intake data. The goal of the ASA24 development is to create a web-based dietary interview based on the US Department of Agriculture (USDA) Automated Multiple Pass Method (AMPM) instrument currently used in the National Health and Nutrition Examination Survey (NHANES). The ASA24 food list, detail probes, and portion probes were drawn from the AMPM instrument; portion-size pictures from Baylor College of Medicine's Food Intake Recording Software System (FIRSSt) were added; and the food code/portion code assignments were linked to the USDA Food and Nutrient Database for Dietary Studies (FNDDS). The requirements that the interview be self-administered and fully auto-coded presented several challenges as the AMPM probes and responses were linked with the FNDDS food codes and portion pictures. This linking was accomplished through a "food pathway," or the sequence of steps that leads from a respondent's initial food selection, through the AMPM probes and portion pictures, to the point at which a food code and gram weight portion size are assigned. The ASA24 interview database that accomplishes this contains more than 1,100 food probes and more than 2 million food pathways and will include about 10,000 pictures of individual foods depicting up to 8 portion sizes per food. The ASA24 will make the administration of multiple days of recalls in large-scale studies economical and feasible.

  9. Validity of juvenile idiopathic arthritis diagnoses using administrative health data.

    PubMed

    Stringer, Elizabeth; Bernatsky, Sasha

    2015-03-01

    Administrative health databases are valuable sources of data for conducting research including disease surveillance, outcomes research, and processes of health care at the population level. There has been limited use of administrative data to conduct studies of pediatric rheumatic conditions and no studies validating case definitions in Canada. We report a validation study of incident cases of juvenile idiopathic arthritis in the Canadian province of Nova Scotia. Cases identified through administrative data algorithms were compared to diagnoses in a clinical database. The sensitivity of algorithms that included pediatric rheumatology specialist claims was 81-86%. However, 35-48% of cases that were identified could not be verified in the clinical database depending on the algorithm used. Our case definitions would likely lead to overestimates of disease burden. Our findings may be related to issues pertaining to the non-fee-for-service remuneration model in Nova Scotia, in particular, systematic issues related to the process of submitting claims.

  10. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    PubMed

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical
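
    A hedged pandas sketch of the first two methods the review identifies: flag specified codes in the index admission only, or in any admission within a follow-up window. The column names (patient_id, admit_date, is_index, dx_code) and the code set are hypothetical.

        import pandas as pd

        COMPLICATION_CODES = {"T81.0", "T81.4"}  # illustrative codes only

        def flag_index_admission(adm: pd.DataFrame) -> pd.Series:
            # Method 1: look for the codes in the index admission alone.
            idx = adm[adm["is_index"]]
            return idx.groupby("patient_id")["dx_code"].agg(
                lambda codes: bool(set(codes) & COMPLICATION_CODES))

        def flag_followup(adm: pd.DataFrame, days: int = 30) -> pd.Series:
            # Method 2: look for the codes in any admission within `days` of the
            # index admission (patients with no admission in the window drop out).
            index_dates = (adm[adm["is_index"]]
                           .set_index("patient_id")["admit_date"]
                           .rename("index_date"))
            merged = adm.join(index_dates, on="patient_id")
            in_window = merged[(merged["admit_date"] >= merged["index_date"]) &
                               (merged["admit_date"] <= merged["index_date"]
                                + pd.Timedelta(days=days))]
            return in_window.groupby("patient_id")["dx_code"].agg(
                lambda codes: bool(set(codes) & COMPLICATION_CODES))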

  11. The thyrotropin receptor mutation database: update 2003.

    PubMed

    Führer, Dagmar; Lachmund, Peter; Nebel, Istvan-Tibor; Paschke, Ralf

    2003-12-01

    In 1999 we created a TSHR mutation database compiling TSHR mutations with their basic characteristics and associated clinical conditions (www.uni-leipzig.de/innere/tshr). Since then, more than 2887 users from 36 countries have logged into the TSHR mutation database and have contributed several valuable suggestions for further improvement of the database. We now present an updated and extended version of the TSHR database, to which several novel features have been introduced: 1. detailed functional characteristics for all 65 mutations (43 activating and 22 inactivating mutations) reported to date; 2. 40 pedigrees with detailed information on molecular aspects, clinical courses and treatment options in patients with gain-of-function and loss-of-function germline TSHR mutations; 3. a first compilation of site-directed mutagenesis studies; 4. references with Medline links; 5. a user-friendly search tool for specific database searches and user-specific database output; and 6. an administrator tool for the submission of novel TSHR mutations. The TSHR mutation database is installed as one of the locus-specific HUGO mutation databases. It is listed under index TSHR 603372 (http://ariel.ucs.unimelb.edu.au/~cotton/glsdbq.htm) and can be accessed via www.uni-leipzig.de/innere/tshr.

  12. Database on Demand: insight how to build your own DBaaS

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment in which to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently three major RDBMS (relational database management system) vendors are offered. In this article we show the actual status of the service after almost three years of operations, some insight into our software engineering redesign, and the near-future evolution of the service.

  13. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters are estimated using particle swarm optimization (PSO) to find the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images, establishment of a cost function based on the Kruppa equations, and PSO optimization using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix, so that the new cost function (deduced from the Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To prevent the optimization from being pushed to a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
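
    A minimal generic PSO loop of the kind described, sketched in NumPy; the cost function below is a stand-in quadratic over made-up intrinsic parameters (fx, fy, cx, cy), not the paper's Kruppa-equation cost.

        import numpy as np

        def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimise `cost` over a box given as (lower, upper) arrays."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, lo.size))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([cost(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, *x.shape))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)   # bounds set from LiDAR-style priors
                f = np.array([cost(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        target = np.array([800.0, 820.0, 320.0, 240.0])   # invented parameters
        cost = lambda p: float(np.sum((p - target) ** 2))
        lo = np.array([500.0, 500.0, 200.0, 150.0])
        hi = np.array([1200.0, 1200.0, 500.0, 400.0])
        print(pso(cost, (lo, hi))[0])  # converges near `target`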

  14. Adopting a corporate perspective on databases. Improving support for research and decision making.

    PubMed

    Meistrell, M; Schlehuber, C

    1996-03-01

    The Veterans Health Administration (VHA) is at the forefront of designing and managing health care information systems that accommodate the needs of clinicians, researchers, and administrators at all levels. Rather than using one single-site, centralized corporate database, VHA has constructed several large databases with different configurations to meet the needs of users with different perspectives. The largest VHA database is the Decentralized Hospital Computer Program (DHCP), a multisite, distributed data system that uses decoupled hospital databases. The centralization of DHCP policy has promoted data coherence, whereas the decentralization of DHCP management has permitted system development to be done with maximum relevance to the users' local practices. A more recently developed VHA data system, the Event Driven Reporting system (EDR), uses multiple, highly coupled databases to provide workload data at facility, regional, and national levels. The EDR automatically posts a subset of DHCP data to local and national VHA management. The development of the EDR illustrates how adoption of a corporate perspective can offer significant database improvements at reasonable cost and with modest impact on the legacy system.

  15. Validity of Diagnostic Codes for Acute Stroke in Administrative Databases: A Systematic Review

    PubMed Central

    McCormick, Natalie; Bhole, Vidula; Lacaille, Diane; Avina-Zubieta, J. Antonio

    2015-01-01

    Objective To conduct a systematic review of studies reporting on the validity of International Classification of Diseases (ICD) codes for identifying stroke in administrative data. Methods MEDLINE and EMBASE were searched (inception to February 2015) for studies: (a) Using administrative data to identify stroke; or (b) Evaluating the validity of stroke codes in administrative data; and (c) Reporting validation statistics (sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), or Kappa scores) for stroke, or data sufficient for their calculation. Additional articles were located by hand search (up to February 2015) of original papers. Studies solely evaluating codes for transient ischaemic attack were excluded. Data were extracted by two independent reviewers; article quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. Results Seventy-seven studies published from 1976–2015 were included. The sensitivity of ICD-9 430-438/ICD-10 I60-I69 for any cerebrovascular disease was ≥ 82% in most [≥ 50%] studies, and specificity and NPV were both ≥ 95%. The PPV of these codes for any cerebrovascular disease was ≥ 81% in most studies, while the PPV specifically for acute stroke was ≤ 68%. In at least 50% of studies, PPVs were ≥ 93% for subarachnoid haemorrhage (ICD-9 430/ICD-10 I60), 89% for intracerebral haemorrhage (ICD-9 431/ICD-10 I61), and 82% for ischaemic stroke (ICD-9 434/ICD-10 I63 or ICD-9 434&436). For in-hospital deaths, sensitivity was 55%. For cerebrovascular disease or acute stroke as a cause-of-death on death certificates, sensitivity was ≤ 71% in most studies while PPV was ≥ 87%. Conclusions While most cases of prevalent cerebrovascular disease can be detected using 430-438/I60-I69 collectively, acute stroke must be defined using more specific codes. Most in-hospital deaths and death certificates with stroke as a cause-of-death correspond to true stroke deaths. Linking vital

  16. A novel method to handle the effect of uneven sampling effort in biodiversity databases.

    PubMed

    Pardo, Iker; Pata, María P; Gómez, Daniel; García, María B

    2013-01-01

    How reliable are results on spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses.

  17. A Novel Method to Handle the Effect of Uneven Sampling Effort in Biodiversity Databases

    PubMed Central

    Pardo, Iker; Pata, María P.; Gómez, Daniel; García, María B.

    2013-01-01

    How reliable are results on spatial distribution of biodiversity based on databases? Many studies have evidenced the uncertainty related to this kind of analysis due to sampling effort bias and the need for its quantification. Although a number of methods are available for this purpose, little is known about their statistical limitations and discrimination capability, which could seriously constrain their use. We assess for the first time the discrimination capacity of two widely used methods and a proposed new one (FIDEGAM), all based on species accumulation curves, under different scenarios of sampling exhaustiveness using Receiver Operating Characteristic (ROC) analyses. Additionally, we examine to what extent the output of each method represents the sampling completeness in a simulated scenario where the true species richness is known. Finally, we apply FIDEGAM to a real situation and explore the spatial patterns of plant diversity in a National Park. FIDEGAM showed an excellent discrimination capability to distinguish between well and poorly sampled areas regardless of sampling exhaustiveness, whereas the other methods failed. Accordingly, FIDEGAM values were strongly correlated with the true percentage of species detected in a simulated scenario, whereas sampling completeness estimated with other methods showed no relationship due to null discrimination capability. Quantifying sampling effort is necessary to account for the uncertainty in biodiversity analyses; however, not all proposed methods are equally reliable. Our comparative analysis demonstrated that FIDEGAM was the most accurate discriminator method in all scenarios of sampling exhaustiveness, and therefore, it can be efficiently applied to most databases in order to enhance the reliability of biodiversity analyses. PMID:23326357
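
    A minimal sketch of the species accumulation curve on which all three methods are based: average the cumulative species count over random orderings of the sampling events. This illustrates the input to methods like FIDEGAM, not FIDEGAM itself.

        import numpy as np

        def accumulation_curve(visits, n_perm=100, seed=0):
            """Mean species accumulation curve over random orderings of visits,
            where `visits` is a list of sets of species names per sampling event."""
            rng = np.random.default_rng(seed)
            curves = np.zeros((n_perm, len(visits)))
            for p in range(n_perm):
                seen = set()
                for i, idx in enumerate(rng.permutation(len(visits))):
                    seen |= visits[idx]
                    curves[p, i] = len(seen)
            return curves.mean(axis=0)

        print(accumulation_curve([{"a", "b"}, {"b", "c"}, {"c"}, {"a", "d"}]))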

  18. Association of Palliative Care Consultation With Reducing Inpatient Chemotherapy Use in Elderly Patients With Cancer in Japan: Analysis Using a Nationwide Administrative Database.

    PubMed

    Sano, Motoko; Fushimi, Kiyohide

    2017-08-01

    The administration of chemotherapy at the end of life is considered an aggressive life-prolonging treatment. The use of unnecessarily aggressive therapy in elderly patients at the end of life is an important health-care concern. To explore the impact of palliative care consultation (PCC) on chemotherapy use in geriatric oncology inpatients in Japan by analyzing data from a national database. We conducted a multicenter cohort study of patients aged ≥65 years, registered in the Japan National Administrative Healthcare Database, who died with advanced (stage ≥3) lung, stomach, colorectal, liver, or breast cancer while hospitalized between April 2010 and March 2013. The relationship between PCC and chemotherapy use in the last 2 weeks of life was analyzed using χ² and logistic regression analyses. We included 26 012 patients in this analysis. The mean age was 75.74 ± 6.40 years, 68.1% were men, 81.8% had recurrent cancer, 29.5% had lung cancer, and 29.5% had stomach cancer. Of these, 3134 (12%) received PCC. Among individuals who received PCC, chemotherapy was administered to 46 patients (1.5%) and was not administered to 3088 patients (98.5%). Among those not receiving PCC, chemotherapy was administered to 909 patients (4%) and was not administered to the remaining 21 978 patients (96%; odds ratio [OR], 0.35; 95% confidence interval, 0.26-0.48). The OR of chemotherapy use was higher in men, in the young-old, and in patients with primary cancer. Palliative care consultation was associated with less chemotherapy use in elderly Japanese patients with cancer who died in the hospital setting.

  19. Administrative Information Systems: The 1980 Profile. CAUSE Monograph Series.

    ERIC Educational Resources Information Center

    Thomas, Charles R.

    The first summaries of the CAUSE National Database, which was established in 1980, are presented. The database is updated annually to provide members with baseline reference information on the status of administrative information systems in colleges and universities. Information is based on responses from 350 CAUSE member campuses, which are…

  20. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population limiting comparisons with the population of people with diabetes. However comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.

  1. Methods and Management: NIH Administrators, Federal Oversight, and the Framingham Heart Study

    PubMed Central

    Patel, Sejal S.

    2012-01-01

    Summary This article explores the 1965 controversy over the Framingham Heart Study in the midst of growing oversight into the management of science at the National Institutes of Health (NIH). It describes how, beginning in the early 1960s, federal overseers demanded that NIH administrators adopt particular management styles in administering programs and how these growing pressures led administrators to favor investigative pursuits that allowed for easy prospective accounting of program payoffs, especially those based on experimental methods designed to examine discrete interventions or outcomes of interest. In light of this changing managerial culture within the NIH, the Framingham study and other population laboratories—with their bases in observation and in open-ended study designs—became harder for NIH administrators to justify and defend. PMID:22643985

  2. Methods and management: NIH administrators, federal oversight, and the Framingham Heart Study.

    PubMed

    Patel, Sejal S

    2012-01-01

    This article explores the 1965 controversy over the Framingham Heart Study in the midst of growing oversight into the management of science at the National Institutes of Health (NIH). It describes how, beginning in the early 1960s, federal overseers demanded that NIH administrators adopt particular management styles in administering programs and how these growing pressures led administrators to favor investigative pursuits that allowed for easy prospective accounting of program payoffs, especially those based on experimental methods designed to examine discrete interventions or outcomes of interest. In light of this changing managerial culture within the NIH, the Framingham study and other population laboratories-with their bases in observation and in open-ended study designs-became harder for NIH administrators to justify and defend.

  3. Methods and apparatus for constructing and implementing a universal extension module for processing objects in a database

    NASA Technical Reports Server (NTRS)

    Li, Chung-Sheng (Inventor); Smith, John R. (Inventor); Chang, Yuan-Chi (Inventor); Jhingran, Anant D. (Inventor); Padmanabhan, Sriram K. (Inventor); Hsiao, Hui-I (Inventor); Choy, David Mun-Hien (Inventor); Lin, Jy-Jine James (Inventor); Fuh, Gene Y. C. (Inventor); Williams, Robin (Inventor)

    2004-01-01

    Methods and apparatus for providing a multi-tier object-relational database architecture are disclosed. In one illustrative embodiment of the present invention, a multi-tier database architecture comprises an object-relational database engine as a top tier, one or more domain-specific extension modules as a bottom tier, and one or more universal extension modules as a middle tier. The individual extension modules of the bottom tier operationally connect with the one or more universal extension modules which, themselves, operationally connect with the database engine. The domain-specific extension modules preferably provide such functions as search, index, and retrieval services of images, video, audio, time series, web pages, text, XML, spatial data, etc. The domain-specific extension modules may include one or more IBM DB2 extenders, Oracle data cartridges and/or Informix datablades, although other domain-specific extension modules may be used.

  4. Development of a Publicly Available, Comprehensive Database of Fiber and Health Outcomes: Rationale and Methods

    PubMed Central

    Livingston, Kara A.; Chung, Mei; Sawicki, Caleigh M.; Lyle, Barbara J.; Wang, Ding Ding; Roberts, Susan B.; McKeown, Nicola M.

    2016-01-01

    Background Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criteria that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Objectives (1) Develop a database listing studies testing fiber and physiological health outcomes identified by experts at the Ninth Vahouny Conference; (2) Use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers to evaluate evidence linking specific fibers with physiological health outcomes, and identify missing information. Methods To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications then were reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. Conclusions A fiber database was created. This resource will reduce the unnecessary replication of effort in conducting systematic reviews by serving as both a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which this data can be extracted and updated. PMID:27348733

  5. Organizing a breast cancer database: data management.

    PubMed

    Yi, Min; Hunt, Kelly K

    2016-06-01

    Developing and organizing a breast cancer database can provide data and serve as a valuable research tool for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Ensuring that the data collection process does not introduce inaccuracies helps to assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting it by implementing high security levels. A properly designed database provides you with access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application.

  6. Schools Inc.: An Administrator's Guide to the Business of Education.

    ERIC Educational Resources Information Center

    McCarthy, Bob; And Others

    1989-01-01

    This theme issue describes ways in which educational administrators are successfully automating many of their administrative tasks. Articles focus on student management; office automation, including word processing, databases, and spreadsheets; human resources; support services, including supplies, textbooks, and learning resources; financial…

  7. WLN's Database: New Directions.

    ERIC Educational Resources Information Center

    Ziegman, Bruce N.

    1988-01-01

    Describes features of the Western Library Network's database, including the database structure, authority control, contents, quality control, and distribution methods. The discussion covers changes in distribution necessitated by increasing telecommunications costs and the development of optical data disk products. (CLB)

  8. Toxicity of ionic liquids: database and prediction via quantitative structure-activity relationship method.

    PubMed

    Zhao, Yongsheng; Zhao, Jihong; Huang, Ying; Zhou, Qing; Zhang, Xiangping; Zhang, Suojiang

    2014-08-15

    A comprehensive database on the toxicity of ionic liquids (ILs) is established. The database includes over 4000 pieces of data. Based on the database, the relationship between an IL's structure and its toxicity has been analyzed qualitatively. Furthermore, quantitative structure-activity relationship (QSAR) models are constructed to predict the toxicities (EC50 values) of various ILs toward the rat leukemia cell line IPC-81. Four parameters selected by the heuristic method (HM) are used to perform multiple linear regression (MLR) and support vector machine (SVM) studies. For the training sets, the squared correlation coefficients (R²) of the two QSAR models are 0.918 and 0.959, and the root mean square errors (RMSE) are 0.258 and 0.179, respectively. For the test sets, the MLR model gives R² = 0.892 and RMSE = 0.329, while the SVM model gives R² = 0.958 and RMSE = 0.234. The nonlinear model developed by the SVM algorithm clearly outperforms MLR, which indicates that the SVM model is more reliable for predicting the toxicity of ILs. This study shows that increasing the relative number of O atoms in a molecule leads to a decrease in the toxicity of ILs. Copyright © 2014 Elsevier B.V. All rights reserved.
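
    The MLR-versus-SVM comparison described above follows a standard pattern: fit both models on the same descriptors and compare test-set R² and RMSE. A minimal sketch with synthetic, hypothetical descriptor data (the paper's heuristic-method descriptor selection and actual EC50 data are not reproduced here), using scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # 4 structural descriptors (hypothetical stand-ins)
# toy log(EC50)-like response with a mild nonlinearity SVM can exploit
y = X @ [0.8, -0.5, 0.3, 0.1] + 0.2 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, model in [("MLR", LinearRegression()), ("SVM", SVR(C=10.0, epsilon=0.05))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2 = {r2_score(y_te, pred):.3f}, RMSE = {rmse:.3f}")
```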

  9. A simple method for serving Web hypermaps with dynamic database drill-down

    PubMed Central

    Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R

    2002-01-01

    Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap's database-driven interactive Web maps, but it is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map-serving approach as adopted in HealthCyberMap has been very successful, especially in cases where only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real-world health problems. PMID:12437788

  10. Building MapObjects attribute field in cadastral database based on the method of Jackson system development

    NASA Astrophysics Data System (ADS)

    Chen, Zhu-an; Zhang, Li-ting; Liu, Lu

    2009-10-01

    ESRI's GIS components, MapObjects, are applied in many cadastral information systems because of their small footprint and flexibility. In such systems, some cadastral information is saved directly in the cadastral database in MapObjects' shapefile format. However, MapObjects does not provide a function for building attribute fields in a map layer's attribute data file in the cadastral database, so users cannot save analysis results. This paper designs and implements the function of building attribute fields in MapObjects based on the Jackson system development method.

  11. Building a medical image processing algorithm verification database

    NASA Astrophysics Data System (ADS)

    Brown, C. Wayne

    2000-06-01

    The design of a database containing head Computed Tomography (CT) studies is presented, along with a justification for the database's composition. The database will be used to validate software algorithms that screen normal head CT studies from studies that contain pathology. The database is designed to have the following major properties: (1) a size sufficient for statistical viability, (2) inclusion of both normal (no pathology) and abnormal scans, (3) inclusion of scans due to equipment malfunction, technologist error, and uncooperative patients, (4) inclusion of data sets from multiple scanner manufacturers, (5) inclusion of data sets from different gender and age groups, and (6) three independent diagnoses of each data set. Designed correctly, the database will provide a partial basis for FDA (United States Food and Drug Administration) approval of image processing algorithms for clinical use. Our goal for the database is the proof of viability of screening head CTs for normal anatomy using computer algorithms. To put this work into context, a classification scheme for 'computer aided diagnosis' systems is proposed.

  12. HIV quality report cards: impact of case-mix adjustment and statistical methods.

    PubMed

    Ohl, Michael E; Richardson, Kelly K; Goto, Michihiko; Vaughan-Sarrazin, Mary; Schweizer, Marin L; Perencevich, Eli N

    2014-10-15

    There will be increasing pressure to publicly report and rank the performance of healthcare systems on human immunodeficiency virus (HIV) quality measures. To inform discussion of public reporting, we evaluated the influence of case-mix adjustment when ranking individual care systems on the viral control quality measure. We used data from the Veterans Health Administration (VHA) HIV Clinical Case Registry and administrative databases to estimate case-mix adjusted viral control for 91 local systems caring for 12 368 patients. We compared results using 2 adjustment methods, the observed-to-expected estimator and the risk-standardized ratio. Overall, 10 913 patients (88.2%) achieved viral control (viral load ≤400 copies/mL). Prior to case-mix adjustment, system-level viral control ranged from 51% to 100%. Seventeen (19%) systems were labeled as low outliers (performance significantly below the overall mean) and 11 (12%) as high outliers. Adjustment for case mix (patient demographics, comorbidity, CD4 nadir, time on therapy, and income from VHA administrative databases) reduced the number of low outliers by approximately one-third, but results differed by method. The adjustment model had moderate discrimination (c statistic = 0.66), suggesting potential for unadjusted risk when using administrative data to measure case mix. Case-mix adjustment affects rankings of care systems on the viral control quality measure. Given the sensitivity of rankings to selection of case-mix adjustment methods-and potential for unadjusted risk when using variables limited to current administrative databases-the HIV care community should explore optimal methods for case-mix adjustment before moving forward with public reporting. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
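
    The observed-to-expected (O/E) estimator named above divides each system's observed event count by the sum of its patients' case-mix-model predicted probabilities; the risk-standardized ratio instead compares model predictions computed with and without a system-level effect. A minimal sketch of the O/E calculation, with hypothetical patient-level inputs:

```python
import numpy as np

def observed_to_expected(outcomes, predicted, system_ids):
    """Per-system O/E ratio: observed events divided by the sum of
    case-mix-model predicted probabilities (values near 1 = average)."""
    ratios = {}
    for s in np.unique(system_ids):
        mask = system_ids == s
        ratios[s] = outcomes[mask].sum() / predicted[mask].sum()
    return ratios

# hypothetical example: 6 patients in 2 systems, outcome = viral control
outcomes   = np.array([1, 1, 0, 1, 1, 1])
predicted  = np.array([0.9, 0.8, 0.7, 0.9, 0.85, 0.9])  # from a logistic case-mix model
system_ids = np.array(["A", "A", "A", "B", "B", "B"])
print(observed_to_expected(outcomes, predicted, system_ids))
# {'A': 0.833..., 'B': 1.132...}
```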

  13. Development of case statements in academic administration: a proactive method for achieving outcomes.

    PubMed

    Mundt, Mary H

    2005-01-01

    The complex nature of higher education presents academic administrators with unique challenges to communicate vision and strategic direction to a variety of internal and external audiences. The administrator must be prepared to engage in persuasive communication to describe the needs and desired outcomes of the academic unit. This article focuses on the use of the case statement as a communication tool for the nursing academic administrator. The case statement is a form of persuasive communication in which a situation or need is presented in the context of the mission, vision, and strategic direction of a group or organization. The aim of the case statement is to enlist support in meeting the identified need. Fundamental assumptions about communicating case statements are described, as well as guidelines for how academic administrators can prepare themselves to use the case statement method.

  14. A privacy-preserved analytical method for ehealth database with minimized information loss.

    PubMed

    Chen, Ya-Ling; Cheng, Bo-Chao; Chen, Hsueh-Lin; Lin, Chia-I; Liao, Guo-Tan; Hou, Bo-Yu; Hsu, Shih-Chun

    2012-01-01

    Digitizing medical information is an emerging trend that employs information and communication technology (ICT) to manage health records, diagnostic reports, and other medical data more effectively, in order to improve the overall quality of medical services. However, medical information is highly confidential and involves private information; even legitimate access to data raises privacy concerns. Medical records provide health information on an as-needed basis for diagnosis and treatment, and the information is also important for medical research and other health management applications. Traditional privacy risk management systems have focused on reducing re-identification risk, and they do not consider information loss. In addition, such systems cannot identify and isolate data that carries a high risk of privacy violations. This paper proposes the Hiatus Tailor (HT) system, which ensures low re-identification risk for medical records, while providing more authenticated information to database users and identifying high-risk data in the database for better system management. The experimental results demonstrate that the HT system achieves much lower information loss than traditional risk management methods, with the same risk of re-identification.
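
    The abstract does not detail the Hiatus Tailor algorithm, but the trade-off it addresses can be illustrated generically: re-identification risk is often measured through equivalence-class sizes over quasi-identifier attributes (as in k-anonymity), and information loss through how much the data had to be generalized. A hedged, minimal sketch of the risk side only, with hypothetical records:

```python
from collections import Counter

def reidentification_risk(records, quasi_identifiers):
    """Worst-case re-identification risk = 1 / size of the smallest
    equivalence class over the quasi-identifier attributes."""
    classes = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return 1.0 / min(classes.values())

records = [
    {"age": "40-49", "zip": "100**", "dx": "diabetes"},
    {"age": "40-49", "zip": "100**", "dx": "asthma"},
    {"age": "50-59", "zip": "100**", "dx": "diabetes"},  # unique -> high risk
]
print(reidentification_risk(records, ["age", "zip"]))  # 1.0: one record is unique
```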

  15. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.

  16. [1012.5676] The Exoplanet Orbit Database

    Science.gov Websites

    Authors: Jason T. Wright, Onsi Fakhouri, Geoffrey W. Marcy, Eunkyu Han. The authors present a database of well-determined orbital parameters of exoplanets. The database comprises these parameters and the method used for each planet's discovery. The Exoplanet Orbit Database includes all planets

  17. Development of the Veterans Healthcare Administration (VHA) Ophthalmic Surgical Outcome Database (OSOD) project and the role of ophthalmic nurse reviewers.

    PubMed

    Lara-Smalling, Agueda; Cakiner-Egilmez, Tulay; Miller, Dawn; Redshirt, Ella; Williams, Dale

    2011-01-01

    Currently, ophthalmic surgical cases are not included in the Veterans Administration Surgical Quality Improvement Project data collection. Furthermore, there is no comprehensive protocol in the health system for prospectively measuring outcomes for eye surgery in terms of safety and quality. There are 400,000 operative cases in the system per year. Of those, 48,000 (12%) are ophthalmic surgical cases, with 85% (41,000) of those being cataract cases. The Ophthalmic Surgical Outcome Database Pilot Project was developed to incorporate ophthalmology into VASQIP, thus evaluating risk factors and improving cataract surgical outcomes. Nurse reviewers facilitate the monitoring and measuring of these outcomes. Since its inception in 1778, the Veterans Administration (VA) Health System has provided comprehensive healthcare to millions of deserving veterans throughout the U.S. and its territories. Historically, the quality of healthcare provided by the VA has been the main focus of discussion because it did not meet a standard of care comparable to that of the private sector. Information regarding quality of healthcare services and outcomes data had been unavailable until 1986, when Congress mandated the VA to compare its surgical outcomes to those of the private sector (PL-99-166).[1] Risk adjustment of VA surgical outcomes began in 1987 with the Continuous Improvement in Cardiac Surgery Program (CICSP), in which cardiac surgical outcomes were reported and evaluated.[2] Between 1991 and 1993, the National VA Surgical Risk Study (NVASRS) initiated a validated risk-adjustment model for predicting surgical outcomes and comparative assessment of the quality of surgical care in 44 VA medical centers.[3] The success of NVASRS encouraged the VA to establish an ongoing program for monitoring and improving the quality of surgical care, thus developing the National Surgical Quality Improvement Program (NSQIP) in 1994.[4] According to a prospective study conducted between 1991 and 1997 in 123

  18. Chemical databases evaluated by order theoretical tools.

    PubMed

    Voigt, Kristina; Brüggemann, Rainer; Pudenz, Stefan

    2004-10-01

    Data on environmental chemicals are urgently needed to comply with the future chemicals policy in the European Union. The availability of data on parameters and chemicals can be evaluated by chemometrical and environmetrical methods. Different mathematical and statistical methods are taken into account in this paper. The emphasis is set on a new, discrete mathematical method called METEOR (method of evaluation by order theory). Application of the Hasse diagram technique (HDT) to the complete data-matrix comprising 12 objects (databases) × 27 attributes (parameters + chemicals) reveals that ECOTOX (ECO), the environmental fate database (EFD) and extoxnet (EXT), also called multi-database databases, perform best. Most single, specialised databases are found in a minimal position in the Hasse diagram; these are the biocatalysis/biodegradation database (BID), the pesticide database (PES) and UmweltInfo (UMW). The aggregation of environmental parameters and chemicals (with equal weight) leads to a slimmer data-matrix on the attribute side. However, no significant differences are found in the "best" and "worst" objects. The whole approach indicates a rather bad situation in terms of the availability of data on existing chemicals and hence an alarming signal concerning the new and existing chemicals policies of the EEC.

  19. Constructing a Geology Ontology Using a Relational Database

    NASA Astrophysics Data System (ADS)

    Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.

    2013-12-01

    In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multiple-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to the human-computer interaction method, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritances and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database to an OWL-based geological ontology, which is based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In a geological ontology, inheritance and union operations between superclass and subclass were used to represent the nested relationships in a geochronology and the multiple inheritances

  20. Potential use of routine databases in health technology assessment.

    PubMed

    Raftery, J; Roderick, P; Stevens, A

    2005-05-01

    To develop criteria for classifying databases in relation to their potential use in health technology (HT) assessment and to apply them to a list of databases of relevance in the UK. To explore the extent to which prioritized databases could pick up those HTs being assessed by the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the extent to which these databases have been used in HT assessment. To explore the validation of the databases and their cost. Electronic databases. Key literature sources. Experienced users of routine databases. A 'first principles' examination of the data necessary for each type of HT assessment was carried out, supplemented by literature searches and a historical review. The principal investigators applied the criteria to the databases. Comments of the 'keepers' of the prioritized databases were incorporated. Details of 161 topics funded by the NHS R&D Health Technology Assessment (HTA) programme were reviewed iteratively by the principal investigators. Uses of databases in HTAs were identified by literature searches, which included the title of each prioritized database as a keyword. Annual reports of databases were examined and 'keepers' queried. The validity of each database was assessed using criteria based on a literature search and involvement by the authors in a national academic network. The costs of databases were established from annual reports, enquiries to 'keepers' of databases and 'guesstimates' based on cost per record. For assessing effectiveness, equity and diffusion, routine databases were classified into three broad groups: (1) group I databases, identifying both HTs and health states, (2) group II databases, identifying the HTs, but not a health state, and (3) group III databases, identifying health states, but not an HT. Group I datasets were disaggregated into clinical registries, clinical administrative databases and population-oriented databases. Group III were disaggregated into adverse

  1. Technical evaluation of methods for identifying chemotherapy-induced febrile neutropenia in healthcare claims databases

    PubMed Central

    2013-01-01

    Background Healthcare claims databases have been used in several studies to characterize the risk and burden of chemotherapy-induced febrile neutropenia (FN) and the effectiveness of colony-stimulating factors against FN. The accuracy of methods previously used to identify FN in such databases has not been formally evaluated. Methods Data comprised linked electronic medical records from Geisinger Health System and healthcare claims data from Geisinger Health Plan. Subjects were classified into subgroups based on whether or not they were hospitalized for FN per the presumptive "gold standard" (ANC <1.0×10⁹/L, and body temperature ≥38.3°C or receipt of antibiotics) and claims-based definition (diagnosis codes for neutropenia, fever, and/or infection). Accuracy was evaluated principally based on positive predictive value (PPV) and sensitivity. Results Among 357 study subjects, 82 (23%) met the gold standard for hospitalized FN. For the claims-based definition including diagnosis codes for neutropenia plus fever in any position (n=28), PPV was 100% and sensitivity was 34% (95% CI: 24-45). For the definition including neutropenia in the primary position (n=54), PPV was 87% (78-95) and sensitivity was 57% (46-68). For the definition including neutropenia in any position (n=71), PPV was 77% (68-87) and sensitivity was 67% (56-77). Conclusions Patients hospitalized for chemotherapy-induced FN can be identified in healthcare claims databases, with an acceptable level of misclassification, using diagnosis codes for neutropenia, or neutropenia plus fever. PMID:23406481
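
    The accuracy figures quoted above follow directly from the counts: PPV is true positives over claims-flagged cases, and sensitivity is true positives over the 82 gold-standard cases. A small check (the 47 true positives for the second definition are back-calculated from the reported PPV and are therefore approximate):

```python
def ppv_sensitivity(true_pos, flagged, gold_standard):
    """Positive predictive value and sensitivity from raw counts."""
    return true_pos / flagged, true_pos / gold_standard

# 'neutropenia + fever, any position': 28 flagged, all true positives
print(ppv_sensitivity(28, 28, 82))  # (1.0, 0.341...) -> PPV 100%, sensitivity 34%
# 'neutropenia, primary position': 54 flagged, ~47 true positives (back-calculated)
print(ppv_sensitivity(47, 54, 82))  # (0.870..., 0.573...) -> PPV 87%, sensitivity 57%
```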

  2. Methods for eliciting, annotating, and analyzing databases for child speech development.

    PubMed

    Beckman, Mary E; Plummer, Andrew R; Munson, Benjamin; Reidy, Patrick F

    2017-09-01

    Methods from automatic speech recognition (ASR), such as segmentation and forced alignment, have facilitated the rapid annotation and analysis of very large adult speech databases and databases of caregiver-infant interaction, enabling advances in speech science that were unimaginable just a few decades ago. This paper centers on two main problems that must be addressed in order to have analogous resources for developing and exploiting databases of young children's speech. The first problem is to understand and appreciate the differences between adult and child speech that cause ASR models developed for adult speech to fail when applied to child speech. These differences include the fact that children's vocal tracts are smaller than those of adult males and also changing rapidly in size and shape over the course of development, leading to between-talker variability across age groups that dwarfs the between-talker differences between adult men and women. Moreover, children do not achieve fully adult-like speech motor control until they are young adults, and their vocabularies and phonological proficiency are developing as well, leading to considerably more within-talker variability as well as more between-talker variability. The second problem then is to determine what annotation schemas and analysis techniques can most usefully capture relevant aspects of this variability. Indeed, standard acoustic characterizations applied to child speech reveal that adult-centered annotation schemas fail to capture phenomena such as the emergence of covert contrasts in children's developing phonological systems, while also revealing children's nonuniform progression toward community speech norms as they acquire the phonological systems of their native languages. Both problems point to the need for more basic research into the growth and development of the articulatory system (as well as of the lexicon and phonological system) that is oriented explicitly toward the construction of

  3. Report to Congress : review of the National Transit Database

    DOT National Transportation Integrated Search

    2000-05-30

    This report presents the findings and recommendations of the evaluation of the Federal Transit Administration (FTA) National Transit Database (NTD), conducted in accordance with the direction of the House and Senate Committees of Appropriations, as s...

  4. The Golosiiv on-line plate archive database, management and maintenance

    NASA Astrophysics Data System (ADS)

    Pakuliak, L.; Sergeeva, T.

    2007-08-01

    We intend to create an online version of the database of the MAO NASU plate archive as VO-compatible structures, in accordance with the principles developed by the International Virtual Observatory Alliance, in order to make it available to the world astronomical community. The online version of the log-book database is built with MySQL+PHP. The data management system provides a user interface, supports detailed traditional form-filling radial searches of plates, produces auxiliary samplings, lists each collection, and allows browsing of detailed collection descriptions. The administrative tool allows the database administrator to correct data, add new data sets, and control the integrity and consistency of the database as a whole. The VO-compatible database is currently being constructed according to the demands and principles of international data archives; it has to be strongly generalized in order to support data mining through standard interfaces and to best fit the requirements of the WFPDB Group for plate-catalogue databases. Ongoing enhancements of the database toward the WFPDB bring the problem of data verification to the forefront, as they demand a high degree of data reliability. The process of data verification is practically endless and inseparable from data management, owing to the diversity of data-error sources and, hence, the variety of approaches to identifying and fixing them. The current status of the MAO NASU glass archive forces activity in both directions simultaneously: enhancement of the log-book database with new sets of observational data, as well as creation of the generalized database and cross-identification between the two. The VO-compatible version of the database is being supplied with digitized plate data obtained with a MicroTek ScanMaker 9800 XL TMA. The scanning procedure is not total but is conducted selectively in the frames of special

  5. Coding of obesity in administrative hospital discharge abstract data: accuracy and impact for future research studies.

    PubMed

    Martin, Billie-Jean; Chen, Guanmin; Graham, Michelle; Quan, Hude

    2014-02-13

    Obesity is a pervasive problem and a popular subject of academic assessment. The ability to take advantage of existing data, such as administrative databases, to study obesity is appealing. The objective of our study was to assess the validity of obesity coding in an administrative database and compare the association between obesity and outcomes in an administrative database versus a registry. This study was conducted using a coronary catheterization registry and an administrative database (the Discharge Abstract Database (DAD)). A body mass index (BMI) ≥30 kg/m² within the registry defined obesity. In the DAD, obesity was defined by diagnosis codes E65-E68 (ICD-10). The sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of an obesity diagnosis in the DAD were determined using obesity diagnosis in the registry as the referent. The association between obesity and outcomes was assessed. The study population of 17380 subjects was largely male (68.8%) with a mean BMI of 27.0 kg/m². Obesity prevalence was lower in the DAD than in the registry (2.4% vs. 20.3%). A diagnosis of obesity in the DAD had a sensitivity of 7.75%, specificity of 98.98%, NPV of 80.84% and PPV of 65.94%. Obesity was associated with a decreased risk of death or re-hospitalization, though non-significantly within the DAD. Obesity was significantly associated with an increased risk of cardiac procedures in both databases. Overall, obesity was poorly coded in the DAD. However, when coded, it was coded accurately. Administrative databases are not an optimal data source for obesity prevalence and incidence surveillance but could be used to define obese cohorts for follow-up.
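
    As a consistency check, the full 2×2 table can be approximately reconstructed from the reported prevalences and accuracy metrics (the counts below are rounded back-calculations, not values taken from the paper):

```python
n = 17380
registry_obese = round(0.203 * n)    # ~3528 with BMI >= 30 (reference standard)
dad_coded = round(0.024 * n)         # ~417 with an obesity ICD-10 code in the DAD
tp = round(0.0775 * registry_obese)  # sensitivity 7.75% -> ~273 true positives
fp = dad_coded - tp                  # ~144 coded obese but not obese in registry
tn = (n - registry_obese) - fp       # ~13708 correctly uncoded
fn = registry_obese - tp             # ~3255 obese but never coded

print(f"PPV  = {tp / dad_coded:.3f}")             # ~0.655, close to reported 65.94%
print(f"Spec = {tn / (n - registry_obese):.4f}")  # ~0.9896, matches 98.98%
print(f"NPV  = {tn / (tn + fn):.3f}")             # ~0.808, matches 80.84%
```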

  6. 75 FR 61553 - National Transit Database: Amendments to the Urbanized Area Annual Reporting Manual and to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-05

    ... Transit Database: Amendments to the Urbanized Area Annual Reporting Manual and to the Safety and Security... the 2011 National Transit Database Urbanized Area Annual Reporting Manual and Announcement of... Transit Administration's (FTA) National Transit Database (NTD) reporting requirements, including...

  7. A Method to Calculate and Analyze Residents' Evaluations by Using a Microcomputer Data-Base Management System.

    ERIC Educational Resources Information Center

    Mills, Myron L.

    1988-01-01

    A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)

  8. Internet Tools Access Administrative Data at the University of Delaware.

    ERIC Educational Resources Information Center

    Jacobson, Carl

    1995-01-01

    At the University of Delaware, World Wide Web tools are used to produce multiplatform administrative applications, including hyperreporting, mixed media, electronic forms, and kiosk services. Web applications are quickly and easily crafted to interact with administrative databases. They are particularly suited to customer outreach efforts,…

  9. 77 FR 65546 - Western Area Power Administration; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. EF11-7-000] Western Area Power Administration; Notice of Filing Take notice that on July 14, 2011, Western Area Power Administration submitted its revised version of its Tariff Title for the Western Rate Schedules database, to be...

  10. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the empirical knowledge base developed from these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
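
    The first theoretical goal, a "dynamic projection" that adjusts for latitude to maintain equal-sized cells, rests on the fact that a degree of longitude shrinks with the cosine of latitude. A minimal sketch, assuming a spherical Earth (the study's actual formulas are not given in the abstract):

```python
import math

R = 6371.0  # mean Earth radius in km (spherical assumption)

def cell_width_km(lat_deg, dlon_deg):
    """Ground width of a raster cell dlon_deg wide at latitude lat_deg."""
    return math.radians(dlon_deg) * R * math.cos(math.radians(lat_deg))

def dlon_for_target_width(lat_deg, target_km):
    """Widen the cell's longitudinal extent with latitude so its ground width stays constant."""
    return math.degrees(target_km / (R * math.cos(math.radians(lat_deg))))

print(cell_width_km(0, 1))                # ~111.2 km per degree at the equator
print(cell_width_km(60, 1))               # ~55.6 km at 60 degrees latitude
print(dlon_for_target_width(60, 111.19))  # ~2 degrees needed at 60 N for an equal-width cell
```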

  11. Database in Artificial Intelligence.

    ERIC Educational Resources Information Center

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., annual fee entitles user to unlimited access to database, document provision, and printed awareness…

  12. Protocol for establishing an infant feeding database linkable with population-based administrative data: a prospective cohort study in Manitoba, Canada

    PubMed Central

    Nickel, Nathan Christopher; Warda, Lynne; Kummer, Leslie; Chateau, Joanne; Heaman, Maureen; Green, Chris; Katz, Alan; Paul, Julia; Perchuk, Carolyn; Girard, Darlene; Larocque, Lorraine; Enns, Jennifer Emily; Shaw, Souradet

    2017-01-01

    Introduction Breast feeding is associated with many health benefits for mothers and infants. But despite extensive public health efforts to promote breast feeding, many mothers do not achieve their own breastfeeding goals, and inequities in breastfeeding rates persist between high- and low-income mother–infant dyads. Developing targeted programmes to support breastfeeding dyads and reduce inequities between mothers of different socioeconomic status is a priority for public health practitioners and health policy decision-makers; however, many jurisdictions lack the timely and comprehensive population-level data on infant-feeding practices required to monitor trends in breastfeeding initiation and duration. This protocol describes the establishment of a population-based infant-feeding database in the Canadian province of Manitoba, providing opportunities to develop and evaluate breastfeeding support programmes. Methods and analysis Routinely collected administrative health data on mothers' infant-feeding practices will be captured during regular vaccination visits using the Teleform fax tool, which converts handwritten information to an electronic format. The infant-feeding data will be linked to the Manitoba Population Research Data Repository, a comprehensive collection of population-based information spanning health, education and social services domains. The linkage will allow us to answer research questions about infant-feeding practices and to evaluate how effective current initiatives promoting breast feeding are. Ethics and dissemination Approvals have been granted by the Health Research Ethics Board at the University of Manitoba. Our integrative knowledge translation approach will involve disseminating findings through government and community briefings, presenting at academic conferences and publishing in scientific journals. PMID:29061626

  13. Pitfalls of using administrative data sets to describe clinical outcomes in sickle cell disease.

    PubMed

    Claster, Susan; Termuhlen, Amanda; Schrager, Sheree M; Wolfson, Julie A; Iverson, Ellen

    2013-12-01

    Administrative data sets are increasingly being used to describe clinical care in sickle cell disease (SCD). We recently used such an administrative database to look at the frequency of acute chest syndrome (ACS) and the use of transfusion to treat this syndrome in California patients from 2005 to 2010. Our results revealed a surprisingly low rate of transfusion for this life-threatening situation. To validate these results, we compared California OSHPD (Office of Statewide Health Planning and Development) administrative data with medical record review of patients diagnosed with ACS identified by two pediatric and one adult hospital databases during 2009-2010. ACS or a related pulmonary process accounted for one-fifth of the inpatient hospital discharges associated with the diagnosis of SCD between 2005 and 2010. Only 47% of those discharges were associated with a transfusion. However, chart reviews found that hospital databases overreported visits for ACS, while OSHPD underreported transfusions compared to hospital data. The net effect was a markedly higher true rate of transfusion (40.7% vs. 70.2%). These results point out the difficulties in using this administrative database to describe clinical care for ACS, given the variation in clinician recognition of this entity. OSHPD data are widely used to inform health care policy in California and contribute to national databases. Our study suggests that using this administrative database to assess clinical care for SCD may lead to inaccurate assumptions about quality of care for SCD patients in California. Future studies on health services in SCD may require a different methodology. © 2013 Wiley Periodicals, Inc.

  14. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not aid in linking the Basic Events to the data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data is used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian Updating based on flight and testing experience. A simple, unique metadata field in both the model and database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
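
    Of the data manipulations listed, Bayesian updating based on flight and testing experience is the most formulaic; for a failure rate it is commonly done with a conjugate gamma-Poisson update. A minimal sketch with hypothetical prior parameters (the abstract does not specify the model's actual priors):

```python
def gamma_poisson_update(alpha, beta, failures, exposure_hours):
    """Conjugate Bayesian update of a failure rate.
    Prior: rate ~ Gamma(alpha, beta); data: 'failures' observed in 'exposure_hours'."""
    return alpha + failures, beta + exposure_hours

# hypothetical generic prior with mean rate alpha/beta = 1e-5 per hour
alpha, beta = 0.5, 5.0e4
# fold in (hypothetical) flight experience: zero failures over 200,000 hours
alpha, beta = gamma_poisson_update(alpha, beta, failures=0, exposure_hours=2.0e5)
print(f"posterior mean rate = {alpha / beta:.2e} per hour")  # 2.00e-06, shrunk by the evidence
```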

  15. Database for Safety-Oriented Tracking of Chemicals

    NASA Technical Reports Server (NTRS)

    Stump, Jacob; Carr, Sandra; Plumlee, Debrah; Slater, Andy; Samson, Thomas M.; Holowaty, Toby L.; Skeete, Darren; Haenz, Mary Alice; Hershman, Scot; Raviprakash, Pushpa

    2010-01-01

    SafetyChem is a computer program that maintains a relational database for tracking chemicals and associated hazards at Johnson Space Center (JSC) by use of a Web-based graphical user interface. The SafetyChem database is accessible to authorized users via a JSC intranet. All new chemicals pass through a safety office, where information on hazards, required personal protective equipment (PPE), fire-protection warnings, and target organ effects (TOEs) is extracted from material safety data sheets (MSDSs) and recorded in the database. The database facilitates real-time management of inventory with attention to such issues as stability, shelf life, reduction of waste through transfer of unused chemicals to laboratories that need them, quantification of chemical wastes, and identification of chemicals for which disposal is required. Upon searching the database for a chemical, the user receives information on physical properties of the chemical, hazard warnings, required PPE, a link to the MSDS, and references to the applicable International Standards Organization (ISO) 9000 standard work instructions and the applicable job hazard analysis. Also, to reduce the labor hours needed to comply with reporting requirements of the Occupational Safety and Health Administration, the data can be directly exported into the JSC hazardous-materials database.

  16. Construction and validation of a population-based bone densitometry database.

    PubMed

    Leslie, William D; Caetano, Patricia A; Macwilliam, Leonard R; Finlayson, Gregory S

    2005-01-01

    Utilization of dual-energy X-ray absorptiometry (DXA) for the initial diagnostic assessment of osteoporosis and in monitoring treatment has risen dramatically in recent years. Population-based studies of the impact of DXA and osteoporosis remain challenging because of incomplete and fragmented test data that exist in most regions. Our aim was to create and assess completeness of a database of all clinical DXA services and test results for the province of Manitoba, Canada and to present descriptive data resulting from testing. A regionally based bone density program for the province of Manitoba, Canada was established in 1997. Subsequent DXA services were prospectively captured in a program database. This database was retrospectively populated with earlier DXA results dating back to 1990 (the year that the first DXA scanner was installed) by integrating multiple data sources. A random chart audit was performed to assess completeness and accuracy of this dataset. For comparison, testing rates determined from the DXA database were compared with physician administrative claims data. There was a high level of completeness of this database (>99%) and accurate personal identifier information sufficient for linkage with other health care administrative data (>99%). This contrasted with physician billing data that were found to be markedly incomplete. Descriptive data provide a profile of individuals receiving DXA and their test results. In conclusion, the Manitoba bone density database has great potential as a resource for clinical and health policy research because it is population based with a high level of completeness and accuracy.

  17. Iodine in food- and dietary supplement–composition databases

    PubMed Central

    Pehrsson, Pamela R; Patterson, Kristine Y; Spungen, Judith H; Wirtz, Mark S; Andrews, Karen W; Dwyer, Johanna T; Swanson, Christine A

    2016-01-01

    The US Food and Drug Administration (FDA) and the Nutrient Data Laboratory (NDL) of the USDA Agricultural Research Service have worked independently on determining the iodine content of foods and dietary supplements and are now harmonizing their efforts. The objective of the current article is to describe the harmonization plan and the results of initial iodine analyses accomplished under that plan. For many years, the FDA’s Total Diet Study (TDS) has measured iodine concentrations in selected foods collected in 4 regions of the country each year. For more than a decade, the NDL has collected and analyzed foods as part of the National Food and Nutrient Analysis Program; iodine analysis is now being added to the program. The NDL recently qualified a commercial laboratory to conduct iodine analysis of foods by an inductively coupled plasma mass spectrometry (ICP-MS) method. Co-analysis of a set of samples by the commercial laboratory using the ICP-MS method and by the FDA laboratory using its standard colorimetric method yielded comparable results. The FDA recently reviewed historical TDS data for trends in the iodine content of selected foods, and the NDL analyzed samples of a limited subset of those foods for iodine. The FDA and the NDL are working to combine their data on iodine in foods and to produce an online database that can be used for estimating iodine intake from foods in the US population. In addition, the NDL continues to analyze dietary supplements for iodine and, in collaboration with the NIH Office of Dietary Supplements, to publish the data online in the Dietary Supplement Ingredient Database. The goal is to provide, through these 2 harmonized databases and the continuing TDS focus on iodine, improved tools for estimating iodine intake in population studies. PMID:27534627

  18. Complications after craniosynostosis surgery: comparison of the 2012 Kids' Inpatient Database and Pediatric NSQIP Database.

    PubMed

    Lin, Yimo; Pan, I-Wen; Mayer, Rory R; Lam, Sandi

    2015-12-01

    OBJECT Research conducted using large administrative data sets has increased in recent decades, but reports on the fidelity and reliability of such data have been mixed. The goal of this project was to compare data from a large, administrative claims data set with a quality improvement registry in order to ascertain similarities and differences in content. METHODS Data on children younger than 12 months with nonsyndromic craniosynostosis who underwent surgery in 2012 were queried in both the Kids' Inpatient Database (KID) and the American College of Surgeons Pediatric National Surgical Quality Improvement Program (Peds NSQIP). Data from published clinical craniosynostosis surgery series are reported for comparison. RESULTS Among patients younger than 12 months of age, a total of 1765 admissions were identified in KID and 391 in Peds NSQIP in 2012. Only nonsyndromic patients were included. The mean length of stay was 3.2 days in KID and 4 days in Peds NSQIP. The rates of cardiac events (0.5% in KID, 0.3% in Peds NSQIP, and 0.4%-2.2% in the literature), stroke/intracranial bleeds (0.4% in KID, 0.5% in Peds NSQIP, and 0.3%-1.2% in the literature), infection (0.2% in KID, 0.8% in Peds NSQIP, and 0%-8% in the literature), wound disruption (0.2% in KID, 0.5% in Peds NSQIP, 0%-4% in the literature), and seizures (0.7% in KID, 0.8% in Peds NSQIP, 0%-0.8% in the literature) were low and similar between the 2 data sets. The reported rates of blood transfusion (36% in KID, 64% in Peds NSQIP, and 1.7%-100% in the literature) varied between the 2 data sets. CONCLUSIONS Both the KID and Peds NSQIP databases provide large samples of surgical patients, with more cases reported in KID. The rates of complications studied were similar between the 2 data sets, with the exception of blood transfusion events where the retrospective chart review process of Peds NSQIP captured almost double the rate reported in KID.

  19. Low template STR typing: effect of replicate number and consensus method on genotyping reliability and DNA database search results.

    PubMed

    Benschop, Corina C G; van der Beek, Cornelis P; Meiland, Hugo C; van Gorp, Ankie G M; Westen, Antoinette A; Sijen, Titia

    2011-08-01

    To analyze DNA samples with very low DNA concentrations, various methods have been developed that sensitize short tandem repeat (STR) typing. Sensitized DNA typing is accompanied by stochastic amplification effects, such as allele drop-outs and drop-ins. Therefore low template (LT) DNA profiles are interpreted with care. One can either try to infer the genotype by a consensus method that uses alleles confirmed in replicate analyses, or one can use a statistical model to evaluate the strength of the evidence in a direct comparison with a known DNA profile. In this study we focused on the first strategy and we show that the procedure by which the consensus profile is assembled will affect genotyping reliability. In order to gain insight in the roles of replicate number and requested level of reproducibility, we generated six independent amplifications of samples of known donors. The LT methods included both increased cycling and enhanced capillary electrophoresis (CE) injection [1]. Consensus profiles were assembled from two to six of the replications using four methods: composite (include all alleles), n-1 (include alleles detected in all but one replicate), n/2 (include alleles detected in at least half of the replicates) and 2× (include alleles detected twice). We compared the consensus DNA profiles with the DNA profile of the known donor, studied the stochastic amplification effects and examined the effect of the consensus procedure on DNA database search results. From all these analyses we conclude that the accuracy of LT DNA typing and the efficiency of database searching improve when the number of replicates is increased and the consensus method is n/2. The most functional number of replicates within this n/2 method is four (although a replicate number of three suffices for samples showing >25% of the alleles in standard STR typing). This approach was also the optimal strategy for the analysis of 2-person mixtures, although modified search strategies may be
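
    The four consensus rules compared in the study reduce to a single threshold on how many replicates an allele must appear in. A minimal sketch with made-up allele calls, treating each replicate as the set of alleles detected at one locus:

```python
from collections import Counter

def consensus(replicates, rule):
    """Build a consensus allele set from replicate LT-DNA profiles.
    Rules as defined in the study: 'composite' (any replicate), 'n-1'
    (all but one), 'n/2' (at least half), '2x' (at least twice)."""
    counts = Counter(a for rep in replicates for a in set(rep))
    n = len(replicates)
    threshold = {"composite": 1, "n-1": n - 1, "n/2": n / 2, "2x": 2}[rule]
    return {a for a, c in counts.items() if c >= threshold}

# 4 replicates at one locus, with drop-outs and one drop-in ('15')
reps = [{"12", "14"}, {"12", "14", "15"}, {"12"}, {"12", "14"}]
print(consensus(reps, "n/2"))  # {'12', '14'} -- the single drop-in is filtered out
```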

  20. Molecule database framework: a framework for creating database applications with chemical structure search capability

    PubMed Central

    2013-01-01

    Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However for specific requirements of in-house databases and processes no such solutions exist. Another issue is that commercial solutions have the risk of vendor lock-in and may require an expensive license of a proprietary relational database management system. To speed up and simplify the development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. Conclusions By using a simple web application it was

  1. Molecule database framework: a framework for creating database applications with chemical structure search capability.

    PubMed

    Kiener, Joos

    2013-12-11

    Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes, no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify the development of applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore, software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of the entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files. By using a simple web application, it was shown that Molecule Database Framework simplifies the development of applications requiring chemical structure search.
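
    The framework itself is Java, and its actual API is not reproduced here. As a rough illustration of the idea of hiding structure search behind a method call, the Python sketch below queries a PostgreSQL table through the Bingo cartridge; the table and column names are hypothetical, and the '::bingo.sub' operator cast follows our reading of the Bingo documentation, so both should be verified before use:

        # A minimal sketch of "structure search as a method call", assuming a
        # PostgreSQL database with the Bingo cartridge installed. Not the
        # actual (Java) API of Molecule Database Framework.
        import psycopg2

        class MoleculeRepository:
            def __init__(self, dsn):
                self.conn = psycopg2.connect(dsn)

            def substructure_search(self, smiles, limit=100):
                """Return ids of compounds containing the query substructure.
                Table/column names and operator syntax are assumptions."""
                with self.conn.cursor() as cur:
                    cur.execute(
                        "SELECT id FROM compound "
                        "WHERE structure @ (%s, '')::bingo.sub LIMIT %s",
                        (smiles, limit),
                    )
                    return [row[0] for row in cur.fetchall()]

        # repo = MoleculeRepository("dbname=chem user=chem")
        # hits = repo.substructure_search("c1ccccc1")  # benzene substructure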

  2. Administrative health data in Canada: lessons from history.

    PubMed

    Lucyk, Kelsey; Lu, Mingshan; Sajobi, Tolulope; Quan, Hude

    2015-08-19

    Health decision-making requires evidence from high-quality data. As one example, the Discharge Abstract Database (DAD) compiles data from the majority of Canadian hospitals to form one of the most comprehensive and highly regarded administrative health databases available for health research, internationally. However, despite the success of this and other administrative health data resources, little is known about their history or the factors that have led to their success. The purpose of this paper is to provide an historical overview of Canadian administrative health data for health research, to contribute to the institutional memory of this field. We conducted a qualitative content analysis of approximately 20 key sources to construct an historical narrative of administrative health data in Canada. Specifically, we searched for content related to key events, individuals, challenges, and successes in this field over time. In Canada, administrative health data for health research has developed in tandem with provincial research centres. Interestingly, the lessons learned from this history align with the original recommendations of the 1964 Royal Commission on Health Services: (1) standardization and (2) centralization of data resources, (3) facilitated through governmental financial support. The historical overview provided here illustrates the need for longstanding partnerships between government and academia, since classification, terminology and standardization are time-consuming and ever-evolving processes. This paper will be of interest to those who work with administrative health data, and also to countries that are looking to build or improve upon their use of administrative health data for decision-making.

  3. Evaluation of a CFD Method for Aerodynamic Database Development using the Hyper-X Stack Configuration

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Engelund, Walter; Armand, Sasan; Bittner, Robert

    2004-01-01

    A computational fluid dynamic (CFD) study is performed on the Hyper-X (X-43A) Launch Vehicle stack configuration in support of the aerodynamic database generation in the transonic to hypersonic flow regime. The main aim of the study is the evaluation of a CFD method that can be used to support aerodynamic database development for similar future configurations. The CFD method uses the NASA Langley Research Center developed TetrUSS software, which is based on tetrahedral, unstructured grids. The Navier-Stokes computational method is first evaluated against a set of wind tunnel test data to gain confidence in the code's application to hypersonic Mach number flows. The evaluation includes comparison of the longitudinal stability derivatives on the complete stack configuration (which includes the X-43A/Hyper-X Research Vehicle, the launch vehicle and an adapter connecting the two), detailed surface pressure distributions at selected locations on the stack body and component (rudder, elevons) forces and moments. The CFD method is further used to predict the stack aerodynamic performance at flow conditions where no experimental data are available, as well as for component loads for mechanical design and aero-elastic analyses. An excellent match between the computed and the test data over a range of flow conditions provides a computational tool that may be used for future similar hypersonic configurations with confidence.

  4. [Interferons--its method of administration and adverse effect related to pharmacokinetics ].

    PubMed

    Furue, H

    1984-02-01

    The potential role of interferons in the treatment of malignant diseases is currently being evaluated. This paper reviews experimental and clinical findings regarding pharmacokinetics, methods of administration, and side reactions of interferons. Interferon in the blood is rapidly cleared from the circulation. Intramuscular injection of alpha-interferon produces low but stable interferon levels in the blood. However, in the case of beta-interferon, interferon is never detected consistently in the blood after intramuscular or subcutaneous administration. Studies with animal models suggest that doses higher than those given in current clinical trials will be necessary to obtain clearly beneficial effects in humans. The maximum safely tolerated daily dose is appreciably higher than that used in most previous studies, although even at this level, considerable toxicity may be encountered. Adequate methods of administration (route, dose, and interval) have not yet been established. The exact mechanism of anticancer activity is not yet well defined. The most frequent side reaction is fever; however, the exact mechanism causing these side reactions has also not been clarified. Dose-limiting central nervous system toxicities, hypotension, hypocalcaemia, and other effects are occasionally encountered. Antibodies to interferon are demonstrated in some cases. Purification of interferon does not always reduce side reactions. The treatment of cancer with interferon has only just started, and there are many problems to be solved. However, therapeutic benefit may be achieved in the treatment of malignant tumors by appropriate combinations of interferon with conventional treatment. More laboratory studies as well as carefully controlled clinical observations are warranted.

  5. Certifiable database generation for SVS

    NASA Astrophysics Data System (ADS)

    Schiefele, Jens; Damjanovic, Dejan; Kubbat, Wolfgang

    2000-06-01

    In future aircraft cockpits, SVS will be used to display 3D physical and virtual information to pilots. A review of prototype and production Synthetic Vision Displays (SVD) from Euro Telematic, UPS Advanced Technologies, Universal Avionics, VDO-Luftfahrtgeratewerk, and NASA is presented. As data sources, terrain, obstacle, navigation, and airport data are needed; Jeppesen-Sanderson, Inc. and Darmstadt Univ. of Technology are currently developing certifiable methods for the acquisition, validation, and processing of terrain, obstacle, and airport databases. The acquired data will be integrated into a High-Quality Database (HQ-DB). This database is the master repository; it contains all information relevant to all types of aviation applications. From the HQ-DB, SVS-relevant data are retrieved, converted, decimated, and adapted into an SVS Real-Time Onboard Database (RTO-DB). The process of data acquisition, verification, and data processing will be defined in a way that allows certification within DO-200A and the new RTCA/EUROCAE standards for airport and terrain data. The open formats proposed will be established and evaluated for industrial usability. Finally, a NASA-industry cooperation to develop industrial SVS products under the umbrella of the NASA Aviation Safety Program (ASP) is introduced. A key element of the SVS NASA-ASP effort is the Jeppesen-led task to develop methods for worldwide database generation and certification. Jeppesen will build three airport databases that will be used in flight trials with NASA aircraft.

  6. Empirical cost models for estimating power and energy consumption in database servers

    NASA Astrophysics Data System (ADS)

    Valdivia Garcia, Harold Dwight

    The explosive growth in the size of data centers, coupled with the widespread use of virtualization technology, has made power and energy consumption major concerns for data center administrators. Provisioning decisions must take into consideration not only target application performance but also the power demands and total energy consumption incurred by the hardware and software to be deployed at the data center. Failure to do so will result in damaged equipment, power outages, and inefficient operation. Since database servers comprise one of the most popular and important server applications deployed in such facilities, it becomes necessary to have accurate cost models that can predict the power and energy demands that each database workload will impose on the system. In this work we present an empirical methodology to estimate the power and energy cost of database operations. Our methodology uses multiple linear regression to derive accurate cost models that depend only on readily available statistics such as selectivity factors, tuple size, number of columns, and relation cardinality. Moreover, our method does not need measurements of individual hardware components, but rather total power and energy consumption measured at the server. We have implemented our methodology and run experiments with several server configurations. Our experiments indicate that we can predict power and energy more accurately than alternative methods found in the literature.
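
    A minimal sketch of this kind of multiple-linear-regression cost model, fitted by ordinary least squares with numpy; the feature set mirrors the statistics named in the abstract, but all values and names are illustrative, not the paper's data:

        import numpy as np

        # Illustrative training data: one row per workload run. Features:
        # selectivity factor, tuple size (bytes), number of columns,
        # relation cardinality. Targets are measured server power (W).
        X = np.array([
            [0.10, 128, 8, 1e6],
            [0.50, 256, 16, 5e6],
            [0.25, 64, 4, 2e6],
            [0.75, 512, 32, 1e7],
            [0.05, 128, 8, 5e5],
        ])
        y = np.array([95.0, 140.0, 105.0, 180.0, 90.0])

        # Add an intercept column and fit by ordinary least squares.
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        def predict_power(selectivity, tuple_size, n_columns, cardinality):
            return coef @ np.array([1.0, selectivity, tuple_size,
                                    n_columns, cardinality])

        print(predict_power(0.3, 256, 16, 3e6))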

  7. MRNIDX - Marine Data Index: Database Description, Operation, Retrieval, and Display

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1982-01-01

    A database referencing the location and content of data stored on magnetic media was designed to assist in the indexing of time-series and spatially dependent marine geophysical data collected or processed by the U.S. Geological Survey. The database was designed and created for input to the Geologic Retrieval and Synopsis Program (GRASP) to allow selective retrieval of information pertaining to location of data, data format, cruise, geographical bounds, and collection dates. This information is then used to locate the stored data for administrative purposes or further processing. Database utilization is divided into three distinct operations: the first is inventorying the data and updating the database; the second is retrieving information from the database; and the third is graphically displaying the geographical boundaries to which the retrieved information pertains.

  8. Validating malignant melanoma ICD-9-CM codes in Umbria, ASL Napoli 3 Sud and Friuli Venezia Giulia administrative healthcare databases: a diagnostic accuracy study

    PubMed Central

    Orso, Massimiliano; Serraino, Diego; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Cozzolino, Francesco; Granata, Annalisa; Gobbato, Michele; Stracci, Fabrizio; Ciullo, Valerio; Vitale, Maria Francesca; Orlandi, Walter; Montedori, Alessandro; Bidoli, Ettore

    2018-01-01

    Objectives To assess the accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in identifying subjects with melanoma. Design A diagnostic accuracy study comparing melanoma ICD-9-CM codes (index test) with medical chart (reference standard). Case ascertainment was based on neoplastic lesion of the skin and a histological diagnosis from a primary or metastatic site positive for melanoma. Setting Administrative databases from Umbria Region, Azienda Sanitaria Locale (ASL) Napoli 3 Sud (NA) and Friuli Venezia Giulia (FVG) Region. Participants 112, 130 and 130 cases (subjects with melanoma) were randomly selected from Umbria, NA and FVG, respectively; 94 non-cases (subjects without melanoma) were randomly selected from each unit. Outcome measures Sensitivity and specificity for ICD-9-CM code 172.x located in primary position. Results The most common melanoma subtype was malignant melanoma of skin of trunk, except scrotum (ICD-9-CM code: 172.5), followed by malignant melanoma of skin of lower limb, including hip (ICD-9-CM code: 172.7). The mean age of the patients ranged from 60 to 61 years. Most of the diagnoses were performed in surgical departments. The sensitivities were 100% (95% CI 96% to 100%) for Umbria, 99% (95% CI 94% to 100%) for NA and 98% (95% CI 93% to 100%) for FVG. The specificities were 88% (95% CI 80% to 93%) for Umbria, 77% (95% CI 69% to 85%) for NA and 79% (95% CI 71% to 86%) for FVG. Conclusions The case definition for melanoma based on clinical or instrumental diagnosis, confirmed by histological examination, showed excellent sensitivities and good specificities in the three operative units. Administrative databases from the three operative units can be used for epidemiological and outcome research of melanoma. PMID:29678984

  9. Validating malignant melanoma ICD-9-CM codes in Umbria, ASL Napoli 3 Sud and Friuli Venezia Giulia administrative healthcare databases: a diagnostic accuracy study.

    PubMed

    Orso, Massimiliano; Serraino, Diego; Abraha, Iosief; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Cozzolino, Francesco; Granata, Annalisa; Gobbato, Michele; Stracci, Fabrizio; Ciullo, Valerio; Vitale, Maria Francesca; Eusebi, Paolo; Orlandi, Walter; Montedori, Alessandro; Bidoli, Ettore

    2018-04-20

    To assess the accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in identifying subjects with melanoma. A diagnostic accuracy study comparing melanoma ICD-9-CM codes (index test) with medical chart (reference standard). Case ascertainment was based on neoplastic lesion of the skin and a histological diagnosis from a primary or metastatic site positive for melanoma. Administrative databases from Umbria Region, Azienda Sanitaria Locale (ASL) Napoli 3 Sud (NA) and Friuli Venezia Giulia (FVG) Region. 112, 130 and 130 cases (subjects with melanoma) were randomly selected from Umbria, NA and FVG, respectively; 94 non-cases (subjects without melanoma) were randomly selected from each unit. Sensitivity and specificity for ICD-9-CM code 172.x located in primary position. The most common melanoma subtype was malignant melanoma of skin of trunk, except scrotum (ICD-9-CM code: 172.5), followed by malignant melanoma of skin of lower limb, including hip (ICD-9-CM code: 172.7). The mean age of the patients ranged from 60 to 61 years. Most of the diagnoses were performed in surgical departments. The sensitivities were 100% (95% CI 96% to 100%) for Umbria, 99% (95% CI 94% to 100%) for NA and 98% (95% CI 93% to 100%) for FVG. The specificities were 88% (95% CI 80% to 93%) for Umbria, 77% (95% CI 69% to 85%) for NA and 79% (95% CI 71% to 86%) for FVG. The case definition for melanoma based on clinical or instrumental diagnosis, confirmed by histological examination, showed excellent sensitivities and good specificities in the three operative units. Administrative databases from the three operative units can be used for epidemiological and outcome research of melanoma. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
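
    Validation studies of this kind reduce to a 2x2 table of code-positive/negative against chart-positive/negative. A minimal Python sketch computing sensitivity and specificity with Wilson 95% confidence intervals (the counts are illustrative, not the study's raw data):

        from statsmodels.stats.proportion import proportion_confint

        def accuracy_vs_chart(tp, fn, tn, fp):
            """Sensitivity and specificity of a code algorithm against chart
            review, with Wilson 95% confidence intervals."""
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
            spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
            return sens, sens_ci, spec, spec_ci

        # Illustrative counts: 100 confirmed cases, all flagged by ICD-9-CM
        # 172.x; 94 non-cases, 83 correctly not flagged.
        print(accuracy_vs_chart(tp=100, fn=0, tn=83, fp=11))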

  10. Assessment of imputation methods using varying ecological information to fill the gaps in a tree functional trait database

    NASA Astrophysics Data System (ADS)

    Poyatos, Rafael; Sus, Oliver; Vilà-Cabrera, Albert; Vayreda, Jordi; Badiella, Llorenç; Mencuccini, Maurizio; Martínez-Vilalta, Jordi

    2016-04-01

    Plant functional traits are increasingly being used in ecosystem ecology thanks to the growing availability of large ecological databases. However, these databases usually contain a large fraction of missing data because measuring plant functional traits systematically is labour-intensive and because most databases are compilations of datasets with different sampling designs. As a result, within a given database, there is an inevitable variability in the number of traits available for each data entry and/or the species coverage in a given geographical area. The presence of missing data may severely bias trait-based analyses, such as the quantification of trait covariation or trait-environment relationships and may hamper efforts towards trait-based modelling of ecosystem biogeochemical cycles. Several data imputation (i.e. gap-filling) methods have been recently tested on compiled functional trait databases, but the performance of imputation methods applied to a functional trait database with a regular spatial sampling has not been thoroughly studied. Here, we assess the effects of data imputation on five tree functional traits (leaf biomass to sapwood area ratio, foliar nitrogen, maximum height, specific leaf area and wood density) in the Ecological and Forest Inventory of Catalonia, an extensive spatial database (covering 31900 km2). We tested the performance of species mean imputation, single imputation by the k-nearest neighbors algorithm (kNN) and a multiple imputation method, Multivariate Imputation with Chained Equations (MICE) at different levels of missing data (10%, 30%, 50%, and 80%). We also assessed the changes in imputation performance when additional predictors (species identity, climate, forest structure, spatial structure) were added in kNN and MICE imputations. We evaluated the imputed datasets using a battery of indexes describing departure from the complete dataset in trait distribution, in the mean prediction error, in the correlation matrix
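
    The three imputation families compared here can be sketched with standard tools. A minimal scikit-learn/pandas illustration, assuming a small trait table with hypothetical column names; note that sklearn's IterativeImputer is MICE-inspired single imputation, so true multiple imputation would repeat it with different random states and pool the results:

        import numpy as np
        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import KNNImputer, IterativeImputer

        # Illustrative trait table: rows are inventory plots, NaN = missing.
        traits = pd.DataFrame({
            "species": ["A", "A", "B", "B", "B"],
            "sla": [12.1, np.nan, 8.4, 8.9, np.nan],
            "wood_density": [0.45, 0.48, np.nan, 0.61, 0.59],
        })

        # 1) Species-mean imputation (returns the numeric columns only).
        by_species = traits.groupby("species").transform(
            lambda s: s.fillna(s.mean()))

        # 2) kNN imputation on the numeric traits.
        num = traits[["sla", "wood_density"]]
        knn = pd.DataFrame(KNNImputer(n_neighbors=2).fit_transform(num),
                           columns=num.columns)

        # 3) Chained-equations (MICE-style) imputation; rerun with several
        #    random states and pool for genuine multiple imputation.
        mice = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(num),
                            columns=num.columns)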

  11. Windshear database for forward-looking systems certification

    NASA Technical Reports Server (NTRS)

    Switzer, G. F.; Proctor, F. H.; Hinton, D. A.; Aanstoos, J. V.

    1993-01-01

    This document contains a description of a comprehensive database that is to be used for certification testing of airborne forward-look windshear detection systems. The database was developed by NASA Langley Research Center, at the request of the Federal Aviation Administration (FAA), to support the industry initiative to certify and produce forward-look windshear detection equipment. The database contains high-resolution, three-dimensional fields for meteorological variables that may be sensed by forward-looking systems. The database is made up of seven case studies which have been generated by the Terminal Area Simulation System, a state-of-the-art numerical system for the realistic modeling of windshear phenomena. The selected cases represent a wide spectrum of windshear events. General descriptions and figures from each of the case studies are included, as well as equations for the F-factor, radar-reflectivity factor, and rainfall rate. The document also describes scenarios and paths through the data sets, jointly developed by NASA and the FAA, to meet FAA certification testing objectives. Instructions for reading and verifying the data from tape are included.
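
    For reference, the windshear hazard index (F-factor) mentioned here is usually written in the Bowles form below; this is the standard definition from the windshear literature, and the exact notation in the NASA document may differ:

        \[
          F = \frac{\dot{W}_x}{g} - \frac{W_h}{V}
        \]

    where \(\dot{W}_x\) is the rate of change of the horizontal wind along the flight path, \(g\) the gravitational acceleration, \(W_h\) the vertical wind component, and \(V\) the aircraft airspeed; positive F indicates performance-decreasing shear.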

  12. The accuracy of burn diagnosis codes in health administrative data: A validation study.

    PubMed

    Mason, Stephanie A; Nathens, Avery B; Byrne, James P; Fowler, Rob; Gonzalez, Alejandro; Karanicolas, Paul J; Moineddin, Rahim; Jeschke, Marc G

    2017-03-01

    Health administrative databases may provide rich sources of data for the study of outcomes following burn. We aimed to determine the accuracy of International Classification of Diseases diagnoses codes for burn in a population-based administrative database. Data from a regional burn center's clinical registry of patients admitted between 2006-2013 were linked to administrative databases. Burn total body surface area (TBSA), depth, mechanism, and inhalation injury were compared between the registry and administrative records. The sensitivity, specificity, and positive and negative predictive values were determined, and coding agreement was assessed with the kappa statistic. 1215 burn center patients were linked to administrative records. TBSA codes were highly sensitive and specific for ≥10 and ≥20% TBSA (89/93% sensitive and 95/97% specific), with excellent agreement (κ, 0.85/κ, 0.88). Codes were weakly sensitive (68%) in identifying ≥10% TBSA full-thickness burn, though highly specific (86%) with moderate agreement (κ, 0.46). Codes for inhalation injury had limited sensitivity (43%) but high specificity (99%) with moderate agreement (κ, 0.54). Burn mechanism had excellent coding agreement (κ, 0.84). Administrative data diagnosis codes accurately identify burn by burn size and mechanism, while identification of inhalation injury or full-thickness burns is less sensitive but highly specific. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
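
    Coding agreement of the sort reported here (the kappa statistic) is easy to reproduce for a paired registry/claims sample. A minimal sketch with scikit-learn, using made-up labels:

        from sklearn.metrics import cohen_kappa_score

        # Illustrative paired labels per patient:
        # 1 = inhalation injury recorded (registry) / coded (administrative).
        registry = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
        admin    = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]

        kappa = cohen_kappa_score(registry, admin)
        print(f"kappa = {kappa:.2f}")  # agreement beyond chance, ~0.55 here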

  13. Predicting Occurrence of Spine Surgery Complications Using "Big Data" Modeling of an Administrative Claims Database.

    PubMed

    Ratliff, John K; Balise, Ray; Veeravagu, Anand; Cole, Tyler S; Cheng, Ivan; Olshen, Richard A; Tian, Lu

    2016-05-18

    Postoperative metrics are increasingly important in determining standards of quality for physicians and hospitals. Although complications following spinal surgery have been described, procedural and patient variables have yet to be incorporated into a predictive model of adverse-event occurrence. We sought to develop a predictive model of complication occurrence after spine surgery. We used longitudinal prospective data from a national claims database and developed a predictive model incorporating complication type and frequency of occurrence following spine surgery procedures. We structured our model to assess the impact of features such as preoperative diagnosis, patient comorbidities, location in the spine, anterior versus posterior approach, whether fusion had been performed, whether instrumentation had been used, number of levels, and use of bone morphogenetic protein (BMP). We assessed a variety of adverse events. Prediction models were built using logistic regression with additive main effects, and logistic regression with main effects as well as all two- and three-factor interactions. Least absolute shrinkage and selection operator (LASSO) regularization was used to select features. Competing approaches included boosted additive trees and the classification and regression trees (CART) algorithm. The final prediction performance was evaluated by estimating the area under a receiver operating characteristic curve (AUC) as predictions were applied to independent validation data and compared with the Charlson comorbidity score. The model was developed from 279,135 records of patients with a minimum duration of follow-up of 30 days. Preliminary assessment showed an adverse-event rate of 13.95%, well within norms reported in the literature. We used the first 80% of the records for training (to predict adverse events) and the remaining 20% of the records for validation. There was remarkable similarity among methods, with an AUC of 0.70 for predicting the occurrence of adverse events.
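
    A minimal sketch of LASSO-regularized logistic regression with held-out AUC evaluation, as described above, on synthetic data with an event rate near the reported 14%; this illustrates the model class only, not the study's features or records:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 5000
        # Illustrative features standing in for procedure/patient variables
        # (approach, fusion, instrumentation, levels, BMP, comorbidities...).
        X = rng.normal(size=(n, 20))
        logit = X[:, 0] - 0.5 * X[:, 1] + 0.25 * X[:, 2] - 1.8
        y = rng.random(n) < 1 / (1 + np.exp(-logit))  # ~14% event rate

        # 80/20 split as in the study; the L1 penalty performs LASSO-style
        # feature selection inside the logistic model.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=0)
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        model.fit(X_tr, y_tr)
        print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))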

  14. 47 CFR 52.26 - NANC Recommendations on Local Number Portability Administration.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... perform a database query to determine if the telephone number has been ported to another local exchange carrier, the local exchange carrier may block the unqueried call only if performing the database query is... manage and oversee the local number portability administrators, subject to review by the NANC, but only...

  15. 47 CFR 52.26 - NANC Recommendations on Local Number Portability Administration.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... perform a database query to determine if the telephone number has been ported to another local exchange carrier, the local exchange carrier may block the unqueried call only if performing the database query is... manage and oversee the local number portability administrators, subject to review by the NANC, but only...

  16. A Dynamic Integration Method for Borderland Database using OSM data

    NASA Astrophysics Data System (ADS)

    Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.

    2013-11-01

    Spatial data are fundamental to borderland analyses of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers several neighboring countries' borderland regions, the data are difficult for any single research institution or government to acquire. VGI has proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, so VGI is one reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource. But the OSM data model is far different from traditional authoritative geographic information, so OSM data need to be converted to the scientist's customized data model; and with the real world changing fast, the converted data also need to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method is presented for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semi-automatic) integration and updating of the borderland database by programming. The developed system was intensively tested.
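
    The change-selection step can be sketched against the OsmChange (.osc) daily diff format, whose root holds <create>, <modify>, and <delete> blocks. A minimal Python illustration that keeps node changes falling inside a bounding box (the function name and box are ours; ways and relations carry no coordinates and would additionally need their member nodes resolved):

        import xml.etree.ElementTree as ET

        def changed_nodes_in_bbox(osc_path, min_lat, min_lon, max_lat, max_lon):
            """Yield (action, node id) for nodes in a daily OsmChange diff
            that fall inside the research area."""
            tree = ET.parse(osc_path)
            for action in tree.getroot():  # <create>, <modify>, <delete>
                for node in action.findall("node"):
                    lat, lon = node.get("lat"), node.get("lon")
                    if lat is None or lon is None:  # deletes may omit coords
                        continue
                    if (min_lat <= float(lat) <= max_lat
                            and min_lon <= float(lon) <= max_lon):
                        yield action.tag, node.get("id")

        # for tag, nid in changed_nodes_in_bbox("day.osc", 20.0, 100.0, 25.0, 108.0):
        #     print(tag, nid)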

  17. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows performing large-scale web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology, meaning that participants do not need to install any application and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as templates. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  18. Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana

    2002-01-01

    Computer-readable databases have become an integral part of chemical research, from planning data acquisition through to interpreting the information generated. The databases available today are numerical, spectral and bibliographic. Data representation by different schemes--relational, hierarchical and object-oriented--is demonstrated. A quality index (QI) throws light on the quality of the data. The objectives, prospects and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond a manageable number, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.

  19. Oracle Applications Patch Administration Tool (PAT) Beta Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2002-01-04

    PAT is a Patch Administration Tool that provides analysis, tracking, and management of Oracle Application patches. Its capabilities are outlined below.
    Patch Analysis and Management:
    - Patch Data Maintenance -- track which Oracle Application patches have been applied to which database instance and machine
    - Patch Analysis -- capture text files (readme.txt and driver files); form, report, PL/SQL package, SQL script, and JSP module comparison detail; parse and load the current applptch.txt (10.7) or load patch data from Oracle Application database patch tables (11i)
    - Display Analysis -- compare a patch to be applied with the currently installed Oracle Application appl_top code versions; patch detail; module comparison detail; analyze and display one Oracle Application module patch
    - Patch Management -- automatic queueing and execution of patches
    Administration:
    - Parameter maintenance -- settings for the directory structure of the Oracle Application appl_top
    - Validation data maintenance -- machine names and instances to patch
    Operation:
    - Patch Data Maintenance -- schedule a patch (queue for later execution); run a patch (queue for immediate execution); review the patch logs
    - Patch Management Reports

  20. [Explore method about post-marketing safety re-evaluation of Chinese patent medicines based on HIS database in real world].

    PubMed

    Yang, Wei; Xie, Yanming; Zhuang, Yan

    2011-10-01

    There are many kinds of Chinese traditional patent medicine used in clinical practice, and many adverse events have been reported by clinical professionals. The safety of Chinese patent medicines is of great concern to patients and physicians. At present, many researchers inside and outside China have studied re-evaluation methods for post-marketing Chinese medicine safety; however, using data from hospital information systems (HIS) to re-evaluate the safety of post-marketing Chinese traditional patent medicines is rare. The real-world HIS database is a good resource, rich in information, for researching medicine safety. This study planned to analyze HIS data from ten top general hospitals in Beijing, forming a large real-world HIS database with a capacity of 1,000,000 cases in total after a series of data cleaning and integration procedures. This study could be a new project that uses HIS information to evaluate traditional Chinese medicine safety. A clear protocol has been completed as the first step of the whole study. The protocol is as follows. First, separate each Chinese traditional patent medicine in the total HIS database into its own database. Second, select related laboratory test indexes as the safety evaluation outcomes, such as blood routine, urine routine, feces routine, conventional coagulation, liver function, kidney function and other tests. Third, use data mining methods to identify selected safety outcomes that changed abnormally before and after use of the Chinese patent medicine. Finally, judge the relationship between those abnormal changes and the Chinese patent medicine. We hope this method will supply useful information to Chinese medicine researchers interested in the safety evaluation of traditional Chinese medicine.
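
    The third step of the protocol (flagging laboratory indexes that move from normal to abnormal across administration) could look like the following pandas sketch; the table layout, column names, and reference range are hypothetical stand-ins, not the study's data model:

        import pandas as pd

        # Hypothetical long-format lab table: one row per patient, test, and
        # period ('before'/'after' first administration of the medicine).
        labs = pd.DataFrame({
            "patient": [1, 1, 2, 2],
            "test": ["ALT", "ALT", "ALT", "ALT"],
            "period": ["before", "after", "before", "after"],
            "value": [30.0, 35.0, 28.0, 95.0],
        })
        NORMAL = {"ALT": (7.0, 56.0)}  # illustrative reference range (U/L)

        wide = labs.pivot_table(index=["patient", "test"],
                                columns="period", values="value").reset_index()
        lo_hi = wide["test"].map(NORMAL)
        wide["normal_before"] = [lo <= b <= hi
                                 for (lo, hi), b in zip(lo_hi, wide["before"])]
        wide["abnormal_after"] = [not (lo <= a <= hi)
                                  for (lo, hi), a in zip(lo_hi, wide["after"])]
        # Patients whose index was normal before but out of range after.
        flagged = wide[wide["normal_before"] & wide["abnormal_after"]]
        print(flagged[["patient", "test", "before", "after"]])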

  1. Personality and Student Performance on Evaluation Methods Used in Business Administration Courses

    ERIC Educational Resources Information Center

    Lakhal, Sawsen; Sévigny, Serge; Frenette, Éric

    2015-01-01

    The objective of this study was to verify whether personality (Big Five model) influences performance on the evaluation methods used in business administration courses. A sample of 169 students enrolled in two compulsory undergraduate business courses responded to an online questionnaire. As it is difficult within the same course to assess…

  2. New Directions in Library and Information Science Education. Final Report. Volume 2.6: Database Distributor/Service Professional Competencies.

    ERIC Educational Resources Information Center

    Griffiths, Jose-Marie; And Others

    This document contains validated activities and competencies needed by librarians working in a database distributor/service organization. The activities of professionals working in database distributor/service organizations are listed by function: Database Processing; Customer Support; System Administration; and Planning. The competencies are…

  3. The Danish Cardiac Rehabilitation Database.

    PubMed

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole

    2016-01-01

    The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). The database covers hospitalized patients with CHD with stenosis on coronary angiography, treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and has been fully running since August 14, 2015, thus comprising patient-level data from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected relate to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-reported outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database had not yet been running for a full year at the time of writing, which explains the use of approximations. The DHRD is an online national quality-improvement database on CR, aimed at patients with CHD. Registration of data at both the patient and the program level is mandatory. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients.

  4. Understanding the patient perspective on research access to national health records databases for conduct of randomized registry trials.

    PubMed

    Avram, Robert; Marquis-Gravel, Guillaume; Simard, François; Pacheco, Christine; Couture, Étienne; Tremblay-Gravel, Maxime; Desplantie, Olivier; Malhamé, Isabelle; Bibas, Lior; Mansour, Samer; Parent, Marie-Claude; Farand, Paul; Harvey, Luc; Lessard, Marie-Gabrielle; Ly, Hung; Liu, Geoffrey; Hay, Annette E; Marc Jolicoeur, E

    2018-07-01

    Use of health administrative databases is proposed for screening and monitoring of participants in randomized registry trials. However, access to these databases raises privacy concerns. We assessed patients' preferences regarding the use of personal information to link their research records with national health databases, as part of a hypothetical randomized registry trial. Cardiology patients were invited to complete an anonymous self-reported survey that ascertained preferences related to the concept of accessing government health databases for research, the type of personal identifiers to be shared, and the type of follow-up preferred as participants in a hypothetical trial. A total of 590 responders completed the survey (90% response rate), most of whom were Caucasian (90.4%) and male (70.0%), with a median age of 65 years (interquartile range, 8). The majority of responders (80.3%) would grant researchers access to health administrative databases for screening and follow-up. To this end, responders endorsed the recording of their personal identifiers by researchers for future record linkage, including their name (90%) and health insurance number (83.9%), but fewer responders agreed with the recording of their social security number (61.4%, p<0.05 with date of birth as reference). Prior participation in a trial predicted agreement to grant researchers access to the administrative databases (OR: 1.69, 95% confidence interval: 1.03-2.90; p=0.04). The majority of Cardiology patients surveyed supported the use of their personal identifiers to access administrative health databases and conduct long-term monitoring in the context of a randomized registry trial. Copyright © 2018 Elsevier Ireland Ltd. All rights reserved.
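
    Odds ratios with Wald confidence intervals, as reported above for prior trial participation, come from a 2x2 table. A minimal Python sketch (the counts are illustrative, not the survey's raw data):

        import math

        def odds_ratio_ci(a, b, c, d, z=1.96):
            """OR and Wald 95% CI for a 2x2 table:
               a = exposed & agree,   b = exposed & disagree,
               c = unexposed & agree, d = unexposed & disagree."""
            or_ = (a * d) / (b * c)
            se = math.sqrt(1/a + 1/b + 1/c + 1/d)
            lo = math.exp(math.log(or_) - z * se)
            hi = math.exp(math.log(or_) + z * se)
            return or_, (lo, hi)

        # Illustrative counts only.
        print(odds_ratio_ci(a=120, b=20, c=350, d=100))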

  5. 76 FR 74804 - Federal Housing Administration (FHA) First Look Sales Method Under the Neighborhood Stabilization...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... participate in the First Look Sales Method. This notice announces the availability of a universal NAID to aid... universal NAID to aid eligible NSP purchasers in the purchase of properties under First Look Sales Method... Administration (FHA) First Look Sales Method Under the Neighborhood Stabilization Programs (NSP) Technical...

  6. Domain Regeneration for Cross-Database Micro-Expression Recognition

    NASA Astrophysics Data System (ADS)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database such that the re-generated target samples share the same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases were conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  7. Severe abnormal behavior incidence after administration of neuraminidase inhibitors using the national database of medical claims.

    PubMed

    Nakamura, Yuuki; Sugawara, Tamie; Ohkusa, Yasushi; Taniguchi, Kiyosu; Miyazaki, Chiaki; Momoi, Mariko; Okabe, Nobuhiko

    2018-03-01

    An earlier study, using the number of abnormal behaviors reported to the study group as the numerator and the number of influenza patients prescribed each neuraminidase inhibitor (NI) as estimated by the respective pharmaceutical companies as the denominator, found no significant difference among incidence rates of the most severe abnormal behaviors by type of NI throughout Japan. However, the denominator dataset used in that earlier study was an estimated number of prescriptions. In the present study, to compare the incidence rates of abnormal behavior more precisely among influenza patients administered various NIs or no NI, we used data obtained from the National Database of Electronic Medical Claims (NDBEMC) as the denominator. Results show that patients not administered any NI (hereinafter, un-administered) or those administered peramivir sometimes showed a higher risk of abnormal behavior than those administered oseltamivir, zanamivir, or laninamivir. However, the un-administered and peramivir patients were fewer than those taking other NIs; therefore, accumulation of data through continued research will be necessary to reach a definitive conclusion about the relation between abnormal behavior and NIs in influenza patients. Since severe abnormal behaviors have been reported with all types of NI and in un-administered patients, some risk exists whether or not an NI is administered. Therefore, we infer that the policy of mandating warnings in the package inserts of all types of NI is appropriate. Copyright © 2017. Published by Elsevier Ltd.

  8. Analysis of Outcomes After TKA: Do All Databases Produce Similar Findings?

    PubMed

    Bedard, Nicholas A; Pugely, Andrew J; McHugh, Michael; Lux, Nathan; Otero, Jesse E; Bozic, Kevin J; Gao, Yubo; Callaghan, John J

    2018-01-01

    Use of large clinical and administrative databases for orthopaedic research has increased exponentially. Each database represents a unique patient population and varies in its methodology of data acquisition, which makes it possible that similar research questions posed to different databases might result in answers that differ in important ways. (1) What are the differences in reported demographics, comorbidities, and complications for patients undergoing primary TKA among four databases commonly used in orthopaedic research? (2) How does the difference in reported complication rates vary depending on whether only inpatient data or 30-day postoperative data are analyzed? Patients who underwent primary TKA during 2010 to 2012 were identified within the National Surgical Quality Improvement Program (NSQIP), the Nationwide Inpatient Sample (NIS), the Medicare Standard Analytic Files (MED), and the Humana Administrative Claims database (HAC). NSQIP is a clinical registry that captures both inpatient and outpatient events up to 30 days after surgery using clinical reviewers and strict definitions for each variable. The other databases are administrative claims databases whose comorbidity and adverse event data are defined by the diagnosis and procedure codes used for reimbursement. NIS is limited to inpatient data only, whereas HAC and MED also have outpatient data. The number of patients undergoing primary TKA from each database was 48,248 in HAC, 783,546 in MED, 393,050 in NIS, and 43,220 in NSQIP. NSQIP definitions for comorbidities and surgical complications were matched to corresponding International Classification of Diseases, 9th Revision/Current Procedural Terminology codes, and these coding algorithms were used to query NIS, MED, and HAC. Age, sex, comorbidities, and inpatient versus 30-day postoperative complications were compared across the four databases. Given the large sample sizes, statistical significance was often detected for small, clinically unimportant differences.

  9. Implementation of the FAA research and development electromagnetic database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDowall, R.L.; Grush, D.J.; Cook, D.M.

    1991-01-01

    The Idaho National Engineering Laboratory (INEL) has been assisting the Federal Aviation Administration (FAA) in developing a database of information about lightning. The FAA Research and Development Electromagnetic Database (FRED) will ultimately contain data from a variety of airborne and ground-based lightning research projects. This paper contains an outline of the data currently available in FRED. It also lists the data sources which the FAA intends to incorporate into FRED. In addition, it describes how the researcher may access and use the FRED menu system. 2 refs., 12 figs.

  10. Atomoxetine administration combined with intensive speech therapy for post-stroke aphasia: evaluation by a novel SPECT method.

    PubMed

    Yamada, Naoki; Kakuda, Wataru; Yamamoto, Kazuma; Momosaki, Ryo; Abo, Masahiro

    2016-09-01

    We clarified the safety, feasibility, and efficacy of atomoxetine administration combined with intensive speech therapy (ST) for patients with post-stroke aphasia. In addition, we investigated the effect of atomoxetine treatment on the neural activity of brain areas surrounding the lesion. Four adult patients with motor-dominant aphasia and a history of left-hemispheric stroke were studied. The study was registered in a clinical trials database (ID: JMA-IIA00215). Daily atomoxetine administration of 40 mg was initiated two weeks before admission and raised to 80 mg one week before admission. During the subsequent 13-day hospitalization, administration of atomoxetine was raised to 120 mg and daily intensive ST (120 min/day, one-on-one training) was provided. Language function was assessed using the Japanese version of the Western Aphasia Battery (WAB) and the Token Test two weeks prior to admission, on the day of admission, and at discharge. Two weeks prior to admission and at discharge, each patient's cortical blood flow was measured using (123)I-IMP single photon emission computed tomography (SPECT). This protocol was successfully completed by all patients without any adverse effects. All four patients showed improved language function, with the median Token Test score increasing from 141 to 149 and the WAB repetition score increasing from 88 to 99. In addition, cortical blood flow surrounding lesioned brain areas was found to increase following the intervention in all patients. Atomoxetine administration and intensive ST were safe and feasible for post-stroke aphasia, suggesting their potential usefulness in the treatment of this patient population.

  11. Application of China's National Forest Continuous Inventory database.

    PubMed

    Xie, Xiaokui; Wang, Qingli; Dai, Limin; Su, Dongkai; Wang, Xinchuang; Qi, Guang; Ye, Yujing

    2011-12-01

    The maintenance of a timely, reliable and accurate spatial database on current forest ecosystem conditions and changes is essential to characterize and assess forest resources and support sustainable forest management. Information for such a database can be obtained only through a continuous forest inventory. The National Forest Continuous Inventory (NFCI) is the first level of China's three-tiered inventory system. The NFCI is administered by the State Forestry Administration; data are acquired by five inventory institutions around the country. Several important components of the database include land type, forest classification and age-class/age-group. The NFCI database in China is constructed on 5-year inventory periods, so some of the data are not timely when reports are issued. To address this problem, a forest growth simulation model has been developed to update the database for years between the periodic inventories. To aid forest plan design and management, a three-dimensional virtual reality system of forest landscapes for selected units in the database (compartment or sub-compartment) has also been developed, based on the Virtual Reality Modeling Language. In addition, a transparent internet publishing system for the spatial database, based on open-source WebGIS (UMN MapServer), has been designed and utilized to enhance public understanding and encourage the free participation of interested parties in the development, implementation, and planning of sustainable forest management.

  12. Application of real-time database to LAMOST control system

    NASA Astrophysics Data System (ADS)

    Xu, Lingzhe; Xu, Xinqi

    2004-09-01

    The QNX-based real-time database is one of the main features of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) control system. It serves as a storage and exchange platform for data flow, recording and updating in a timely manner the status of moving components in the telescope structure as well as the environmental parameters around it. The database integrates harmoniously into the administration of the Telescope Control System (TCS). The paper presents the methodology and technical tips used in designing the EMPRESS database GUI software package, such as dynamic creation of control widgets, dynamic query, and shared memory. A seamless connection between EMPRESS and QNX's graphical development tool, the Photon Application Builder (PhAB), has been realized, and a Windows-like look and feel has been achieved under a Unix-like operating system. In particular, the real-time features of the database are analyzed and shown to satisfy the needs of the control system.

  13. Development of the Connecticut product evaluation database application : Phase 1B.

    DOT National Transportation Integrated Search

    2010-12-01

    The Federal Highway Administration (FHWA), the American Association of State Highway : Transportation Officials (AASHTO) and the Transportation Research Board (TRB), a : division of the National Research Council (NRC), maintain databases to store nat...

  14. ODIN. Online Database Information Network: ODIN Policy & Procedure Manual.

    ERIC Educational Resources Information Center

    Townley, Charles T.; And Others

    Policies and procedures are outlined for the Online Database Information Network (ODIN), a cooperative of libraries in south-central Pennsylvania, which was organized to improve library services through technology. The first section covers organization and goals, members, and responsibilities of the administrative council and libraries. Patrons…

  15. 76 FR 70531 - Tenth Meeting: RTCA Special Committee 217/EUROCAE WG-44: Terrain and Airport Mapping Databases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-14

    ... 217/EUROCAE WG-44: Terrain and Airport Mapping Databases AGENCY: Federal Aviation Administration (FAA... Databases: For the tenth meeting DATES: The meeting will be held December 6-9, 2011, from 9 a.m. to 5 p.m... Mapping Databases. The agenda will include the following: December 6, 2011 Open Plenary Session. Chairman...

  16. Comparative Research: An Approach to Teaching Research Methods in Political Science and Public Administration

    ERIC Educational Resources Information Center

    Engbers, Trent A

    2016-01-01

    The teaching of research methods has been at the core of public administration education for almost 30 years. But since 1990, this journal has published only two articles on the teaching of research methods. Given the increasing emphasis on data driven decision-making, greater insight is needed into the best practices for teaching public…

  17. A systematic review of validated methods for identifying anaphylaxis, including anaphylactic shock and angioneurotic edema, using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Multivariate analysis of factors influencing medical costs of acute pancreatitis hospitalizations based on a national administrative database.

    PubMed

    Murata, Atsuhiko; Matsuda, Shinya; Mayumi, Toshihiko; Okamoto, Kohji; Kuwabara, Kazuaki; Ichimiya, Yukako; Fujino, Yoshihisa; Kubo, Tatsuhiko; Fujimori, Kenji; Horiguchi, Hiromasa

    2012-02-01

    Little information is available on the medical costs of acute pancreatitis hospitalizations. This study aimed to determine the factors affecting the medical costs of patients with acute pancreatitis during hospitalization, using a Japanese administrative database. A total of 7193 patients with acute pancreatitis were referred to 776 hospitals. We defined "patients with high medical costs" as patients whose medical costs exceeded the 90th percentile of medical costs during hospitalization, and identified the independent factors for high medical costs with and without controlling for length of stay (LOS). Multiple logistic regression analysis demonstrated that necrosectomy was the most significant factor for the medical costs of acute pancreatitis during hospitalization. The odds ratio (OR) of necrosectomy was 33.64 (95% confidence interval (CI), 14.14-80.03; p<0.001). Use of an intensive care unit (ICU) was the most significant factor for medical costs after controlling for LOS. The OR of ICU use was 6.44 (95% CI, 4.72-8.81; p<0.001). This study demonstrated that necrosectomy and use of an ICU significantly affected the medical costs of acute pancreatitis hospitalization. These results highlight the need for health care implementations that reduce medical costs whilst maintaining the quality of patient care, targeting patients with severe acute pancreatitis. Copyright © 2011 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  19. Systematic review of drug administration costs and implications for biopharmaceutical manufacturing.

    PubMed

    Tetteh, Ebenezer; Morris, Stephen

    2013-10-01

    The acquisition costs of biologic drugs are often considered to be relatively high compared with those of nonbiologics. However, the total costs of delivering these drugs also depend on the cost of administration. Ignoring drug administration costs may distort resource allocation decisions because these costs affect cost effectiveness. The objectives of this systematic review were to develop a framework of drug administration costs that considers both the costs of physical administration and the associated proximal costs and, as a case example, to use this framework to evaluate administration costs for biologics within the UK National Health Service (NHS). We reviewed literature that reported estimates of administration costs for biologics within the UK NHS to identify how these costs were quantified and to examine how differences in dosage forms and regimens influenced administration costs. The literature reviewed was identified by searching the Centre for Reviews and Dissemination databases (DARE, NHS EED and HTA); EMBASE (the Excerpta Medica Database); MEDLINE (using the OVID interface); Econlit (EBSCO); the Tufts Medical Center Cost-Effectiveness Analysis (CEA) Registry; and Google Scholar. We identified 4,344 potentially relevant studies, of which 43 were selected for this systematic review. We extracted estimates of the administration costs of biologics from these studies. We found evidence of variation in the way that administration costs were measured, and this affected the magnitude of the costs reported, which could in turn influence cost effectiveness. Our findings suggest that manufacturers of biologic medicines should pay attention to formulation issues and their impact on administration costs, because these affect the total costs of healthcare delivery and cost effectiveness.

  20. Evaluating Research Administration: Methods and Utility

    ERIC Educational Resources Information Center

    Marina, Sarah; Davis-Hamilton, Zoya; Charmanski, Kara E.

    2015-01-01

    Three studies were jointly conducted by the Office of Research Administration and Office of Proposal Development at Tufts University to evaluate the services within each respective office. The studies featured assessments that used, respectively, (1) quantitative metrics; (2) a quantitative satisfaction survey with limited qualitative questions;…

  1. Interactive searching of facial image databases

    NASA Astrophysics Data System (ADS)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm which can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method which does not require the entry of human descriptors is required. A genetic search algorithm is being tested for such a purpose.
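
    A minimal sketch of such a rating-driven genetic search, under assumed encodings (integer-valued descriptor vectors, witness similarity scores as fitness), might look like this:

    ```python
    # Hedged sketch of a rating-driven genetic search; not the FACES system.
    import random

    POP, GENES = 8, 20  # gallery size shown per round; descriptors per face

    def evolve(population, ratings, mutation_rate=0.05):
        """One generation: rating-weighted selection, crossover, mutation."""
        def pick():
            return random.choices(population, weights=ratings, k=1)[0]
        children = []
        for _ in range(POP):
            a, b = pick(), pick()
            cut = random.randrange(1, GENES)
            child = a[:cut] + b[cut:]                        # single-point crossover
            child = [g if random.random() > mutation_rate
                     else random.randrange(10) for g in child]  # mutate descriptors
            children.append(child)
        return children

    population = [[random.randrange(10) for _ in range(GENES)] for _ in range(POP)]
    # `ratings` would come from the witness each round, e.g. 0-10 similarity scores.
    ```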

  2. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
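
    One plausible reading of the imputation variant, sketched under assumptions (a confounder z observed only in the validation sample, predicted from proxy variables available in both samples):

    ```python
    # Sketch under assumptions: z (e.g. functional status) exists only in
    # `valid`; both frames share proxy columns recorded in claims.
    import pandas as pd
    import statsmodels.api as sm

    def impute_confounder(full: pd.DataFrame, valid: pd.DataFrame, proxies):
        model = sm.OLS(valid["z"], sm.add_constant(valid[proxies])).fit()
        return model.predict(sm.add_constant(full[proxies]))

    # full["z_hat"] = impute_confounder(full, valid, ["age", "n_chronic_dx"])
    # ...then fit the vaccine-effectiveness model on `full`, adjusting for z_hat.
    ```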

  3. A systematic review of validated methods for identifying transfusion-related ABO incompatibility reactions using administrative and claims data.

    PubMed

    Carnahan, Ryan M; Kee, Vicki R

    2012-01-01

    This paper aimed to systematically review algorithms to identify transfusion-related ABO incompatibility reactions in administrative data, with a focus on studies that have examined the validity of the algorithms. A literature search was conducted using PubMed, Iowa Drug Information Service database, and Embase. A Google Scholar search was also conducted because of the difficulty identifying relevant studies. Reviews were conducted by two investigators to identify studies using data sources from the USA or Canada because these data sources were most likely to reflect the coding practices of Mini-Sentinel data sources. One study was found that validated International Classification of Diseases (ICD-9-CM) codes representing transfusion reactions. None of these cases were ABO incompatibility reactions. Several studies consistently used ICD-9-CM code 999.6, which represents ABO incompatibility reactions, and a technical report identified the ICD-10 code for these reactions. One study included the E-code E8760 for mismatched blood in transfusion in the algorithm. Another study reported finding no ABO incompatibility reaction codes in the Healthcare Cost and Utilization Project Nationwide Inpatient Sample database, which contains data of 2.23 million patients who received transfusions, raising questions about the sensitivity of administrative data for identifying such reactions. Two studies reported perfect specificity, with sensitivity ranging from 21% to 83%, for the code identifying allogeneic red blood cell transfusions in hospitalized patients. There is no information to assess the validity of algorithms to identify transfusion-related ABO incompatibility reactions. Further information on the validity of algorithms to identify transfusions would also be useful. Copyright © 2012 John Wiley & Sons, Ltd.
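
    A sketch of the kind of claims algorithm reviewed here, using the code list from the abstract (ICD-9-CM 999.6 and E-code E8760) over a hypothetical record schema:

    ```python
    # Code list from the abstract; schema and helper names are hypothetical.
    ABO_CODES = {"999.6", "E8760"}  # ABO incompatibility; mismatched blood E-code

    def flag_abo_reactions(claims):
        """claims: iterable of dicts like {"patient": ..., "dx_codes": [...]}."""
        return [c["patient"] for c in claims
                if ABO_CODES & set(c["dx_codes"])]
    ```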

  4. The roles of nearest neighbor methods in imputing missing data in forest inventory and monitoring databases

    Treesearch

    Bianca N. I. Eskelson; Hailemariam Temesgen; Valerie Lemay; Tara M. Barrett; Nicholas L. Crookston; Andrew T. Hudak

    2009-01-01

    Almost universally, forest inventory and monitoring databases are incomplete, ranging from missing data for only a few records and a few variables, common for small land areas, to missing data for many observations and many variables, common for large land areas. For a wide variety of applications, nearest neighbor (NN) imputation methods have been developed to fill in...
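
    Although the record is truncated, the nearest neighbor imputation idea it names can be sketched, with scikit-learn's KNNImputer standing in for the NN methods the abstract surveys:

    ```python
    # Minimal sketch: k-nearest-neighbor imputation over an inventory table.
    import pandas as pd
    from sklearn.impute import KNNImputer

    def nn_impute(df: pd.DataFrame, k: int = 5) -> pd.DataFrame:
        imputer = KNNImputer(n_neighbors=k)  # Euclidean distance on all columns
        return pd.DataFrame(imputer.fit_transform(df),
                            columns=df.columns, index=df.index)
    ```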

  5. The method of abstraction in the design of databases and the interoperability

    NASA Astrophysics Data System (ADS)

    Yakovlev, Nikolay

    2018-03-01

    Database structures are often designed around the indicators contained in the documents and communications of a subject area. First, the method of abstraction can be applied by extending the set of indicators with new, artificially constructed abstract concepts. The use of abstract concepts makes it possible to avoid registering many-to-many relations; for this reason, structures built using abstract concepts demonstrate greater stability as the subject area evolves. An example of an abstract concept in an address structure is a unique house number. Second, the method of abstraction can be used to transform concepts by omitting attributes that are unnecessary for solving certain classes of problems. Processing of the data associated with the amended concepts becomes simpler, without losing the ability to solve the classes of problems under consideration. For example, the concept "street" can lose its binding to land parcels: the content of the modified concept is then only the relation of houses to a declared street name, which is sufficient for most accounting and communication tasks.

  6. Administrative Decision Making and Quasi Decision Making: An Empirical Study Using the Protocol Method.

    ERIC Educational Resources Information Center

    Fraser, Hugh W.; Anderson, Mary E.

    1982-01-01

    This exploratory study attempted to identify variables in need of further investigation. Those to emerge included heuristics or rules of thumb used by administrators in decision making, personality variables, and methods for evaluating alternatives. (Author/JM)

  7. Comparing the Hematopoetic Syndrome Time Course in the NHP Animal Model to Radiation Accident Cases From the Database Search.

    PubMed

    Graessle, Dieter H; Dörr, Harald; Bennett, Alexander; Shapiro, Alla; Farese, Ann M; MacVittie, Thomas J; Meineke, Viktor

    2015-11-01

    Since controlled clinical studies on drug administration for the acute radiation syndrome are lacking, clinical data of human radiation accident victims as well as experimental animal models are the main sources of information. This leads to the question of how to compare and link clinical observations collected after human radiation accidents with experimental observations in non-human primate (NHP) models. Using the example of granulocyte counts in the peripheral blood following radiation exposure, approaches for adaptation between NHP and patient databases on data comparison and transformation are introduced. As a substitute for studying the effects of administration of granulocyte-colony stimulating factor (G-CSF) in human clinical trials, the method of mathematical modeling is suggested using the example of G-CSF administration to NHP after total body irradiation.

  8. A systematic review of validated methods for identifying patients with rheumatoid arthritis using administrative or claims data.

    PubMed

    Chung, Cecilia P; Rohan, Patricia; Krishnaswami, Shanthi; McPheeters, Melissa L

    2013-12-30

    To review the evidence supporting the validity of billing, procedural, or diagnosis code, or pharmacy claim-based algorithms used to identify patients with rheumatoid arthritis (RA) in administrative and claim databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to RA and reference lists of included studies were searched. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria and extracted the data. Data collected included participant and algorithm characteristics. Nine studies reported validation of computer algorithms based on International Classification of Diseases (ICD) codes with or without free-text, medication use, laboratory data and the need for a diagnosis by a rheumatologist. These studies yielded positive predictive values (PPV) ranging from 34 to 97% to identify patients with RA. Higher PPVs were obtained with the use of at least two ICD and/or procedure codes (ICD-9 code 714 and others), the requirement of a prescription of a medication used to treat RA, or requirement of participation of a rheumatologist in patient care. For example, the PPV increased from 66 to 97% when the use of disease-modifying antirheumatic drugs and the presence of a positive rheumatoid factor were required. There have been substantial efforts to propose and validate algorithms to identify patients with RA in automated databases. Algorithms that include more than one code and incorporate medications or laboratory data and/or required a diagnosis by a rheumatologist may increase the PPV. Copyright © 2013 Elsevier Ltd. All rights reserved.
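
    A sketch of a higher-PPV rule of the type reported (at least two ICD-9 714.x codes plus a disease-modifying antirheumatic drug), with hypothetical field names and an illustrative, non-exhaustive drug list:

    ```python
    # Hedged sketch of one algorithm shape the review describes.
    DMARDS = {"methotrexate", "sulfasalazine", "leflunomide", "hydroxychloroquine"}

    def likely_ra(patient) -> bool:
        ra_codes = [c for c in patient["dx_codes"] if c.startswith("714")]
        on_dmard = bool(DMARDS & {d.lower() for d in patient["drugs"]})
        return len(ra_codes) >= 2 and on_dmard
    ```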

  9. An intermediary's perspective of online databases for local governments

    NASA Technical Reports Server (NTRS)

    Jack, R. F.

    1984-01-01

    Numerous public administration studies have indicated that local government agencies for a variety of reasons lack access to comprehensive information resources; furthermore, such entities are often unwilling or unable to share information regarding their own problem-solving innovations. The NASA/University of Kentucky Technology Applications Program devotes a considerable effort to providing scientific and technical information and assistance to local agencies, relying on its access to over 500 distinct online databases offered by 20 hosts. The author presents a subjective assessment, based on his own experiences, of several databases which may prove useful in obtaining information for this particular end-user community.

  10. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Technical evaluation of methods for identifying chemotherapy-induced febrile neutropenia in healthcare claims databases.

    PubMed

    Weycker, Derek; Sofrygin, Oleg; Seefeld, Kim; Deeter, Robert G; Legg, Jason; Edelsberg, John

    2013-02-13

    Healthcare claims databases have been used in several studies to characterize the risk and burden of chemotherapy-induced febrile neutropenia (FN) and effectiveness of colony-stimulating factors against FN. The accuracy of methods previously used to identify FN in such databases has not been formally evaluated. Data comprised linked electronic medical records from Geisinger Health System and healthcare claims data from Geisinger Health Plan. Subjects were classified into subgroups based on whether or not they were hospitalized for FN per the presumptive "gold standard" (ANC <1.0×10^9/L, and body temperature ≥38.3°C or receipt of antibiotics) and claims-based definition (diagnosis codes for neutropenia, fever, and/or infection). Accuracy was evaluated principally based on positive predictive value (PPV) and sensitivity. Among 357 study subjects, 82 (23%) met the gold standard for hospitalized FN. For the claims-based definition including diagnosis codes for neutropenia plus fever in any position (n=28), PPV was 100% and sensitivity was 34% (95% CI: 24-45). For the definition including neutropenia in the primary position (n=54), PPV was 87% (78-95) and sensitivity was 57% (46-68). For the definition including neutropenia in any position (n=71), PPV was 77% (68-87) and sensitivity was 67% (56-77). Patients hospitalized for chemotherapy-induced FN can be identified in healthcare claims databases--with an acceptable level of misclassification--using diagnosis codes for neutropenia, or neutropenia plus fever.
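
    The accuracy arithmetic used above is easy to reproduce; a minimal sketch against a chart-review gold standard:

    ```python
    # PPV and sensitivity of a claims definition vs the gold standard.
    def ppv_sensitivity(gold: list, claims: list) -> tuple:
        tp = sum(g and c for g, c in zip(gold, claims))
        fp = sum(c and not g for g, c in zip(gold, claims))
        fn = sum(g and not c for g, c in zip(gold, claims))
        return tp / (tp + fp), tp / (tp + fn)  # (PPV, sensitivity)

    # e.g. the "neutropenia plus fever" definition above: PPV 1.00, sensitivity ~0.34
    ```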

  12. Construction of In-house Databases in a Corporation

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takaya

    The author describes the progress and present status of the information management system at the research laboratories, the R&D component of a pharmaceutical company. The system deals with three fundamental types of information: graphic, numeric, and textual, the last of which can embed the former two. The author and colleagues have constructed the system so that these kinds of information can be processed in an integrated manner. The system is also notable in that text in its natural form, mixing Japanese (2-byte) and English (1-byte) characters as is customary on personal computers and word processors, can be processed by large-scale computers, making Japanese-language text fully eligible for computer processing. The system is intended primarily for research administrators, but is also useful for researchers. At present, seven databases are available, including external databases, and the system is ready to accept new databases.

  13. Development and Validation of Case-Finding Algorithms for the Identification of Patients with ANCA-Associated Vasculitis in Large Healthcare Administrative Databases

    PubMed Central

    Sreih, Antoine G.; Annapureddy, Narender; Springer, Jason; Casey, George; Byram, Kevin; Cruz, Andy; Estephan, Maya; Frangiosa, Vince; George, Michael D.; Liu, Mei; Parker, Adam; Sangani, Sapna; Sharim, Rebecca; Merkel, Peter A.

    2016-01-01

    Purpose To develop and validate case-finding algorithms for granulomatosis with polyangiitis (Wegener’s, GPA), microscopic polyangiitis (MPA), and eosinophilic granulomatosis with polyangiitis (Churg-Strauss, EGPA). Methods 250 patients per disease were randomly selected from 2 large healthcare systems using the International Classification of Diseases version 9 (ICD9) codes for GPA/EGPA (446.4) and MPA (446.0). 16 case-finding algorithms were constructed using a combination of ICD9 code, encounter type (inpatient or outpatient), physician specialty, use of immunosuppressive medications, and the anti-neutrophil cytoplasmic antibody (ANCA) type. Algorithms with the highest average positive predictive value (PPV) were validated in a third healthcare system. Results An algorithm excluding patients with eosinophilia or asthma and including the encounter type and physician specialty had the highest PPV for GPA (92.4%). An algorithm including patients with eosinophilia and asthma and the physician specialty had the highest PPV for EGPA (100%). An algorithm including patients with one of the following diagnoses: alveolar hemorrhage, interstitial lung disease, glomerulonephritis, acute or chronic kidney disease, the encounter type, physician specialty, and immunosuppressive medications had the highest PPV for MPA (76.2%). When validated in a third healthcare system, these algorithms had high PPV (85.9% for GPA, 85.7% for EGPA, and 61.5% for MPA). Adding the ANCA type increased the PPV to 94.4%, 100%, and 81.2% for GPA, EGPA, and MPA respectively. Conclusion Case-finding algorithms accurately identify patients with GPA, EGPA, and MPA in administrative databases. These algorithms can be used to assemble population-based cohorts and facilitate future research in epidemiology, drug safety, and comparative effectiveness. PMID:27804171
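
    A sketch of the shape of the GPA algorithm as the abstract describes it (ICD-9 446.4, exclusion of eosinophilia/asthma, specialty restriction, optional ANCA type); the thresholds and field names here are hypothetical:

    ```python
    # Hedged sketch of a case-finding rule of the kind described above.
    def likely_gpa(pt) -> bool:
        has_code = "446.4" in pt["dx_codes"]
        excluded = pt["has_eosinophilia"] or pt["has_asthma"]
        specialist = pt["specialty"] in {"rheumatology", "nephrology", "pulmonology"}
        anca_ok = pt.get("anca_type") in {"PR3", None}  # adding ANCA type raises PPV
        return has_code and not excluded and specialist and anca_ok
    ```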

  14. In-Memory Graph Databases for Web-Scale Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castellana, Vito G.; Morari, Alessandro; Weaver, Jesse R.

    RDF databases have emerged as one of the most relevant ways for organizing, integrating, and managing exponentially growing, often heterogeneous, and not rigidly structured data for a variety of scientific and commercial fields. In this paper we discuss the solutions integrated in GEMS (Graph database Engine for Multithreaded Systems), a software framework for implementing RDF databases on commodity, distributed-memory high-performance clusters. Unlike the majority of current RDF databases, GEMS has been designed from the ground up to primarily employ graph-based methods; this is reflected in all the layers of its stack. The GEMS framework is composed of a SPARQL-to-C++ compiler, a library of data structures and related methods to access and modify them, and a custom runtime providing lightweight software multithreading, network message aggregation, and a partitioned global address space. We provide an overview of the framework, detailing its components and how they have been designed and customized to address the issues of graph methods applied to large-scale datasets on clusters. We discuss in detail the principles that enable automatic translation of the queries (expressed in SPARQL, the query language of choice for RDF databases) to graph methods, and identify differences with respect to other RDF databases.
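
    Not GEMS itself, but a toy illustration of the core step it automates: a SPARQL basic graph pattern reduced to a match over an in-memory triple set.

    ```python
    # Toy triple store; None plays the role of a SPARQL variable.
    TRIPLES = {("alice", "knows", "bob"), ("bob", "knows", "carol"),
               ("alice", "worksAt", "egi")}

    def match(s=None, p=None, o=None):
        """Return triples matching a pattern, as a graph traversal would."""
        return [(ts, tp, to) for ts, tp, to in TRIPLES
                if s in (None, ts) and p in (None, tp) and o in (None, to)]

    # match(p="knows") -> both "knows" edges, like SELECT ?s ?o WHERE { ?s :knows ?o }
    ```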

  15. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen, L.J., Kuhn, M., Stark, M., Chaffron, S., Creevey, C., Muller, J., Doerks, T., Julien, P., Roth, A., Simonovic, M. et al. (2009) STRING 8--a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
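
    A sketch of the two-feature classifier idea (intergenic distance plus STRING score predicting same-operon membership), with a small scikit-learn network standing in for the authors' model and invented training rows:

    ```python
    # Hedged sketch; training rows are invented for illustration.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # rows: [intergenic_distance_bp, string_functional_score]
    X = np.array([[12, 0.92], [45, 0.80], [310, 0.10], [150, 0.35]])
    y = np.array([1, 1, 0, 0])  # 1 = adjacent gene pair within the same operon

    clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000,
                        random_state=0).fit(X, y)
    print(clf.predict([[30, 0.88]]))  # short distance + high score: likely same operon
    ```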

  16. Aerodynamic Tests of the Space Launch System for Database Development

    NASA Technical Reports Server (NTRS)

    Pritchett, Victor E.; Mayle, Melody N.; Blevins, John A.; Crosby, William A.; Purinton, David C.

    2014-01-01

    The Aerosciences Branch (EV33) at the George C. Marshall Space Flight Center (MSFC) has been responsible for a series of wind tunnel tests on the National Aeronautics and Space Administration's (NASA) Space Launch System (SLS) vehicles. The primary purpose of these tests was to obtain aerodynamic data during the ascent phase and establish databases that can be used by the Guidance, Navigation, and Mission Analysis Branch (EV42) for trajectory simulations. The paper describes the test particulars regarding models and measurements and the facilities used, as well as database preparations.

  17. The Génolevures database.

    PubMed

    Martin, Tiphaine; Sherman, David J; Durrens, Pascal

    2011-01-01

    The Génolevures online database (URL: http://www.genolevures.org) stores and provides the data and results obtained by the Génolevures Consortium through several campaigns of genome annotation of the yeasts in the Saccharomycotina subphylum (hemiascomycetes). This database is dedicated to large-scale comparison of these genomes, storing not only the different chromosomal elements detected in the sequences, but also the logical relations between them. The database is divided into a public part, accessible to anyone through Internet, and a private part where the Consortium members make genome annotations with our Magus annotation system; this system is used to annotate several related genomes in parallel. The public database is widely consulted and offers structured data, organized using a REST web site architecture that allows for automated requests. The implementation of the database, as well as its associated tools and methods, is evolving to cope with the influx of genome sequences produced by Next Generation Sequencing (NGS). Copyright © 2011 Académie des sciences. Published by Elsevier SAS. All rights reserved.

  18. Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database

    PubMed Central

    Davis, Allan Peter; Johnson, Robin J.; Lennon-Hopkins, Kelley; Sciaky, Daniela; Rosenstein, Michael C.; Wiegers, Thomas C.; Mattingly, Carolyn J.

    2012-01-01

    The Comparative Toxicogenomics Database (CTD) is a public resource that promotes understanding about the effects of environmental chemicals on human health. CTD biocurators read the scientific literature and manually curate a triad of chemical–gene, chemical–disease and gene–disease interactions. Typically, articles for CTD are selected using a chemical-centric approach by querying PubMed to retrieve a corpus containing the chemical of interest. Although this technique ensures adequate coverage of knowledge about the chemical (i.e. data completeness), it does not necessarily reflect the most current state of all toxicological research in the community at large (i.e. data currency). Keeping databases current with the most recent scientific results, as well as providing a rich historical background from legacy articles, is a challenging process. To address this issue of data currency, CTD designed and tested a journal-centric approach of curation to complement our chemical-centric method. We first identified priority journals based on defined criteria. Next, over 7 weeks, three biocurators reviewed 2425 articles from three consecutive years (2009–2011) of three targeted journals. From this corpus, 1252 articles contained relevant data for CTD and 52 752 interactions were manually curated. Here, we describe our journal selection process, two methods of document delivery for the biocurators and the analysis of the resulting curation metrics, including data currency, and both intra-journal and inter-journal comparisons of research topics. Based on our results, we expect that curation by select journals can (i) be easily incorporated into the curation pipeline to complement our chemical-centric approach; (ii) build content more evenly for chemicals, genes and diseases in CTD (rather than biasing data by chemicals-of-interest); (iii) reflect developing areas in environmental health and (iv) improve overall data currency for chemicals, genes and diseases.

  19. A Chronostratigraphic Relational Database Ontology

    NASA Astrophysics Data System (ADS)

    Platon, E.; Gary, A.; Sikora, P.

    2005-12-01

    A chronostratigraphic research database was donated by British Petroleum to the Stratigraphy Group at the Energy and Geoscience Institute (EGI), University of Utah. These data consist of over 2,000 measured sections representing over three decades of research into the application of the graphic correlation method. The data are global and include both microfossil (foraminifera, calcareous nannoplankton, spores, pollen, dinoflagellate cysts, etc.) and macrofossil data. The objective of the donation was to make the research data available to the public in order to encourage additional chronostratigraphy studies, specifically regarding graphic correlation. As part of the National Science Foundation's Cyberinfrastructure for the Geosciences (GEON) initiative, these data have been made available to the public at http://css.egi.utah.edu. To encourage further research using the graphic correlation method, EGI has developed a software package, StrataPlot, that will soon be publicly available from the GEON website as a standalone software download. The EGI chronostratigraphy research database, although relatively large, has many data holes relative to some paleontological disciplines and geographical areas, so the challenge becomes how to expand the data available for chronostratigraphic studies using graphic correlation. There are several public or soon-to-be public databases available to chronostratigraphic research, but they have their own data structures and modes of presentation. The heterogeneous nature of these database schemas hinders their integration and makes it difficult for the user to retrieve and consolidate potentially valuable chronostratigraphic data. The integration of these data sources would facilitate rapid and comprehensive data searches, thus helping advance studies in chronostratigraphy. The GEON project will host a number of databases within the geology domain, some of which contain biostratigraphic data. Ontologies are being developed to provide

  20. Fluid administration in severe sepsis and septic shock, patterns and outcomes: an analysis of a large national database.

    PubMed

    Marik, Paul E; Linde-Zwirble, Walter T; Bittner, Edward A; Sahatjian, Jennifer; Hansell, Douglas

    2017-05-01

    The optimal strategy of fluid resuscitation in the early hours of severe sepsis and septic shock is controversial, with both an aggressive and conservative approach being recommended. We used the 2013 Premier Hospital Discharge database to analyse the administration of fluids on the first ICU day, in 23,513 patients with severe sepsis and septic shock, who were admitted to an ICU from the emergency department. Day 1 fluid was grouped into categories 1 L wide, starting with 1-1.99 L up to ≥9 L, to examine the effect of day 1 fluids on patient mortality. We built binary response models for hospital mortality and the propensity for receiving more than 5 L of fluids on day 1, using patient age and acute conditions present on admission. Patients were grouped by the requirement for mechanical ventilation and the presence or absence of shock. We assessed trends in the difference between actual and expected mortality, in the low fluid range (1-5 L day 1 fluids) and the high fluid range (5 to ≥9 L day 1 fluids) categories, using weighted linear regression controlling for the effects of sample size and variation within the day 1 fluid category. Day 1 fluid administration averaged 4.4 L being lowest in the group with no mechanical ventilation and no shock (3.6 L) and highest (5.4 L) in the group receiving mechanical ventilation and in shock. The administration of day 1 fluids was remarkably consistent on the basis of hospital size, teaching status, rural/urban location, and region of the country. The hospital mortality in the entire cohort was 25.8%, with a mean ICU and hospital length of stay of 5.1 and 9.1 days, respectively. In the entire cohort, low volume resuscitation (1-4.99 L) was associated with a small but significant reduction in mortality, of -0.7% per litre (95% CI -1.0%, -0.4%; p = 0.02). However, in patients receiving high volume resuscitation (5 to ≥9 L), the mortality increased by 2.3% (95% CI 2.0, 2.5%; p = 0.0003) for
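
    The record breaks off above, but the trend test it describes can be sketched: a weighted linear regression of the actual-minus-expected mortality difference on day-1 fluid volume, weighted by category size. All numbers below are hypothetical.

    ```python
    # Hedged sketch of the weighted-regression trend test; numbers invented.
    import numpy as np
    import statsmodels.api as sm

    litres = np.array([1, 2, 3, 4, 5])                 # midpoint of day-1 fluid bands
    mort_diff = np.array([1.2, 0.6, 0.1, -0.4, -0.9])  # actual - expected mortality, %
    n = np.array([3000, 5200, 4800, 3100, 1500])       # patients per band

    wls = sm.WLS(mort_diff, sm.add_constant(litres), weights=n).fit()
    print(wls.params[1])  # slope: change in mortality difference per litre
    ```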

  1. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  2. PC based temporary shielding administrative procedure (TSAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, D.E.; Pederson, G.E.; Hamby, P.N.

    1995-03-01

    A completely new Administrative Procedure for temporary shielding was developed for use at Commonwealth Edison's six nuclear stations. This procedure promotes the use of shielding and addresses industry requirements for the use and control of temporary shielding. The importance of an effective procedure has increased since more temporary shielding is being used as ALARA goals become more ambitious. To help implement the administrative procedure, a personal computer software program was written to incorporate the procedural requirements. This software combines the usability of a Windows graphical user interface with extensive help and database features. This combination of a comprehensive administrative procedure and user-friendly software promotes the effective use and management of temporary shielding while ensuring that industry requirements are met.

  3. A Method for the Minimization of Competition Bias in Signal Detection from Spontaneous Reporting Databases.

    PubMed

    Arnaud, Mickael; Salvo, Francesco; Ahmed, Ismaïl; Robinson, Philip; Moore, Nicholas; Bégaud, Bernard; Tubert-Bitter, Pascale; Pariente, Antoine

    2016-03-01

    The two methods for minimizing competition bias in signal of disproportionate reporting (SDR) detection--masking factor (MF) and masking ratio (MR)--have focused on the strength of disproportionality for identifying competitors and have been tested using competitors at the drug level. The aim of this study was to develop a method that relies on identifying competitors by considering the proportion of reports of adverse events (AEs) that mention the drug class at an adequate level of drug grouping to increase sensitivity (Se) for SDR unmasking, and its comparison with MF and MR. Reports in the French spontaneous reporting database between 2000 and 2005 were selected. Five AEs were considered: myocardial infarction, pancreatitis, aplastic anemia, convulsions, and gastrointestinal bleeding; related reports were retrieved using standardized Medical Dictionary for Regulatory Activities (MedDRA®) queries. Potential competitors of AEs were identified using the developed method, i.e. Competition Index (ComIn), as well as MF and MR. All three methods were tested according to Anatomical Therapeutic Chemical (ATC) classification levels 2-5. For each AE, SDR detection was performed, first in the complete database, and second after removing reports mentioning competitors; SDRs only detected after the removal were unmasked. All unmasked SDRs were validated using the Summary of Product Characteristics, and constituted the reference dataset used for computing the performance for SDR unmasking (area under the curve [AUC], Se). Performance of the ComIn was highest when considering competitors at ATC level 3 (AUC: 62%; Se: 52%); similar results were obtained with MF and MR. The ComIn could greatly minimize the competition bias in SDR detection. Further study using a larger dataset is needed.
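
    A much-simplified sketch of the unmasking step (disproportionality recomputed after competitor reports are removed), using a basic proportional reporting ratio and hypothetical counts; the actual ComIn/MF/MR computations are more involved.

    ```python
    # prr = (event reports with drug / all reports with drug)
    #     / (event reports overall / all reports); simplified on purpose.
    def prr(n_drug_event, n_drug, n_event, n_total):
        return (n_drug_event / n_drug) / (n_event / n_total)

    print(prr(8, 400, 900, 100_000))  # ~2.2: masked by competitor reports
    # drop 600 competitor reports of this event from the background:
    print(prr(8, 400, 300, 99_400))   # ~6.6: the SDR is unmasked
    ```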

  4. Teleform scannable data entry: an efficient method to update a community-based medical record? Community care coordination network Database Group.

    PubMed Central

    Guerette, P.; Robinson, B.; Moran, W. P.; Messick, C.; Wright, M.; Wofford, J.; Velez, R.

    1995-01-01

    Community-based multi-disciplinary care of chronically ill individuals frequently requires the efforts of several agencies and organizations. The Community Care Coordination Network (CCCN) is an effort to establish a community-based clinical database and electronic communication system to facilitate the exchange of pertinent patient data among primary care, community-based and hospital-based providers. In developing a primary care based electronic record, a method is needed to update records from the field or remote sites and agencies and yet maintain data quality. Scannable data entry with fixed fields, optical character recognition and verification was compared to traditional keyboard data entry to determine the relative efficiency of each method in updating the CCCN database. PMID:8563414

  5. [Algorithms for the identification of hospital stays due to osteoporotic femoral neck fractures in European medical administrative databases using ICD-10 codes: A non-systematic review of the literature].

    PubMed

    Caillet, P; Oberlin, P; Monnet, E; Guillon-Grammatico, L; Métral, P; Belhassen, M; Denier, P; Banaei-Bouchareb, L; Viprey, M; Biau, D; Schott, A-M

    2017-10-01

    Osteoporotic hip fractures (OHF) are associated with significant morbidity and mortality. The French medico-administrative database (SNIIRAM) offers an interesting opportunity to improve the management of OHF. However, the validity of studies conducted with this database relies heavily on the quality of the algorithm used to detect OHF. The aim of the REDSIAM network is to facilitate the use of the SNIIRAM database. The main objective of this study was to present and discuss several OHF-detection algorithms that could be used with this database. A non-systematic literature search was performed. The Medline database was explored during the period January 2005-August 2016. Furthermore, a snowball search was then carried out from the articles included and field experts were contacted. The extraction was conducted using the chart developed by the REDSIAM network's "Methodology" task force. The ICD-10 codes used to detect OHF are mainly S72.0, S72.1, and S72.2. The performance of these algorithms is at best partially validated. Complementary use of medical and surgical procedure codes would affect their performance. Finally, few studies described how they dealt with fractures of non-osteoporotic origin, re-hospitalization, and potential contralateral fracture cases. Authors in the literature encourage the use of ICD-10 codes S72.0 to S72.2 to develop algorithms for OHF detection. These are the codes most frequently used for OHF in France. Depending on the study objectives, other ICD10 codes and medical and surgical procedures could be usefully discussed for inclusion in the algorithm. Detection and management of duplicates and non-osteoporotic fractures should be considered in the process. Finally, when a study is based on such an algorithm, all these points should be precisely described in the publication. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
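
    A sketch of a detection rule of the kind discussed (ICD-10 codes S72.0-S72.2 plus a washout window so that a re-hospitalization is not counted as a new fracture); the 90-day window and record layout are assumptions, not recommendations from the article.

    ```python
    # Hedged sketch of OHF stay detection with re-admission handling.
    from datetime import timedelta

    OHF_PREFIXES = ("S720", "S721", "S722")  # S72.0-S72.2, undotted form

    def incident_ohf_stays(stays, washout_days=90):
        """stays: list of (patient_id, admit_date, dx_codes), sorted by date."""
        last_seen, incident = {}, []
        for pid, date, codes in stays:
            if not any(c.replace(".", "").startswith(OHF_PREFIXES) for c in codes):
                continue
            prev = last_seen.get(pid)
            if prev is None or date - prev > timedelta(days=washout_days):
                incident.append((pid, date))  # new fracture, not a re-admission
            last_seen[pid] = date
        return incident
    ```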

  6. Systematic literature review of hospital medication administration errors in children

    PubMed Central

    Ameer, Ahmed; Dhillon, Soraya; Peters, Mark J; Ghaleb, Maisoon

    2015-01-01

    Objective Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients if detected. However, medication administration errors (MAEs) during this process have been documented and thought to be preventable. In pediatric medicine, doses are usually administered based on the child’s weight or body surface area. This in turn increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using “medication administration errors”, “hospital”, and “children” related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). It was also identified in a mean of 29% of doses observed (n=8,894). The most prevalent type of MAEs related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentration. Conclusion This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to the definition and method used to investigate MAEs. The review also illustrated the complexity and multifaceted nature of MAEs. Therefore, there is a need to develop a set of safety measures to tackle these errors in pediatric practice. PMID:29354530

  7. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers, etc. (any text information that can be computer stored). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size, and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships and associations. Also, the database tomography process would be a powerful component in the area of competitive intelligence, national security intelligence and patent analysis. User interests and involvement cannot be overemphasized.
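
    A toy sketch of the two starting computations named above, word frequency and word proximity (co-occurrence within a sliding window):

    ```python
    # Minimal sketch of frequency + proximity counting; not the actual algorithm.
    from collections import Counter

    def frequency_and_proximity(text: str, window: int = 3):
        words = text.lower().split()
        freq = Counter(words)
        prox = Counter()
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:       # neighbours within window
                prox[tuple(sorted((w, v)))] += 1
        return freq, prox

    freq, prox = frequency_and_proximity("database tomography maps database themes")
    ```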

  8. Algorithm to calculate proportional area transformation factors for digital geographic databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, R.

    1983-01-01

    A computer technique is described for determining proportionate-area factors used to transform thematic data between large geographic areal databases. The number of calculations in the algorithm increases linearly with the number of segments in the polygonal definitions of the databases, and increases with the square root of the total number of chains. Experience is presented in calculating transformation factors for two national databases, the USGS Water Cataloging Unit outlines and DOT county boundaries, which consist of 2100 and 3100 polygons respectively. The technique facilitates using thematic data defined on various natural bases (watersheds, landcover units, etc.) in analyses involving economic and other administrative bases (states, counties, etc.), and vice versa.
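
    A sketch of what the computed factors look like, using shapely for the polygon overlay and omitting the algorithm's segment/chain bookkeeping; factor (i, j) is the share of source zone i that falls inside target zone j.

    ```python
    # Illustrative overlay only; the paper's chain-based algorithm is not shown.
    from shapely.geometry import box

    source = {"w1": box(0, 0, 2, 2)}                          # e.g. a watershed
    target = {"c1": box(0, 0, 1, 2), "c2": box(1, 0, 2, 2)}   # e.g. two counties

    factors = {
        (s, t): source[s].intersection(target[t]).area / source[s].area
        for s in source for t in target
    }
    print(factors)  # {('w1', 'c1'): 0.5, ('w1', 'c2'): 0.5}
    ```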

  9. RADS Version 4: An Efficient Way to Analyse the Multi-Mission Altimeter Database

    NASA Astrophysics Data System (ADS)

    Scharroo, Remko; Leuliette, Eric; Naeije, Marc; Martin-Puig, Cristina; Pires, Nelson

    2016-08-01

    The Radar Altimeter Database System (RADS) has grown to become a mature altimeter database. Over the last 18 years it has been continuously developed, first at Delft University of Technology and now also at the National Oceanic and Atmospheric Administration (NOAA) and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). RADS now serves as a fundamental Climate Data Record for sea level. Because of the multiple users involved in vetting the data and the regular updates to the database, RADS is one of the most accurate and complete databases of satellite altimeter data available. RADS version 4 is a major change from the previous version. While the database is compatible with both software versions, the new software provides new tools, allows easier expansion, and has a better and more standardised interface.

  10. Normative Databases for Imaging Instrumentation

    PubMed Central

    Realini, Tony; Zangwill, Linda; Flanagan, John; Garway-Heath, David; Patella, Vincent Michael; Johnson, Chris; Artes, Paul; Ben Gaddie, I.; Fingeret, Murray

    2015-01-01

    Purpose To describe the process by which imaging devices undergo reference database development and regulatory clearance. The limitations and potential improvements of reference (normative) data sets for ophthalmic imaging devices will be discussed. Methods A symposium was held in July 2013 in which a series of speakers discussed issues related to the development of reference databases for imaging devices. Results Automated imaging has become widely accepted and used in glaucoma management. The ability of such instruments to discriminate healthy from glaucomatous optic nerves, and to detect glaucomatous progression over time is limited by the quality of reference databases associated with the available commercial devices. In the absence of standardized rules governing the development of reference databases, each manufacturer’s database differs in size, eligibility criteria, and ethnic make-up, among other key features. Conclusions The process for development of imaging reference databases may be improved by standardizing eligibility requirements and data collection protocols. Such standardization may also improve the degree to which results may be compared between commercial instruments. PMID:25265003

  11. Using Data-Based Inquiry and Decision Making To Improve Instruction.

    ERIC Educational Resources Information Center

    Feldman, Jay; Tung, Rosann

    2001-01-01

    Discusses a study of six schools using data-based inquiry and decision-making process to improve instruction. Findings identified two conditions to support successful implementation of the process: administrative support, especially in providing teachers learning time, and teacher leadership to encourage and support colleagues to own the process.…

  12. Methodological challenges in assessment of current use of warfarin among patients with atrial fibrillation using dispensation data from administrative health care databases.

    PubMed

    Sinyavskaya, Liliya; Matteau, Alexis; Johnson, Sarasa; Durand, Madeleine

    2018-06-05

    Algorithms to define current exposure to warfarin using administrative data may be imprecise. Study objectives were to characterize dispensation patterns and to measure gaps between expected and observed refill dates for warfarin and direct oral anticoagulants (DOACs). This was a retrospective cohort study using administrative health care databases of the Régie de l'assurance-maladie du Québec. We identified every dispensation of warfarin, dabigatran, rivaroxaban, or apixaban for patients with AF initiating oral anticoagulants between 2010 and 2015. For each dispensation, we extracted date and duration. Refill gaps were calculated as the difference between expected and observed dates of successive dispensations. Refill gaps were summarized using descriptive statistics. To account for repeated observations nested within patients and to assess the components of variance of refill gaps, we used unconditional multilevel linear models. We identified 61 516 new users. The majority were prescribed warfarin (60.3%), followed by rivaroxaban (16.4%), dabigatran (14.5%), and apixaban (8.8%). The most frequent recorded duration of dispensation was 7 days, suggesting use of pharmacist-prepared weekly pillboxes. The average refill gap from the multilevel model was higher for warfarin (9.28 days, 95% CI: 8.97-9.59) compared with DOACs (apixaban 3.08 days, 95% CI: 2.96-3.20; dabigatran 3.70, 95% CI: 3.56-3.84; rivaroxaban 3.15, 95% CI: 3.03-3.27). The variance of refill gaps was greater among warfarin users than among DOAC users. Greater refill gaps for warfarin may reflect inadequate capture of the period covered by the number of dispensed pills recorded in administrative data. A time-dependent definition of exposure using dispensation data would lead to greater misclassification of warfarin than of DOAC use. Copyright © 2018 John Wiley & Sons, Ltd.
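
    A sketch of the refill-gap computation: for each successive dispensation, gap = observed date minus (previous date plus recorded days supplied). Column names are hypothetical.

    ```python
    # Sketch under assumptions: disp has columns patient_id, date (datetime64),
    # duration (days supplied, numeric), one row per dispensation.
    import pandas as pd

    def refill_gaps(disp: pd.DataFrame) -> pd.Series:
        disp = disp.sort_values(["patient_id", "date"])
        grp = disp.groupby("patient_id")
        expected = grp["date"].shift() + pd.to_timedelta(grp["duration"].shift(),
                                                         unit="D")
        return (disp["date"] - expected).dt.days  # NaN for each first dispensation
    ```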

  13. FHWA Deep Foundation Load Test Database Version 2.0 User Manual

    DOT National Transportation Integrated Search

    2016-09-01

    The Federal Highway Administration (FHWA) began the development of the first version of the Deep Foundation Load Test Database (DFLTD) in the 1980s. Over 1,500 load tests were collected and stored for various types of piles and drilled shafts in diff...

  14. Leveraging Administrative Data for Program Evaluations: A Method for Linking Data Sets Without Unique Identifiers.

    PubMed

    Lorden, Andrea L; Radcliff, Tiffany A; Jiang, Luohua; Horel, Scott A; Smith, Matthew L; Lorig, Kate; Howell, Benjamin L; Whitelaw, Nancy; Ory, Marcia

    2016-06-01

    In community-based wellness programs, Social Security Numbers (SSNs) are rarely collected to encourage participation and protect participant privacy. One measure of program effectiveness includes changes in health care utilization. For the 65 and over population, health care utilization is captured in Medicare administrative claims data. Therefore, methods as described in this article for linking participant information to administrative data are useful for program evaluations where unique identifiers such as SSN are not available. Following fuzzy matching methodologies, participant information from the National Study of the Chronic Disease Self-Management Program was linked to Medicare administrative data. Linking variables included participant name, date of birth, gender, address, and ZIP code. Seventy-eight percent of participants were linked to their Medicare claims data. Linking program participant information to Medicare administrative data where unique identifiers are not available provides researchers with the ability to leverage claims data to better understand program effects. © The Author(s) 2014.
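
    A sketch of a fuzzy-match scoring rule in the spirit described (name similarity plus exact agreement on date of birth, sex, and ZIP); the weights and threshold are hypothetical, not those used in the study.

    ```python
    # Hedged sketch of record linkage without a unique identifier.
    from difflib import SequenceMatcher

    def match_score(a: dict, b: dict) -> float:
        name_sim = SequenceMatcher(None, a["name"].lower(),
                                   b["name"].lower()).ratio()
        exact = sum(a[k] == b[k] for k in ("dob", "sex", "zip"))
        return 0.55 * name_sim + 0.15 * exact  # exact fields contribute up to 0.45

    def is_match(a: dict, b: dict, threshold: float = 0.85) -> bool:
        return match_score(a, b) >= threshold
    ```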

  15. Recent advances on terrain database correlation testing

    NASA Astrophysics Data System (ADS)

    Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art

    1998-08-01

    Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases, and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.

  16. Estimation of daily reference evapotranspiration (ETo) using artificial intelligence methods: Offering a new approach for lagged ETo data-based modeling

    NASA Astrophysics Data System (ADS)

    Mehdizadeh, Saeid

    2018-04-01

    Evapotranspiration (ET) is considered a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters. Moreover, other methods such as empirical equations and artificial intelligence methods can be used to model ET. In recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, local and external performances of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) were assessed for estimating daily ETo. For this aim, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), and Yazd and Zahedan (hyper-arid), were employed during 2000-2014. Two types of input patterns, consisting of weather data-based and lagged ETo data-based scenarios, were considered to develop the models. Four statistical indicators, including root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE), were used to check the accuracy of the models. The local performance of the models revealed that the MARS and GEP approaches have the capability to estimate daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, MARS had the best performance in the weather data-based scenarios. On the other hand, considerable differences were not observed in the models' accuracy for the lagged ETo data-based scenarios. As the innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios through combination of the MARS and GEP models with the autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed novel models, named MARS-ARCH and GEP-ARCH, improved the performance of ETo modeling compared to the single MARS and GEP. In addition, the external
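
    The record is cut off above; the lagged-input scenario it describes amounts to building ETo(t-1..t-k) predictors from the daily series. A sketch with an ordinary regressor standing in for MARS/GEP:

    ```python
    # Sketch: lag construction only; LinearRegression is a stand-in, not MARS/GEP.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def fit_lagged_eto(eto: pd.Series, lags: int = 3) -> LinearRegression:
        df = pd.DataFrame({f"lag{k}": eto.shift(k) for k in range(1, lags + 1)})
        df["y"] = eto
        df = df.dropna()  # drop the first `lags` rows, which lack history
        return LinearRegression().fit(df.drop(columns="y"), df["y"])
    ```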

  17. Feasibility of creating a National ALS Registry using administrative data in the United States

    PubMed Central

    KAYE, WENDY E.; SANCHEZ, MARCHELLE; WU, JENNIFER

    2015-01-01

    Uncertainty about the incidence and prevalence of amyotrophic lateral sclerosis (ALS), as well as the role of the environment in the etiology of ALS, supports the need for a surveillance system/registry for this disease. Our aim was to evaluate the feasibility of using existing administrative data to identify cases of ALS. The Agency for Toxic Substances and Disease Registry (ATSDR) funded four pilot projects at tertiary care facilities for ALS, HMOs, and state based organizations. Data from Medicare, Medicaid, the Veterans Health Administration, and Veterans Benefits Administration were matched to data available from site-specific administrative and clinical databases for a five-year time-period (1 January 2001–31 December 2005). Review of information in the medical records by a neurologist was considered the gold standard for determining an ALS case. We developed an algorithm using variables from the administrative data that identified true cases of ALS (verified by a neurologist). Individuals could be categorized into ALS, possible ALS, and not ALS. The best algorithm had sensitivity of 87% and specificity of 85%. We concluded that administrative data can be used to develop a surveillance system/ registry for ALS. These methods can be explored for creating surveillance systems for other neurodegenerative diseases. PMID:24597459

  18. Database recovery using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1992-01-01

    Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large-scale database systems requiring high availability. In this paper a method is proposed for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, it is shown that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
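
    The twin-page bookkeeping itself is not shown here, but the parity arithmetic a redundant disk array relies on is a one-liner: any lost block is the XOR of the surviving blocks in its stripe.

    ```python
    # Toy reminder of RAID-style parity, not the paper's twin-page scheme.
    from functools import reduce

    def parity(blocks):
        """XOR byte-wise across equal-length blocks."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    stripe = [b"\x01\x02", b"\x0f\x00", b"\xa0\x55"]
    p = parity(stripe)
    assert parity([stripe[0], stripe[2], p]) == stripe[1]  # recover a lost block
    ```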

  19. Modelos para la Unificacion de Conceptos, Metodos y Procedimientos Administrativos (Guidelines for Uniform Administrative Concepts, Methods, and Procedures).

    ERIC Educational Resources Information Center

    Serrano, Jorge A., Ed.

    These documents, discussed and approved during the first meeting of the university administrators affiliated with the Federation of Private Universities of Central America and Panama (FUPAC), seek to establish uniform administrative concepts, methods, and procedures, particularly with respect to budgetary matters. The documents define relevant…

  20. A global overview of health insurance administrative costs: what are the reasons for variations found?

    PubMed

    Mathauer, Inke; Nicolle, Emmanuelle

    2011-10-01

    Administrative costs are an important spending category in total health insurance expenditure. Yet, they have rarely been a topic outside the US and there is no cross-country comparison available. This paper provides a global overview and analysis of administrative costs for social security schemes (SSS) and private health insurance schemes (PHI). The analysis is based on data of the World Health Organization (WHO) National Health Accounts (NHA) and the Organisation for Economic Cooperation and Development (OECD) System of Health Accounts (SHA). These are the only worldwide databases on health expenditure data. Further data was retrieved from a literature search. Administrative costs are presented as a share of total health insurance costs. Data is available for 58 countries. In high-income OECD countries, the average SSS administrative costs are 4.2%. Average PHI administrative costs are about three times higher. The shares are much higher for low- and middle-income countries. However, considerable variations across and within countries over time are revealed. Seven explanatory factors are explored to explain the variations: health financing system aspects, administrative activities undertaken, insurance design aspects, context factors, reporting format, accounting methods, and management and administrative efficiency measures. More detailed reporting of administrative costs would enhance comparability and provide benchmarks. Improved administrative efficiency could free resources to expand coverage. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  1. Matched Comparison of PRAMS and the First Steps Database.

    ERIC Educational Resources Information Center

    Schubert, Stacey; Cawthon, Laurie

    This study compared some of the survey data collected by the Pregnancy Risk Assessment Monitoring System (PRAMS) to information from vital records and administrative records in the First Steps Database (FSDB) for a group of women who gave birth in 1993. PRAMS is an ongoing survey of Washington women who have given birth. The FSDB contains birth…

  2. Manual of Administration and Recording Methods for the Staats "Motivated Learning" Reading Procedure.

    ERIC Educational Resources Information Center

    Staats, Arthur W.; And Others

    The Staats motivated learning reading procedure is an application of an integrated-functional approach to learning in the area of reading. The method involves a system of extrinsic reinforcement which employs tokens backed up by a monetary reward. The student reports to the program administrator some item for which he would like to work, such as a…

  3. Personnel Administration: The Case Method of Teaching *

    PubMed Central

    Shaffer, Kenneth R.

    1965-01-01

    Only recently are case materials being used in the area of personnel administration, general library administration, and reference services. These permit more intellectual involvement on the part of the student than do the generalizations which result from the traditional lecture-discussion techniques. The medical library field has a professional character quite particular to itself. This is illustrated by its highly specialized clienteles, the quite special nature of the materials involved and their control, and the aura of special ethical considerations involved in any aspect of medicine. The development of a body of case materials would seem to have merit as a teaching vehicle for the medical library course, for in-service training in larger medical libraries, for workshops and institutes, and as a learning vehicle for the individual medical librarian. PMID:5832703

  4. A systematic review of validated methods to capture myopericarditis using administrative or claims data.

    PubMed

    Idowu, Rachel T; Carnahan, Ryan; Sathe, Nila A; McPheeters, Melissa L

    2013-12-30

    To identify algorithms that can capture incident cases of myocarditis and pericarditis in administrative and claims databases; these algorithms can eventually be used to identify cardiac inflammatory adverse events following vaccine administration. We searched MEDLINE from 1991 to September 2012 using controlled vocabulary and key terms related to myocarditis. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics as well as study conduct. Nine publications (including one study reported in two publications) met criteria for inclusion. Two studies performed medical record review in order to confirm that these coding algorithms actually captured patients with the disease of interest. One of these studies identified five potential cases, none of which were confirmed as acute myocarditis upon review. The other study, which employed a search algorithm based on diagnostic surveillance (using ICD-9 codes 420.90, 420.99, 422.90, 422.91 and 429.0) and sentinel reporting, identified 59 clinically confirmed cases of myopericarditis among 492,671 United States military service personnel who received smallpox vaccine between 2002 and 2003. Neither study provided algorithm validation statistics (positive predictive value, sensitivity, or specificity). A validated search algorithm is currently unavailable for identifying incident cases of pericarditis or myocarditis. Several authors have published unvalidated ICD-9-based search algorithms that appear to capture myocarditis events occurring in the context of other underlying cardiac or autoimmune conditions. Copyright © 2013. Published by Elsevier Ltd.

  5. The Development and Validation of a Special Education Intelligent Administration Support Program. Final Report.

    ERIC Educational Resources Information Center

    Utah State Univ., Logan. Center for Persons with Disabilities.

    This project studied the effects of implementing a computerized management information system developed for special education administrators. The Intelligent Administration Support Program (IASP), an expert system and database program, assisted in information acquisition and analysis pertaining to the district's quality of decisions and procedures…

  6. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

    Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made freely available to other institutions that have implemented their own databases patterned on them. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved

  7. Libraries of Peptide Fragmentation Mass Spectra Database

    National Institute of Standards and Technology Data Gateway

    SRD 1C NIST Libraries of Peptide Fragmentation Mass Spectra Database (Web, free access). The purpose of the library is to provide peptide reference data for laboratories employing mass spectrometry-based proteomics methods for protein analysis. Mass spectral libraries identify these compounds in a more sensitive and robust manner than alternative methods. These databases are freely available for testing and development of new applications.

  8. Updating the 2001 National Land Cover Database Impervious Surface Products to 2006 using Landsat imagery change detection methods

    USGS Publications Warehouse

    Xian, George; Homer, Collin G.

    2010-01-01

    A prototype method was developed to update the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 to a nominal date of 2006. NLCD 2001 is widely used as a baseline for national land cover and impervious cover conditions. To enable the updating of this database in an optimal manner, the methods were designed to be applied scene by scene to individual Landsat scenes. Using conservative change thresholds based on land cover classes, areas of change and no-change were segregated from change vectors calculated from normalized Landsat scenes from 2001 and 2006. By sampling from NLCD 2001 impervious surface in unchanged areas, impervious surface predictions were estimated for changed areas within an urban extent defined by a companion land cover classification. Methods were developed and tested for national application across six study sites containing a variety of urban impervious surface. Results show the vast majority of impervious surface change associated with urban development was captured, with overall RMSE from 6.86 to 13.12% for these areas. Changes in urban development density were also evaluated by characterizing the categories of change by percentile for impervious surface. This prototype method provides a relatively low-cost, flexible approach to generate updated impervious surface using NLCD 2001 as the baseline.
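
    A minimal sketch of the change-vector step described above: per-pixel change magnitude between the two normalized scenes is compared against a class-specific threshold. Array shapes, class ids, and threshold values are all hypothetical, not the study's calibrated values.

```python
import numpy as np

def change_mask(scene_2001, scene_2006, cover, thresholds):
    """scene_*: (bands, rows, cols) reflectance arrays; cover: (rows, cols)
    land cover class ids; thresholds: dict of class id -> magnitude threshold."""
    magnitude = np.sqrt(((scene_2006 - scene_2001) ** 2).sum(axis=0))
    thresh = np.vectorize(thresholds.get)(cover)   # per-pixel class threshold
    return magnitude > thresh                      # True where change is flagged

b2001 = np.random.rand(6, 100, 100)
b2006 = b2001 + np.random.normal(0, 0.05, b2001.shape)
cover = np.random.randint(0, 3, (100, 100))
mask = change_mask(b2001, b2006, cover, {0: 0.15, 1: 0.20, 2: 0.25})
```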

  9. Creating a VAPEPS database: A VAPEPS tutorial

    NASA Technical Reports Server (NTRS)

    Graves, George

    1989-01-01

    A procedural method is outlined for creating a Vibroacoustic Payload Environment Prediction System (VAPEPS) Database. The method of presentation employs flowcharts of sequential VAPEPS Commands used to create a VAPEPS Database. The commands are accompanied by explanatory text to the right of the command in order to minimize the need for repetitive reference to the VAPEPS user's manual. The method is demonstrated by examples of varying complexity. It is assumed that the reader has acquired a basic knowledge of the VAPEPS software program.

  10. An Introduction to Database Management Systems.

    ERIC Educational Resources Information Center

    Warden, William H., III; Warden, Bette M.

    1984-01-01

    Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…

  11. Using Large-Scale Databases in Evaluation: Advances, Opportunities, and Challenges

    ERIC Educational Resources Information Center

    Penuel, William R.; Means, Barbara

    2011-01-01

    Major advances in the number, capabilities, and quality of state, national, and transnational databases have opened up new opportunities for evaluators. Both large-scale data sets collected for administrative purposes and those collected by other researchers can provide data for a variety of evaluation-related activities. These include (a)…

  12. Observational database for studies of nearby universe

    NASA Astrophysics Data System (ADS)

    Kaisina, E. I.; Makarov, D. I.; Karachentsev, I. D.; Kaisin, S. S.

    2012-01-01

    We present the description of a database of galaxies of the Local Volume (LVG), located within 10 Mpc around the Milky Way. It contains more than 800 objects. Based on an analysis of functional capabilities, we used the PostgreSQL DBMS as a management system for our LVG database. Applying semantic modelling methods, we developed a physical ER-model of the database. We describe the developed architecture of the database table structure, and the implemented web-access, available at http://www.sao.ru/lv/lvgdb.

  13. A method to add richness to the National Landslide Database of Great Britain

    NASA Astrophysics Data System (ADS)

    Taylor, Faith; Freeborough, Katy; Malamud, Bruce; Demeritt, David

    2014-05-01

    Landslides in Great Britain (GB) pose a risk to infrastructure, property and livelihoods. Our understanding of where landslide hazard and impact will be greatest is based on our knowledge of past events. Here, we present a method to supplement existing records of landslides in GB by searching electronic archives of local and regional newspapers. In Great Britain, the British Geological Survey (BGS) are responsible for updating and maintaining records of GB landslide events and their impacts in the National Landslide Database (NLD). The NLD contains records of approximately 16,500 landslide events in Great Britain. Data sources for the NLD include field surveys, academic articles, grey literature, news, public reports and, since 2012, social media. Here we aim to supplement the richness of the NLD by (i) identifying additional landslide events and (ii) adding more detail to existing database entries. This is done by systematically searching the LexisNexis digital archive of 568 local and regional newspapers published in the UK. The first step in the methodology was to construct Boolean search criteria that optimised the balance between minimising the number of irrelevant articles (e.g. "a landslide victory") and maximising those referring to landslide events. This keyword search was then applied to the LexisNexis archive of newspapers for all articles published between 1 January and 31 December 2012, resulting in 1,668 articles. These articles were assessed to determine whether they related to a landslide event. Of the 1,668 articles, approximately 30% (~700) referred to landslide events, with others referring to landslides more generally or themes unrelated to landslides. Examples of information obtained from newspaper articles included: date/time of landslide occurrence, spatial location, size, impact, landslide type and triggering mechanism, although the amount of detail and precision attainable from individual articles was variable. Of the 700 articles found for
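
    A toy version of such Boolean criteria, with one inclusion pattern and one exclusion pattern for figurative uses; the actual terms used against the LexisNexis archive are not given in the abstract, so the patterns below are purely illustrative.

```python
import re

# Hypothetical criteria: require a landslide term, reject figurative uses.
INCLUDE = re.compile(r"\b(landslide|landslip|rockfall|mudslide)\b", re.I)
EXCLUDE = re.compile(r"\blandslide\s+(victory|win|defeat|majority)\b", re.I)

def is_candidate(article_text):
    """True if the article plausibly reports a landslide event."""
    return bool(INCLUDE.search(article_text)) and not EXCLUDE.search(article_text)

print(is_candidate("A landslide blocked the A59 after heavy rain"))  # True
print(is_candidate("The MP won by a landslide majority"))            # False
```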

  14. Validation of the Social Security Administration Life Tables (2004-2014) in Localized Prostate Cancer Patients within the Surveillance, Epidemiology, and End Results database.

    PubMed

    Preisser, Felix; Bandini, Marco; Mazzone, Elio; Nazzani, Sebastiano; Marchioni, Michele; Tian, Zhe; Saad, Fred; Pompe, Raisa S; Shariat, Shahrokh F; Heinzer, Hans; Montorsi, Francesco; Huland, Hartwig; Graefen, Markus; Tilki, Derya; Karakiewicz, Pierre I

    2018-05-22

    Accurate life expectancy estimation is crucial in clinical decision-making, including the management and treatment of clinically localized prostate cancer (PCa). We hypothesized that survival estimates derived from the Social Security Administration (SSA) life tables closely follow the observed survival of PCa patients. To test this relationship, we examined 10-yr overall survival rates in patients with clinically localized PCa and compared them with survival estimates derived from the SSA life tables. Within the Surveillance, Epidemiology, and End Results database (2004), we identified patients aged >50 and <90 yr. Follow-up was at least 10 yr for patients who did not die of disease or other causes. A Monte Carlo method was used to define individual survival in years, according to the SSA life tables (2004-2014). Subsequently, SSA life tables' predicted survival was compared with observed survival rates in Kaplan-Meier analyses. Subgroup analyses were stratified according to treatment type and D'Amico risk classification. Overall, 39,191 patients with localized PCa were identified. At 10-yr follow-up, the SSA life tables' predicted survival was 69.5% versus 73.1% according to the observed rate (p<0.0001). The largest differences between estimated and observed survival rates were recorded for D'Amico low-risk PCa (8.0%), brachytherapy (9.1%), and radical prostatectomy (8.6%) patients. Conversely, the smallest differences were recorded for external beam radiotherapy (1.7%) and unknown treatment type (1.6%) patients. Overall, SSA life tables' predicted life expectancy closely approximates observed overall survival rates. However, SSA life tables' predicted rates underestimate survival by as much as 9.1% in brachytherapy patients, as well as in D'Amico low-risk and radical prostatectomy patients. In these patient categories, an adjustment for the degree of underestimation might be required when counseling is provided in clinical practice. Social Security Administration (SSA) life tables
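
    The Monte Carlo step can be sketched as repeated sampling of a death year from annual life table death probabilities (qx). The values below are invented, not the SSA's, and the real analysis works per patient with age- and sex-specific tables.

```python
import random

def simulate_years_survived(age, qx, horizon=10):
    """Simulate survival over `horizon` years given annual death
    probabilities qx[age] from a period life table."""
    for year in range(horizon):
        if random.random() < qx.get(age + year, 1.0):
            return year          # died during this year
    return horizon               # survived the full horizon

# Hypothetical annual death probabilities for ages 70-79:
qx = {70 + i: 0.02 + 0.003 * i for i in range(10)}
sims = [simulate_years_survived(70, qx) for _ in range(100_000)]
ten_year_survival = sum(s == 10 for s in sims) / len(sims)
print(f"Predicted 10-yr survival: {ten_year_survival:.1%}")
```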

  15. Dereplication of plant phenolics using a mass-spectrometry database independent method.

    PubMed

    Borges, Ricardo M; Taujale, Rahil; de Souza, Juliana Santana; de Andrade Bezerra, Thaís; Silva, Eder Lana E; Herzog, Ronny; Ponce, Francesca V; Wolfender, Jean-Luc; Edison, Arthur S

    2018-05-29

    Dereplication, an approach to sidestep the efforts involved in the isolation of known compounds, is generally accepted as being the first stage of novel discoveries in natural product research. It is based on metabolite profiling analysis of complex natural extracts. To present the application of LipidXplorer for automatic targeted dereplication of phenolics in plant crude extracts based on direct infusion high-resolution tandem mass spectrometry data. LipidXplorer uses a user-defined molecular fragmentation query language (MFQL) to search for specific characteristic fragmentation patterns in large data sets and highlight the corresponding metabolites. To this end, MFQL files were written to dereplicate common phenolics occurring in plant extracts. Complementary MFQL files were used for validation purposes. New MFQL files with molecular formula restrictions for common classes of phenolic natural products were generated for the metabolite profiling of different representative crude plant extracts. This method was evaluated against an open-source software for mass-spectrometry data processing (MZMine®) and against manual annotation based on published data. The targeted LipidXplorer method implemented using common phenolic fragmentation patterns, was found to be able to annotate more phenolics than MZMine® that is based on automated queries on the available databases. Additionally, screening for ascarosides, natural products with unrelated structures to plant phenolics collected from the nematode Caenorhabditis elegans, demonstrated the specificity of this method by cross-testing both groups of chemicals in both plants and nematodes. Copyright © 2018 John Wiley & Sons, Ltd.

  16. PMAG: Relational Database Definition

    NASA Astrophysics Data System (ADS)

    Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.

    2002-12-01

    The Scripps center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses around the general workflow that results in the determination of typical paleo-magnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data becomes available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download their needed data. They minimally include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the

  17. Quantitative Methods for Administrative Decision Making in Junior Colleges.

    ERIC Educational Resources Information Center

    Gold, Benjamin Knox

    With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…

  18. JICST Factual Database JICST DNA Database

    NASA Astrophysics Data System (ADS)

    Shirokizawa, Yoshiko; Abe, Atsushi

    Japan Information Center of Science and Technology (JICST) has started the on-line service of DNA database in October 1988. This database is composed of EMBL Nucleotide Sequence Library and Genetic Sequence Data Bank. The authors outline the database system, data items and search commands. Examples of retrieval session are presented.

  19. Development of an Ada programming support environment database SEAD (Software Engineering and Ada Database) administration manual

    NASA Technical Reports Server (NTRS)

    Liaw, Morris; Evesson, Donna

    1988-01-01

    Software Engineering and Ada Database (SEAD) was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities which are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. SEAD data is organized into five major areas: information regarding education and training resources which are relevant to the life cycle of Ada-based software engineering projects such as those in the Space Station program; research publications relevant to NASA projects such as the Space Station Program and conferences relating to Ada technology; the latest progress reports on Ada projects completed or in progress both within NASA and throughout the free world; Ada compilers and other commercial products that support Ada software development; and reusable Ada components generated both within NASA and from elsewhere in the free world. This classified listing of reusable components shall include descriptions of tools, libraries, and other components of interest to NASA. Sources for the data include technical newsletters and periodicals, conference proceedings, the Ada Information Clearinghouse, product vendors, and project sponsors and contractors.

  20. Stent thrombosis with bioabsorbable polymer drug-eluting stents: insights from the Food and Drug Administration database.

    PubMed

    Khan, Abdur R; Tripathi, Avnish; Farid, Talha A; Abaid, Bilal; Bhatt, Deepak L; Resar, Jon R; Flaherty, Michael P

    2017-11-01

    SYNERGY, a bioabsorbable polymer-based, everolimus-eluting stent (BP-DES), recently received regulatory approval in the USA for use in percutaneous coronary interventions. Yet, information on the safety of BP-DES in routine clinical practice is limited. Our aim was to compare the safety of the recently approved BP-DES with current durable polymer drug-eluting stents (DP-DES) by analyzing adverse events, namely, stent thrombosis (ST), reported to the Manufacturer and User Facility Device Experience (MAUDE) database. The MAUDE database requires nationwide mandatory notification for adverse events on devices approved for clinical use. This database was searched for adverse events reported between 1 October 2015 and 25 December 2016, encountered after the placement of either BP-DES or DP-DES. Only those adverse events were included where the exposure period to the stents was comparable after the index procedure. Of all the adverse events reported, the event of interest was ST. A total of 951 adverse events were reported. ST occurred in 48/951 of all events, 31/309 and 17/642 when BP-DES or DP-DES were used, respectively (P=0.00001). Of the 31 ST events with BP-DES, 68% (21/31) occurred within less than or equal to 24 h of the index procedure and 52% (16/31) occurred within less than or equal to 2 h. Our results raise the possibility of an increased risk of ST, particularly early ST (within 24 h), with the recently approved BP-DES. However, because of the inherent limitations of reporting within the MAUDE database, these data merely highlight a potential need for additional surveillance and randomized trials to assess further the safety of the bioabsorbable platform.
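
    For readers wanting to reproduce the headline comparison, a 2x2 test on the reported event counts might look like the following; the abstract does not state which test produced P=0.00001, so the choice of Fisher's exact test here is an assumption.

```python
from scipy.stats import fisher_exact

# ST events vs. all other reported adverse events, per stent type:
table = [[31, 309 - 31],   # BP-DES
         [17, 642 - 17]]   # DP-DES
odds_ratio, p = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p:.2g}")
```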

  1. Facilitators and Barriers to Safe Medication Administration to Hospital Inpatients: A Mixed Methods Study of Nurses’ Medication Administration Processes and Systems (the MAPS Study)

    PubMed Central

    McLeod, Monsey; Barber, Nicholas; Franklin, Bryony Dean

    2015-01-01

    Context Research has documented the problem of medication administration errors and their causes. However, little is known about how nurses administer medications safely or how existing systems facilitate or hinder medication administration; this represents a missed opportunity for implementation of practical, effective, and low-cost strategies to increase safety. Aim To identify system factors that facilitate and/or hinder successful medication administration focused on three inter-related areas: nurse practices and workarounds, workflow, and interruptions and distractions. Methods We used a mixed-methods ethnographic approach involving observational fieldwork, field notes, participant narratives, photographs, and spaghetti diagrams to identify system factors that facilitate and/or hinder successful medication administration in three inpatient wards, each from a different English NHS trust. We supplemented this with quantitative data on interruptions and distractions among other established medication safety measures. Findings Overall, 43 nurses on 56 drug rounds were observed. We identified a median of 5.5 interruptions and 9.6 distractions per hour. We identified three interlinked themes that facilitated successful medication administration in some situations but which also acted as barriers in others: (1) system configurations and features, (2) behaviour types among nurses, and (3) patient interactions. Some system configurations and features acted as a physical constraint for parts of the drug round, however some system effects were partly dependent on nurses’ inherent behaviour; we grouped these behaviours into ‘task focused’, and ‘patient-interaction focused’. The former contributed to a more streamlined workflow with fewer interruptions while the latter seemed to empower patients to act as a defence barrier against medication errors by being: (1) an active resource of information, (2) a passive information resource, and/or (3) a

  2. Model-based methods for case definitions from administrative health data: application to rheumatoid arthritis

    PubMed Central

    Kroeker, Kristine; Widdifield, Jessica; Muthukumarana, Saman; Jiang, Depeng; Lix, Lisa M

    2017-01-01

    Objective This research proposes a model-based method to facilitate the selection of disease case definitions from validation studies for administrative health data. The method is demonstrated for a rheumatoid arthritis (RA) validation study. Study design and setting Data were from 148 definitions to ascertain cases of RA in hospital, physician and prescription medication administrative data. We considered: (A) separate univariate models for sensitivity and specificity, (B) univariate model for Youden’s summary index and (C) bivariate (ie, joint) mixed-effects model for sensitivity and specificity. Model covariates included the number of diagnoses in physician, hospital and emergency department records, physician diagnosis observation time, duration of time between physician diagnoses and number of RA-related prescription medication records. Results The most common case definition attributes were: 1+ hospital diagnosis (65%), 2+ physician diagnoses (43%), 1+ specialist physician diagnosis (51%) and 2+ years of physician diagnosis observation time (27%). Statistically significant improvements in sensitivity and/or specificity for separate univariate models were associated with (all p values <0.01): 2+ and 3+ physician diagnoses, unlimited physician diagnosis observation time, 1+ specialist physician diagnosis and 1+ RA-related prescription medication records (65+ years only). The bivariate model produced similar results. Youden’s index was associated with these same case definition criteria, except for the length of the physician diagnosis observation time. Conclusion A model-based method provides valuable empirical evidence to aid in selecting a definition(s) for ascertaining diagnosed disease cases from administrative health data. The choice between univariate and bivariate models depends on the goals of the validation study and number of case definitions. PMID:28645978
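
    Youden's summary index is simply J = sensitivity + specificity - 1, so ranking candidate case definitions by it is a one-liner. The definitions and accuracy values below are illustrative, not the study's estimates.

```python
# Rank candidate case definitions by Youden's J = sensitivity + specificity - 1.
definitions = {
    "1+ hospital diagnosis":              (0.62, 0.97),
    "2+ physician diagnoses":             (0.78, 0.93),
    "2+ physician dx + 1+ RA medication": (0.81, 0.95),
}
for name, (se, sp) in sorted(definitions.items(),
                             key=lambda kv: kv[1][0] + kv[1][1], reverse=True):
    print(f"{name}: J = {se + sp - 1:.2f}")
```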

  3. An Evaluation of Online Business Databases.

    ERIC Educational Resources Information Center

    van der Heyde, Angela J.

    The purpose of this study was to evaluate the credibility and timeliness of online business databases. The areas of evaluation were the currency, reliability, and extent of financial information in the databases. These were measured by performing an online search for financial information on five U.S. companies. The method of selection for the…

  4. Thrombotic events associated with C1 esterase inhibitor products in patients with hereditary angioedema: investigation from the United States Food and Drug Administration adverse event reporting system database.

    PubMed

    Gandhi, Pranav K; Gentry, William M; Bottorff, Michael B

    2012-10-01

    To investigate reports of thrombotic events associated with the use of C1 esterase inhibitor products in patients with hereditary angioedema in the United States. Retrospective data mining analysis. The United States Food and Drug Administration (FDA) adverse event reporting system (AERS) database. Case reports of C1 esterase inhibitor products, thrombotic events, and C1 esterase inhibitor product-associated thrombotic events (i.e., combination cases) were extracted from the AERS database, using the time frames of each respective product's FDA approval date through the second quarter of 2011. Bayesian statistical methodology within the neural network architecture was implemented to identify potential signals of a drug-associated adverse event. A potential signal is generated when the lower limit of the 95% 2-sided confidence interval of the information component, denoted by IC₀₂₅ , is greater than zero. This suggests that the particular drug-associated adverse event was reported to the database more often than statistically expected from reports available in the database. Ten combination cases of thrombotic events associated with the use of one C1 esterase inhibitor product (Cinryze) were identified in patients with hereditary angioedema. A potential signal demonstrated by an IC₀₂₅ value greater than zero (IC₀₂₅ = 2.91) was generated for these combination cases. The extracted cases from the AERS indicate continuing reports of thrombotic events associated with the use of one C1 esterase inhibitor product among patients with hereditary angioedema. The AERS is incapable of establishing a causal link and detecting the true frequency of an adverse event associated with a drug; however, potential signals of C1 esterase inhibitor product-associated thrombotic events among patients with hereditary angioedema were identified in the extracted combination cases. © 2012 Pharmacotherapy Publications, Inc.
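
    The information component underlying this kind of signal detection is, in its simplest form, the log2 ratio of observed to expected report counts under independence. The sketch below omits the Bayesian shrinkage and the credible interval whose lower bound is IC₀₂₅, and the counts are hypothetical.

```python
import math

def information_component(n_joint, n_drug, n_event, n_total):
    """Point estimate of IC = log2(observed / expected), where the expected
    count assumes drug and event are reported independently. The BCPNN used
    in practice adds Bayesian shrinkage and a 95% interval; this is a
    simplified sketch."""
    expected = n_drug * n_event / n_total
    return math.log2(n_joint / expected)

# 10 joint reports, 500 drug reports, 2,000 event reports, 1,000,000 total:
print(information_component(10, 500, 2000, 1_000_000))  # log2(10/1) ~= 3.32
```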

  5. Optimal Dose and Method of Administration of Intravenous Insulin in the Management of Emergency Hyperkalemia: A Systematic Review.

    PubMed

    Harel, Ziv; Kamel, Kamel S

    2016-01-01

    Hyperkalemia is a common electrolyte disorder that can result in fatal cardiac arrhythmias. Despite the importance of insulin as a lifesaving intervention in the treatment of hyperkalemia in an emergency setting, there is no consensus on the dose or the method (bolus or infusion) of its administration. Our aim was to review data in the literature to determine the optimal dose and route of administration of insulin in the management of emergency hyperkalemia. We searched several databases from their date of inception through February 2015 for eligible articles published in any language. We included any study that reported on the use of insulin in the management of hyperkalemia. We identified eleven studies. In seven studies, 10 units of regular insulin was administered (bolus in five studies, infusion in two studies), in one study 12 units of regular insulin was infused over 30 minutes, and in three studies 20 units of regular insulin was infused over 60 minutes. The majority of included studies were biased. There was no statistically significant difference in mean decrease in serum potassium (K+) concentration at 60 minutes between studies in which insulin was administered as an infusion of 20 units over 60 minutes and studies in which 10 units of insulin was administered as a bolus (0.79±0.25 mmol/L versus 0.78±0.25 mmol/L, P = 0.98) or studies in which 10 units of insulin was administered as an infusion (0.79±0.25 mmol/L versus 0.39±0.09 mmol/L, P = 0.1). Almost one fifth of the study population experienced an episode of hypoglycemia. The limited data available in the literature shows no statistically significant difference between the different regimens of insulin used to acutely lower serum K+ concentration. Accordingly, 10 units of short acting insulin given intravenously may be used in cases of hyperkalemia. Alternatively, 20 units of short acting insulin may be given as a continuous intravenous infusion over 60 minutes in patients with severe

  6. Optimal Dose and Method of Administration of Intravenous Insulin in the Management of Emergency Hyperkalemia: A Systematic Review

    PubMed Central

    Harel, Ziv; Kamel, Kamel S.

    2016-01-01

    Background and Objectives Hyperkalemia is a common electrolyte disorder that can result in fatal cardiac arrhythmias. Despite the importance of insulin as a lifesaving intervention in the treatment of hyperkalemia in an emergency setting, there is no consensus on the dose or the method (bolus or infusion) of its administration. Our aim was to review data in the literature to determine the optimal dose and route of administration of insulin in the management of emergency hyperkalemia. Design, Setting, Participants, & Measurements We searched several databases from their date of inception through February 2015 for eligible articles published in any language. We included any study that reported on the use of insulin in the management of hyperkalemia. Results We identified eleven studies. In seven studies, 10 units of regular insulin was administered (bolus in five studies, infusion in two studies), in one study 12 units of regular insulin was infused over 30 minutes, and in three studies 20 units of regular insulin was infused over 60 minutes. The majority of included studies were biased. There was no statistically significant difference in mean decrease in serum potassium (K+) concentration at 60 minutes between studies in which insulin was administered as an infusion of 20 units over 60 minutes and studies in which 10 units of insulin was administered as a bolus (0.79±0.25 mmol/L versus 0.78±0.25 mmol/L, P = 0.98) or studies in which 10 units of insulin was administered as an infusion (0.79±0.25 mmol/L versus 0.39±0.09 mmol/L, P = 0.1). Almost one fifth of the study population experienced an episode of hypoglycemia. Conclusion The limited data available in the literature shows no statistically significant difference between the different regimens of insulin used to acutely lower serum K+ concentration. Accordingly, 10 units of short acting insulin given intravenously may be used in cases of hyperkalemia. Alternatively, 20 units of short acting insulin may be

  7. The Cardiac Safety Research Consortium ECG database.

    PubMed

    Kligfield, Paul; Green, Cynthia L

    2012-01-01

    The Cardiac Safety Research Consortium (CSRC) ECG database was initiated to foster research using anonymized, XML-formatted, digitized ECGs with corresponding descriptive variables from placebo- and positive-control arms of thorough QT studies submitted to the US Food and Drug Administration (FDA) by pharmaceutical sponsors. The database can be expanded to other data that are submitted directly to CSRC from other sources, and currently includes digitized ECGs from patients with genotyped varieties of congenital long-QT syndrome; this congenital long-QT database is also linked to ambulatory electrocardiograms stored in the Telemetric and Holter ECG Warehouse (THEW). Thorough QT data sets are available from CSRC for unblinded development of algorithms for analysis of repolarization and for blinded comparative testing of algorithms developed for the identification of moxifloxacin, as used as a positive control in thorough QT studies. Policies and procedures for access to these data sets are available from CSRC, which has developed tools for statistical analysis of blinded new algorithm performance. A recently approved CSRC project will create a data set for blinded analysis of automated ECG interval measurements, whose initial focus will include comparison of four of the major manufacturers of automated electrocardiographs in the United States. CSRC welcomes application for use of the ECG database for clinical investigation. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  9. Clinical supervision for nurses in administrative and leadership positions: a systematic literature review of the studies focusing on administrative clinical supervision.

    PubMed

    Sirola-Karvinen, Pirjo; Hyrkäs, Kristiina

    2006-11-01

    The aim of this systematic literature review was to describe administrative clinical supervision from the nursing leaders', directors' and administrators' perspective. Administrative clinical supervision is a timely and important topic, as organizational structures in health care and nursing leadership are changing and the number of complex challenges in health care is increasing. The material in this review was drawn from national and international databases, including doctoral dissertations, distinguished theses and peer-reviewed articles. The material was analysed by means of content analysis. The theoretical framework for the analysis was based on the three main functions of clinical supervision: administrative, educational and supportive. The findings demonstrated that experiences of administrative clinical supervision and of its supportiveness varied. The intervention was seen to provide a variety of learning experiences and support in challenging work situations. Administrative clinical supervision affects and assures the quality of care. Its developmental effects were explained through its resemblance to a leading specialist community. The findings support earlier perceptions concerning the importance and significance of administrative clinical supervision for nursing managers and administrators. However, more research is needed to develop administrative clinical supervision and to increase understanding of its underlying theoretical assumptions and conceptual relationships.

  10. Value of shared preclinical safety studies - The eTOX database.

    PubMed

    Briggs, Katharine; Barber, Chris; Cases, Montserrat; Marc, Philippe; Steger-Hartmann, Thomas

    2015-01-01

    A first analysis of a database of shared preclinical safety data for 1214 small molecule drugs and drug candidates extracted from 3970 reports donated by thirteen pharmaceutical companies for the eTOX project (www.etoxproject.eu) is presented. Species, duration of exposure and administration route data were analysed to assess if large enough subsets of homogenous data are available for building in silico predictive models. Prevalence of treatment related effects for the different types of findings recorded were analysed. The eTOX ontology was used to determine the most common treatment-related clinical chemistry and histopathology findings reported in the database. The data were then mined to evaluate sensitivity of established in vivo biomarkers for liver toxicity risk assessment. The value of the database to inform other drug development projects during early drug development is illustrated by a case study.

  11. Case retrieval in medical databases by fusing heterogeneous information.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice

    2011-01-01

    A novel content-based heterogeneous information retrieval framework, particularly well suited to browsing medical databases and supporting new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Mean precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.

  12. Identifying predictors of hospital readmission following congenital heart surgery through analysis of a multiinstitutional administrative Database.

    PubMed

    Smith, Andrew H; Doyle, Thomas P; Mettler, Bret A; Bichell, David P; Gay, James C

    2015-01-01

    Despite resource burdens associated with hospital readmission, there remains little multiinstitutional data available to identify children at risk for readmission following congenital heart surgery. Children undergoing congenital heart surgery and discharged home between January of 2011 and December 2012 were identified within the Pediatric Health Information System database, a multiinstitutional collection of clinical and administrative data. Patient discharges were assigned to derivation and validation cohorts for the purposes of predictive model design, with 17,871 discharges meeting inclusion criteria. Readmission within 30 days was noted following 956 (11%) of discharges within the derivation cohort (n = 9104), with a median time to readmission of 9 days (interquartile range [IQR] 5-18 days). Readmissions resulted in a rehospitalization length of stay of 4 days (IQR 2-8 days) and were associated with an intensive care unit (ICU) admission in 36% of cases. Independent perioperative predictors of readmission included Risk Adjustment in Congenital Heart Surgery score of 6 (odds ratio [OR] 2.6, 95% confidence interval [CI] 1.8-3.7, P < .001) and ICU length of stay of at least 7 days (OR 1.9, 95% CI 1.6-2.2, P < .001). Demographic predictors included Hispanic ethnicity (OR 1.2, 95% CI 1.1-1.4, P = .014) and government payor status (OR 1.2, 95% CI 1.1-1.4, P = .007). Predictive model performance was modest in the validation cohort (c statistic 0.68, 95% CI 0.66-0.69, P < .001). Readmissions following congenital heart surgery are common and associated with significant resource consumption. While we describe independent predictors that may identify patients at risk for readmission prior to hospital discharge, there likely remain other unreported factors that contribute to readmission following congenital heart surgery. © 2014 Wiley Periodicals, Inc.
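
    A hedged sketch of how such a model might be derived and validated: fit a logistic regression on a derivation cohort, read odds ratios off the exponentiated coefficients, and compute the c statistic (area under the ROC curve) on a validation cohort. The predictors and data below are synthetic, not the actual Pediatric Health Information System variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = rng.integers(0, 2, size=(n, 3))          # e.g. RACHS-1 = 6, ICU LOS >= 7 d, gov payor
y = rng.binomial(1, 0.08 + 0.05 * X[:, 0])   # synthetic 30-day readmission outcome

model = LogisticRegression().fit(X[:700], y[:700])      # derivation cohort
odds_ratios = np.exp(model.coef_[0])                    # per-predictor ORs
c_stat = roc_auc_score(y[700:],                         # validation cohort
                       model.predict_proba(X[700:])[:, 1])
print(odds_ratios, c_stat)
```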

  13. Comparison of pediatric cardiac surgical mortality rates from national administrative data to contemporary clinical standards.

    PubMed

    Welke, Karl F; Diggs, Brian S; Karamlou, Tara; Ungerleider, Ross M

    2009-01-01

    Despite the superior coding and risk adjustment of clinical data, the ready availability, national scope, and perceived unbiased nature of administrative data make it the choice of governmental agencies and insurance companies for evaluating quality and outcomes. We calculated pediatric cardiac surgery mortality rates from administrative data and compared them with widely quoted standards from clinical databases. Pediatric cardiac surgical operations were retrospectively identified by ICD-9-CM diagnosis and procedure codes from the Nationwide Inpatient Sample (NIS) 1988-2005 and the Kids' Inpatient Database (KID) 2003. Cases were grouped into Risk Adjustment for Congenital Heart Surgery, version 1 (RACHS-1) categories. In-hospital mortality rates and 95% confidence intervals were calculated. A total of 55,164 operations from the NIS and 10,945 operations from the KID were placed into RACHS-1 categories. During the 18-year period, the overall NIS mortality rate for pediatric cardiac surgery decreased from 8.7% (95% confidence interval, 8.0% to 9.3%) to 4.6% (95% confidence interval, 4.3% to 5.0%). Mortality rates by RACHS-1 category decreased significantly as well. The KID and NIS mortality rates from comparable years were similar. Overall mortality rates derived from administrative data were higher than those from contemporary national clinical data, The Society of Thoracic Surgeons Congenital Heart Surgery Database, or published data from pediatric cardiac specialty centers. Although category-specific mortality rates were higher in administrative data than in clinical data, a minority of the relationships reached statistical significance. Despite substantial improvement, mortality rates from administrative data remain higher than those from clinical data. The discrepancy may be attributable to several factors: differences in database design and composition, differences in data collection and reporting structures, and variation in data quality.

  14. Translation from the collaborative OSM database to cartography

    NASA Astrophysics Data System (ADS)

    Hayat, Flora

    2018-05-01

    The OpenStreetMap (OSM) database includes original items that are very useful for geographical analysis and for creating thematic maps. Contributors record various themes in the open database, covering amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. A research project is under development to translate the OSM database structure into a database structure fitted to Michelin graphic guidelines; it aims at defining the right structure for Michelin's uses. The project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector tile web map and a mapping method to produce paper maps on a regional scale. The vector tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn. Drawing automation and data management are part of the map creation, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.

  15. Heterogenous database integration in a physician workstation.

    PubMed

    Annevelink, J; Young, C Y; Tang, P C

    1991-01-01

    We discuss the integration of a variety of data and information sources in a Physician Workstation (PWS), focusing on the integration of data from DHCP, the Veterans Administration's Decentralized Hospital Computer Program. We designed a logically centralized, object-oriented data schema, used by end users and applications to explore the data accessible through an object-oriented database using a declarative query language. We emphasize the use of procedural abstraction to transparently integrate a variety of information sources into the data schema.

  16. Heterogenous database integration in a physician workstation.

    PubMed Central

    Annevelink, J.; Young, C. Y.; Tang, P. C.

    1991-01-01

    We discuss the integration of a variety of data and information sources in a Physician Workstation (PWS), focusing on the integration of data from DHCP, the Veterans Administration's Decentralized Hospital Computer Program. We designed a logically centralized, object-oriented data schema, used by end users and applications to explore the data accessible through an object-oriented database using a declarative query language. We emphasize the use of procedural abstraction to transparently integrate a variety of information sources into the data schema. PMID:1807624

  17. Protein structure database search and evolutionary classification.

    PubMed

    Yang, Jinn-Moon; Tung, Chi-Hua

    2006-01-01

    As more protein structures become available and structural genomics efforts provide structural models in a genome-wide strategy, there is a growing need for fast and accurate methods for discovering homologous proteins and evolutionary classifications of newly determined structures. We have developed 3D-BLAST, in part, to address these issues. 3D-BLAST is as fast as BLAST and calculates the statistical significance (E-value) of an alignment to indicate the reliability of the prediction. Using this method, we first identified 23 states of the structural alphabet that represent pattern profiles of the backbone fragments and then used them to represent protein structure databases as structural alphabet sequence databases (SADB). Our method enhanced BLAST as a search method, using a new structural alphabet substitution matrix (SASM) to find the longest common substructures with high-scoring structured segment pairs from an SADB database. Using personal computers with Intel Pentium4 (2.8 GHz) processors, our method searched more than 10,000 protein structures in 1.3 s and achieved good agreement with search results from detailed structure alignment methods. [3D-BLAST is available at http://3d-blast.life.nctu.edu.tw]

  18. The need for a juvenile fire setting database.

    PubMed

    Klein, Julianne J; Mondozzi, Mary A; Andrews, David A

    2008-01-01

    A juvenile fire setter can be classified as any youth setting a fire, regardless of the reason. Many communities have programs to deal with this problem, most based on models developed by the United States Fire Administration. We reviewed our program's data to compare it with that published nationally; currently there is no nationwide database against which to compare fire setter data. A single-institution, retrospective chart review of all fire setters between January 1, 2003 and December 31, 2005 was completed. There were 133 participants, ages 3 to 17. Information obtained included age, location, ignition source, court order, and recidivism. Analysis of our data set found the peak ages for fire involvement to be 12 and 14 (26%). Location, ignition source, and court-ordered participants were divided into two age groups: 3 to 10 (N = 58) and 11 to 17 (N = 75). Fifty-four percent of the 133 participants used lighters over matches. Twelve percent of the 3- to 10-year-olds were court mandated, compared with 52% of the 11- to 17-year-olds. Recidivism rates were 4 to 10%, with a 33 to 38% survey return rate. There is currently no state or nationwide, time-honored database of facts from which conclusions can be drawn. Starting small with a statewide database could provide a stimulus for a national one, and could also enhance the information provided by the United States Fire Administration's National Fire Data Center, beginning one juvenile fire setter program and State Fire Marshal's office at a time.

  19. Legal and Administrative Feasibility of a Federal Junk Food and Sugar-Sweetened Beverage Tax to Improve Diet.

    PubMed

    Pomeranz, Jennifer L; Wilde, Parke; Huang, Yue; Micha, Renata; Mozaffarian, Dariush

    2018-02-01

    To evaluate legal and administrative feasibility of a federal "junk" food (including sugar-sweetened beverages [SSBs]) tax to improve diet. To assess food definitions and administration models, we systematically searched (1) PubMed (through May 15, 2017) for articles defining foods subject to taxes, and legal and legislative databases as well as online for (2) US federal, state, and tribal junk food tax bills and laws (January 1, 2012-February 28, 2017); SSB taxes (January 1, 2014-February 28, 2017); and international junk food tax laws (as of February 28, 2017); and (3) federal taxing mechanisms and administrative methods (as of February 28, 2017). Articles recommend taxing foods by product category, broad nutrient criteria, specific nutrients or calories, or a combination. US junk food tax bills (n = 6) and laws (n = 3), international junk food laws (n = 2), and US SSB taxes (n = 10) support taxing foods using category-based (n = 8), nutrient-based (n = 1), or combination (n = 12) approaches. Federal taxing mechanisms (particularly manufacturer excise taxes on alcohol) and administrative methods provide informative models. From legal and administrative perspectives, a federal junk food tax appears feasible based on product categories or combination category-plus-nutrient approaches, using a manufacturer excise tax, with additional support for sugar and graduated tax strategies.

  20. Developing a stroke severity index based on administrative data was feasible using data mining techniques.

    PubMed

    Sung, Sheng-Feng; Hsieh, Cheng-Yang; Kao Yang, Yea-Huei; Lin, Huey-Juan; Chen, Chih-Hung; Chen, Yu-Wei; Hu, Ya-Han

    2015-11-01

    Case-mix adjustment is difficult for stroke outcome studies using administrative data. However, relevant prescription, laboratory, procedure, and service claims might be surrogates for stroke severity. This study proposes a method for developing a stroke severity index (SSI) by using administrative data. We identified 3,577 patients with acute ischemic stroke from a hospital-based registry and analyzed claims data with a large number of candidate features. Stroke severity was measured using the National Institutes of Health Stroke Scale (NIHSS). We used two data mining methods and conventional multiple linear regression (MLR) to develop prediction models, comparing model performance according to the Pearson correlation coefficient between the SSI and the NIHSS. We validated these models in four independent cohorts by using hospital-based registry data linked to a nationwide administrative database. We identified seven predictive features and developed three models. The k-nearest neighbor model (correlation coefficient, 0.743; 95% confidence interval: 0.737, 0.749) performed slightly better than the MLR model (0.742; 0.736, 0.747), followed by the regression tree model (0.737; 0.731, 0.742). In the validation cohorts, the correlation coefficients were between 0.677 and 0.725 for all three models. The claims-based SSI enables adjustment for disease severity in stroke studies using administrative data. Copyright © 2015 Elsevier Inc. All rights reserved.
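
    A minimal sketch of the modeling setup: fit a k-nearest-neighbor regressor to claims-derived features and evaluate the resulting severity index against the NIHSS with the Pearson correlation. The features and data below are synthetic placeholders, not the study's seven predictors.

    ```python
    # Sketch: predict NIHSS from claims-derived features with k-NN regression
    # and score with the Pearson correlation, mirroring the study design.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    X = rng.random((500, 7))                                  # seven claims-based predictors
    nihss = (X @ rng.random(7)) * 10 + rng.normal(0, 1, 500)  # synthetic NIHSS

    X_tr, X_te, y_tr, y_te = train_test_split(X, nihss, random_state=0)
    model = KNeighborsRegressor(n_neighbors=15).fit(X_tr, y_tr)
    ssi = model.predict(X_te)                                 # claims-based severity index
    r, _ = pearsonr(ssi, y_te)
    print(f"Pearson r between SSI and NIHSS: {r:.3f}")
    ```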

  1. New tools for discovery from old databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.P.

    1990-05-01

    Very large quantities of information have been accumulated as a result of petroleum exploration and the practice of petroleum geology. New and more powerful methods to build and analyze databases have been developed. The new tools must be tested, and, as quickly as possible, combined with traditional methods to the full advantage of currently limited funds in the search for new and extended hydrocarbon reserves. A recommended combined sequence is (1) database validating, (2) category separating, (3) machine learning, (4) graphic modeling, (5) database filtering, and (6) regression for predicting. To illustrate this procedure, a database from the Railroad Commission of Texas has been analyzed. Clusters of information have been identified to prevent "apples and oranges" comparisons from obscuring the conclusions. Artificial intelligence has checked the database for potentially invalid entries and has identified rules governing the relationships between factors, which can be numeric or nonnumeric (words), or both. Graphic 3-dimensional modeling has clarified relationships. Database filtering has physically separated the integral parts of the database, which can then be run through the sequence again, increasing the precision. Finally, regressions have been run on separated clusters, giving equations which can be used with confidence in making predictions. Advances in computer systems encourage the learning of much more from past records, and reduce the danger of prejudiced decisions. Soon there will be giant strides beyond current capabilities, to the advantage of those who are ready for them.
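
    Steps (2) and (6) of the recommended sequence, category separating followed by per-cluster regression, can be sketched as follows; the data and column meanings are invented for illustration only.

    ```python
    # Sketch of the "separate clusters, then regress" idea: cluster the records
    # so unlike wells are not pooled, then fit a regression within each cluster.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    X = rng.random((300, 3))                    # e.g. depth, porosity, thickness
    y = 5 * X[:, 0] + rng.normal(0, 0.1, 300)   # e.g. a production measure

    labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
    for c in range(3):
        mask = labels == c
        reg = LinearRegression().fit(X[mask], y[mask])
        print(f"cluster {c}: n={mask.sum()}, R^2={reg.score(X[mask], y[mask]):.3f}")
    ```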

  2. Searching mixed DNA profiles directly against profile databases.

    PubMed

    Bright, Jo-Anne; Taylor, Duncan; Curran, James; Buckleton, John

    2014-03-01

    DNA databases have revolutionised forensic science. They are a powerful investigative tool, as they have the potential to identify persons of interest in criminal investigations. Routinely, a DNA profile generated from a crime sample could only be searched for in a database of individuals if the stain was from a single contributor (single source) or if a contributor could unambiguously be determined from a mixed DNA profile. This meant that a significant number of samples were unsuitable for database searching. The advent of continuous methods for the interpretation of DNA profiles offers an advanced way to draw inferential power from the considerable investment made in DNA databases. Using these methods, each profile in the database may be considered a possible contributor to a mixture and a likelihood ratio (LR) can be formed. Those profiles which produce a sufficiently large LR can serve as an investigative lead. In this paper, empirical studies are described to determine what constitutes a large LR. We investigate the effect on a database search of complex mixed DNA profiles with contributors in equal proportions, with dropout as a consideration, and also the effect of an incorrect assignment of the number of contributors to a profile. In addition, as a demonstration of the method, we give the results for two crime samples that were previously unsuitable for database comparison. We show that effective management of the selection of samples for searching and the interpretation of the output can be highly informative. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
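
    As a simplified illustration of screening a database against a mixture, the sketch below ranks profiles with an inclusion-based likelihood ratio (1/CPI, the combined probability of inclusion). The continuous interpretation methods used in the paper model peak heights and dropout explicitly; this toy, with invented allele frequencies and profiles, only conveys the shape of the search loop.

    ```python
    # Toy database screen against a mixture using 1/CPI as the ranking score.
    # All loci, frequencies, and profiles below are invented for illustration.
    freqs = {  # population allele frequencies per locus (hypothetical)
        "D3S1358": {"14": 0.10, "15": 0.25, "16": 0.25, "17": 0.20},
        "vWA":     {"16": 0.20, "17": 0.27, "18": 0.22, "19": 0.09},
    }
    mixture = {"D3S1358": {"15", "16", "17"}, "vWA": {"17", "18"}}

    def inclusion_lr(profile):
        """1/CPI if every allele of the profile appears in the mixture, else 0."""
        cpi = 1.0
        for locus, mix_alleles in mixture.items():
            if not set(profile[locus]) <= mix_alleles:
                return 0.0
            p = sum(freqs[locus][a] for a in mix_alleles)
            cpi *= p * p   # probability a random person is included at this locus
        return 1.0 / cpi

    database = {
        "person_1": {"D3S1358": ("15", "16"), "vWA": ("17", "18")},
        "person_2": {"D3S1358": ("14", "16"), "vWA": ("17", "17")},
    }
    for name, prof in sorted(database.items(), key=lambda kv: -inclusion_lr(kv[1])):
        print(name, round(inclusion_lr(prof), 2))
    ```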

  3. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    PubMed Central

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

  4. C&RE-SLC: Database for conservation and renewable energy activities

    NASA Astrophysics Data System (ADS)

    Cavallo, J. D.; Tompkins, M. M.; Fisher, A. G.

    1992-08-01

    The Western Area Power Administration (Western) requires all its long-term power customers to implement programs that promote the conservation of electric energy or facilitate the use of renewable energy resources. The hope is that these measures could significantly reduce the amount of environmental damage associated with electricity production. As part of preparing the environmental impact statement for Western's Electric Power Marketing Program, Argonne National Laboratory constructed a database of the conservation and renewable energy activities in which Western's Salt Lake City customers are involved. The database provides information on types of conservation and renewable energy activities and allows for comparisons of activities being conducted at different utilities in the Salt Lake City region. Sorting the database allows Western's Salt Lake City customers to be classified so the various activities offered by different classes of utilities can be identified; for example, comparisons can be made between municipal utilities and cooperatives or between large and small customers. The information included in the database was collected from customer planning documents in the files of Western's Salt Lake City office.

  5. A Comparative Analysis Among the SRS M&M, NIS, and KID Databases for the Adolescent Idiopathic Scoliosis.

    PubMed

    Lee, Nathan J; Guzman, Javier Z; Kim, Jun; Skovrlj, Branko; Martin, Christopher T; Pugely, Andrew J; Gao, Yubo; Caridi, John M; Mendoza-Lattes, Sergio; Cho, Samuel K

    2016-11-01

    Retrospective cohort analysis. A growing number of publications have utilized the Scoliosis Research Society (SRS) Morbidity and Mortality (M&M) database, but none have compared it to other large databases. The objective of this study was to compare SRS complications with those in administrative databases. The Nationwide Inpatient Sample (NIS) and Kid's Inpatient Database (KID) captured a greater number of overall complications, while the SRS M&M data provided a greater incidence of spine-related complications following adolescent idiopathic scoliosis (AIS) surgery. Chi-square tests were used to assess statistical significance, with p < .05 considered significant. The SRS 2004-2007 (9,904 patients), NIS 2004-2007 (20,441 patients) and KID 2003-2006 (10,184 patients) databases were analyzed for AIS patients who underwent fusion. Comparable variables were queried in all three databases, including patient demographics, surgical variables, and complications. Patients undergoing surgery for AIS in the SRS database were slightly older (SRS 14.4 years vs. NIS 13.8 years, p < .0001; KID 13.9 years, p < .0001) and less likely to be male (SRS 18.5% vs. NIS 26.3%, p < .0001; KID 24.8%, p < .0001). Revision surgery (SRS 3.3% vs. NIS 2.4%, p < .0001; KID 0.9%, p < .0001) and osteotomy (SRS 8% vs. NIS 2.3%, p < .0001; KID 2.4%, p < .0001) were more commonly reported in the SRS database. The SRS database reported fewer overall complications (SRS 3.9% vs. NIS 7.3%, p < .0001; KID 6.6%, p < .0001). However, when respiratory complications (SRS 0.5% vs. NIS 3.7%, p < .0001; KID 4.4%, p < .0001) were excluded, medical complication rates were similar across databases. In contrast, SRS reported higher spine-specific complication rates. Mortality rates were similar between the SRS and NIS (p = .280) and the SRS and KID (p = .08) databases. There are similarities and differences between the three databases. These discrepancies are likely due to the varying data-gathering methods each organization uses to
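
    The between-database comparisons reduce to chi-square tests on 2x2 tables of counts. A minimal sketch, with counts back-calculated from the reported rates rather than taken from the source data:

    ```python
    # Chi-square test on complication counts between two databases.
    # Counts are illustrative reconstructions (3.9% of 9,904 and 7.3% of 20,441),
    # not the published cell counts.
    from scipy.stats import chi2_contingency

    #                 complication  no complication
    table = [[386,  9518],          # SRS-like database
             [1492, 18949]]         # NIS-like database

    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
    ```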

  6. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    PubMed

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while granting full access to nmrshiftdb2's World Wide Web database. This freely available system allows, on the one hand, the submission of orders for measurement, transfers recorded data automatically or manually, and enables the download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database for lab users. On the other hand, for the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics function for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as the front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  7. Solutions for medical databases optimal exploitation.

    PubMed

    Branescu, I; Purcarea, V L; Dobrescu, R

    2014-03-15

    The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistic support for data warehousing techniques. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases.

  8. Solutions for medical databases optimal exploitation

    PubMed Central

    Branescu, I; Purcarea, VL; Dobrescu, R

    2014-01-01

    The paper discusses methods for applying OLAP techniques to multidimensional databases that leverage the existing performance-enhancing technique known as practical pre-aggregation, by making this technique relevant to a much wider range of medical applications, as logistic support for data warehousing techniques. The transformations have low computational complexity and may be implemented using standard relational database technology. The paper also describes how to integrate the transformed hierarchies into current OLAP systems, transparently to the user, and proposes a flexible, "multimodel" federated system for extending OLAP querying to external object databases. PMID:24653769

  9. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

    Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening, among others. It is widely believed that structure-based methods provide an efficient way to perform such queries. Recently, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases, due to their high computational complexity and the difficulty of indexing similarity search for large databases. To bridge graph kernel functions and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed by our team, to measure the similarity of graph-represented chemicals. In our method, we utilize a hash table to support the new graph kernel function definition, efficient storage, and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases, with a smaller index size and faster query processing time compared with state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging, since running-time efficiency must be balanced against similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. An experimental study validates the utility of G-hash in chemical databases.
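
    A rough sketch of the hashing idea: describe each atom by a local-environment key, hash the keys into a table, and score two molecules by the number of matching keys. The feature encoding below is a simplification for illustration, not the published G-hash definition.

    ```python
    # Hash-based graph similarity in the spirit of G-hash: hashed node
    # environments, scored by multiset intersection of hash keys.
    from collections import Counter

    def node_features(graph):
        """graph: {atom_id: (element, [neighbor_ids])} -> multiset of keys."""
        keys = Counter()
        for atom, (elem, nbrs) in graph.items():
            nbr_elems = tuple(sorted(graph[n][0] for n in nbrs))
            keys[hash((elem, nbr_elems, len(nbrs)))] += 1
        return keys

    def hash_kernel(g1, g2):
        """Count of matching hashed node environments."""
        k1, k2 = node_features(g1), node_features(g2)
        return sum((k1 & k2).values())

    ethanol  = {0: ("C", [1]), 1: ("C", [0, 2]), 2: ("O", [1])}
    methanol = {0: ("C", [1]), 1: ("O", [0])}
    print(hash_kernel(ethanol, methanol))   # shared O-bonded-to-C environment
    ```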

  10. Privacy-preserving search for chemical compound databases.

    PubMed

    Shimizu, Kana; Nuida, Koji; Arai, Hiromi; Mitsunari, Shigeo; Attrapadung, Nuttapong; Hamada, Michiaki; Tsuda, Koji; Hirokawa, Takatsugu; Sakuma, Jun; Hanaoka, Goichiro; Asai, Kiyoshi

    2015-01-01

    Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for a new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and the database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult, because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition to be performed on encrypted values but is computationally efficient compared with versatile techniques such as general-purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times more efficient in communication size than general-purpose multi-party computation. We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, which scales easily to large databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information.
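
    The primitive underlying such protocols can be sketched with the Paillier cryptosystem (python-paillier, `pip install phe`): the server combines its fingerprint bits with the encrypted query using only additions, so it never sees the query in the clear. The real protocol layers substantially more machinery on top; this is only the additive-homomorphic core.

    ```python
    # Additive-homomorphic sketch: the server computes an encrypted count of
    # fingerprint bits shared with the query without decrypting anything.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    query_fp = [1, 0, 1, 1, 0, 1]                        # query fingerprint (secret)
    enc_query = [public_key.encrypt(b) for b in query_fp]

    # --- server side: sees only ciphertexts ---
    db_fp = [1, 1, 1, 0, 0, 1]                           # one database fingerprint
    enc_overlap = sum(c for c, b in zip(enc_query, db_fp) if b)

    # --- client side: decrypts only the common-bit count ---
    print("common on-bits:", private_key.decrypt(enc_overlap))   # -> 3
    ```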

  11. Privacy-preserving search for chemical compound databases

    PubMed Central

    2015-01-01

    Background Searching for similar compounds in a database is the most important process for in-silico drug screening. Since a query compound is an important starting point for a new drug, a query holder, who is afraid of the query being monitored by the database server, usually downloads all the records in the database and uses them in a closed network. However, a serious dilemma arises when the database holder also wants to output no information except for the search results, and such a dilemma prevents the use of many important data resources. Results In order to overcome this dilemma, we developed a novel cryptographic protocol that enables database searching while keeping both the query holder's privacy and the database holder's privacy. Generally, the application of cryptographic techniques to practical problems is difficult, because versatile techniques are computationally expensive while computationally inexpensive techniques can perform only trivial computation tasks. In this study, our protocol is successfully built only from an additive-homomorphic cryptosystem, which allows only addition to be performed on encrypted values but is computationally efficient compared with versatile techniques such as general-purpose multi-party computation. In an experiment searching ChEMBL, which consists of more than 1,200,000 compounds, the proposed method was 36,900 times faster in CPU time and 12,000 times more efficient in communication size than general-purpose multi-party computation. Conclusion We proposed a novel privacy-preserving protocol for searching chemical compound databases. The proposed method, which scales easily to large databases, may help to accelerate drug discovery research by making full use of unused but valuable data that includes sensitive information. PMID:26678650

  12. Administrative Data Algorithms Can Describe Ambulatory Physician Utilization

    PubMed Central

    Shah, Baiju R; Hux, Janet E; Laupacis, Andreas; Zinman, Bernard; Cauch-Dudek, Karen; Booth, Gillian L

    2007-01-01

    Objective To validate algorithms using administrative data that characterize ambulatory physician care for patients with a chronic disease. Data Sources Seven-hundred and eighty-one people with diabetes were recruited mostly from community pharmacies to complete a written questionnaire about their physician utilization in 2002. These data were linked with administrative databases detailing health service utilization. Study Design An administrative data algorithm was defined that identified whether or not patients received specialist care, and it was tested for agreement with self-report. Other algorithms, which assigned each patient to a primary care and specialist physician, were tested for concordance with self-reported regular providers of care. Principal Findings The algorithm to identify whether participants received specialist care had 80.4 percent agreement with questionnaire responses (κ = 0.59). Compared with self-report, administrative data had a sensitivity of 68.9 percent and specificity 88.3 percent for identifying specialist care. The best administrative data algorithm to assign each participant's regular primary care and specialist providers was concordant with self-report in 82.6 and 78.2 percent of cases, respectively. Conclusions Administrative data algorithms can accurately match self-reported ambulatory physician utilization. PMID:17610448
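
    A minimal sketch of the validation arithmetic: sensitivity, specificity, and Cohen's kappa from a 2x2 table of algorithm output versus self-report. The cell counts below are invented to roughly reproduce the reported statistics; the actual counts are not given in the abstract.

    ```python
    # Agreement statistics for a 2x2 validation table (algorithm vs self-report).
    def validity_stats(tp, fp, fn, tn):
        n = tp + fp + fn + tn
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        po = (tp + tn) / n                                           # observed agreement
        pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
        kappa = (po - pe) / (1 - pe)
        return sens, spec, kappa

    # Hypothetical counts for 781 participants, chosen to land near the
    # reported sensitivity (68.9%), specificity (88.3%), and kappa (0.59).
    sens, spec, kappa = validity_stats(tp=240, fp=52, fn=108, tn=381)
    print(f"sensitivity={sens:.1%} specificity={spec:.1%} kappa={kappa:.2f}")
    ```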

  13. Databases for rRNA gene profiling of microbial communities

    DOEpatents

    Ashby, Matthew

    2013-07-02

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  14. Administrative Claims Data for Economic Analyses in Hematopoietic Cell Transplantation: Challenges and Opportunities

    PubMed Central

    Preussler, Jaime M.; Mau, Lih-Wen; Majhail, Navneet S; Meyer, Christa L.; Denzen, Ellen; Edsall, Kristen C.; Farnia, Stephanie H.; Silver, Alicia; Saber, Wael; Burns, Linda J.; Vanness, David J.

    2017-01-01

    There is an increasing need for the development of approaches to measure quality, costs and resource utilization patterns among allogeneic hematopoietic cell transplant (HCT) patients. Administrative claims data provide an opportunity to examine service utilization and costs, particularly from the payer’s perspective. However, because administrative claims data are primarily designed for reimbursement purposes, challenges arise when using it for research. We use a case study with data derived from the 2007–2011 Truven Health MarketScan Research database to discuss opportunities and challenges for the use of administrative claims data to examine the costs and service utilization of allogeneic HCT and chemotherapy alone for patients with acute myeloid leukemia (AML). Starting with a cohort of 29,915 potentially eligible patients with a diagnosis of AML, we were able to identify 211 patients treated with HCT and 774 treated with chemotherapy only where we were sufficiently confident of the diagnosis and treatment path to allow analysis. Administrative claims data provide an avenue to meet the need for health care costs, resource utilization, and outcome information. However, when using these data, a balance between clinical knowledge and applied methods is critical to identifying a valid study cohort and accurate measures of costs and resource utilization. PMID:27184624

  15. Federal Emergency Management Information System (FEMIS) system administration guide. Version 1.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burford, M.J.; Burnett, R.A.; Curtis, L.M.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Chemical and Biological Defense Command. The FEMIS System Administration Guide defines FEMIS hardware and software requirements and gives instructions for installing the FEMIS system package. System administrators, database administrators, and general users can use this guide to install, configure, and maintain the FEMIS client software package. This document provides a description of the FEMIS environment; distribution media; data, communications, and electronic mail servers; user workstations; and system management.

  16. Data Administration at a Regional University: A Case Study.

    ERIC Educational Resources Information Center

    Gose, Frank J.

    Data administration (DA) is a position that has emerged with the growth of information technologies. A review of DA literature confirms that, although DA is widely associated with database management systems (DBMS), there is no standard DA job description, DA staffing and location within the organization vary, and DA functions range in description…

  17. A development and integration of database code-system with a compilation of comparator, k0 and absolute methods for INAA using microsoft access

    NASA Astrophysics Data System (ADS)

    Hoh, Siew Sin; Rapie, Nurul Nadiah; Lim, Edwin Suh Wen; Tan, Chun Yuan; Yavar, Alireza; Sarmani, Sukiman; Majid, Amran Ab.; Khoo, Kok Siong

    2013-05-01

    Instrumental Neutron Activation Analysis (INAA) is often used to determine and calculate the elemental concentrations of a sample at The National University of Malaysia (UKM), typically in the Nuclear Science Programme, Faculty of Science and Technology. The objective of this study was to develop a database code-system based on Microsoft Access 2010 which could help INAA users to choose either the comparator method, the k0-method or the absolute method for calculating the elemental concentrations of a sample. This study also integrated k0data, Com-INAA, k0Concent, k0-Westcott and Abs-INAA to execute and complete the ECC-UKM database code-system. After the integration, a study was conducted to test the effectiveness of the ECC-UKM database code-system by comparing the concentrations between the experiments and the code-systems. 'Triple Bare Monitor' Zr-Au and Cr-Mo-Au were used in the k0Concent, k0-Westcott and Abs-INAA code-systems as monitors to determine the thermal-to-epithermal neutron flux ratio (f). Quantities involved in determining the concentration were the net peak area (Np), measurement time (tm), irradiation time (tirr), k-factor (k), thermal-to-epithermal neutron flux ratio (f), the epithermal neutron flux distribution parameter (α) and the detection efficiency (ɛp). For the Com-INAA code-system, the certified reference material IAEA-375 Soil was used to calculate the concentrations of elements in a sample. Other CRMs and SRMs were also used in this database code-system. Later, a verification process to examine the effectiveness of the Abs-INAA code-system was carried out by comparing sample concentrations between the code-system and experiment. The concentration values produced by the ECC-UKM database code-system agreed with the experimental values with good accuracy.
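
    For orientation, the textbook relative (comparator) method combines these quantities as follows; this is the standard form, not necessarily the exact expression implemented in ECC-UKM (for identical counting geometry the detection efficiency ɛp cancels between sample and standard):

    ```latex
    % Standard relative-method NAA concentration (textbook form), where W is
    % the sample or standard mass, \lambda the decay constant, and t_d the decay time:
    \rho_{\mathrm{sam}} = \rho_{\mathrm{std}} \cdot
    \frac{\left( \dfrac{N_p}{W\, S\, D\, C} \right)_{\mathrm{sam}}}
         {\left( \dfrac{N_p}{W\, S\, D\, C} \right)_{\mathrm{std}}},
    \qquad
    S = 1 - e^{-\lambda t_{\mathrm{irr}}}, \quad
    D = e^{-\lambda t_d}, \quad
    C = \frac{1 - e^{-\lambda t_m}}{\lambda t_m}
    ```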

  18. [A Terahertz Spectral Database Based on Browser/Server Technique].

    PubMed

    Zhang, Zhuo-yong; Song, Yue

    2015-09-01

    With the solution of key scientific and technical problems and the development of instrumentation, the application of terahertz technology in various fields has received more and more attention. Owing to its unique advantages, terahertz technology shows broad promise for fast, non-destructive detection, as well as in many other fields. Terahertz technology combined with other complementary methods can be used to cope with many difficult practical problems which could not be solved before. Further development of practical terahertz detection methods depends critically on a good and reliable terahertz spectral database. We recently developed a browser/server (B/S)-based terahertz spectral database. We designed the main structure and main functions to fulfil practical requirements. The terahertz spectral database now includes more than 240 items, and the spectral information was collected from three sources: (1) collection and citation from other terahertz spectral databases abroad; (2) collection from published literature; and (3) spectral data measured in our laboratory. The present paper introduces the basic structure and fundamental functions of the terahertz spectral database developed in our laboratory. One of the key functions of this THz database is the calculation of optical parameters. Some optical parameters, including the absorption coefficient, refractive index, etc., can be calculated from the input THz time-domain spectra. The other main functions and searching methods of the browser/server-based terahertz spectral database are also discussed. The database search system provides users with convenient functions including user registration, inquiry, display of spectral figures and molecular structures, spectral matching, etc. The THz database system provides an on-line searching function for registered users. Registered users can compare the input THz spectrum with the spectra of the database, according to
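
    The optical-parameter calculation mentioned above is, in its standard thick-sample approximation for THz time-domain spectroscopy, a pair of closed-form expressions; whether the database uses exactly this form is an assumption:

    ```latex
    % Standard thick-sample THz-TDS extraction: with complex amplitude
    % transmission T(\omega) = E_{sam}(\omega)/E_{ref}(\omega), phase difference
    % \Delta\phi(\omega), sample thickness d, and speed of light c:
    n(\omega) = 1 + \frac{c\,\Delta\phi(\omega)}{\omega\, d},
    \qquad
    \alpha(\omega) = \frac{2}{d}\,
    \ln\!\left[ \frac{4\, n(\omega)}{\lvert T(\omega)\rvert\,\bigl(n(\omega)+1\bigr)^{2}} \right]
    ```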

  19. [Method of traditional Chinese medicine formula design based on 3D-database pharmacophore search and patent retrieval].

    PubMed

    He, Yu-su; Sun, Zhi-yi; Zhang, Yan-ling

    2014-11-01

    By using the pharmacophore model of mineralocorticoid receptor antagonists as a starting point, this study investigates a method of traditional Chinese medicine formula design for anti-hypertensives. Pharmacophore models were generated by the 3D-QSAR pharmacophore (HypoGen) program of DS3.5, based on a training set composed of 33 mineralocorticoid receptor antagonists. The best pharmacophore model consisted of two hydrogen-bond acceptors, three hydrophobic features and four excluded volumes. Its correlation coefficients for the training set and test set, N, and CAI value were 0.9534, 0.6748, 2.878, and 1.119, respectively. From the database screening, 1700 active compounds from 86 source plants were obtained. Because traditional theory lacks an applicable anti-hypertensive medication strategy, this article takes advantage of patent retrieval in the world traditional medicine patent database in order to design drug formulae. Finally, two formulae were obtained for anti-hypertension.

  20. Cross-checking of Large Evaluated and Experimental Nuclear Reaction Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeydina, O.; Koning, A.J.; Soppera, N.

    2014-06-15

    Automated methods are presented for the verification of large experimental and evaluated nuclear reaction databases (e.g. EXFOR, JEFF, TENDL). These methods allow an assessment of the overall consistency of the data and detect aberrant values in both evaluated and experimental databases.

  1. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    PubMed Central

    Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki

    2013-01-01

    We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, the execution of thorough method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation, helping to collect class labels, spatial spans, and the expert's confidence on lesions, and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787

  2. System and method employing a self-organizing map load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Harley, Ronald G.; Du, Liang; Yang, Yi; Sharma, Santosh K.; Zambare, Prachi; Madane, Mayura A.

    2014-06-17

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a self-organizing map load feature database of a plurality of different electric load types and a plurality of neurons, each of the load types corresponding to a number of the neurons; employing a weight vector for each of the neurons; sensing a voltage signal and a current signal for each of the loads; determining a load feature vector including at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the loads; and identifying by a processor one of the load types by relating the load feature vector to the neurons of the database by identifying the weight vector of one of the neurons corresponding to the one of the load types that is a minimal distance to the load feature vector.
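
    The identification step of the claim reduces to a best-matching-unit lookup: choose the neuron whose weight vector minimizes the Euclidean distance to the measured load feature vector. In the sketch below each load type is collapsed to a single representative neuron with invented weights; a trained self-organizing map would hold many neurons per type.

    ```python
    # Best-matching-unit lookup over neuron weight vectors (invented values).
    import numpy as np

    neurons = {                                # load type -> weight vector
        "incandescent": np.array([0.9, 0.1, 0.0, 0.2]),
        "motor":        np.array([0.2, 0.8, 0.6, 0.1]),
        "electronics":  np.array([0.1, 0.3, 0.9, 0.7]),
    }

    def identify(load_features: np.ndarray) -> str:
        """Return the load type of the neuron nearest the feature vector."""
        return min(neurons, key=lambda t: np.linalg.norm(neurons[t] - load_features))

    measured = np.array([0.15, 0.35, 0.85, 0.65])   # e.g. derived from V/I waveforms
    print(identify(measured))                        # -> "electronics"
    ```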

  3. Evaluation of data obtained from military disability medical administrative databases for service members with schizophrenia or bipolar disorder.

    PubMed

    Millikan, Amy M; Weber, Natalya S; Niebuhr, David W; Torrey, E Fuller; Cowan, David N; Li, Yuanzhang; Kaminski, Brenda

    2007-10-01

    We are studying associations between selected biomarkers and schizophrenia or bipolar disorder among military personnel. To assess potential diagnostic misclassification and to estimate the date of illness onset, we reviewed medical records for a subset of cases. Two psychiatrists independently reviewed 182 service medical records retrieved from the Department of Veterans Affairs. Data were evaluated for diagnostic concordance between database diagnoses and reviewers. Interreviewer variability was measured by using proportion of agreement and the kappa statistic. Data were abstracted to estimate date of onset. High levels of agreement existed between database diagnoses and reviewers (proportion, 94.7%; kappa = 0.88) and between reviewers (proportion, 92.3%; kappa = 0.87). The median time between illness onset and initiation of medical discharge was 1.6 and 1.1 years for schizophrenia and bipolar disorder, respectively. High levels of agreement between investigators and database diagnoses indicate that diagnostic misclassification is unlikely. Discharge procedure initiation date provides a suitable surrogate for disease onset.

  4. Predicting 30-day Hospital Readmission with Publicly Available Administrative Database. A Conditional Logistic Regression Modeling Approach.

    PubMed

    Zhu, K; Lou, Z; Zhou, J; Ballester, N; Kong, N; Parikh, P

    2015-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". Hospital readmissions raise healthcare costs and cause significant distress to providers and patients. It is, therefore, of great interest to healthcare organizations to predict which patients are at risk of being readmitted to their hospitals. However, current logistic-regression-based risk prediction models have limited predictive power when applied to hospital administrative data. Meanwhile, although decision trees and random forests have been applied, they tend to be too complex for hospital practitioners to understand. We explore the use of conditional logistic regression to increase prediction accuracy. We analyzed an HCUP statewide inpatient discharge record dataset, which includes patient demographics, clinical and care utilization data from California. We extracted records of heart failure Medicare beneficiaries who had inpatient experience during an 11-month period. We corrected the data imbalance issue with under-sampling. In our study, we first applied standard logistic regression and a decision tree to obtain influential variables and derive practically meaningful decision rules. We then stratified the original data set accordingly and applied logistic regression to each data stratum. We further explored the effect of interacting variables in the logistic regression modeling. We conducted cross validation to assess the overall prediction performance of conditional logistic regression (CLR) and compared it with standard classification models. The developed CLR models outperformed several standard classification models (e.g., straightforward logistic regression, stepwise logistic regression, random forest, support vector machine). For example, the best CLR model improved the classification accuracy by nearly 20% over the straightforward logistic regression model. Furthermore, the developed CLR models tend to achieve better sensitivity of
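
    One way to realize the stratify-then-fit idea (an interpretation of the described approach, not the authors' code): let a shallow decision tree define the strata, then fit a logistic regression within each stratum. The data below are synthetic.

    ```python
    # Stratified logistic regression: tree leaves define strata, one logistic
    # model per stratum. Synthetic data stands in for HCUP discharge records.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(2)
    X = rng.random((2000, 5))
    y = (X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(0, 0.2, 2000) > 1.1).astype(int)

    # Shallow tree: its leaves act as interpretable decision-rule strata.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    strata = tree.apply(X)                        # leaf index per record

    for leaf in np.unique(strata):
        mask = strata == leaf
        if len(np.unique(y[mask])) == 2:          # need both classes to fit
            model = LogisticRegression().fit(X[mask], y[mask])
            print(f"stratum {leaf}: n={mask.sum()}, "
                  f"acc={model.score(X[mask], y[mask]):.3f}")
    ```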

  5. "There's so Much Data": Exploring the Realities of Data-Based School Governance

    ERIC Educational Resources Information Center

    Selwyn, Neil

    2016-01-01

    Educational governance is commonly predicated around the generation, collation and processing of data through digital technologies. Drawing upon an empirical study of two Australian secondary schools, this paper explores the different forms of data-based governance that are being enacted by school leaders, managers, administrators and teachers.…

  6. Enhancing navigation in biomedical databases by community voting and database-driven text classification

    PubMed Central

    Duchrow, Timo; Shtatland, Timur; Guettler, Daniel; Pivovarov, Misha; Kramer, Stefan; Weissleder, Ralph

    2009-01-01

    Background The breadth of biological databases and their information content continues to increase exponentially. Unfortunately, our ability to query such sources is still often suboptimal. Here, we introduce and apply community voting, database-driven text classification, and visual aids as a means to incorporate distributed expert knowledge, to automatically classify database entries and to efficiently retrieve them. Results Using a previously developed peptide database as an example, we compared several machine learning algorithms in their ability to classify abstracts of published literature results into categories relevant to peptide research, such as related or not related to cancer, angiogenesis, molecular imaging, etc. Ensembles of bagged decision trees met the requirements of our application best. No other algorithm consistently performed better in comparative testing. Moreover, we show that the algorithm produces meaningful class probability estimates, which can be used to visualize the confidence of automatic classification during the retrieval process. To allow viewing long lists of search results enriched by automatic classifications, we added a dynamic heat map to the web interface. We take advantage of community knowledge by enabling users to cast votes in Web 2.0 style in order to correct automated classification errors, which triggers reclassification of all entries. We used a novel framework in which the database "drives" the entire vote aggregation and reclassification process to increase speed while conserving computational resources and keeping the method scalable. In our experiments, we simulate community voting by adding various levels of noise to nearly perfectly labelled instances, and show that, under such conditions, classification can be improved significantly. Conclusion Using PepBank as a model database, we show how to build a classification-aided retrieval system that gathers training data from the community, is completely controlled
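
    The classification core maps naturally onto bagged decision trees with class-probability output, which is what can drive a confidence heat map in the interface. A minimal sketch with synthetic features standing in for abstract text:

    ```python
    # Bagged decision trees with class-probability estimates for display.
    # Features and labels are synthetic placeholders for abstract-text data.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(3)
    X = rng.random((400, 20))                    # e.g. bag-of-words features
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # e.g. "cancer-related" or not

    clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0).fit(X, y)
    proba = clf.predict_proba(X[:5])             # confidence values for a heat map
    print(np.round(proba, 2))
    ```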

  7. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities, such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS), to assist in the definition of a common eXtensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all the database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper discusses the standardization of the XML database exchange format, the tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.

  8. Databases and Associated Tools for Glycomics and Glycoproteomics.

    PubMed

    Lisacek, Frederique; Mariethoz, Julien; Alocci, Davide; Rudd, Pauline M; Abrahams, Jodie L; Campbell, Matthew P; Packer, Nicolle H; Ståhle, Jonas; Widmalm, Göran; Mullen, Elaine; Adamczyk, Barbara; Rojas-Macias, Miguel A; Jin, Chunsheng; Karlsson, Niclas G

    2017-01-01

    The access to biodatabases for glycomics and glycoproteomics has proven to be essential for current glycobiological research. This chapter presents available databases that are devoted to different aspects of glycobioinformatics. This includes oligosaccharide sequence databases, experimental databases, 3D structure databases (of both glycans and glyco-related proteins) and the association of glycans with tissue, disease, and proteins. Specific search protocols are also provided using tools associated with experimental databases for converting primary glycoanalytical data to glycan structural information. In particular, researchers using glycoanalysis methods based on U/HPLC (GlycoBase), MS (GlycoWorkbench, UniCarb-DB, GlycoDigest), and NMR (CASPER) will benefit from this chapter. In addition, we include information on how to utilize glycan structural information to query databases that associate glycans with proteins (UniCarbKB) and with interactions with pathogens (SugarBind).

  9. Using glycome databases for drug discovery.

    PubMed

    Aoki-Kinoshita, Kiyoko F

    2008-08-01

    The glycomics field has made great advances in the last decade due to technologies for glycan synthesis and analysis, including carbohydrate microarrays. Accordingly, databases for glycomics research have also emerged and been made publicly available by many major institutions worldwide. This review introduces these and other useful databases on which new methods for drug discovery can be developed. The scope of this review covers currently documented and accessible databases and resources pertaining to glycomics. These were selected with the expectation that they may be useful for drug discovery research. There is a plethora of glycomics databases with much potential for drug discovery. This may seem daunting at first, but this review helps to put some of these resources into perspective. Additionally, some thoughts on how to integrate these resources to allow more efficient research are presented.

  10. Image database for digital hand atlas

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente; Dey, Partha S.; Gertych, Arkadiusz; Pospiech-Kurkowska, Sywia

    2003-05-01

    Bone age assessment is a procedure frequently performed in pediatric patients to evaluate growth disorders. A commonly used method is atlas matching by visual comparison of a hand radiograph with the small reference set of the old Greulich-Pyle atlas. We have developed a new digital hand atlas with a large set of clinically normal hand images from diverse ethnic groups. In this paper, we present our system design and implementation of the digital atlas database to support computer-aided atlas matching for bone age assessment. The system consists of a hand atlas image database, a computer-aided diagnostic (CAD) software module for image processing and atlas matching, and a Web user interface. Users can use a Web browser to push DICOM images, directly or indirectly from PACS, to the CAD server for a bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, are then extracted and compared with patterns from the atlas image database to assess the bone age. The digital atlas method, built on a large image database and current Internet technology, provides an alternative to supplement or replace the traditional method for a quantitative, accurate and cost-effective assessment of bone age.

  11. Handbook of automated data collection methods for the National Transit Database

    DOT National Transportation Integrated Search

    2003-10-01

    In recent years, with the increasing sophistication and capabilities of information processing technologies, there has been a renewed interest on the part of transit systems to tap the rich information potential of the National Transit Database (NTD)...

  12. A comprehensive and scalable database search system for metaproteomics.

    PubMed

    Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W

    2016-08-16

    Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for

  13. Feasibility of using administrative data to compare hospital performance in the EU

    PubMed Central

    Groene, O.; Kristensen, S.; Arah, O.A.; Thompson, C.A.; Bartels, P.; Sunol, R.; Klazinga, N.; Klazinga, N; Kringos, DS; Lombarts, MJMH; Plochg, T; Lopez, MA; Secanell, M; Sunol, R; Vallejo, P; Bartels, P; Kristensen, S; Michel, P; Saillour-Glenisson, F; Vlcek, F; Car, M; Jones, S; Klaus, E; Bottaro, S; Garel, P; Saluvan, M; Bruneau, C; Depaigne-Loth, A; Shaw, C; Hammer, A; Ommen, O; Pfaff, H; Groene, O; Botje, D; Wagner, C; Kutaj-Wasikowska, H; Kutryba, B; Escoval, A; Lívio, A; Eiras, M; Franca, M; Leite, I; Almeman, F; Kus, H; Ozturk, K; Mannion, R; Arah, OA; DerSarkissian, M; Thompson, CA; Wang, A; Thompson, A

    2014-01-01

    Objective To describe hospitals' organizational arrangements relevant to the abstraction of administrative data, to report on the completeness of administrative data collected and to assess associations between organizational arrangements and completeness of data submission. Design A cross-sectional study design utilizing administrative data. Setting and Participants Randomly selected hospitals from seven European countries (The Czech Republic, France, Germany, Poland, Portugal, Spain, and Turkey). Main Outcome Measures Completeness of data submission for four quality indicators: mortality after acute myocardial infarction, stroke and hip fractures and complications after normal delivery. Results In general, hospitals were able to produce data on the four indicators required for this research study. A substantial proportion had missing data on one or more data items. The proportion of hospitals that was able to produce more detailed indicators of relevance for quality monitoring and improvement was low and ranged from 40.1% for thrombolysis performed on patients with acute ischemic stroke to 63.8% for hip-fracture operations performed within 48 h after admission for patients aged 65 or older. National factors were strong predictors of data completeness on the studied indicators. Conclusions At present, hospital administrative databases do not seem to be an appropriate source of information for comparison of hospital performance across the countries of the EU. However, given that this is a dynamic field, changes to administrative databases may make this possible in the near future. Such changes could be accelerated by an in-depth comparative analysis of the issues of using administrative data for comparisons of hospital performances in EU countries. PMID:24554645

  14. System and method employing a minimum distance and a load feature database to identify electric load types of different electric loads

    DOEpatents

    Lu, Bin; Yang, Yi; Sharma, Santosh K; Zambare, Prachi; Madane, Mayura A

    2014-12-23

    A method identifies electric load types of a plurality of different electric loads. The method includes providing a load feature database of a plurality of different electric load types, each of the different electric load types including a first load feature vector having at least four different load features; sensing a voltage signal and a current signal for each of the different electric loads; determining a second load feature vector comprising at least four different load features from the sensed voltage signal and the sensed current signal for a corresponding one of the different electric loads; and identifying by a processor one of the different electric load types by determining a minimum distance of the second load feature vector to the first load feature vector of the different electric load types of the load feature database.

  15. Ontology based heterogeneous materials database integration and semantic query

    NASA Astrophysics Data System (ADS)

    Zhao, Shuai; Qian, Quan

    2017-10-01

    Digital materials data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of materials genome initiatives. With the fast growth of materials data, the integration and sharing of data have become urgent, and have gradually become a hot topic of materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be performed using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and Materials Project, are used as the integration targets, which demonstrates the feasibility and effectiveness of our method.
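
    Once the ontology is populated, the semantic query step is plain SPARQL. A minimal rdflib sketch with an invented vocabulary standing in for the extracted schema:

    ```python
    # SPARQL query over a tiny integrated graph; the vocabulary and values
    # are illustrative stand-ins for data drawn from OQMD / Materials Project.
    import rdflib

    g = rdflib.Graph()
    g.parse(data="""
    @prefix mat: <http://example.org/materials#> .
    mat:Fe2O3 mat:fromDatabase "OQMD" ;
              mat:bandGap 2.0 .
    mat:TiO2  mat:fromDatabase "MaterialsProject" ;
              mat:bandGap 3.2 .
    """, format="turtle")

    q = """
    PREFIX mat: <http://example.org/materials#>
    SELECT ?m ?gap ?db WHERE {
      ?m mat:bandGap ?gap ; mat:fromDatabase ?db .
      FILTER(?gap > 2.5)
    }"""
    for row in g.query(q):
        print(row.m, row.gap, row.db)
    ```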

  16. Data exploration systems for databases

    NASA Technical Reports Server (NTRS)

    Greene, Richard J.; Hield, Christopher

    1992-01-01

    Data exploration systems apply machine learning techniques, multivariate statistical methods, information theory, and database theory to databases to identify significant relationships among the data and summarize information. The result of applying data exploration systems should be a better understanding of the structure of the data and a perspective of the data enabling an analyst to form hypotheses for interpreting the data. This paper argues that data exploration systems need a minimum amount of domain knowledge to guide both the statistical strategy and the interpretation of the resulting patterns discovered by these systems.

  17. Coding of Barrett's oesophagus with high-grade dysplasia in national administrative databases: a population-based cohort study.

    PubMed

    Chadwick, Georgina; Varagunam, Mira; Brand, Christian; Riley, Stuart A; Maynard, Nick; Crosby, Tom; Michalowski, Julie; Cromwell, David A

    2017-06-09

    The International Classification of Diseases 10th Revision (ICD-10) system used in the English hospital administrative database (Hospital Episode Statistics (HES)) does not contain a specific code for oesophageal high-grade dysplasia (HGD). The aim of this paper was to examine how patients with HGD were coded in HES and whether it was done consistently. National population-based cohort study of patients newly diagnosed with HGD in England. The study used data collected prospectively as part of the National Oesophago-Gastric Cancer Audit (NOGCA). These records were linked to HES to investigate the pattern of ICD-10 codes recorded for these patients at the time of diagnosis. All patients with a new diagnosis of HGD between 1 April 2013 and 31 March 2014 in England, who had data submitted to the NOGCA. The main outcome assessed was the pattern of primary and secondary ICD-10 diagnostic codes recorded in the HES records at endoscopy at the time of diagnosis of HGD. Among 452 patients with a new diagnosis of HGD between 1 April 2013 and 31 March 2014, Barrett's oesophagus was the only condition coded in 200 (44.2%) HES records. Records for 59 patients (13.1%) contained no oesophageal conditions. The remaining 193 patients had various diagnostic codes recorded, 93 included a diagnosis of Barrett's oesophagus and 57 included a diagnosis of oesophageal/gastric cardia cancer. HES is not suitable to support national studies looking at the management of HGD. This is one reason for the UK to adopt an extended ICD system (akin to ICD-10-CM). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.

    PubMed

    Hallin, Peter F; Ussery, David W

    2004-12-12

    Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, yet comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA composition measures like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses presented on the CBS Genome Atlas web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4 to present dynamic web content for users outside the center. This solution is tightly fitted to the existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups facing similar database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/. This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
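
    One of the simpler per-genome quantities stored in such a database, the GC skew (G - C)/(G + C) computed in sliding windows, takes only a few lines; the window and step sizes here are arbitrary choices:

    ```python
    # Sliding-window GC skew along a DNA sequence.
    def gc_skew(seq: str, window: int = 1000, step: int = 500):
        skews = []
        for i in range(0, max(len(seq) - window + 1, 1), step):
            w = seq[i:i + window].upper()
            g, c = w.count("G"), w.count("C")
            skews.append((g - c) / (g + c) if g + c else 0.0)
        return skews

    print(gc_skew("ATGCGGGCCCATAT" * 200, window=500, step=250)[:4])
    ```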

  19. Cochrane pregnancy and childbirth database: resource for evidence-based practice.

    PubMed

    Callister, L C; Hobbins-Garbett, D

    2000-01-01

    The Cochrane Pregnancy and Childbirth database is an ongoing meta-analysis of evidence documenting effective health care practices for childbearing women and their neonates. It is proving invaluable to nurse educators, researchers, clinicians, and administrators working in a variety of health care delivery settings. Evidence-based nursing practice that is safe and effective can enhance rather than overpower pivotal and celebratory life events such as childbirth.

  20. Variability in Standard Outcomes of Posterior Lumbar Fusion Determined by National Databases.

    PubMed

    Joseph, Jacob R; Smith, Brandon W; Park, Paul

    2017-01-01

    National databases are used with increasing frequency in the spine surgery literature to evaluate patient outcomes. The differences between individual databases in relation to outcomes of lumbar fusion are not known. We evaluated the variability in standard outcomes of posterior lumbar fusion between the University HealthSystem Consortium (UHC) database and the Healthcare Cost and Utilization Project National Inpatient Sample (NIS). NIS and UHC databases were queried for all posterior lumbar fusions (International Classification of Diseases, Ninth Revision code 81.07) performed in 2012. Patient demographics, comorbidities (including obesity), length of stay (LOS), in-hospital mortality, and complications such as urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, durotomy, and surgical site infection were collected using specific International Classification of Diseases, Ninth Revision codes. Analysis included 21,470 patients from the NIS database and 14,898 patients from the UHC database. Demographic data were not significantly different between databases. Obesity was more prevalent in UHC (P = 0.001). Mean LOS was 3.8 days in NIS and 4.55 days in UHC (P < 0.0001). Complications were significantly higher in UHC, including urinary tract infection, deep venous thrombosis, pulmonary embolism, myocardial infarction, surgical site infection, and durotomy. In-hospital mortality was similar between databases. NIS and UHC databases had similar demographic patient populations undergoing posterior lumbar fusion. However, the UHC database reported a significantly higher complication rate and a longer LOS. This difference may reflect academic institutions treating higher-risk patients; however, a definitive reason for the variability between databases is unknown. The inability to precisely determine the basis of the variability between databases highlights the limitations of using administrative databases for spinal outcome analysis.
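
    The complication-rate comparison described here boils down to two-proportion tests on coded events. Below is a hedged sketch of such a test, with illustrative counts rather than the study's actual figures.

      from math import sqrt, erfc

      def two_proportion_z(x1, n1, x2, n2):
          """Two-sided z-test for a difference in event proportions."""
          p1, p2 = x1 / n1, x2 / n2
          p = (x1 + x2) / (n1 + n2)             # pooled proportion
          z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
          return z, erfc(abs(z) / sqrt(2))      # two-sided p-value

      # Hypothetical complication counts for NIS (n=21,470) and UHC (n=14,898)
      z, p = two_proportion_z(215, 21470, 298, 14898)
      print(f"z = {z:.2f}, p = {p:.3g}")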

  1. A validated case definition for chronic rhinosinusitis in administrative data: a Canadian perspective.

    PubMed

    Rudmik, Luke; Xu, Yuan; Kukec, Edward; Liu, Mingfu; Dean, Stafford; Quan, Hude

    2016-11-01

    Pharmacoepidemiological research using administrative databases has become increasingly popular for chronic rhinosinusitis (CRS); however, without a validated case definition the cohort evaluated may be inaccurate, resulting in biased and incorrect outcomes. The objective of this study was to develop and validate a generalizable administrative database case definition for CRS using International Classification of Diseases, 9th edition (ICD-9)-coded claims. A random sample of 100 patients with a guideline-based diagnosis of CRS and 100 control patients were selected and then linked to a Canadian physician claims database from March 31, 2010, to March 31, 2015. The proportion of CRS ICD-9-coded claims (473.x and 471.x) for each of these 200 patients was reviewed, and the validity of 7 different ICD-9-based coding algorithms was evaluated. The CRS case definition of ≥2 claims with a CRS ICD-9 code (471.x or 473.x) within 2 years of the reference case provides a balanced validity, with a sensitivity of 77% and specificity of 79%. Applying this CRS case definition to the claims database produced a CRS cohort of 51,000 patients with characteristics that were consistent with published demographics and rates of comorbid asthma, allergic rhinitis, and depression. This study validated several coding algorithms; based on the results, a case definition of ≥2 physician claims of CRS (ICD-9 of 471.x or 473.x) within 2 years provides an optimal level of validity. Future studies will need to validate this administrative case definition from different health system perspectives and using larger retrospective chart reviews from multiple providers.
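
    The reported case definition is concrete enough to sketch directly: the function below flags a patient when two claims carrying ICD-9 471.x or 473.x fall within a 2-year window. The claim-tuple format is an assumption made for the example.

      from datetime import date, timedelta

      CRS_PREFIXES = ("471", "473")

      def meets_case_definition(claims, window_days=730):
          """claims: iterable of (service_date, icd9_code) for one patient."""
          dates = sorted(d for d, code in claims if code.startswith(CRS_PREFIXES))
          # With sorted dates, any qualifying pair implies an adjacent qualifying pair
          return any(b - a <= timedelta(days=window_days)
                     for a, b in zip(dates, dates[1:]))

      claims = [(date(2012, 3, 1), "473.0"),
                (date(2013, 9, 9), "471.1"),
                (date(2012, 5, 2), "401.9")]   # unrelated hypertension claim
      print(meets_case_definition(claims))     # True: two CRS claims ~1.5 y apart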

  2. The Difficulties of Female Primary School Administrators in the Administration Process and Solution Suggestions

    ERIC Educational Resources Information Center

    Kosar, Didem; Altunay, Esen; Yalçinkaya, Münevver

    2014-01-01

    The aim of this research is to determine the administration experiences of female administrators, identify the difficulties they encountered during the administration process, and suggest solutions based on these experiences. The qualitative method was used in this research, and data were collected via a semi-structured interview form…

  3. Epidemiologic and Economic Burden Attributable to First Spinal Fusion Surgery: Analysis From an Italian Administrative Database.

    PubMed

    Cortesi, Paolo A; Assietti, Roberto; Cuzzocrea, Fabrizio; Prestamburgo, Domenico; Pluderi, Mauro; Cozzolino, Paolo; Tito, Patrizia; Vanelli, Roberto; Cecconi, Davide; Borsa, Stefano; Cesana, Giancarlo; Mantovani, Lorenzo G

    2017-09-15

    Retrospective large population-based study. Assessment of the epidemiologic trends and economic burden of first spinal fusions. No adequate data are available regarding the epidemiology of spinal fusion surgery and its economic impact in Europe. The study population was identified through a data warehouse (DENALI), which matches clinical and economic data of different healthcare administrative databases of the Italian Lombardy Region. The study population consisted of all subjects, resident in Lombardy, who, during the period January 2001 to December 2010, underwent spinal fusion surgery (ICD-9-CM codes: 81.04, 81.05, 81.06, 81.07, and 81.08). The first procedure was used as the index event. We estimated the incidence of first spinal fusion surgery, the population and surgery characteristics, and the healthcare costs from the National Health Service's perspective. The analysis was performed for the entire population and divided into the main groups of diagnosis. The analysis identified 17,772 [mean age (SD): 54.6 (14.5) years, 55.3% females] spinal fusion surgeries. Almost 67% of the patients suffered from a lumbar degenerative disease. The incidence rate of interventions increased from 11.5 to 18.5 per 100,000 person-years between 2001 and 2006, and was above 20.0 per 100,000 person-years in the last 4 years. The patients' mean age increased during the observational time period from 48.1 to 55.9 years, whereas the median hospital length of stay reported for the index event decreased. The average cost of the spinal fusion surgery increased during the observational period, from €4726 up to €9388. The study showed an increasing incidence of spinal fusion surgery and costs from 2001 to 2010. These results can be used to better understand the epidemiological and economic burden of these interventions, and help to optimize the resources available considering the different clinical approaches accessible today. Level of Evidence: 4.
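
    For readers less used to epidemiological rates, the arithmetic behind figures such as "18.5 per 100,000 person-years" is simple; the population figure in this sketch is illustrative, not the study's actual denominator.

      def incidence_per_100k(events: int, population: int, years: float = 1.0) -> float:
          """Crude incidence rate per 100,000 person-years."""
          return events / (population * years) * 100_000

      # Illustrative: ~1,810 first fusions in one year in a population of ~9.8 million
      print(round(incidence_per_100k(1_810, 9_800_000), 1))   # -> 18.5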

  4. Prehospital Naloxone Administration as a Public Health Surveillance Tool: A Retrospective Validation Study.

    PubMed

    Lindstrom, Heather A; Clemency, Brian M; Snyder, Ryan; Consiglio, Joseph D; May, Paul R; Moscati, Ronald M

    2015-08-01

    Abuse or unintended overdose (OD) of opiates and heroin may result in prehospital and emergency department (ED) care. Prehospital naloxone use has been suggested as a surrogate marker of community opiate ODs. The study objective was to verify externally whether prehospital naloxone use is a surrogate marker of community opiate ODs by comparing Emergency Medical Services (EMS) naloxone administration records to an independent database of ED visits for opiate and heroin ODs in the same community. A retrospective chart review of prehospital and ED data from July 2009 through June 2013 was conducted. Prehospital naloxone administration data obtained from the electronic medical records (EMRs) of a large private EMS provider serving a metropolitan area were considered a surrogate marker for suspected opiate OD. Comparison data were obtained from the regional trauma/psychiatric ED that receives the majority of the OD patients. The ED maintains a de-identified database of narcotic-related visits for surveillance of narcotic use in the metropolitan area. The ED database was queried for ODs associated with opiates or heroin. Cross-correlation analysis was used to test whether prehospital naloxone administration was independent of ED visits for opiate/heroin ODs. Naloxone was administered during 1,812 prehospital patient encounters, and 1,294 ED visits for opiate/heroin ODs were identified. The distribution of patients in the prehospital and ED datasets did not differ by gender, but it did differ by race and age. The frequency of naloxone administration by prehospital providers varied directly with the frequency of ED visits for opiate/heroin ODs. A monthly increase of two ED visits for opiate-related ODs was associated with an increase of one prehospital naloxone administration (cross-correlation coefficient [CCF]=0.44; P=.0021). A monthly increase of 100 ED visits for heroin-related ODs was associated with an increase of 94 prehospital naloxone administrations (CCF=0.46; P=.0012).
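
    The analysis hinges on correlating two monthly count series. A minimal sketch of the lag-0 Pearson correlation underlying such a cross-correlation test is shown below, with toy numbers in place of the study data.

      def pearson(xs, ys):
          """Pearson correlation coefficient of two equal-length series."""
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          vx = sum((x - mx) ** 2 for x in xs)
          vy = sum((y - my) ** 2 for y in ys)
          return cov / (vx * vy) ** 0.5

      ed_visits = [22, 25, 31, 28, 35, 40, 38, 41]   # monthly opiate-OD ED visits (toy)
      naloxone = [30, 33, 36, 35, 41, 47, 44, 50]    # monthly EMS naloxone uses (toy)
      print(round(pearson(ed_visits, naloxone), 2))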

  5. New Zealand's National Landslide Database

    NASA Astrophysics Data System (ADS)

    Rosser, B.; Dellow, S.; Haubrook, S.; Glassey, P.

    2016-12-01

    Since 1780, landslides have caused an average of about 3 deaths a year in New Zealand and have cost the economy an average of at least NZ$250M/a (0.1% GDP). To understand the risk posed by landslide hazards to society, a thorough knowledge of where, when and why different types of landslides occur is vital. The main objective for establishing the database was to provide a centralised, national-scale, publicly available database to collate landslide information that could be used for landslide hazard and risk assessment. Design of a national landslide database for New Zealand required consideration of both existing landslide data stored in a variety of digital formats, and future data, yet to be collected. Pre-existing databases were developed and populated with data reflecting the needs of the landslide or hazard project, and the database structures of the time. Bringing these data into a single unified database required a new structure capable of storing and delivering data at a variety of scales and accuracies and with different attributes. A "unified data model" was developed to enable the database to hold old and new landslide data irrespective of scale and method of capture. The database contains information on landslide locations and, where available: 1) the timing of landslides and the events that may have triggered them; 2) the type of landslide movement; 3) the volume and area; 4) the source and debris tail; and 5) the impacts caused by the landslide. Information from a variety of sources including aerial photographs (and other remotely sensed data), field reconnaissance and media accounts has been collated and is presented for each landslide along with metadata describing the data sources and quality. There are currently nearly 19,000 landslide records in the database that include point locations, polygons of landslide source and deposit areas, and linear features. Several large datasets are awaiting upload which will bring the total number of landslides to
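
    One way to read the "unified data model" idea is as a single record type whose optional fields absorb data of varying completeness and capture method. The field names in this sketch are assumptions, not the schema of the New Zealand database.

      from dataclasses import dataclass, field
      from typing import Optional

      @dataclass
      class LandslideRecord:
          record_id: str
          geometry: str                       # WKT point, polygon or line
          occurred: Optional[str] = None      # ISO date, when the timing is known
          trigger: Optional[str] = None       # e.g. "rainfall", "earthquake"
          movement_type: Optional[str] = None
          volume_m3: Optional[float] = None
          source: str = "unknown"             # aerial photo, field visit, media
          metadata: dict = field(default_factory=dict)   # quality, scale, etc.

      rec = LandslideRecord("LS-00001", "POINT (175.28 -40.75)",
                            trigger="rainfall", source="aerial photograph")
      print(rec.record_id, rec.trigger)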

  6. West Virginia yellow-poplar lumber defect database

    Treesearch

    Lawrence E. Osborn; Charles J. Gatchell; Curt C. Hassler; Curt C. Hassler

    1992-01-01

    Describes the data collection methods and the format of the new West Virginia yellow-poplar lumber defect database that was developed for use with computer simulation programs. The database contains descriptions of 627 boards, totaling approximately 3,800 board feet, collected in West Virginia in grades FAS, FAS1F, No. 1 Common, No. 2A Common, and No. 2B Common. The...

  7. Hosting and publishing astronomical data in SQL databases

    NASA Astrophysics Data System (ADS)

    Galkin, Anastasia; Klar, Jochen; Riebe, Kristin; Matokevic, Gal; Enke, Harry

    2017-04-01

    In astronomy, terabytes and petabytes of data are produced by ground instruments, satellite missions and simulations. At the Leibniz-Institute for Astrophysics Potsdam (AIP) we host and publish terabytes of cosmological simulation and observational data. The public archive at AIP has now reached a size of 60 TB and continues to grow, and it has helped to produce numerous scientific papers. The web framework Daiquiri offers a dedicated web interface for each of the hosted scientific databases. Scientists all around the world run SQL queries, which include specific astrophysical functions, and obtain their desired data in reasonable time. Daiquiri supports the scientific projects by offering a number of administration tools, such as database and user management, contact messages to the staff, and support for the organization of meetings and workshops. The web pages can be customized, and the WordPress integration supports the participating scientists in maintaining the documentation and the projects' news sections.
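
    A sketch of the kind of SQL a user might submit through such a web interface; the table and column names are hypothetical, as is the commented-out HTTP call.

      # Hypothetical query against a simulation halo table exposed by the archive
      query = """
      SELECT x, y, z, mass
      FROM sim.halos            -- hypothetical database.table name
      WHERE mass > 1e13         -- astrophysically motivated mass cut
      ORDER BY mass DESC
      LIMIT 10
      """
      # A client would typically submit this through the web interface, e.g.
      #   requests.post("https://archive.example.org/query", data={"sql": query})
      print(query.strip())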

  8. Recovery issues in databases using redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.

    1993-01-01

    Redundant disk arrays provide a way of achieving rapid recovery from media failures with a relatively low storage cost for large-scale database systems requiring high availability. In this paper we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts, in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
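
    As a toy illustration of the two ingredients named above, the sketch below computes block parity by XOR and keeps two alternating ("twin") parity pages so that an update never overwrites the only valid copy. It is a simplification under those assumptions, not the paper's actual scheme.

      def xor_blocks(blocks):
          """XOR a list of equal-length byte blocks (RAID-style parity)."""
          out = bytearray(len(blocks[0]))
          for block in blocks:
              for i, byte in enumerate(block):
                  out[i] ^= byte
          return bytes(out)

      data = [b"\x01\x02", b"\x0f\x00", b"\x10\x20"]   # toy data pages
      twin = [xor_blocks(data), None]        # two ("twin") parity pages
      current = 0                            # which twin holds valid parity
      data[1] = b"\x0f\x01"                  # a data page is updated ...
      twin[1 - current] = xor_blocks(data)   # ... new parity goes to the other twin
      current = 1 - current                  # switch only after the write completes
      assert xor_blocks(data + [twin[current]]) == bytes(2)   # parity checks out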

  9. BNDB - the Biochemical Network Database.

    PubMed

    Küntzer, Jan; Backes, Christina; Blum, Torsten; Gerasch, Andreas; Kaufmann, Michael; Kohlbacher, Oliver; Lenhof, Hans-Peter

    2007-10-02

    Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.

  10. Thermodynamics of Enzyme-Catalyzed Reactions Database

    National Institute of Standards and Technology Data Gateway

    SRD 74 Thermodynamics of Enzyme-Catalyzed Reactions Database (Web, free access)   The Thermodynamics of Enzyme-Catalyzed Reactions Database contains thermodynamic data on enzyme-catalyzed reactions that have been recently published in the Journal of Physical and Chemical Reference Data (JPCRD). For each reaction the following information is provided: the reference for the data, the reaction studied, the name of the enzyme used and its Enzyme Commission number, the method of measurement, the data and an evaluation thereof.

  11. Database Access Systems.

    ERIC Educational Resources Information Center

    Dalrymple, Prudence W.; Roderer, Nancy K.

    1994-01-01

    Highlights the changes that have occurred from 1987-93 in database access systems. Topics addressed include types of databases, including CD-ROMs; enduser interface; database selection; database access management, including library instruction and use of primary literature; economic issues; database users; the search process; and improving…

  12. Using the TIGR gene index databases for biological discovery.

    PubMed

    Lee, Yuandan; Quackenbush, John

    2003-11-01

    The TIGR Gene Index web pages provide access to analyses of ESTs and gene sequences for nearly 60 species, as well as a number of resources derived from these. Each species-specific database is presented using a common format with a homepage. A variety of methods exist that allow users to search each species-specific database. Methods implemented currently include nucleotide or protein sequence queries using WU-BLAST, text-based searches using various sequence identifiers, searches by gene, tissue and library name, and searches using functional classes through Gene Ontology assignments. This protocol provides guidance for using the Gene Index Databases to extract information.

  13. A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases

    PubMed Central

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-01-01

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services and location intelligence. Trajectory data is growing massively and is semantically complex, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in several aspects, such as generation efficiency, query performance and query type. PMID:25051028
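
    A hedged sketch of the pre-grouping step the abstract describes: consecutive trajectory points are packed into a single leaf-node entry until a spatial or temporal gap is exceeded. The thresholds and the (t, x, y) point format are assumptions made for the example.

      def group_points(points, max_dt=60.0, max_dist=100.0):
          """points: list of (t, x, y) sorted by t; returns groups of points."""
          groups, current = [], [points[0]]
          for prev, pt in zip(points, points[1:]):
              dt = pt[0] - prev[0]
              dist = ((pt[1] - prev[1]) ** 2 + (pt[2] - prev[2]) ** 2) ** 0.5
              if dt > max_dt or dist > max_dist:   # gap: close the current group
                  groups.append(current)
                  current = []
              current.append(pt)
          groups.append(current)
          return groups

      track = [(0, 0, 0), (30, 40, 0), (600, 500, 500), (630, 510, 505)]
      print([len(g) for g in group_points(track)])   # -> [2, 2]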

  15. "Mr. Database" : Jim Gray and the History of Database Technologies.

    PubMed

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  16. Building An Integrated Neurodegenerative Disease Database At An Academic Health Center

    PubMed Central

    Xie, Sharon X.; Baek, Young; Grossman, Murray; Arnold, Steven E.; Karlawish, Jason; Siderowf, Andrew; Hurtig, Howard; Elman, Lauren; McCluskey, Leo; Van Deerlin, Vivianna; Lee, Virginia M.-Y.; Trojanowski, John Q.

    2010-01-01

    Background It is becoming increasingly important to study common and distinct etiologies, clinical and pathological features, and mechanisms related to neurodegenerative diseases such as Alzheimer's disease (AD), Parkinson's disease (PD), amyotrophic lateral sclerosis (ALS), and frontotemporal lobar degeneration (FTLD). These comparative studies rely on powerful database tools to quickly generate data sets that match the diverse and complementary criteria set by the studies. Methods In this paper, we present a novel Integrated NeuroDegenerative Disease (INDD) database developed at the University of Pennsylvania (Penn) through a consortium of Penn investigators. Since these investigators work on AD, PD, ALS and FTLD, this allowed us to achieve the goal of developing an INDD database for these major neurodegenerative disorders. We used Microsoft SQL Server as the platform, with its backwards-compatibility features allowing Microsoft Access to serve as a front-end client for interfacing with the database. We used PHP (Hypertext Preprocessor) to create the front-end web interface and then integrated the individual neurodegenerative disease databases using a master lookup table. We also present methods of data entry, database security, database backups, and database audit trails for this INDD database. Results We compare the results of a biomarker study using the INDD database to those obtained using an alternative approach of querying each individual database separately. Conclusions We have demonstrated that the Penn INDD database has the ability to query multiple database tables from a single console with high accuracy and reliability. The INDD database provides a powerful tool for generating data sets in comparative studies across several neurodegenerative diseases. PMID:21784346
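
    A minimal sketch of the master-lookup-table idea, assuming an illustrative schema rather than Penn's: one table maps a global patient ID to the per-disease databases holding that patient's records, so a single query can span diseases.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE master_lookup (indd_id TEXT, disease_db TEXT, local_id TEXT);
      CREATE TABLE ad_db (local_id TEXT, csf_biomarker REAL);
      CREATE TABLE pd_db (local_id TEXT, updrs_score INTEGER);
      INSERT INTO master_lookup VALUES ('P001', 'ad_db', 'A17'), ('P001', 'pd_db', 'P09');
      INSERT INTO ad_db VALUES ('A17', 412.5);
      INSERT INTO pd_db VALUES ('P09', 23);
      """)
      # One console query instead of querying each disease database separately:
      row = conn.execute("""
          SELECT m.indd_id, a.csf_biomarker, p.updrs_score
          FROM master_lookup m
          JOIN ad_db a ON m.disease_db = 'ad_db' AND a.local_id = m.local_id
          JOIN master_lookup m2 ON m2.indd_id = m.indd_id AND m2.disease_db = 'pd_db'
          JOIN pd_db p ON p.local_id = m2.local_id
      """).fetchone()
      print(row)   # ('P001', 412.5, 23)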

  17. Patent Administration by Office Computer - A Case at Mazda Motor Corporation

    NASA Astrophysics Data System (ADS)

    Kimura, Ikuo; Nakamura, Shinji

    The needs of patent administration have diversified, reflecting R&D activities under severe competition in technical development, and the volume of work, as seen in patent applications, has increased year after year. Under these circumstances it is necessary to mechanize the work so that manual operations are assisted as much as possible and patent administration is strengthened. Having introduced an office computer (CPU 512 KB, external memory 128 MB) dedicated to this purpose, the Patent Department of Mazda Motor Corporation has been constructing a patent administration database centered on the company's own patent applications, and uses it for the automatic preparation of business forms, the preparation of various statistical materials, and real-time reference to application procedures.

  18. Pathogen Research Databases

    Science.gov Websites

    Hepatitis C Virus (HCV) database project is funded by the Division of Microbiology and Infectious Diseases of the National Institute of Allergy and Infectious Diseases (NIAID). The HCV database project started as a spin-off from the HIV database project. There are two databases for HCV, a sequence database

  19. The Danish Inguinal Hernia database.

    PubMed

    Friis-Andersen, Hans; Bisgaard, Thue

    2016-01-01

    To monitor and improve nation-wide surgical outcome after groin hernia repair, based on scientifically evidence-based surgical strategies, for the national and international surgical community. Patients aged ≥18 years operated on for groin hernia. Type and size of hernia, primary or recurrent repair, type of surgical repair procedure, and mesh and mesh fixation methods. According to the Danish National Health Act, surgeons are obliged to register all hernia repairs immediately after surgery (3-minute registration time). All institutions have continuous access to their own data, stratified by individual surgeon. Registrations are based on a closed, protected internet system requiring personal codes that also identify the operating institution. A national steering committee consisting of 13 voluntary and dedicated surgeons, 11 of whom are unpaid, handles the medical management of the database. The Danish Inguinal Hernia Database comprises intraoperative data from >130,000 repairs (May 2015). A total of 49 peer-reviewed national and international publications have been based on the database (June 2015). The Danish Inguinal Hernia Database actively monitors surgical quality and contributes to the national and international surgical community's efforts to improve outcome after groin hernia repair.

  20. On patterns and re-use in bioinformatics databases.

    PubMed

    Bell, Michael J; Lord, Phillip

    2017-09-01

    As the quantity of data being deposited into biological databases continues to increase, it becomes ever more vital to develop methods that enable us to understand this data and ensure that the knowledge is correct. It is widely held that data percolates between different databases, which causes particular concerns for data correctness; if this percolation occurs, incorrect data in one database may eventually affect many others while, conversely, corrections in one database may fail to percolate to others. In this paper, we test this widely held belief by directly looking for sentence reuse both within and between databases. Further, we investigate patterns of how sentences are reused over time. Finally, we consider the limitations of this form of analysis and the implications that this may have for bioinformatics database design. We show that reuse of annotation is common within many different databases, and that there is also a detectable level of reuse between databases. In addition, we show that there are patterns of reuse that have previously been shown to be associated with percolation errors. Analytical software is available on request. phillip.lord@newcastle.ac.uk.
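
    A hedged sketch of a simple sentence-reuse scan in the spirit of the analysis described above: normalise sentences, hash them, and intersect the hash sets of two databases' annotation text. The toy data and the normalisation are assumptions; the paper's pipeline is more involved.

      import hashlib
      import re

      def sentence_hashes(text):
          """Map normalised-sentence hashes to the original sentences."""
          sentences = re.split(r"(?<=[.!?])\s+", text.strip())
          return {hashlib.sha1(s.lower().encode()).hexdigest(): s
                  for s in sentences if s}

      db_a = "Binds calcium. Involved in signal transduction."
      db_b = "Involved in signal transduction. Predicted membrane protein."
      ha, hb = sentence_hashes(db_a), sentence_hashes(db_b)
      for h in ha.keys() & hb.keys():
          print("reused:", ha[h])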