Sample records for information database procedural

  1. 16 CFR 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  2. 16 CFR 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  3. ODIN. Online Database Information Network: ODIN Policy & Procedure Manual.

    ERIC Educational Resources Information Center

    Townley, Charles T.; And Others

    Policies and procedures are outlined for the Online Database Information Network (ODIN), a cooperative of libraries in south-central Pennsylvania, which was organized to improve library services through technology. The first section covers organization and goals, members, and responsibilities of the administrative council and libraries. Patrons…

  4. 16 CFR § 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  5. 16 CFR 1102.26 - Determination of materially inaccurate information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... reviewing a report of harm or manufacturer comment, either before or after publication in the Database, may... manufacturer comment, be excluded from the Database or corrected by the Commission because it contains...

  6. 16 CFR 1102.26 - Determination of materially inaccurate information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... reviewing a report of harm or manufacturer comment, either before or after publication in the Database, may... manufacturer comment, be excluded from the Database or corrected by the Commission because it contains...

  7. 16 CFR § 1102.26 - Determination of materially inaccurate information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... PRODUCT SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... reviewing a report of harm or manufacturer comment, either before or after publication in the Database, may... manufacturer comment, be excluded from the Database or corrected by the Commission because it contains...

  8. Vision based flight procedure stereo display system

    NASA Astrophysics Data System (ADS)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual-reality flight procedure vision system is introduced in this paper. The digital flight map database is built on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS, and the area texture is generated from remote sensing photos and aerial photographs at various levels of detail. Flight navigation information is linked to the database according to the flight approach procedure, so the approach area scene can be displayed dynamically following the designed flight procedure. The approach area images are rendered in two channels, one for left-eye images and one for right-eye images; through the polarized stereoscopic projection system, pilots and aircrew get a vivid 3D view of the destination approach area. Using this system during preflight preparation gives the aircrew richer information about the destination approach area, improves the aviator's confidence before the mission, and accordingly improves flight safety. The system is also useful for validating visual flight procedure designs and assists in flight procedure design.

  9. Creating Your Own Database.

    ERIC Educational Resources Information Center

    Blair, John C., Jr.

    1982-01-01

    Outlines the important factors to be considered in selecting a database management system for use with a microcomputer and presents a series of guidelines for developing a database. General procedures, report generation, data manipulation, information storage, word processing, data entry, database indexes, and relational databases are among the…

  10. Developing an automated database for monitoring ultrasound- and computed tomography-guided procedure complications and diagnostic yield.

    PubMed

    Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M

    2014-04-01

    Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
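
    The auto-population and reporting pipeline described above can be sketched in Python; the report fields, regular expressions, and data below are invented, and stand in for the authors' actual PHP/MySQL implementation:

```python
import re
from collections import defaultdict

# Hypothetical structured-report fields; a real RIS template would differ.
COMPLICATION_RE = re.compile(r"COMPLICATIONS:\s*(None|\w[\w ]*)", re.IGNORECASE)
PATHOLOGY_RE = re.compile(r"PATHOLOGY:\s*(Diagnostic|Non-diagnostic)", re.IGNORECASE)

def parse_report(text):
    """Auto-populate one procedure-log row from a structured report."""
    comp = COMPLICATION_RE.search(text)
    path = PATHOLOGY_RE.search(text)
    return {
        "complication": bool(comp and comp.group(1).lower() != "none"),
        "diagnostic": bool(path and path.group(1).lower() == "diagnostic"),
    }

def summarize(rows):
    """Report complication rate and diagnostic yield by operator."""
    stats = defaultdict(lambda: {"n": 0, "comp": 0, "dx": 0})
    for operator, row in rows:
        s = stats[operator]
        s["n"] += 1
        s["comp"] += row["complication"]
        s["dx"] += row["diagnostic"]
    return {op: {"complication_rate": s["comp"] / s["n"],
                 "diagnostic_yield": s["dx"] / s["n"]}
            for op, s in stats.items()}

reports = [
    ("dr_a", "PROCEDURE: CT-guided biopsy\nCOMPLICATIONS: None\nPATHOLOGY: Diagnostic"),
    ("dr_a", "PROCEDURE: US-guided biopsy\nCOMPLICATIONS: Pneumothorax\nPATHOLOGY: Non-diagnostic"),
]
rows = [(op, parse_report(text)) for op, text in reports]
print(summarize(rows))
```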

  11. 16 CFR 1102.28 - Publication of reports of harm.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Procedural..., the Commission will publish reports of harm that meet the requirements for publication in the Database...(d) in the Database beyond the 10-business-day time frame set forth in paragraph (a) of this section...

  12. 16 CFR § 1102.30 - Publication of manufacturer comments.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural....26, the Commission will publish in the Database manufacturer comments submitted in response to a...

  13. A Remote Registration Based on MIDAS

    NASA Astrophysics Data System (ADS)

    JIN, Xin

    2017-04-01

    Software registration is often needed to protect the interests of software developers. This article describes a remote software registration technique. The registration method is as follows: registration information is stored in a database table; when the program starts, it checks the table, and if the software has been registered the program runs normally. Otherwise, the customer must enter a serial number, which is registered over the network on the remote server. If registration succeeds, the registration information is recorded in the database table. This remote registration method protects the rights of software developers.
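
    The registration flow described in the abstract can be sketched in Python with an in-memory SQLite table; the table layout and the serial-validation rule are illustrative assumptions:

```python
import sqlite3

# Stand-in for the remote server's serial-number check (purely illustrative).
VALID_SERIALS = {"ABC-123", "XYZ-789"}

def is_registered(conn):
    # On startup, the program checks the registration table.
    conn.execute("CREATE TABLE IF NOT EXISTS registration (serial TEXT PRIMARY KEY)")
    return conn.execute("SELECT COUNT(*) FROM registration").fetchone()[0] > 0

def register(conn, serial):
    # "Remote" validation; on success, record the registration locally.
    if serial not in VALID_SERIALS:
        return False
    conn.execute("INSERT OR IGNORE INTO registration VALUES (?)", (serial,))
    conn.commit()
    return True

conn = sqlite3.connect(":memory:")
assert not is_registered(conn)        # first start: not yet registered
assert not register(conn, "BAD-000")  # rejected serial
assert register(conn, "ABC-123")      # accepted and recorded
assert is_registered(conn)            # subsequent starts run normally
```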

  14. A “Cookbook” Cost Analysis Procedure for Medical Information Systems*

    PubMed Central

    Torrance, Janice L.; Torrance, George W.; Covvey, H. Dominic

    1983-01-01

    A costing procedure for medical information systems is described. The procedure incorporates state-of-the-art costing methods in an easy-to-follow “cookbook” format. Application of the procedure consists of filling out a series of Mac-Tor EZ-Cost forms. The procedure and forms have been field tested by application to a cardiovascular database system. This article describes the major features of the costing procedure. The forms and other details are available upon request.

  15. SLIMMER--A UNIX System-Based Information Retrieval System.

    ERIC Educational Resources Information Center

    Waldstein, Robert K.

    1988-01-01

    Describes an information retrieval system developed at Bell Laboratories to create and maintain a variety of different but interrelated databases, and to provide controlled access to these databases. The components discussed include the interfaces, indexing rules, display languages, response time, and updating procedures of the system. (6 notes…

  16. Relational Database Design in Information Science Education.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1985-01-01

    Reports on database management system (dbms) applications designed by library school students for the university community at the University of Iowa. Three dbms design issues are examined: synthesis of relations, analysis of relations (normalization procedure), and data dictionary usage. Database planning prior to automation using data dictionary approach…

  17. 48 CFR 204.1103 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) database. (1) On contract award documents, use the contractor's legal or “doing business as” name and physical address information as recorded in the (SAM) database at the time of award. (2) When making a... database; and (ii) The contractor's Data Universal Numbering System (DUNS) number, Commercial and...

  18. 48 CFR 204.1103 - Procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) database. (1) On contract award documents, use the contractor's legal or “doing business as” name and physical address information as recorded in the (SAM) database at the time of award. (2) When making a... database; and (ii) The contractor's Data Universal Numbering System (DUNS) number, Commercial and...

  19. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  20. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  21. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  22. 48 CFR 204.1103 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...

  23. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  24. 48 CFR 204.1103 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...

  25. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  26. 48 CFR 204.1103 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... information as recorded in the Central Contractor Registration (CCR) database at the time of award. (2) When... record is active in the CCR database; and (ii) The contractor's Data Universal Numbering System (DUNS... database, the contracting officer shall process a novation or change-of-name agreement, or an address...

  27. Designing a Zoo-Based Endangered Species Database.

    ERIC Educational Resources Information Center

    Anderson, Christopher L.

    1989-01-01

    Presented is a class activity that uses the database feature of the AppleWorks program to create a database from which students may study endangered species. The use of a local zoo as a base of information about the animals is suggested. Procedures and follow-up activities are included. (CW)

  28. 16 CFR 1102.28 - Publication of reports of harm.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural Requirements § 1102.28... publish reports of harm that meet the requirements for publication in the Database. The Commission will... Commission may publish a report of harm that meets the requirements of § 1102.10(d) in the Database beyond...

  29. 3 CFR - Enhancing Payment Accuracy Through a “Do Not Pay List”

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... are not made. Agencies maintain many databases containing information on a recipient's eligibility to... databases before making payments or awards, agencies can identify ineligible recipients and prevent certain... pre-payment and pre-award procedures and ensure that a thorough review of available databases with...

  30. 16 CFR 1102.28 - Publication of reports of harm.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural Requirements § 1102.28... publish reports of harm that meet the requirements for publication in the Database. The Commission will... Commission may publish a report of harm that meets the requirements of § 1102.10(d) in the Database beyond...

  31. Critical care procedure logging using handheld computers

    PubMed Central

    Carlos Martinez-Motta, J; Walker, Robin; Stewart, Thomas E; Granton, John; Abrahamson, Simon; Lapinsky, Stephen E

    2004-01-01

    Introduction We conducted this study to evaluate the feasibility of implementing an internet-linked handheld computer procedure logging system in a critical care training program. Methods Subspecialty trainees in the Interdepartmental Division of Critical Care at the University of Toronto received and were trained in the use of Palm handheld computers loaded with a customized program for logging critical care procedures. The procedures were entered into the handheld device using checkboxes and drop-down lists, and data were uploaded to a central database via the internet. To evaluate the feasibility of this system, we tracked the utilization of this data collection system. Benefits and disadvantages were assessed through surveys. Results All 11 trainees successfully uploaded data to the central database, but only six (55%) continued to upload data on a regular basis. The most common reason cited for not using the system pertained to initial technical problems with data uploading. From 1 July 2002 to 30 June 2003, a total of 914 procedures were logged. Significant variability was noted in the number of procedures logged by individual trainees (range 13–242). The database generated by regular users provided potentially useful information to the training program director regarding the scope and location of procedural training among the different rotations and hospitals. Conclusion A handheld computer procedure logging system can be effectively used in a critical care training program. However, user acceptance was not uniform, and continued training and support are required to increase user acceptance. Such a procedure database may provide valuable information that may be used to optimize trainees' educational experience and to document clinical training experience for licensing and accreditation. PMID:15469577
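
    The kind of programme-level report the central database enabled can be sketched in Python (hypothetical upload records; the field names and values are assumptions):

```python
from collections import Counter

# Hypothetical upload records: (trainee, procedure, hospital).
uploads = [
    ("t1", "central_line", "hosp_a"),
    ("t1", "intubation", "hosp_b"),
    ("t2", "central_line", "hosp_a"),
    ("t1", "chest_tube", "hosp_a"),
]

# Per-trainee counts expose variability in logging; per-site counts show the
# scope and location of procedural training across rotations and hospitals.
by_trainee = Counter(trainee for trainee, _, _ in uploads)
by_site = Counter(hospital for _, _, hospital in uploads)
print(by_trainee, by_site)
```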

  32. Use of administrative medical databases in population-based research.

    PubMed

    Gavrielov-Yusim, Natalie; Friger, Michael

    2014-03-01

    Administrative medical databases are massive repositories of data collected in healthcare for various purposes. Such databases are maintained in hospitals, health maintenance organisations and health insurance organisations. Administrative databases may contain medical claims for reimbursement, records of health services, medical procedures, prescriptions, and diagnosis information. It is clear that such systems may provide a valuable variety of clinical and demographic information as well as an on-going process of data collection. In general, data collection in these databases is not initially designed or planned for research purposes. Nonetheless, administrative databases may be used as a robust research tool. In this article, we address the subject of public health research that employs administrative data. We discuss the biases and the limitations of such research, as well as other important epidemiological and biostatistical key points specific to administrative database studies.

  33. 16 CFR § 1102.28 - Publication of reports of harm.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural Requirements § 1102.28... publish reports of harm that meet the requirements for publication in the Database. The Commission will... Commission may publish a report of harm that meets the requirements of § 1102.10(d) in the Database beyond...

  34. Electronic data collection for clinical trials using tablet and handheld PCs

    NASA Astrophysics Data System (ADS)

    Alaoui, Adil; Vo, Minh; Patel, Nikunj; McCall, Keith; Lindisch, David; Watson, Vance; Cleary, Kevin

    2005-04-01

    This paper describes a system that uses electronic forms to collect patient and procedure data for clinical trials. During clinical trials, patients are typically required to provide background information such as demographics and medical history, as well as review and complete any consent forms. Physicians or their assistants then usually have additional forms for recording technical data from the procedure and for gathering follow-up information from patients after completion of the procedure. This approach can lead to substantial amounts of paperwork to collect and manage over the course of a clinical trial with a large patient base. By using e-forms instead, data can be transmitted to a single, centralized database, reducing the problem of managing paper forms. Additionally, the system can provide a means for relaying information from the database to the physician on his/her portable wireless device, such as to alert the physician when a patient has completed the pre-procedure forms and is ready to begin the procedure. This feature could improve the workflow in busy clinical practices. In the future, the system could be expanded so physicians could use their portable wireless device to pull up entire hospital records and view other pre-procedure data and patient images.
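
    The completion-alert idea described above can be sketched in Python; the form names and the readiness rule are assumptions:

```python
# When all required pre-procedure e-forms for a patient have reached the
# central store, the physician's device can be notified that the patient is
# ready. Form names here are invented placeholders.
REQUIRED_FORMS = {"demographics", "medical_history", "consent"}

def submit(store, patient, form):
    """Record a submitted form; return True once the patient is ready."""
    store.setdefault(patient, set()).add(form)
    return REQUIRED_FORMS <= store[patient]

store = {}
assert not submit(store, "pt42", "demographics")
assert not submit(store, "pt42", "consent")
assert submit(store, "pt42", "medical_history")  # last form completes the set
```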

  35. Use of diagnostic information submitted to the United Kingdom Central Cardiac Audit Database: development of categorisation and allocation algorithms.

    PubMed

    Brown, Kate L; Crowe, Sonya; Pagel, Christina; Bull, Catherine; Muthialu, Nagarajan; Gibbs, John; Cunningham, David; Utley, Martin; Tsang, Victor T; Franklin, Rodney

    2013-08-01

    To categorise records according to primary cardiac diagnosis in the United Kingdom Central Cardiac Audit Database, in order to add this information to a risk adjustment model for paediatric cardiac surgery, codes from the International Paediatric Congenital Cardiac Code were mapped both to recognisable primary cardiac diagnosis groupings allocated using a hierarchy, and to less refined diagnosis groups based on the number of functional ventricles and the presence of aortic obstruction. The setting was a national clinical audit database; the patients were children undergoing cardiac interventions. The proportions for each diagnosis scheme are presented for 13,551 first patient surgical episodes since 2004. In Scheme 1, the most prevalent diagnoses nationally were ventricular septal defect (13%), patent ductus arteriosus (10.4%), and tetralogy of Fallot (9.5%). In Scheme 2, the prevalence of a biventricular heart without aortic obstruction was 64.2% and with aortic obstruction was 14.1%; the prevalence of a functionally univentricular heart without aortic obstruction was 4.3% and with aortic obstruction was 4.7%; the prevalence of an unknown (ambiguous) number of ventricles was 8.4%; and the prevalence of acquired heart disease only was 2.2%. Diagnostic groups added to the procedural information: of the 17% of all operations classed as "not a specific procedure", 97.1% had a diagnosis identified in Scheme 1 and 97.2% in Scheme 2. Diagnostic information adds to surgical procedural data when the complexity of case mix is analysed in a national database. These diagnostic categorisation schemes may be used for future investigation of the frequency of conditions and evaluation of long-term outcome over a series of procedures.
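
    The hierarchical allocation described above can be sketched in Python; the group names and code values below are invented placeholders, not actual International Paediatric Congenital Cardiac Code entries:

```python
# Each record may carry several diagnosis codes; the highest-priority group
# that matches any code wins. Hierarchy order and codes are illustrative.
HIERARCHY = [  # highest priority first
    ("tetralogy_of_fallot", {"01.01.01"}),
    ("ventricular_septal_defect", {"07.10.00"}),
    ("patent_ductus_arteriosus", {"09.27.21"}),
]

def allocate(codes):
    """Allocate a record to the first (highest-priority) matching group."""
    for group, members in HIERARCHY:
        if members & set(codes):
            return group
    return "not_allocated"

assert allocate(["07.10.00", "09.27.21"]) == "ventricular_septal_defect"
assert allocate(["09.27.21"]) == "patent_ductus_arteriosus"
assert allocate(["99.99.99"]) == "not_allocated"
```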

  36. Evolution of a Patient Information Management System in a Local Area Network Environment at Loyola University of Chicago Medical Center

    PubMed Central

    Price, Ronald N; Chandrasekhar, Arcot J; Tamirisa, Balaji

    1990-01-01

    The Department of Medicine at Loyola University Medical Center (LUMC) of Chicago has implemented a local area network (LAN) based Patient Information Management System (PIMS) as part of its integrated departmental database management system. PIMS consists of related database applications encompassing demographic information, current medications, problem lists, clinical data, prior events, and on-line procedure results. Integration into the existing departmental database system permits PIMS to capture and manipulate data in other departmental applications. Standardization of clinical data is accomplished through three data tables that verify diagnosis codes, procedure codes, and a standardized set of clinical data elements. The modularity of the system, coupled with standardized data formats, allowed the development of a Patient Information Protocol System (PIPS). PIPS, a user-definable protocol processor, provides physicians with individualized data entry or review screens customized for their specific research protocols or practice habits. Physician feedback indicates that the PIMS/PIPS combination enhances their ability to collect and review specific patient information by filtering large amounts of clinical data.
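
    The table-driven standardisation described above can be sketched in Python; the verification-table contents are invented placeholders:

```python
# An entry is accepted only if its diagnosis code, procedure code, and
# clinical data element all appear in the verification tables. Values here
# are illustrative, not the LUMC tables.
DIAGNOSIS_CODES = {"410.1", "428.0"}
PROCEDURE_CODES = {"88.72", "37.22"}
CLINICAL_ELEMENTS = {"bp_systolic", "heart_rate"}

def validate_entry(entry):
    """Check one data-entry row against the three standardisation tables."""
    return (entry["dx"] in DIAGNOSIS_CODES
            and entry["proc"] in PROCEDURE_CODES
            and entry["element"] in CLINICAL_ELEMENTS)

assert validate_entry({"dx": "410.1", "proc": "88.72", "element": "heart_rate"})
assert not validate_entry({"dx": "999.9", "proc": "88.72", "element": "heart_rate"})
```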

  37. Validating abortion procedure coding in Canadian administrative databases.

    PubMed

    Samiedaluie, Saied; Peterson, Sandra; Brant, Rollin; Kaczorowski, Janusz; Norman, Wendy V

    2016-07-12

    The British Columbia (BC) Ministry of Health collects abortion procedure data in the Medical Services Plan (MSP) physician billings database and in the hospital information Discharge Abstracts Database (DAD). Our study seeks to validate abortion procedure coding in these databases. Two randomized controlled trials enrolled a cohort of 1031 women undergoing abortion. The researcher-collected database includes both enrollment and follow-up chart-review data. The study cohort was linked to MSP and DAD data to identify all abortion events captured in the administrative databases. We compared clinical chart data on abortion procedures with health administrative data. We considered a match to occur if an abortion-related code was found in administrative data within 30 days of the date of the same event documented in a clinical chart. Among 1158 abortion events performed during the enrollment and follow-up periods, 99.1 % were found in at least one of the administrative data sources. The sensitivities for the two databases, evaluated using a gold standard, were 97.7 % (95 % confidence interval (CI): 96.6-98.5) for the MSP database and 91.9 % (95 % CI: 90.0-93.4) for the DAD. Abortion events coded in the BC health administrative databases are highly accurate. Single-payer health administrative databases at the provincial level in Canada have the potential to offer valid data reflecting abortion events. ClinicalTrials.gov Identifier NCT01174225 , Current Controlled Trials ISRCTN19506752 .
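
    The 30-day matching rule used to define a capture can be sketched in Python (invented dates; the actual study compared chart events against MSP and DAD records):

```python
from datetime import date

# An abortion event counts as captured if an abortion-related record appears
# in the administrative database within 30 days of the charted date.
def is_captured(chart_date, admin_dates, window_days=30):
    return any(abs((d - chart_date).days) <= window_days for d in admin_dates)

chart_events = [date(2012, 3, 1), date(2012, 6, 15), date(2012, 9, 1)]
admin_records = [date(2012, 3, 10), date(2012, 6, 20)]  # third event missed

captured = sum(is_captured(e, admin_records) for e in chart_events)
sensitivity = captured / len(chart_events)
assert abs(sensitivity - 2 / 3) < 1e-9
```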

  38. 16 CFR 1102.20 - Transmission of reports of harm to the identified manufacturer or private labeler.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... INFORMATION DATABASE Procedural Requirements § 1102.20 Transmission of reports of harm to the identified..., provided such report meets the minimum requirements for publication in the Database, to the manufacturer or... harm, or otherwise, then it will not post the report of harm on the Database but will maintain the...

  39. 16 CFR 1102.20 - Transmission of reports of harm to the identified manufacturer or private labeler.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... INFORMATION DATABASE Procedural Requirements § 1102.20 Transmission of reports of harm to the identified..., provided such report meets the minimum requirements for publication in the Database, to the manufacturer or... harm, or otherwise, then it will not post the report of harm on the Database but will maintain the...

  40. Using linked administrative and disease-specific databases to study end-of-life care on a population level.

    PubMed

    Maetens, Arno; De Schreye, Robrecht; Faes, Kristof; Houttekier, Dirk; Deliens, Luc; Gielen, Birgit; De Gendt, Cindy; Lusyne, Patrick; Annemans, Lieven; Cohen, Joachim

    2016-10-18

    The use of full-population databases is under-explored to study the use, quality and costs of end-of-life care. Using the case of Belgium, we explored: (1) which full-population databases provide valid information about end-of-life care, (2) what procedures exist for using these databases, and (3) what is needed to integrate separate databases. Technical and privacy-related aspects of linking and accessing Belgian administrative databases and disease registries were assessed in cooperation with the database administrators and privacy commission bodies. For all relevant databases, we followed procedures in cooperation with database administrators to link the databases and to access the data. We identified several databases as fitting for end-of-life care research in Belgium: the InterMutualistic Agency's national registry of health care claims data, the Belgian Cancer Registry including data on incidence of cancer, and databases administrated by Statistics Belgium including data from the death certificate database, the socio-economic survey and fiscal data. To obtain access to the data, approval was required from all database administrators, supervisory bodies and two separate national privacy bodies. Two Trusted Third Parties linked the databases via a deterministic matching procedure using multiple encrypted social security numbers. In this article we describe how various routinely collected population-level databases and disease registries can be accessed and linked to study patterns in the use, quality and costs of end-of-life care in the full population and in specific diagnostic groups.
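
    The deterministic matching step described above can be sketched in Python; the keyed-hash scheme, key, and records below are illustrative assumptions, not the actual Trusted Third Party procedure:

```python
import hashlib
import hmac

# Each source pseudonymises its identifier with the same keyed hash, so
# records join on the pseudonym without exposing the raw social security
# number. Key, identifier format, and records are invented.
KEY = b"ttp-secret"

def pseudonym(ssn):
    return hmac.new(KEY, ssn.encode(), hashlib.sha256).hexdigest()

claims = {pseudonym("750101-123-45"): {"gp_visits": 12}}
cancer_registry = {pseudonym("750101-123-45"): {"diagnosis": "C34"}}

# Deterministic linkage: join the two sources on the shared pseudonym.
linked = {p: {**claims[p], **cancer_registry[p]}
          for p in claims.keys() & cancer_registry.keys()}
assert len(linked) == 1
```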

  41. Systemic inaccuracies in the National Surgical Quality Improvement Program database: Implications for accuracy and validity for neurosurgery outcomes research.

    PubMed

    Rolston, John D; Han, Seunggu J; Chang, Edward F

    2017-03-01

    The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) provides a rich database of North American surgical procedures and their complications. Yet no external source has validated the accuracy of the information within this database. Using records from the 2006 to 2013 NSQIP database, we used two methods to identify errors: (1) mismatches between the Current Procedural Terminology (CPT) code that was used to identify the surgical procedure, and the International Classification of Diseases (ICD-9) post-operative diagnosis: i.e., a diagnosis that is incompatible with a certain procedure. (2) Primary anesthetic and CPT code mismatching: i.e., anesthesia not indicated for a particular procedure. Analyzing data for movement disorders, epilepsy, and tumor resection, we found evidence of CPT code and postoperative diagnosis mismatches in 0.4-100% of cases, depending on the CPT code examined. When analyzing anesthetic data from brain tumor, epilepsy, trauma, and spine surgery, we found evidence of miscoded anesthesia in 0.1-0.8% of cases. National databases like NSQIP are an important tool for quality improvement. Yet all databases are subject to errors, and measures of internal consistency show that errors affect up to 100% of case records for certain procedures in NSQIP. Steps should be taken to improve data collection on the frontend of NSQIP, and also to ensure that future studies with NSQIP take steps to exclude erroneous cases from analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
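
    The internal-consistency checks described above can be illustrated with a small Python sketch; the compatibility tables and code names are invented placeholders, not real CPT/ICD-9 mappings:

```python
# Flag cases whose procedure code is incompatible with the post-operative
# diagnosis or with the recorded primary anesthetic.
COMPATIBLE_DX = {"CPT_A": {"DX_1", "DX_2"}, "CPT_B": {"DX_3"}}
COMPATIBLE_ANES = {"CPT_A": {"general"}, "CPT_B": {"general", "mac"}}

def flag_errors(case):
    """Return the list of internal-consistency errors for one case record."""
    errors = []
    if case["dx"] not in COMPATIBLE_DX.get(case["cpt"], set()):
        errors.append("cpt_dx_mismatch")
    if case["anesthetic"] not in COMPATIBLE_ANES.get(case["cpt"], set()):
        errors.append("cpt_anesthesia_mismatch")
    return errors

cases = [
    {"cpt": "CPT_A", "dx": "DX_1", "anesthetic": "general"},  # clean record
    {"cpt": "CPT_A", "dx": "DX_3", "anesthetic": "mac"},      # both mismatched
]
flagged = [flag_errors(c) for c in cases]
assert flagged[0] == []
assert flagged[1] == ["cpt_dx_mismatch", "cpt_anesthesia_mismatch"]
```

    Erroneous cases identified this way could then be excluded before analysis, as the authors recommend.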

  42. A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network

    NASA Astrophysics Data System (ADS)

    Lussana, C.; Ranci, M.; Uboldi, F.

    2012-04-01

    In the operational context of a local weather service, data accessibility and quality related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities, and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of: precipitation amount, temperature, wind, relative humidity, pressure, global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and Cross-Validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based PHP applications.
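
    The ADQC decision step described above can be sketched in Python; the thresholds and the three checks below are illustrative assumptions rather than the operational configuration:

```python
# Run independent checks on a raw hourly value and combine their verdicts.
def range_test(value, lo=-40.0, hi=50.0):        # plausible air temperature
    return lo <= value <= hi

def step_test(value, previous, max_jump=10.0):   # temporal consistency
    return abs(value - previous) <= max_jump

def spatial_test(value, interpolated, tol=5.0):  # vs. an interpolated analysis
    return abs(value - interpolated) <= tol

def adqc(value, previous, interpolated):
    """Combine independent test results into a single quality verdict."""
    passed = [range_test(value), step_test(value, previous),
              spatial_test(value, interpolated)]
    if all(passed):
        return "good"
    if not any(passed):
        return "bad"
    return "suspect"                             # flagged for a human operator

assert adqc(21.3, 20.8, 22.0) == "good"
assert adqc(85.0, 21.0, 22.0) == "bad"
assert adqc(30.5, 25.0, 22.0) == "suspect"
```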

  3. 16 CFR 1102.20 - Transmission of reports of harm to the identified manufacturer or private labeler.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... INFORMATION DATABASE (Eff. Jan. 10, 2011) Procedural Requirements § 1102.20 Transmission of reports of harm to... report of harm, provided such report meets the minimum requirements for publication in the Database, to... labeler is from the report of harm, or otherwise, then it will not post the report of harm on the Database...

  4. 16 CFR § 1102.20 - Transmission of reports of harm to the identified manufacturer or private labeler.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SAFETY INFORMATION DATABASE Procedural Requirements § 1102.20 Transmission of reports of harm to the... of harm, provided such report meets the minimum requirements for publication in the Database, to the... report of harm, or otherwise, then it will not post the report of harm on the Database but will maintain...

  5. COMPILATION OF SATURATED AND UNSATURATED ZONE MODELING SOFTWARE (EPA/600/SR-96/009)

    EPA Science Inventory

    The study reflects the ongoing groundwater modeling information collection and processing activities at the International Ground Water Modeling Center (IGWMC). The full report briefly discusses the information acquisition and processing procedures, the MARS information database, ...

  6. Enabling heterogenous multi-scale database for emergency service functions through geoinformation technologies

    NASA Astrophysics Data System (ADS)

    Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.

    2014-11-01

    Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet-based systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. A spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in the form of raster and vector layers linked with non-spatial information. Comprehensive data are required to handle an emergency situation in its different phases. These database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic, and infrastructure layers, needed to handle disasters. Disaster-specific information is required to handle a particular disaster situation such as flood, cyclone, forest fire, earthquake, landslide, or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of suitable resolution for visualisation and analysis in disaster management. The database is therefore interconnected and comprehensive, to meet the requirements of Emergency Management. This kind of integrated, comprehensive and structured database with appropriate information is required to deliver the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as availability of data, sharing policies, compatible geospatial standards, and data interoperability.
Therefore, to facilitate using, sharing, and integrating spatial data, there is a need to define standards for building emergency database systems. These include aspects such as i) data integration procedures, namely a standard coding scheme, schema, metadata format, and spatial format; ii) a database organisation mechanism covering data management, catalogues, and data models; and iii) database dissemination through a suitable environment as a standard service for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale, multi-source data for effective emergency response using customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.

  7. Knowledge Discovery from Databases: An Introductory Review.

    ERIC Educational Resources Information Center

    Vickery, Brian

    1997-01-01

    Introduces new procedures being used to extract knowledge from databases and discusses rationales for developing knowledge discovery methods. Methods are described for such techniques as classification, clustering, and the detection of deviations from pre-established norms. Examines potential uses of knowledge discovery in the information field.…

  8. Protein Information Resource: a community resource for expert annotation of protein data

    PubMed Central

    Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy

    2001-01-01

    The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041

  9. Development and use of an Operational Procedure Information System (OPIS) for future space missions

    NASA Technical Reports Server (NTRS)

    Illmer, N.; Mies, L.; Schoen, A.; Jain, A.

    1994-01-01

    An MS-Windows-based electronic procedure system, called OPIS (Operational Procedure Information System), was developed. The system consists of two parts: the editor, for 'writing' the procedures, and the notepad application, for the use of the procedures by the crew during training and flight. The system is based on a standardized, structured procedure format and language. It allows the embedding of sketches, photos, animated graphics and video sequences, and access to off-nominal procedures by linkage to an appropriate database. The system facilitates work with procedures of different degrees of detail, depending on the training status of the crew. The development of a 'language module' for the automatic translation of the procedures, for example into Russian, is planned.

  10. [Establishment of a regional pelvic trauma database in Hunan Province].

    PubMed

    Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua

    2017-04-28

    To establish a database for pelvic trauma in Hunan Province, and to start the work of a multicenter pelvic trauma registry.
 Methods: To establish the database, the literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was reviewed, and the actual situation of pelvic trauma rescue in Hunan Province was considered. The database for pelvic trauma was established based on PostgreSQL and the advanced programming language Java 1.6.
 Results: The complex procedure for pelvic trauma rescue was described structurally. The contents for the database included general patient information, injurious condition, prehospital rescue, conditions in admission, treatment in hospital, status on discharge, diagnosis, classification, complication, trauma scoring and therapeutic effect. The database can be accessed through the internet by browser/servicer. The functions for the database include patient information management, data export, history query, progress report, video-image management and personal information management.
 Conclusion: A database covering the whole life cycle of pelvic trauma care has been successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.
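    The relational layout the abstract describes (patient information, injury classification, trauma scoring, discharge status) might look roughly as follows. This is an illustrative guess, not the actual schema: all field names are invented, and Python's sqlite3 stands in for the PostgreSQL back end the authors used.

```python
import sqlite3

# In-memory stand-in for the registry; invented columns mirror the
# content areas listed in the abstract.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    patient_id INTEGER PRIMARY KEY,
    age        INTEGER,
    admitted   TEXT
);
CREATE TABLE trauma_record (
    record_id             INTEGER PRIMARY KEY,
    patient_id            INTEGER REFERENCES patient(patient_id),
    injury_classification TEXT,  -- e.g. a pelvic fracture class
    trauma_score          REAL,  -- e.g. an ISS-style severity score
    prehospital_rescue    TEXT,
    discharge_status      TEXT
);
""")
conn.execute("INSERT INTO patient VALUES (1, 42, '2017-04-01')")
conn.execute("""INSERT INTO trauma_record
                VALUES (1, 1, 'Tile B', 29.0, 'ambulance', 'recovered')""")

# History query: severe injuries, joining patient and trauma tables.
row = conn.execute("""SELECT p.age, t.injury_classification, t.trauma_score
                      FROM patient p JOIN trauma_record t USING (patient_id)
                      WHERE t.trauma_score > 15""").fetchone()
print(row)  # -> (42, 'Tile B', 29.0)
```

    A browser/server front end such as the one described would issue queries like this last one on behalf of authenticated users.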

  11. WaveNet: A Web-Based Metocean Data Access, Processing, and Analysis Tool. Part 3 - CDIP Database

    DTIC Science & Technology

    2014-06-01

    and Analysis Tool; Part 3 – CDIP Database by Zeki Demirbilek, Lihwa Lin, and Derek Wilson PURPOSE: This Coastal and Hydraulics Engineering...Technical Note (CHETN) describes coupling of the Coastal Data Information Program (CDIP) database to WaveNet, the first module of MetOcnDat (Meteorological...provides a step-by-step procedure to access, process, and analyze wave and wind data from the CDIP database. BACKGROUND: WaveNet addresses a basic

  12. The Marshall Islands Data Management Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoker, A.C.; Conrado, C.L.

    1995-09-01

    This report is a resource document of the methods and procedures used currently in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation and reduction. The usefulness of scientific databases involves careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases, and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analyses, sample information and statistical results into a readily accessible form is critical to our project.

  13. Database for vertigo.

    PubMed

    Kentala, E; Pyykkö, I; Auramo, Y; Juhola, M

    1995-03-01

    An interactive database has been developed to assist the diagnostic procedure for vertigo and to store the data. The database offers a possibility to split and reunite the collected information when needed. It contains detailed information about a patient's history, symptoms, and findings in otoneurologic, audiologic, and imaging tests. The symptoms are classified into sets of questions on vertigo (including postural instability), hearing loss and tinnitus, and provoking factors. Confounding disorders are screened. The otoneurologic tests involve saccades, smooth pursuit, posturography, and a caloric test. In addition, findings from specific antibody tests, clinical neurotologic tests, magnetic resonance imaging, brain stem audiometry, and electrocochleography are included. The input information can be applied to workups for vertigo in an expert system called ONE. The database assists its user in that the input of information is easy. It not only can be used for diagnostic purposes but is also beneficial for research, and in combination with the expert system, it provides a tutorial guide for medical students.

  14. Development of a diagnosis- and procedure-based risk model for 30-day outcome after pediatric cardiac surgery.

    PubMed

    Crowe, Sonya; Brown, Kate L; Pagel, Christina; Muthialu, Nagarajan; Cunningham, David; Gibbs, John; Bull, Catherine; Franklin, Rodney; Utley, Martin; Tsang, Victor T

    2013-05-01

    The study objective was to develop a risk model incorporating diagnostic information to adjust for case-mix severity during routine monitoring of outcomes for pediatric cardiac surgery. Data from the Central Cardiac Audit Database for all pediatric cardiac surgery procedures performed in the United Kingdom between 2000 and 2010 were included: 70% for model development and 30% for validation. Units of analysis were 30-day episodes after the first surgical procedure. We used logistic regression for 30-day mortality. Risk factors considered included procedural information based on Central Cardiac Audit Database "specific procedures," diagnostic information defined by 24 "primary" cardiac diagnoses and "univentricular" status, and other patient characteristics. Of the 27,140 30-day episodes in the development set, 25,613 were survivals, 834 were deaths, and 693 were of unknown status (mortality, 3.2%). The risk model includes procedure, cardiac diagnosis, univentricular status, age band (neonate, infant, child), continuous age, continuous weight, presence of non-Down syndrome comorbidity, bypass, and year of operation 2007 or later (because of decreasing mortality). A risk score was calculated for 95% of cases in the validation set (weight missing in 5%). The model discriminated well; the C-index for validation set was 0.77 (0.81 for post-2007 data). Removal of all but procedural information gave a reduced C-index of 0.72. The model performed well across the spectrum of predicted risk, but there was evidence of underestimation of mortality risk in neonates undergoing operation from 2007. The risk model performs well. Diagnostic information added useful discriminatory power. A future application is risk adjustment during routine monitoring of outcomes in the United Kingdom to assist quality assurance. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
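    A logistic risk model of the kind described combines indicator variables (procedure group, diagnosis, univentricular status, age band, and so on) into a predicted probability of 30-day mortality. The sketch below uses invented coefficients purely to show the mechanics; it is not the published model.

```python
import math

# Illustrative only: these coefficients are invented, not fitted values.
COEFS = {
    "intercept": -4.2,
    "neonate": 1.1,               # age-band indicator
    "univentricular": 0.9,
    "bypass": 0.4,
    "non_down_comorbidity": 0.6,
    "post_2007": -0.3,            # later era, lower observed mortality
}

def predicted_mortality(features):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = COEFS["intercept"] + sum(COEFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

case = {"neonate": 1, "univentricular": 1, "bypass": 1,
        "non_down_comorbidity": 0, "post_2007": 1}
print(round(predicted_mortality(case), 3))  # -> 0.109
```

    Routine monitoring then compares observed deaths against the sum of such predicted risks over a series of cases.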

  15. 32 CFR 326.5 - Responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Information Officer (CIO), NRO: (1) Ensures that NRO systems of records databases have procedures to protect... law or Executive Order which provides authority for the maintenance of information in each system of... length of time each item of information must be retained according to the NRO Records Control Schedule as...

  16. The New Zealand Tsunami Database: historical and modern records

    NASA Astrophysics Data System (ADS)

    Barberopoulou, A.; Downes, G. L.; Cochran, U. A.; Clark, K.; Scheele, F.

    2016-12-01

    A database of historical (pre-instrumental) and modern (instrumentally recorded) tsunamis that have impacted or been observed in New Zealand has been compiled and published online. New Zealand's tectonic setting, astride an obliquely convergent tectonic boundary on the Pacific Rim, means that it is vulnerable to local, regional and circum-Pacific tsunamis. Despite New Zealand's comparatively short written historical record of c. 200 years there is a wealth of information about the impact of past tsunamis. The New Zealand Tsunami Database currently has 800+ entries that describe >50 high-validity tsunamis. Sources of historical information include witness reports recorded in diaries, notes, newspapers, books, and photographs. Information on recent events comes from tide gauges and other instrumental recordings such as DART® buoys, and media of greater variety, for example, video and online surveys. The New Zealand Tsunami Database is an ongoing project with information added as further historical records come to light. Modern tsunamis are also added to the database once the relevant data for an event has been collated and edited. This paper briefly overviews the procedures and tools used in the recording and analysis of New Zealand's historical tsunamis, with emphasis on database content.

  17. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.

  18. A mobile trauma database with charge capture.

    PubMed

    Moulton, Steve; Myung, Dan; Chary, Aron; Chen, Joshua; Agarwal, Suresh; Emhoff, Tim; Burke, Peter; Hirsch, Erwin

    2005-11-01

    Charge capture plays an important role in every surgical practice. We have developed and merged a custom mobile database (DB) system with our trauma registry (TRACS), to better understand our billing methods, revenue generators, and areas for improved revenue capture. The mobile database runs on handheld devices using the Windows Compact Edition platform. The front end was written in C# and the back end is SQL. The mobile database operates as a thick client; it includes active and inactive patient lists, billing screens, hot pick lists, and Current Procedural Terminology and International Classification of Diseases, Ninth Revision code sets. Microsoft Internet Information Server provides secure data transaction services between the back ends stored on each device. Traditional, hand written billing information for three of five adult trauma surgeons was averaged over a 5-month period. Electronic billing information was then collected over a 3-month period using handheld devices and the subject software application. One surgeon used the software for all 3 months, and two surgeons used it for the latter 2 months of the electronic data collection period. This electronic billing information was combined with TRACS data to determine the clinical characteristics of the trauma patients who were and were not captured using the mobile database. Total charges increased by 135%, 148%, and 228% for each of the three trauma surgeons who used the mobile DB application. The majority of additional charges were for evaluation and management services. Patients who were captured and billed at the point of care using the mobile DB had higher Injury Severity Scores, were more likely to undergo an operative procedure, and had longer lengths of stay compared with those who were not captured. Total charges more than doubled using a mobile database to bill at the point of care. 
A subsequent comparison of TRACS data with billing information revealed a large amount of uncaptured patient revenue. Greater familiarity and broader use of mobile database technology holds the potential for even greater revenue capture.

  19. 77 FR 58383 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-20

    ...) The Kids' Inpatient Database (KID) is the only all-payer inpatient care database for children in the United States. The KID was specifically designed to permit researchers to study a broad range of conditions and procedures related to child health issues. The KID contains a sample of over 3 million...

  20. A Comprehensive Strategy to Construct In-house Database for Accurate and Batch Identification of Small Molecular Metabolites.

    PubMed

    Zhao, Xinjie; Zeng, Zhongda; Chen, Aiming; Lu, Xin; Zhao, Chunxia; Hu, Chunxiu; Zhou, Lina; Liu, Xinyu; Wang, Xiaolin; Hou, Xiaoli; Ye, Yaorui; Xu, Guowang

    2018-05-29

    Identification of metabolites is an essential step in metabolomics studies to interpret the regulatory mechanisms of pathological and physiological processes. However, it remains a major challenge in LC-MSn-based studies because of the complexity of mass spectrometry, the chemical diversity of metabolites, and the deficiency of standards databases. In this work, a comprehensive strategy is developed for accurate and batch metabolite identification in non-targeted metabolomics studies. First, a well-defined procedure was applied to generate reliable and standard LC-MS2 data, including tR, MS1 and MS2 information, under a standard operating procedure (SOP). An in-house database including about 2000 metabolites was constructed and used to identify the metabolites in non-targeted metabolic profiling by retention time calibration using internal standards, precursor ion alignment and ion fusion, auto-MS2 information extraction and selection, and database batch searching and scoring. As an application example, a pooled serum sample was analyzed to demonstrate the strategy; 202 metabolites were identified in the positive ion mode. This shows that our strategy is useful for LC-MSn-based non-targeted metabolomics studies.
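    Database batch searching with retention-time and mass tolerances, as described above, can be illustrated in miniature. The library entries, tolerances, and scoring rule below are all invented; a real workflow would also calibrate retention times against internal standards and compare MS2 spectra.

```python
# Hypothetical in-house library: (name, retention time in min, precursor m/z).
LIBRARY = [
    ("alanine",    1.8,  90.0550),
    ("creatinine", 2.1, 114.0662),
    ("caffeine",   6.4, 195.0877),
]

def match(rt_obs, mz_obs, rt_tol=0.3, ppm_tol=10.0):
    """Return library hits within both tolerances, scored 0..1
    (closer retention time and mass give a score nearer 1)."""
    hits = []
    for name, rt, mz in LIBRARY:
        ppm = abs(mz_obs - mz) / mz * 1e6
        if abs(rt_obs - rt) <= rt_tol and ppm <= ppm_tol:
            score = 1 - 0.5 * (abs(rt_obs - rt) / rt_tol + ppm / ppm_tol)
            hits.append((name, round(score, 2)))
    return sorted(hits, key=lambda h: -h[1])

print(match(2.0, 114.0660))  # -> [('creatinine', 0.75)]
```

    Batch identification then amounts to running every aligned feature of the profile through this search and keeping the top-scoring hits.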

  1. CyBy(2): a structure-based data management tool for chemical and biological data.

    PubMed

    Höck, Stefan; Riedl, Rainer

    2012-01-01

    We report the development of a powerful data management tool for chemical and biological data: CyBy(2). CyBy(2) is a structure-based information management tool used to store and visualize structural data alongside additional information such as project assignment, physical information, spectroscopic data, biological activity, functional data and synthetic procedures. The application consists of a database, an application server, used to query and update the database, and a client application with a rich graphical user interface (GUI) used to interact with the server.

  2. Concentrations of indoor pollutants database: User's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-05-01

    This manual describes the computer-based database on indoor air pollutants. This comprehensive database allows utility personnel to perform rapid searches of literature related to indoor air pollutants. Besides general information, it provides guidance for finding specific information on concentrations of indoor air pollutants. The manual includes information on installing and using the database as well as a tutorial to assist the user in becoming familiar with the procedures involved in doing bibliographic and summary section searches. The manual demonstrates how to search for information by going through a series of questions that provide search parameters such as pollutant type, year, building type, keywords (from a specific list), country, geographic region, author's last name, and title. As more and more parameters are specified, the list of references found in the data search becomes smaller and more specific to the user's needs. Appendixes list types of information that can be input into the database when making a request. The CIP database allows individual utilities to obtain information on indoor air quality based on building types and other factors in their own service territory. This information is useful for utilities with concerns about indoor air quality and the control of indoor air pollutants. The CIP database itself is distributed by the Electric Power Software Center and runs on IBM PC-compatible computers.
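    The narrowing search the manual walks through can be sketched as progressive filtering: each added parameter shrinks the reference list. The records and field names below are invented for illustration, not taken from the CIP database itself.

```python
# Toy bibliographic records with a few of the search parameters listed above.
RECORDS = [
    {"pollutant": "radon", "year": 1990, "building": "residence", "country": "US"},
    {"pollutant": "radon", "year": 1988, "building": "office",    "country": "US"},
    {"pollutant": "CO",    "year": 1990, "building": "residence", "country": "CA"},
]

def search(records, **criteria):
    """Keep only records matching every supplied search parameter;
    each extra keyword argument narrows the result further."""
    for key, value in criteria.items():
        records = [r for r in records if r.get(key) == value]
    return records

print(len(search(RECORDS, pollutant="radon")))                        # -> 2
print(len(search(RECORDS, pollutant="radon", building="residence")))  # -> 1
```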

  4. Retrospective Evaluation of the Protocol for US Army Corps of Engineers Aquatic Ecosystem Restoration Projects. Part 2. Database Content and Data Entry Guidelines

    DTIC Science & Technology

    2014-01-01

    entry and review procedures; (2) explain the various database components; (3) outline included datafields and datasets; and (4) document the...collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources...gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or

  5. Network information security in a phase III Integrated Academic Information Management System (IAIMS).

    PubMed

    Shea, S; Sengupta, S; Crosswell, A; Clayton, P D

    1992-01-01

    The developing Integrated Academic Information System (IAIMS) at Columbia-Presbyterian Medical Center provides data sharing links between two separate corporate entities, namely Columbia University Medical School and The Presbyterian Hospital, using a network-based architecture. Multiple database servers with heterogeneous user authentication protocols are linked to this network. "One-stop information shopping" implies one log-on procedure per session, not separate log-on and log-off procedures for each server or application used during a session. These circumstances provide challenges at the policy and technical levels to data security at the network level and to ensuring smooth information access for end users of these network-based services. Five activities being conducted as part of our security project are described: (1) policy development; (2) an authentication server for the network; (3) Kerberos as a tool for providing mutual authentication, encryption, and time stamping of authentication messages; (4) a prototype interface using Kerberos services to authenticate users accessing a network database server; and (5) a Kerberized electronic signature.
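    The role of time-stamped, keyed authentication messages (item 3 above) can be illustrated with a toy sketch. This is not the Kerberos protocol itself, only the underlying idea of rejecting stale or tampered messages; the key, principal name, and message format are invented.

```python
import hashlib
import hmac
import time

# Invented shared secret standing in for a Kerberos session key.
SHARED_KEY = b"session-key-from-authentication-server"
MAX_SKEW = 300  # seconds; Kerberos uses a similar clock-skew window

def make_authenticator(principal, now=None):
    """Produce (principal, timestamp, keyed digest) for one request."""
    ts = str(int(time.time() if now is None else now))
    mac = hmac.new(SHARED_KEY, f"{principal}|{ts}".encode(), hashlib.sha256)
    return principal, ts, mac.hexdigest()

def verify(principal, ts, digest, now=None):
    """Accept only fresh messages whose digest matches the shared key."""
    now = time.time() if now is None else now
    expected = hmac.new(SHARED_KEY, f"{principal}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    fresh = abs(now - int(ts)) <= MAX_SKEW  # reject stale/replayed messages
    return fresh and hmac.compare_digest(expected, digest)

p, ts, mac = make_authenticator("clinician@CPMC")
print(verify(p, ts, mac))                     # -> True
print(verify(p, ts, mac, now=int(ts) + 600))  # stale: -> False
```

    A Kerberized database server performs checks of this kind on every ticketed request, which is what allows one log-on per session across servers.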

  6. Understanding your hospital bill

    MedlinePlus

    ... use to help you find this information. They use national databases of billed medical services. You enter the name of the procedure and your zip code to find an average or estimated ... charge, you can use the information to ask for a lower fee.

  7. Procedural Pain: Systematic Review of Parent Experiences and Information Needs.

    PubMed

    Gates, Allison; Shave, Kassi; Featherstone, Robin; Buckreus, Kelli; Ali, Samina; Scott, Shannon D; Hartling, Lisa

    2018-06-01

    Parents wish to reduce their child's pain during medical procedures but may not know how to do so. We systematically reviewed the literature on parents' experiences and information needs related to managing their child's pain for common medical procedures. Of 2678 records retrieved through database searching, 5 were included. Three additional records were identified by scanning reference lists. Five studies were qualitative, and 3 were quantitative. Most took place in North America or Europe (n = 7) and described neonatal intensive care unit experiences (n = 5). Procedures included needle-related medical procedures (eg, venipuncture, phlebotomy, intravenous insertion), sutures, and wound repair and treatment, among others. Generally, parents desired being present during procedures, wanted to remain stoic for their child, and thought that information would be empowering and relieve stress but felt unsupported in taking an active role. Supporting and educating parents may empower them to lessen pain for their children while undergoing medical procedures.

  8. Networking consumer health information: bringing the patient into the medical information loop.

    PubMed

    Martin, E R; Lanier, D

    1996-04-01

    The Library of the Health Sciences at the University of Illinois at Chicago obtained a grant from the Illinois State Library to implement a statewide demonstration project that would provide consumer health information (CHI) using InfoTrac's Health Reference Center CD-ROM database. The goals of the project were to cooperate with targeted public libraries and clinics in providing CHI at the earliest point of need; to provide access to the database via a dial-up network server and a toll-free telephone number; and to work with targeted sites on database training, core CHI reference sources, and referral procedures. This paper provides background information about the project; describes the major systems and technical issues encountered; and discusses the outcomes, impact, and envisioned enhancements.

  9. Information Security Considerations for Applications Using Apache Accumulo

    DTIC Science & Technology

    2014-09-01

    Distributed File System INSCOM United States Army Intelligence and Security Command JPA Java Persistence API JSON JavaScript Object Notation MAC Mandatory... MySQL [13]. BigTable can process 20 petabytes per day [14]. High degree of scalability on commodity hardware. NoSQL databases do not rely on highly...manipulation in relational databases. NoSQL databases each have a unique programming interface that uses a lower level procedural language (e.g., Java

  10. Penile prosthesis implantation compares favorably in malpractice outcomes to other common urological procedures: findings from a malpractice insurance database.

    PubMed

    Chason, Juddson; Sausville, Justin; Kramer, Andrew C

    2009-08-01

    Some urologists choose not to offer penile prostheses because of concern over malpractice liability. The aim of this study was to assess whether urologists performing penile prosthesis surgery are placed at a greater malpractice risk. The outcome measures were the percentage of malpractice suits from prosthesis surgery and other urological procedures that result in payment, the average resulting payout from these cases, and the category of legal issue that ultimately resulted in payout. A database from the Physician Insurers Association of America, an association of malpractice insurance companies covering physicians in North America, was analyzed to quantitatively compare penile implant surgery to other urological procedures in medicolegal terms. Compared to other common urological procedures, penile implant surgery is comparable, at the lower end of the spectrum, in terms of both the percentage of malpractice suits that result in payment and the amount ultimately paid in indemnity from those cases. Additionally, issues of informed consent play the largest role in indemnities for all urological procedures, whereas surgical technique is the most important issue for prosthesis surgery. Urologists who are adequately trained in prosthetic surgery should not avoid penile implant procedures for fear of malpractice suits. A focus on communication and informed consent can greatly reduce malpractice risk for urological procedures.

  11. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information

    PubMed Central

    Kyparissiadis, Antonios; van Heuven, Walter J. B.; Pitchford, Nicola J.; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, alternative procedures for calculating syllabic measurements and stress information are needed, as is the combination of several metrics to investigate linguistic properties of the Greek language. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that more than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/. PMID:28231303

  12. GreekLex 2: A comprehensive lexical database with part-of-speech, syllabic, phonological, and stress information.

    PubMed

    Kyparissiadis, Antonios; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy

    2017-01-01

    Databases containing lexical properties on any given orthography are crucial for psycholinguistic research. In the last ten years, a number of lexical databases have been developed for Greek. However, these lack important part-of-speech information. Furthermore, alternative procedures for calculating syllabic measurements and stress information are needed, as is the combination of several metrics to investigate linguistic properties of the Greek language. To address these issues, we present a new extensive lexical database of Modern Greek (GreekLex 2) with part-of-speech information for each word and accurate syllabification and orthographic information predictive of stress, as well as several measurements of word similarity and phonetic information. The addition of detailed statistical information about Greek part-of-speech, syllabification, and stress neighbourhood allowed novel analyses of stress distribution within different grammatical categories and syllabic lengths to be carried out. Results showed that the statistical preponderance of stress position on the pre-final syllable that is reported for the Greek language is dependent upon grammatical category. Additionally, analyses showed that more than 90% of the tokens in the database would be stressed correctly solely by relying on stress neighbourhood information. The database and the scripts for orthographic and phonological syllabification as well as phonetic transcription are available at http://www.psychology.nottingham.ac.uk/greeklex/.
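
    The finding that stress neighbourhood alone predicts stress position for over 90% of tokens can be illustrated with a minimal sketch. The toy lexicon, the two-letter-suffix definition of "neighbourhood", and the penultimate-stress fallback are assumptions for illustration, not GreekLex 2's actual method.

```python
from collections import Counter, defaultdict

# Toy lexicon: (word, stressed syllable counted from the word end);
# 1 = final, 2 = penultimate, 3 = antepenultimate. Entries are
# illustrative only, not GreekLex 2 data.
LEXICON = [
    ("anthropos", 3), ("logos", 2), ("filos", 2),
    ("kalos", 2), ("potamos", 2), ("ouranos", 3),
]

def build_neighbourhoods(lexicon, suffix_len=2):
    """Group stress positions by word-final letter sequence."""
    neigh = defaultdict(Counter)
    for word, stress in lexicon:
        neigh[word[-suffix_len:]][stress] += 1
    return neigh

def predict_stress(word, neigh, suffix_len=2, default=2):
    """Majority vote among orthographic neighbours sharing the suffix."""
    votes = neigh.get(word[-suffix_len:])
    if not votes:
        return default  # fall back to the penultimate, the modal position
    return votes.most_common(1)[0][0]

neigh = build_neighbourhoods(LEXICON)
print(predict_stress("dromos", neigh))  # → 2 (penultimate wins the vote)
```

    A larger suffix length or token frequencies would refine the vote; the abstract's 90% figure comes from the full database, not from a toy set like this.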

  13. Heterogenous database integration in a physician workstation.

    PubMed

    Annevelink, J; Young, C Y; Tang, P C

    1991-01-01

    We discuss the integration of a variety of data and information sources in a Physician Workstation (PWS), focusing on the integration of data from DHCP, the Veterans Administration's Decentralized Hospital Computer Program. We designed a logically centralized, object-oriented data schema, used by end users and applications to explore the data accessible through an object-oriented database using a declarative query language. We emphasize the use of procedural abstraction to transparently integrate a variety of information sources into the data schema.

  14. Heterogenous database integration in a physician workstation.

    PubMed Central

    Annevelink, J.; Young, C. Y.; Tang, P. C.

    1991-01-01

    We discuss the integration of a variety of data and information sources in a Physician Workstation (PWS), focusing on the integration of data from DHCP, the Veterans Administration's Decentralized Hospital Computer Program. We designed a logically centralized, object-oriented data schema, used by end users and applications to explore the data accessible through an object-oriented database using a declarative query language. We emphasize the use of procedural abstraction to transparently integrate a variety of information sources into the data schema. PMID:1807624
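
    As a rough illustration of procedural abstraction in a logically centralized schema, the sketch below exposes uniform attributes that each dispatch to a source-specific fetch procedure. The class name, field names, and mock sources are invented; they do not reflect the PWS implementation.

```python
# Minimal sketch: a schema object whose attributes are transparently
# backed by procedures bound to different (mock) data sources.

class Patient:
    """Logical schema object; sources and fields are hypothetical."""
    def __init__(self, patient_id, sources):
        self.patient_id = patient_id
        self._sources = sources  # attribute name -> fetch procedure

    def __getattr__(self, name):
        # Called only when normal lookup fails; dispatch to the
        # registered procedure so callers see a plain attribute.
        try:
            fetch = self._sources[name]
        except KeyError:
            raise AttributeError(name)
        return fetch(self.patient_id)

# Two mock "databases" standing in for heterogeneous back ends.
demographics_db = {42: {"name": "J. Doe", "age": 61}}
lab_db = {42: [("HbA1c", 6.9)]}

patient = Patient(42, {
    "demographics": lambda pid: demographics_db[pid],
    "labs": lambda pid: lab_db[pid],
})
print(patient.demographics["name"])  # → J. Doe
```

    The caller never sees which source answered, which is the point of procedural abstraction: swapping a mock for a remote query changes only the registered procedure.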

  15. Validation and extraction of molecular-geometry information from small-molecule databases.

    PubMed

    Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N

    2017-02-01

    A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.
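
    A hedged sketch of the kind of entry-level screening the selection criteria describe: the thresholds follow the text (0.84 Å resolution, single-crystal method, full occupancy, bond-length deviation), but the record layout and field names are invented for illustration.

```python
# Toy validation filter mirroring criteria (i)-(iv) from the abstract;
# criterion (iii) (valence consistency) and (v) (collisions) would need
# chemistry not modeled here.

def passes_validation(entry, max_resolution=0.84, max_bond_sigma=3.0):
    """Return True if a small-molecule entry meets the toy criteria."""
    if entry["resolution"] > max_resolution:       # (i) resolution cut-off
        return False
    if entry["method"] != "single-crystal":        # (ii) structure solution
        return False
    if any(occ < 1.0 for occ in entry["occupancies"]):
        return False                               # (ii) full occupancies
    # (iv) bonds must lie within N sigma of their expected lengths.
    for observed, expected, sigma in entry["bonds"]:
        if abs(observed - expected) > max_bond_sigma * sigma:
            return False
    return True

entry = {
    "resolution": 0.80,                 # Å
    "method": "single-crystal",
    "occupancies": [1.0, 1.0, 1.0],
    "bonds": [(1.54, 1.53, 0.01)],      # observed, expected, sigma (Å)
}
print(passes_validation(entry))  # → True under these toy values
```

    In the actual procedure the surviving entries then feed the moment-based statistics used to refine atom typing, a loop this sketch does not attempt.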

  16. Documentation of the U.S. Geological Survey Stress and Sediment Mobility Database

    USGS Publications Warehouse

    Dalyander, P. Soupy; Butman, Bradford; Sherwood, Christopher R.; Signell, Richard P.

    2012-01-01

    The U.S. Geological Survey Sea Floor Stress and Sediment Mobility Database contains estimates of bottom stress and sediment mobility for the U.S. continental shelf. This U.S. Geological Survey database provides information that is needed to characterize sea floor ecosystems and evaluate areas for human use. The estimates contained in the database are designed to spatially and seasonally resolve the general characteristics of bottom stress over the U.S. continental shelf and to estimate sea floor mobility by comparing critical stress thresholds based on observed sediment texture data to the modeled stress. This report describes the methods used to make the bottom stress and mobility estimates, statistics used to characterize stress and mobility, data validation procedures, and the metadata for each dataset and provides information on how to access the database online.
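
    The mobility estimate described above amounts to comparing modeled bottom stress against a critical threshold derived from observed sediment texture. A minimal sketch, with toy stress values and an assumed threshold:

```python
# Sketch of the stress/mobility comparison: a grid cell is "mobile" in
# any interval where modeled bottom stress exceeds the critical stress
# for the local sediment. All numbers below are illustrative.

def mobility_fraction(modeled_stress, critical_stress):
    """Fraction of time steps in which stress exceeds the critical value."""
    exceed = [s > critical_stress for s in modeled_stress]
    return sum(exceed) / len(exceed)

# Hourly bottom-stress estimates (Pa) for one hypothetical grid cell.
stress_series = [0.05, 0.12, 0.30, 0.08, 0.45, 0.02]
tau_crit = 0.10  # assumed critical stress for the local grain size (Pa)

print(mobility_fraction(stress_series, tau_crit))  # → 0.5
```

    Mapped over every grid cell and season, a statistic like this fraction is what lets the database resolve sea-floor mobility spatially and seasonally.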

  17. Operating Policies and Procedures of Computer Data-Base Systems.

    ERIC Educational Resources Information Center

    Anderson, David O.

    Speaking on the operating policies and procedures of computer data bases containing information on students, the author divides his remarks into three parts: content decisions, data base security, and user access. He offers nine recommended practices that should increase the data base's usefulness to the user community: (1) the cost of developing…

  18. 34 CFR 361.23 - Requirements related to the statewide workforce investment system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... technology for individuals with disabilities; (ii) The use of information and financial management systems... statistics, job vacancies, career planning, and workforce investment activities; (iii) The use of customer service features such as common intake and referral procedures, customer databases, resource information...

  19. Integrating forensic information in a crime intelligence database.

    PubMed

    Rossy, Quentin; Ioset, Sylvain; Dessimoz, Damien; Ribaux, Olivier

    2013-07-10

    Since 2008, the intelligence units of six states in the western part of Switzerland have shared a common database for the analysis of high-volume crimes. On a daily basis, events reported to the police are analysed, filtered and classified to detect crime repetitions and interpret the crime environment. Several forensic outcomes are integrated in the system, such as matches of traces with persons and links between scenes detected by the comparison of forensic case data. Systematic procedures have been established to integrate links inferred mainly through DNA profiles, shoemark patterns and images. A statistical outlook on a retrospective dataset of series from 2009 to 2011 informs, for instance, on the number of repetitions detected or confirmed and augmented by forensic case data. The time needed to obtain forensic intelligence, depending on the type of marks treated, is seen as a critical issue. Furthermore, the underlying process of integrating forensic intelligence into the crime intelligence database raised several difficulties regarding the acquisition of data and the models used in the forensic databases. The solutions found and the operational procedures adopted are described and discussed. This process forms the basis of many other research efforts aimed at developing forensic intelligence models. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  20. Exploration into technical procedures for vertical integration. [information systems

    NASA Technical Reports Server (NTRS)

    Michel, R. J.; Maw, K. D.

    1979-01-01

    Issues in the design and use of a digital geographic information system incorporating land use, zoning, hazard, LANDSAT, and other data are discussed. An eleven-layer database was generated. Issues of spatial resolution, registration, grid versus polygonal structures, and the comparison of photointerpreted land use to LANDSAT land cover are examined.

  1. The use of a medico economic database as a part of French apheresis registry.

    PubMed

    Kanouni, T; Aubas, P; Heshmati, F

    2017-02-01

    An apheresis registry is a part of each learned apheresis society. Its interest is obvious, in terms of knowledge of the practice of apheresis, adverse events, and technical issues. However, because of the burden of data entry it can never be exhaustive, and some data will be missing. While continuing our registry efforts and our efforts to match with other existing registries, we decided to extend data collection to a medico-economic database available in France, the Programme de Médicalisation du Système d'Information (PMSI), which has covered reimbursement information for each public or private hospital since 2007. It contains almost all apheresis procedures in all apheresis fields, demographic patient data, and primary and related diagnoses, among other data. Although these data do not include technical apheresis issues or other complications of the procedures, they are of great interest and complementary to the registry. From 2003 to 2014, we recorded 250,585 apheresis procedures for 48,428 patients. We showed that the data are reliable and exhaustive. The information reflects real-life practice in apheresis regarding indications and the rhythm and duration of apheresis treatment. This prospective data collection is sustainable and allows us to assess the impact of healthcare guidelines. Our objective is to extend the data collection and match it to other existing databases; this will allow us to conduct, for example, a cohort study specifically for ECP. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Challenges in the association of human single nucleotide polymorphism mentions with unique database identifiers

    PubMed Central

    2011-01-01

    Background Most information on genomic variations and their associations with phenotypes is covered exclusively in scientific publications rather than in structured databases. These texts commonly describe variations using natural language; database identifiers are seldom mentioned. This complicates the retrieval of variations and associated articles, as well as information extraction, e.g. the search for biological implications. To overcome these challenges, procedures to map textual mentions of variations to database identifiers need to be developed. Results This article describes a workflow for the normalization of variation mentions, i.e. their association with unique database identifiers. Common pitfalls in the interpretation of single nucleotide polymorphism (SNP) mentions are highlighted and discussed. The developed normalization procedure achieves a precision of 98.1% and a recall of 67.5% for the unambiguous association of variation mentions with dbSNP identifiers on a text corpus based on 296 MEDLINE abstracts containing 527 mentions of SNPs. The annotated corpus is freely available at http://www.scai.fraunhofer.de/snp-normalization-corpus.html. Conclusions Comparable approaches usually focus on variations mentioned on the protein sequence and neglect problems with other SNP mentions. The results presented here indicate that normalizing SNPs described on the DNA level is more difficult than normalizing SNPs described on the protein level. The challenges associated with normalization are exemplified with ambiguities and errors that occur in this corpus. PMID:21992066
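
    A first step in such a workflow is recognizing SNP mentions in free text before attempting to map them to dbSNP identifiers. The sketch below covers only two simple mention patterns (rs-identifiers and DNA-level substitutions in HGVS-like notation); real mentions are far more varied, and the disambiguation step measured by the precision/recall figures above is not shown.

```python
import re

# Two toy patterns for SNP mentions; real corpora need many more.
RS_ID = re.compile(r"\brs\d+\b")                     # e.g. rs1042522
DNA_SUB = re.compile(r"\bc\.(\d+)([ACGT])>([ACGT])\b")  # e.g. c.215C>G

def extract_mentions(text):
    """Collect rs-identifiers and DNA-level substitution mentions."""
    mentions = RS_ID.findall(text)
    mentions += ["c.{}{}>{}".format(*m) for m in DNA_SUB.findall(text)]
    return mentions

text = "The variant rs1042522, also described as c.215C>G, was studied."
print(extract_mentions(text))  # → ['rs1042522', 'c.215C>G']
```

    The hard part the article addresses comes after extraction: deciding which database record a non-rs mention refers to, where position numbering conventions and ambiguous protein-level notation cause most errors.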

  3. Using a spatial and tabular database to generate statistics from terrain and spectral data for soil surveys

    USGS Publications Warehouse

    Horvath, E.A.; Fosnight, E.A.; Klingebiel, A.A.; Moore, D.G.; Stone, J.E.; Reybold, W.U.; Petersen, G.W.

    1987-01-01

    A methodology has been developed to create a spatial database by referencing digital elevation, Landsat multispectral scanner data, and digitized soil premap delineations of a number of adjacent 7.5-min quadrangle areas to a 30-m Universal Transverse Mercator projection. Slope and aspect transformations are calculated from elevation data and grouped according to field office specifications. An unsupervised classification is performed on a brightness and greenness transformation of the spectral data. The resulting spectral, slope, and aspect maps of each of the 7.5-min quadrangle areas are then plotted and submitted to the field office to be incorporated into the soil premapping stages of a soil survey. A tabular database is created from spatial data by generating descriptive statistics for each data layer within each soil premap delineation. The tabular data base is then entered into a data base management system to be accessed by the field office personnel during the soil survey and to be used for subsequent resource management decisions.Large amounts of data are collected and archived during resource inventories for public land management. Often these data are stored as stacks of maps or folders in a file system in someone's office, with the maps in a variety of formats, scales, and with various standards of accuracy depending on their purpose. This system of information storage and retrieval is cumbersome at best when several categories of information are needed simultaneously for analysis or as input to resource management models. Computers now provide the resource scientist with the opportunity to design increasingly complex models that require even more categories of resource-related information, thus compounding the problem.Recently there has been much emphasis on the use of geographic information systems (GIS) as an alternative method for map data archives and as a resource management tool. 
    Considerable effort has been devoted to the generation of tabular databases, such as the U.S. Department of Agriculture's SCS/S015 (Soil Survey Staff, 1983), to archive the large amounts of information that are collected in conjunction with mapping of natural resources in an easily retrievable manner. During the past 4 years the U.S. Geological Survey's EROS Data Center, in a cooperative effort with the Bureau of Land Management (BLM) and the Soil Conservation Service (SCS), developed a procedure that uses spatial and tabular databases to generate elevation, slope, aspect, and spectral map products that can be used during soil premapping. The procedure results in tabular data, residing in a database management system, that are indexed to the final soil delineations and help quantify soil map unit composition. The procedure was developed and tested on soil surveys on over 600 000 ha in Wyoming, Nevada, and Idaho. A transfer of technology from the EROS Data Center to the BLM will enable the Denver BLM Service Center to use this procedure in soil survey operations on BLM lands.
    Also underway is a cooperative effort between the EROS Data Center and SCS to define and evaluate maps that can be produced as derivatives of digital elevation data for 7.5-min quadrangle areas, such as those used during the premapping stage of the soil surveys mentioned above, the idea being to make such products routinely available. The procedure emphasizes the applications of digital elevation and spectral data to order-three soil surveys on rangelands, and will: (i) incorporate digital terrain and spectral data into a spatial database for soil surveys; (ii) provide hardcopy products (that can be generated from digital elevation model and spectral data) that are useful during the soil premapping process; (iii) incorporate soil premaps into a spatial database that can be accessed during the soil survey process along with terrain and spectral data; and (iv) summarize useful quantitative information for soil mapping and for making interpretations for resource management.
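
    The slope and aspect transformations calculated from elevation data can be sketched with central differences on a gridded DEM. The 30 m cell size matches the text; the toy elevations, the finite-difference scheme, and the aspect convention are assumptions, not the procedure actually used.

```python
import math

# Slope/aspect at an interior DEM cell via central differences.
# Aspect here follows one common GIS convention (degrees clockwise
# from north); conventions vary between packages.

def slope_aspect(dem, row, col, cell=30.0):
    """Return (slope_deg, aspect_deg) at an interior grid cell."""
    dzdx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cell)
    dzdy = (dem[row + 1][col] - dem[row - 1][col]) / (2 * cell)
    slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    aspect = (math.degrees(math.atan2(dzdy, -dzdx)) + 360) % 360
    return slope, aspect

dem = [  # toy 3x3 elevation grid (m), rows increasing southward
    [100, 101, 102],
    [101, 102, 103],
    [102, 103, 104],
]
slope, aspect = slope_aspect(dem, 1, 1)
print(round(slope, 2))  # → 2.7 (a gentle, uniform incline)
```

    Grouping such per-cell values into the field office's slope and aspect classes, then summarizing them within each premap delineation, yields the descriptive statistics the tabular database stores.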

  4. A Data Management System for International Space Station Simulation Tools

    NASA Technical Reports Server (NTRS)

    Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.

  5. Online drug databases: a new method to assess and compare inclusion of clinically relevant information.

    PubMed

    Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro

    2013-08-01

    Evidence-Based Practice requires health care decisions to be based on the best available evidence. The "Information Mastery" model proposes that clinicians should use sources of information whose relevance and validity have been evaluated in advance and that are provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports the safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. The objective was to develop and test a method to evaluate the relevancy of DB contents by assessing the inclusion of information items deemed relevant for effective and safe drug use. The method comprised: hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess the inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each information item; and a pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in the literature with morbidity and mortality and also widely consumed in Portugal. Main outcome measure: calculation of individual and global scores for clinically relevant information items of drug monographs in databases, using the categorisation and quantification system created. A - Method development: selection of sections, subsections, relevant information items and corresponding requisites; a system to categorise and quantify their inclusion; score and RD calculation procedure.
    B - Pilot test: scores were calculated for the 9 databases; globally, all databases differed significantly from the "ideal" database; some DB performed better, but performance was inconsistent at the subsection level within the same DB. The method developed allows quantification of the inclusion of relevant information items in DB and comparison with an "ideal" database. It is necessary to consult diverse DB in order to find all the relevant information needed to support clinical drug use.
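
    The score and relative-difference (RD) calculation can be sketched as follows. The item names and the 0-2 quantification scale are invented for illustration; only the idea of comparing each database's total score with an "ideal" maximum comes from the abstract.

```python
# Toy scoring scheme: each information item is quantified on a small
# scale, totals are summed per database, and the relative difference
# against an "ideal" database (maximum on every item) is reported.

ITEMS = ["indications", "dosage", "interactions", "contraindications"]
MAX_PER_ITEM = 2  # 0 = absent, 1 = partial, 2 = complete (assumed scale)

def relative_difference(scores, items=ITEMS, max_per_item=MAX_PER_ITEM):
    """Percent shortfall of a database's total score vs the ideal."""
    total = sum(scores.get(item, 0) for item in items)
    ideal = max_per_item * len(items)
    return 100.0 * (ideal - total) / ideal

# One hypothetical database's item scores (a missing item counts as 0).
db_scores = {"indications": 2, "dosage": 1, "interactions": 2}
print(relative_difference(db_scores))  # → 37.5
```

    Computing the RD per subsection rather than globally is what exposes the inconsistency the pilot test found: a database can score well overall yet poorly on a particular subsection.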

  6. A web based relational database management system for filariasis control

    PubMed Central

    Murty, Upadhyayula Suryanarayana; Kumar, Duvvuri Venkata Rama Satya; Sriram, Kumaraswamy; Rao, Kadiri Madhusudhan; Bhattacharyulu, Chakravarthula Hayageeva Narasimha Venakata; Praveen, Bhoopathi; Krishna, Amirapu Radha

    2005-01-01

    The present study describes an RDBMS (relational database management system) for the effective management of filariasis, a vector-borne disease. Filariasis infects 120 million people in 83 countries. The possible re-emergence of the disease and the complexity of existing control programs warrant the development of new strategies. A database containing comprehensive data associated with filariasis finds utility in disease control. We have developed a database containing information on the socio-economic status of patients, mosquito collection procedures, mosquito dissection data, filariasis survey reports and mass blood data. The database can be searched using a user-friendly web interface. Availability: http://www.webfil.org (login and password can be obtained from the authors) PMID:17597846

  7. Digital Management of a Hysteroscopy Surgery Using Parts of the SNOMED Medical Model

    PubMed Central

    Kollias, Anastasios; Paschopoulos, Minas; Evangelou, Angelos; Poulos, Marios

    2012-01-01

    This work describes a hysteroscopy surgery management application that was designed based on the medical information standard SNOMED. We describe how the application fulfils the needs of this procedure and the way in which existing handwritten medical information is effectively transmitted to the application’s database. PMID:22848338

  8. Considering FERPA Requirements for Library Patron Databases within a Consortial Environment

    ERIC Educational Resources Information Center

    Oliver, Astrid; Prosser, Eric; Chittenden, Lloyd

    2016-01-01

    This case study examines patron record privacy and the Family Educational Rights and Privacy Act (FERPA) within a consortial environment. FERPA requirements were examined, as well as college and library policy and procedure. It was determined that information in library patron records is directory information and, under most circumstances, does…

  9. The Effective Use of Management Consultants in Higher Education. An NCHEMS Executive Overview.

    ERIC Educational Resources Information Center

    Matthews, Jana B.

    Information about consulting projects and consultants is provided to help college administrators. It is noted that colleges are increasingly asking consultants for help with such diverse projects as database design, collection of new information, or the development of evaluative procedures. The stages of a successful consulting project and the…

  10. The National Landslide Database and GIS for Great Britain: construction, development, data acquisition, application and communication

    NASA Astrophysics Data System (ADS)

    Pennington, Catherine; Dashwood, Claire; Freeborough, Katy

    2014-05-01

    The National Landslide Database has been developed by the British Geological Survey (BGS) and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 16,500 records of landslide events, each documented as fully as possible. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and crowd-sourcing information through social media and other online resources. This information is invaluable for the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domain map currently under development rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures and an understanding of causative factors and their spatial distribution, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. Communication of landslide data through the Natural Hazard Partnership (NHP) contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP and data collected for the National Landslide Database is used widely for the creation of these assessments. 
The National Landslide Database is freely available via an online GIS and is used by a variety of stakeholders for research purposes.

  11. "Hyperstat": an educational and working tool in epidemiology.

    PubMed

    Nicolosi, A

    1995-01-01

    The work of a researcher in epidemiology is based on studying the literature, planning studies, gathering data, analyzing data and writing up results. The researcher therefore needs to perform more or less simple calculations, to consult or quote the literature, to consult textbooks about certain issues or procedures, and to look up specific formulas. No existing programs are conceived as a workstation assisting the different aspects of a researcher's work in an integrated fashion. A hypertextual system was developed which supports different stages of the epidemiologist's work. It combines database management, statistical analysis and planning, and literature searches. The software was developed on the Apple Macintosh using Hypercard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated table of contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); and database management. The system performs general statistical procedures, procedures applicable only to epidemiological studies (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". To perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure. The user can move from a procedure to other cards following a preferred order of succession and according to built-in associations. The user can access different levels of knowledge or information from any stack he is consulting or operating.
    From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with an automated table of contents, to statistical tables with an automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he would use on paper. The interface does not require special skills. It reflects the Macintosh philosophy of using windows, buttons and the mouse. This allows the user to perform complicated calculations without losing the "feel" of the data, to weigh alternatives, and to run simulations. This program shares many features with hypertexts. It has an underlying network database where the nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes in the database are visible as "active" text or icons in the windows; and the text is read by following links and opening new windows. The program is especially useful as an educational tool directed at medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool. The program is also helpful in work done at the desk, where the researcher examines results, consults the literature, explores different analytic approaches, plans new studies, or writes grant proposals and scientific articles.

  12. Team X Spacecraft Instrument Database Consolidation

    NASA Technical Reports Server (NTRS)

    Wallenstein, Kelly A.

    2005-01-01

    In the past decade, many changes have been made to Team X's process of designing each spacecraft, with the purpose of making the overall procedure more efficient over time. One such improvement is the use of information databases from previous missions, designs, and research. By referring to these databases, members of the design team can locate relevant instrument data and significantly reduce the total time they spend on each design. The files in these databases were stored in several different formats with various levels of accuracy. During the past 2 months, efforts have been made in an attempt to combine and organize these files. The main focus was in the Instruments department, where spacecraft subsystems are designed based on mission measurement requirements. A common database was developed for all instrument parameters using Microsoft Excel to minimize the time and confusion experienced when searching through files stored in several different formats and locations. By making this collection of information more organized, the files within them have become more easily searchable. Additionally, the new Excel database offers the option of importing its contents into a more efficient database management system in the future. This potential for expansion enables the database to grow and acquire more search features as needed.
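
    The consolidation step described above can be sketched in miniature: records scattered across differently formatted files are loaded into one uniform table that supports a single search query. In this sketch SQLite stands in for the Excel workbook, and the instrument names and fields are invented for illustration:

```python
# Hypothetical sketch of consolidating instrument records into one
# searchable table. Instruments, fields and values are invented.
import sqlite3

records = [
    # (name, mass_kg, power_w) -- as if gathered from several legacy files
    ("Imager A", 12.0, 35.0),
    ("Spectrometer B", 8.5, 20.0),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE instruments (name TEXT, mass_kg REAL, power_w REAL)")
con.executemany("INSERT INTO instruments VALUES (?, ?, ?)", records)

# Once consolidated, any parameter search is a single query instead of a
# hunt through files in different formats and locations.
rows = con.execute(
    "SELECT name FROM instruments WHERE mass_kg < 10").fetchall()
assert rows == [("Spectrometer B",)]
```

    The same uniform schema is what makes the later migration to a fuller database management system straightforward.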

  13. Kellogg Library and Archive Retrieval System (KLARS) Document Capture Manual. Draft Version.

    ERIC Educational Resources Information Center

    Hugo, Jane

    This manual is designed to supply background information for Kellogg Library and Archive Retrieval System (KLARS) processors and others who might work with the system, outline detailed policies and procedures for processors who prepare and enter data into the adult education database on KLARS, and inform general readers about the system. KLARS is…

  14. EPA Facility Registry Service (FRS): OIL

    EPA Pesticide Factsheets

    This dataset contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Oil database. The Oil database contains information on Spill Prevention, Control, and Countermeasure (SPCC) and Facility Response Plan (FRP) subject facilities to prevent and respond to oil spills. FRP facilities are referred to as substantial harm facilities due to the quantities of oil stored and facility characteristics. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to Oil facilities once the Oil data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.

  15. GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.

    NASA Astrophysics Data System (ADS)

    Asavin, A. M.

    2001-12-01

    Several geochemical databases are now available on the Internet. One of the main peculiarities of the geochemical information they store is that each sample carries geographical coordinates. As a rule, the database software uses this spatial information only in user-interface search procedures. On the other hand, GIS software (Geographical Information System software), such as the ARC/INFO software used for creating and analyzing specialized geological, geochemical and geophysical e-maps, is built around the geographical coordinates of samples. We have joined these peculiarities of GIS systems and a relational geochemical database through dedicated software. Our geochemical information system was created at the Vernadsky State Geological Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have tested the system with geochemical data on oceanic rocks from the Atlantic and Pacific oceans, about 10,000 chemical analyses. The GIS information content consists of e-map covers of the globe. Parts of these maps cover the Atlantic Ocean: a gravity map (with a 2'' grid), oceanic-bottom heat flow, altimetric maps, seismic activity, a tectonic map and a geological map. Combining these information layers makes it possible to create new geochemical maps and to combine spatial analysis with numerical geochemical modeling of volcanic processes in an ocean segment. We have tested the information system using a thick-client architecture. The interface between the GIS system ArcView and the database resides in a dedicated sequence of multiple SQL queries. The result of these queries is a simple DBF file with geographical coordinates, which is used at the moment a geochemical or other specialized e-map of the oceanic region is created. For geophysical data we used a more complex method: from ArcView we created a grid cover for the polygonal spatial geophysical information.
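
    The query step described above, in which SQL pulls sample coordinates and chemistry out of the relational database for the GIS, can be sketched as follows. Table names, column names and values are assumed purely for illustration:

```python
# Sketch of extracting georeferenced sample rows for map creation.
# Schema and data are invented; SQLite stands in for the real database.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE samples (lat REAL, lon REAL, sio2 REAL)")
con.executemany("INSERT INTO samples VALUES (?, ?, ?)", [
    (0.5, -25.0, 49.8),    # a Mid-Atlantic Ridge sample
    (10.0, -110.0, 50.2),  # an East Pacific Rise sample
])

# A bounding box selects the Atlantic segment; the result set plays the
# role of the coordinate file handed to the GIS for e-map creation.
atlantic = con.execute(
    "SELECT lat, lon, sio2 FROM samples "
    "WHERE lon BETWEEN -45 AND 0").fetchall()
assert atlantic == [(0.5, -25.0, 49.8)]
```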

  16. Theoretical foundations for information representation and constraint specification

    NASA Technical Reports Server (NTRS)

    Menzel, Christopher P.; Mayer, Richard J.

    1991-01-01

    Research accomplished at the Knowledge Based Systems Laboratory of the Department of Industrial Engineering at Texas A&M University is described. Outlined here are the theoretical foundations necessary to construct a Neutral Information Representation Scheme (NIRS), which will allow for automated data transfer and translation between model languages, procedural programming languages, database languages, transaction and process languages, and knowledge representation and reasoning control languages for information system specification.

  17. Progress in development of an integrated dietary supplement ingredient database at the NIH Office of Dietary Supplements

    PubMed Central

    Dwyer, Johanna T.; Picciano, Mary Frances; Betz, Joseph M.; Fisher, Kenneth D.; Saldanha, Leila G.; Yetley, Elizabeth A.; Coates, Paul M.; Radimer, Kathy; Bindewald, Bernadette; Sharpless, Katherine E.; Holden, Joanne; Andrews, Karen; Zhao, Cuiwei; Harnly, James; Wolf, Wayne R.; Perry, Charles R.

    2013-01-01

    Several activities of the Office of Dietary Supplements (ODS) at the National Institutes of Health involve enhancement of dietary supplement databases. These include an initiative with the US Department of Agriculture to develop an analytically substantiated dietary supplement ingredient database (DSID) and collaboration with the National Center for Health Statistics to enhance the dietary supplement label database in the National Health and Nutrition Examination Survey (NHANES). The many challenges that must be dealt with in developing an analytically supported DSID include categorizing product types in the database, identifying nutrients and other components of public health interest in these products, and prioritizing which will be entered into the database first. Additional tasks include developing methods and reference materials for quantifying the constituents, finding qualified laboratories to measure the constituents, developing appropriate sample handling procedures, and finally developing representative sampling plans. Developing the NHANES dietary supplement label database has other challenges, such as collecting information on dietary supplement use from NHANES respondents, constantly updating and refining the information obtained, developing default values that can be used if the respondent cannot supply the exact supplement or strength that was consumed, and developing a publicly available label database. Federal partners and the research community are assisting in making an analytically supported dietary supplement database a reality. PMID:25309034

  18. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    PubMed

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. 
From 3688 papers identified by the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme, which is the methodology recommended by the NHS Classification Service, and 4) conducting manual clinical review of diagnostic and procedure codes. The four distinct methods for identifying complications from coded data offer great potential for generating new evidence on the quality and safety of new procedures using routine data. However, the most robust method, the methodology recommended by the NHS Classification Service, was the least frequently used, highlighting that much valuable observational data are being ignored.
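
    Method 1 above, searching the index admission for specified clinical codes, reduces to a set-membership scan over the admission's diagnosis codes. In this sketch the admission record and the chosen code set are invented for the example (the ICD-10 T81 block covers complications of procedures not elsewhere classified):

```python
# Illustrative sketch of method 1: scan the index admission's diagnosis
# codes for a specified set of complication codes. Data are invented.

COMPLICATION_CODES = {"T81.0", "T81.4"}   # e.g. haemorrhage, infection

index_admission = {
    "patient_id": 123,
    "diagnoses": ["I25.1", "T81.4", "E11.9"],
}

def has_complication(admission, codes=COMPLICATION_CODES):
    """True if any diagnosis on this admission is a specified code."""
    return any(d in codes for d in admission["diagnoses"])

assert has_complication(index_admission)
assert not has_complication({"patient_id": 456, "diagnoses": ["I10"]})
```

    Method 2 is the same scan applied across a follow-up window of subsequent admissions rather than the index admission alone.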

  19. Assistive technology for ultrasound-guided central venous catheter placement.

    PubMed

    Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana

    2018-01-01

    This study evaluated the existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for related articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined and pointed to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projections of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology developed for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials to validate the effectiveness of these devices.

  20. Nencki Genomics Database--Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs.

    PubMed

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.
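
    The two in-database operations highlighted above, intersecting regulatory features and mapping them to genes, reduce to interval arithmetic over genomic coordinates. A minimal Python sketch with invented coordinates and gene names:

```python
# Sketch of feature intersection and feature-to-gene mapping, done here in
# plain Python for illustration (the database performs these as stored
# procedures). All coordinates and names are invented.

def intersect(a, b):
    """Overlap of two (start, end) features on the same chromosome."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

peak = (100, 250)        # e.g. a regulatory feature from the public data
motif = (200, 300)       # a TFBS motif occurrence
assert intersect(peak, motif) == (200, 250)

genes = {"GeneA": (240, 900), "GeneB": (2000, 3000)}

def map_to_genes(feature, genes):
    """Genes whose span overlaps the feature."""
    return sorted(g for g, span in genes.items() if intersect(feature, span))

assert map_to_genes(peak, genes) == ["GeneA"]
```

    Pre-computing these results for the public data and caching results computed on user data is what lets the body of stored answers grow incrementally, as the abstract describes.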

  1. Nencki Genomics Database—Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs

    PubMed Central

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456

  2. Review of telehealth stuttering management.

    PubMed

    Lowe, Robyn; O'Brian, Sue; Onslow, Mark

    2013-01-01

    Telehealth is the use of communication technology to provide health care services by means other than typical in-clinic attendance models. Telehealth is increasingly used for the management of speech, language and communication disorders. The aim of this article is to review telehealth applications to stuttering management. We conducted a search of peer-reviewed literature for the past 20 years using the Institute for Scientific Information Web of Science database, PubMed: The Bibliographic Database and a search for articles by hand. Outcomes for telehealth stuttering treatment were generally positive, but there may be a compromise of treatment efficiency with telehealth treatment of young children. Our search found no studies dealing with stuttering assessment procedures using telehealth models. No economic analyses of this delivery model have been reported. This review highlights the need for continued research about telehealth for stuttering management. Evidence from research is needed to inform the efficacy of assessment procedures using telehealth methods as well as guide the development of improved treatment procedures. Clinical and technical guidelines are urgently needed to ensure that the evolving and continued use of telehealth to manage stuttering does not compromise the standards of care afforded with standard in-clinic models.

  3. Proteome of Caulobacter crescentus cell cycle publicly accessible on SWICZ server.

    PubMed

    Vohradsky, Jiri; Janda, Ivan; Grünenfelder, Björn; Berndt, Peter; Röder, Daniel; Langen, Hanno; Weiser, Jaroslav; Jenal, Urs

    2003-10-01

    Here we present the Swiss-Czech Proteomics Server (SWICZ), which hosts the proteomic database summarizing information about the cell cycle of the aquatic bacterium Caulobacter crescentus. The database provides a searchable tool for easy access to global protein synthesis and protein stability data as examined during the C. crescentus cell cycle. Protein synthesis data collected from five different cell cycle stages were determined for each protein spot as a relative value of the total amount of [(35)S]methionine incorporation. Protein stability in pulse-labeled extracts was measured during a chase period equivalent to one cell cycle unit. Quantitative information for individual proteins, together with descriptive data such as protein identities, apparent molecular masses and isoelectric points, was combined with information on protein function, genomic context, and the cell cycle stage, and then assembled in a relational database with a world wide web interface (http://proteom.biomed.cas.cz), which allows the database records to be searched and displays the recovered information. A total of 1250 protein spots were reproducibly detected on two-dimensional gel electropherograms, 295 of which were identified by mass spectrometry. The database is accessible either through clickable two-dimensional gel electrophoretic maps or by means of a set of dedicated search engines. Basic characterization of the experimental procedures, data processing, and a comprehensive description of the web site are presented. In its current state, the SWICZ proteome database provides a platform for the incorporation of new data emerging from extended functional studies on the C. crescentus proteome.

  4. Biographical Study and Hypothesis Testing. Instructional Technology.

    ERIC Educational Resources Information Center

    Little, Timothy H.

    1995-01-01

    Asserts that the story of Amelia Earhart holds an ongoing fascination for students. Presents an instructional unit using a spreadsheet to create a database about Earhart's final flight. Includes student objectives, step-by-step instructional procedures, and eight graphics of student information or teacher examples. (CFR)

  5. The Barcelona Hospital Clínic therapeutic apheresis database.

    PubMed

    Cid, Joan; Carbassé, Gloria; Cid-Caballero, Marc; López-Púa, Yolanda; Alba, Cristina; Perea, Dolores; Lozano, Miguel

    2017-09-22

    A therapeutic apheresis (TA) database helps to increase knowledge about the indications and types of apheresis procedures performed in clinical practice. The objective of the present report was to describe the type and number of TA procedures performed at our institution over a 10-year period, from 2007 to 2016. The TA electronic database was created by transferring patient data from electronic medical records and consultation forms into a Microsoft Access database developed exclusively for this purpose. Since 2007, prospective data from every TA procedure have been entered in the database. A total of 5940 TA procedures were performed: 3762 (63.3%) plasma exchange (PE) procedures, 1096 (18.5%) hematopoietic progenitor cell (HPC) collections, and 1082 (18.2%) TA procedures other than PEs and HPC collections. The overall trend over the period was a progressive increase in the total number of TA procedures performed each year (from 483 TA procedures in 2007 to 822 in 2016). The trends for the individual procedures differed: the numbers of PE and other TA procedures increased by 22% and 2818%, respectively, while the number of HPC collections decreased by 28%. The TA database helped us increase our knowledge about the various indications and types of TA procedures performed in our current practice. We also believe that this database could serve as a model that other institutions can use to track service metrics. © 2017 Wiley Periodicals, Inc.
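
    The service metrics reported above are simple aggregations over the per-procedure records. This sketch uses the two yearly totals quoted in the abstract (483 and 822); the individual rows are invented for illustration:

```python
# Sketch of tallying apheresis procedures by year and type, and the
# percent change over the period. Sample rows are invented.
from collections import Counter

procedures = [
    ("2007", "PE"), ("2007", "HPC"), ("2016", "PE"),
    ("2016", "other"), ("2016", "PE"),
]

by_year = Counter(year for year, _ in procedures)
by_type = Counter(kind for _, kind in procedures)

def percent_change(first, last):
    return 100.0 * (last - first) / first

# Totals reported in the abstract: 483 procedures in 2007, 822 in 2016.
growth = percent_change(483, 822)
assert round(growth) == 70   # about a 70% rise over the decade
```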

  6. A hospital-wide clinical findings dictionary based on an extension of the International Classification of Diseases (ICD).

    PubMed

    Bréant, C; Borst, F; Campi, D; Griesser, V; Momjian, S

    1999-01-01

    The use of a controlled vocabulary set in a hospital-wide clinical information system is of crucial importance for many departmental database systems to communicate and exchange information. In the absence of an internationally recognized clinical controlled vocabulary set, a new extension of the International statistical Classification of Diseases (ICD) is proposed. It expands the scope of the standard ICD beyond diagnosis and procedures to clinical terminology. In addition, the common Clinical Findings Dictionary (CFD) further records the definition of clinical entities. The construction of the vocabulary set and the CFD is incremental and manual. Tools have been implemented to facilitate the tasks of defining/maintaining/publishing dictionary versions. The design of database applications in the integrated clinical information system is driven by the CFD which is part of the Medical Questionnaire Designer tool. Several integrated clinical database applications in the field of diabetes and neuro-surgery have been developed at the HUG.

  7. A hospital-wide clinical findings dictionary based on an extension of the International Classification of Diseases (ICD).

    PubMed Central

    Bréant, C.; Borst, F.; Campi, D.; Griesser, V.; Momjian, S.

    1999-01-01

    The use of a controlled vocabulary set in a hospital-wide clinical information system is of crucial importance for many departmental database systems to communicate and exchange information. In the absence of an internationally recognized clinical controlled vocabulary set, a new extension of the International statistical Classification of Diseases (ICD) is proposed. It expands the scope of the standard ICD beyond diagnosis and procedures to clinical terminology. In addition, the common Clinical Findings Dictionary (CFD) further records the definition of clinical entities. The construction of the vocabulary set and the CFD is incremental and manual. Tools have been implemented to facilitate the tasks of defining/maintaining/publishing dictionary versions. The design of database applications in the integrated clinical information system is driven by the CFD which is part of the Medical Questionnaire Designer tool. Several integrated clinical database applications in the field of diabetes and neuro-surgery have been developed at the HUG. Images Figure 1 PMID:10566451

  8. The changing face of urinary continence surgery in England: a perspective from the Hospital Episode Statistics database.

    PubMed

    Withington, John; Hirji, Sadaf; Sahai, Arun

    2014-08-01

    To quantify changes in surgical practice in the treatment of stress urinary incontinence (SUI), urge urinary incontinence (UUI) and post-prostatectomy stress incontinence (PPI) in England, using the Hospital Episode Statistics (HES) database. We used public domain information from the HES database, an administrative dataset recording all hospital admissions and procedures in England, to find evidence of change in the use of various surgical procedures for urinary incontinence from 2000 to 2012. For the treatment of SUI, a general increase in the use of synthetic mid-urethral tapes, such as tension-free vaginal tape (TVTO) and transobturator tape (TOT), was observed, while there was a significant decrease in colposuspension procedures over the same period. The number of procedures to remove TVT and TOT has also increased in recent years. In the treatment of overactive bladder and UUI, there has been a significant increase in the use of botulinum toxin A and neuromodulation in recent years. This coincided with a steady decline in the recorded use of clam ileocystoplasty. A steady increase was observed in the insertion of artificial urinary sphincter (AUS) devices in men, related to PPI. Mid-urethral synthetic tapes now represent the mainstream treatment of SUI in women, but tape-related complications have led to an increase in procedures to remove these devices. The uptake of botulinum toxin A and sacral neuromodulation has led to fewer clam ileocystoplasty procedures being performed. The steady increase in insertions of AUSs in men is unsurprising and reflects the widespread uptake of radical prostatectomy in recent years. There are limitations to results sourced from the HES database, with potential inaccuracy of coding; however, these data support the trends observed by experts in this field. © 2014 The Authors. BJU International published by John Wiley & Sons Ltd on behalf of BJU International.

  9. DHLAS: A web-based information system for statistical genetic analysis of HLA population data.

    PubMed

    Thriskos, P; Zintzaras, E; Germenis, A

    2007-03-01

    DHLAS (database HLA system) is a user-friendly, web-based information system for the analysis of human leukocyte antigen (HLA) data from population studies. DHLAS has been developed using Java and the R system; it runs on a Java Virtual Machine, and its web-based user interface is powered by the servlet engine Tomcat. It utilizes Struts, a Model-View-Controller framework, and uses several GNU packages to perform several of its tasks. The database engine it relies upon for fast access is MySQL, but others can be used as well. The system estimates metrics, performs statistical testing and produces graphs required for HLA population studies: (i) Hardy-Weinberg equilibrium (calculated using both asymptotic and exact tests), (ii) genetic distances (Euclidean or Nei), (iii) phylogenetic trees using the unweighted pair group method with averages and the neighbor-joining method, (iv) linkage disequilibrium (pairwise and overall, including variance estimations), (v) haplotype frequencies (estimated using the expectation-maximization algorithm) and (vi) discriminant analysis. The main merit of DHLAS is the incorporation of a database, so the data can be stored and manipulated along with integrated genetic data analysis procedures. In addition, it has an open architecture allowing the inclusion of other functions and procedures.

  10. Hydroponics Database and Handbook for the Advanced Life Support Test Bed

    NASA Technical Reports Server (NTRS)

    Nash, Allen J.

    1999-01-01

    During the summer of 1998, I assisted Dr. Daniel J. Barta, chief plant growth expert at NASA's Johnson Space Center. We established the preliminary stages of a hydroponic crop growth database for the Advanced Life Support Systems Integration Test Bed, otherwise referred to as BIO-Plex (Biological Planetary Life Support Systems Test Complex). The database summarizes information from published technical papers by plant growth experts, and it includes bibliographical, environmental and harvest information based on plant growth under varying environmental conditions. I collected 84 lettuce entries, 14 soybean, 49 sweet potato, 16 wheat, 237 white potato, and 26 mixed-crop entries. The list will grow with the publication of new research. This database will be integrated with a search and systems analysis computer program that will cross-reference multiple parameters to determine optimum edible yield under varying conditions. We have also made a preliminary effort to put together a crop handbook for BIO-Plex plant growth management. It will be a collection of information obtained from experts who provided recommendations on a particular crop's growing conditions, including bibliographic, environmental, nutrient solution, potential yield, harvest nutritional, and propagation procedure information. This handbook will stand as the baseline for growth conditions in the first set of experiments in the BIO-Plex facility.
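
    The planned cross-referencing search, filtering crop entries by growth parameters to find the best edible yield, can be sketched as a filter-and-maximize over the records. The entries, field names and values below are invented for illustration:

```python
# Sketch of a cross-referencing yield search over crop growth records.
# All records, parameter names and numbers are invented.

entries = [
    {"crop": "lettuce", "ppf": 300, "co2_ppm": 1000, "yield_g_m2_d": 7.5},
    {"crop": "lettuce", "ppf": 500, "co2_ppm": 1200, "yield_g_m2_d": 9.1},
    {"crop": "wheat",   "ppf": 800, "co2_ppm": 1200, "yield_g_m2_d": 11.0},
]

def best(entries, crop, max_ppf):
    """Highest-yield entry for a crop within a lighting constraint."""
    candidates = [e for e in entries
                  if e["crop"] == crop and e["ppf"] <= max_ppf]
    return max(candidates, key=lambda e: e["yield_g_m2_d"])

assert best(entries, "lettuce", 600)["ppf"] == 500
```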

  11. Materials Selection. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1991

    1991-01-01

    This learning activity develops algorithms to ensure that the process of selecting materials is well defined and sound. These procedures require the use of many databases to provide the designer with information such as physical, mechanical, and chemical properties of the materials under consideration. A design brief, student quiz, and five…

  12. Parallel Processable Cryptographic Methods with Unbounded Practical Security.

    ERIC Educational Resources Information Center

    Rothstein, Jerome

    Addressing the problem of protecting confidential information and data stored in computer databases from access by unauthorized parties, this paper details coding schemes which present such astronomical work factors to potential code breakers that security breaches are hopeless in any practical sense. Two procedures which can be used to encode for…

  13. The Tropical Biominer Project: mining old sources for new drugs.

    PubMed

    Artiguenave, François; Lins, André; Maciel, Wesley Dias; Junior, Antonio Celso Caldeira; Nacif-Coelho, Carla; de Souza Linhares, Maria Margarida Ribeiro; de Oliveira, Guilherme Correa; Barbosa, Luis Humberto Rezende; Lopes, Júlio César Dias; Junior, Claudionor Nunes Coelho

    2005-01-01

    The Tropical Biominer Project is a recent initiative of the Federal University of Minas Gerais (UFMG) and the Oswaldo Cruz Foundation, with the participation of the Biominas Foundation (Belo Horizonte, Minas Gerais, Brazil) and the start-up Homologix. The main objective of the project is to build a new resource for chemogenomics research on chemical compounds, with a strong emphasis on natural molecules. The technologies adopted include searching for information in structured, semi-structured, and non-structured documents (the last two from the web) and data-mining tools for gathering information from different sources. The database supports the development of applications for finding new potential treatments for parasitic infections using virtual screening tools. We present here the midpoint of the project: the conception and implementation of the Tropical Biominer Database. This is a federated database designed to store data from different resources. Connected to the database, a web crawler is able to gather information from distinct, patented web sites and store it after automatic classification using data-mining tools. Finally, we demonstrate the interest of the approach by formulating new hypotheses on specific targets of a natural compound, violacein, using inferences from a virtual screening procedure.

  14. Integrating Data Sources for Process Sustainability ...

    EPA Pesticide Factsheets

    Performing a chemical process sustainability assessment requires significant data about chemicals, process design specifications, and operating conditions. The required information includes the identity of the chemicals used, the quantities of the chemicals within the context of the sustainability assessment, physical properties of these chemicals, equipment inventory, as well as health, environmental, and safety properties of the chemicals. Much of this data is currently available to the process engineer, either from the process design in the chemical process simulation software or online through chemical property and environmental, health, and safety databases. Examples of these databases include the U.S. Environmental Protection Agency's (USEPA's) Aggregated Computational Toxicology Resource (ACToR), the National Institute for Occupational Safety and Health's (NIOSH's) Hazardous Substance Database (HSDB), and the National Institute of Standards and Technology's (NIST's) Chemistry WebBook. This presentation will provide methods and procedures for extracting chemical identity and flow information from process design tools (such as chemical process simulators) and chemical property information from the online databases. The presentation will also demonstrate acquisition and compilation of the data for use in EPA's GREENSCOPE process sustainability analysis tool, and discusses acquisition of data for use in rapid LCI development.
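
    The extraction-and-merge step described above amounts to joining flow data keyed by chemical identity with property records retrieved from the online databases. A hypothetical sketch; the chemicals, field names and values are invented and do not come from any of the named databases:

```python
# Hypothetical sketch: merge simulator stream flows with property records
# on chemical identity to build the sustainability-assessment input.
# All names and numbers are invented for illustration.

simulator_streams = {           # chemical -> flow (kg/h) from the design
    "toluene": 120.0,
    "methanol": 45.0,
}

property_db = {                 # chemical -> invented property record
    "toluene": {"tox_index": 2.3},
    "methanol": {"tox_index": 1.1},
}

# Join on chemical identity; missing property records are left empty
# rather than silently dropped.
assessment_input = {
    chem: {"flow_kg_h": flow, **property_db.get(chem, {})}
    for chem, flow in simulator_streams.items()
}

assert assessment_input["toluene"]["tox_index"] == 2.3
```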

  15. Description of 103 Cases of Hypobaric Sickness from NASA-sponsored Research

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny; Klein, Jill S.; Acock, Keena E.

    2003-01-01

    One hundred and three cases of hypobaric decompression sickness (DCS) are documented, with 6 classified as Type II DCS. The presence and grade of venous gas emboli (VGE) are part of the case descriptions. Cases were diagnosed from 731 exposures in 5 different altitude chambers from 4 different laboratories between the years 1982 and 1999. Research was funded by NASA to develop operational prebreathe (PB) procedures that would permit safe extravehicular activity from the Space Shuttle and International Space Station using an extravehicular mobility unit (spacesuit) operated at 4.3 psia. Both vehicles operate at 14.7 psia with an "air" atmosphere, so a PB procedure is required to reduce nitrogen partial pressure in the tissues to an acceptable level prior to depressurization to 4.3 psia. Thirty-two additional descriptions of symptoms that were not diagnosed as DCS together with VGE information are also included. The information for each case resides in logbooks from 32 different tests. Additional information is stored in the NASA Decompression Sickness Database and the Prebreathe Reduction Protocol Database, both maintained by the Environmental Physiology Laboratory at the Johnson Space Center. Both sources were reviewed to provide the narratives that follow.

  16. Designing a framework of intelligent information processing for dentistry administration data.

    PubMed

    Amiri, N; Matthews, D C; Gao, Q

    2005-07-01

    This study was designed to test a cumulative view of the current data in the clinical database at the Faculty of Dentistry, Dalhousie University. We planned to examine associations among demographic factors and treatments. Three tables were selected from the faculty's database: patient, treatment and procedures. All fields and record numbers in each table were documented. Data were explored using SQL Server and Visual Basic and then cleaned by removing incongruent fields. After transformation, a data warehouse was created and imported into SQL Server Analysis Services to create an OLAP (Online Analytical Processing) cube. The multidimensional model used for access to the data was created using a star schema. Treatment count was the measurement variable. Five dimensions (date, postal code, gender, age group and treatment categories) were used to detect associations. Another data warehouse of 8 tables (international tooth code # 1-8) was created and imported into SAS Enterprise Miner to complete the data mining. Association nodes were used for each table to find sequential associations, and the minimum criteria were set to 2% of cases. The findings of this study confirmed most assumptions of treatment planning procedures. There were some small unexpected patterns of clinical interest. Further development is recommended to create predictive models. Recent improvements in information technology offer numerous advantages for converting raw data from faculty databases into information and subsequently into knowledge. This knowledge can be used by decision makers, managers, and researchers to answer clinical questions, effect policy change and determine future research needs.
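    The star schema described above can be sketched in miniature: a fact table of treatment counts joined to dimension tables and rolled up along one dimension, as an OLAP cube does. The table and column names below are illustrative, not the Dalhousie schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_age_group (age_group_id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE dim_treatment (treatment_id INTEGER PRIMARY KEY, category TEXT);
-- Fact table: one row per (dimension combination), measure = treatment_count.
CREATE TABLE fact_treatment (
    age_group_id INTEGER REFERENCES dim_age_group,
    treatment_id INTEGER REFERENCES dim_treatment,
    treatment_count INTEGER);
""")
con.executemany("INSERT INTO dim_age_group VALUES (?, ?)",
                [(1, "0-18"), (2, "19-64")])
con.executemany("INSERT INTO dim_treatment VALUES (?, ?)",
                [(1, "restorative"), (2, "periodontal")])
con.executemany("INSERT INTO fact_treatment VALUES (?, ?, ?)",
                [(1, 1, 40), (2, 1, 25), (2, 2, 10)])

# Roll the cube up along the age-group dimension.
rows = con.execute("""
    SELECT g.label, SUM(f.treatment_count)
    FROM fact_treatment f JOIN dim_age_group g USING (age_group_id)
    GROUP BY g.label ORDER BY g.label
""").fetchall()
print(rows)  # [('0-18', 40), ('19-64', 35)]
```

    Adding the remaining dimensions (date, postal code, gender, treatment category) is a matter of further dimension tables and GROUP BY combinations; an OLAP engine precomputes these aggregations.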

  17. Development of a relational database to capture and merge clinical history with the quantitative results of radionuclide renography.

    PubMed

    Folks, Russell D; Savir-Baruch, Bital; Garcia, Ernest V; Verdes, Liudmila; Taylor, Andrew T

    2012-12-01

    Our objective was to design and implement a clinical history database capable of linking to our database of quantitative results from (99m)Tc-mercaptoacetyltriglycine (MAG3) renal scans and of exporting a data summary for physicians or our software decision support system. For database development, we used a commercial program; additional software was developed in Interactive Data Language. MAG3 studies were processed using an in-house enhancement of a commercial program. The relational database has 3 parts: a list of all renal scans (the RENAL database), a set of patients with quantitative processing results (the Q2 database), and a subset of patients from Q2 containing clinical data manually transcribed from the hospital information system (the CLINICAL database). To test interobserver variability, a second physician transcriber reviewed 50 randomly selected patients in the hospital information system and tabulated 2 clinical data items: hydronephrosis and the presence of a current stent. The CLINICAL database was developed in stages and contains 342 fields comprising demographic information, clinical history, and findings from up to 11 radiologic procedures. A scripted algorithm is used to reliably match records present in both Q2 and CLINICAL. An Interactive Data Language program then combines data from the 2 databases into an XML (extensible markup language) file for use by the decision support system. A text file is constructed and saved for review by physicians. RENAL contains 2,222 records, Q2 contains 456 records, and CLINICAL contains 152 records. The interobserver variability testing found a 95% match between the 2 observers for the presence or absence of a ureteral stent (κ = 0.52), a 75% match for hydronephrosis based on narrative summaries of hospitalizations and clinical visits (κ = 0.41), and a 92% match for hydronephrosis based on the imaging report (κ = 0.84).
We have developed a relational database system to integrate the quantitative results of MAG3 image processing with clinical records obtained from the hospital information system. We also have developed a methodology for formatting clinical history for review by physicians and export to a decision support system. We identified several pitfalls, including the fact that important textual information extracted from the hospital information system by knowledgeable transcribers can show substantial interobserver variation, particularly when record retrieval is based on the narrative clinical records.
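    The match-and-export step described above can be sketched as a join on a shared patient key followed by XML serialization. The field names and patient identifiers below are hypothetical, not the authors' schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical records: Q2 holds quantitative MAG3 results,
# CLINICAL holds transcribed clinical history, keyed by patient id.
q2 = {"P001": {"relative_uptake": 48.0}, "P002": {"relative_uptake": 51.5}}
clinical = {"P001": {"hydronephrosis": "yes", "stent": "no"}}

root = ET.Element("renal_studies")
for pid in sorted(q2.keys() & clinical.keys()):  # only records in both
    rec = ET.SubElement(root, "patient", id=pid)
    for source in (q2[pid], clinical[pid]):
        for field, value in source.items():
            ET.SubElement(rec, field).text = str(value)

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

    Only patients present in both databases are exported, mirroring the matched Q2/CLINICAL subset that feeds the decision support system.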

  18. The National Institute on Disability, Independent Living, and Rehabilitation Research Burn Model System: Twenty Years of Contributions to Clinical Service and Research.

    PubMed

    Goverman, Jeremy; Mathews, Katie; Holavanahalli, Radha K; Vardanian, Andrew; Herndon, David N; Meyer, Walter J; Kowalske, Karen; Fauerbach, Jim; Gibran, Nicole S; Carrougher, Gretchen J; Amtmann, Dagmar; Schneider, Jeffrey C; Ryan, Colleen M

    The National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) established the Burn Model System (BMS) in 1993 to improve the lives of burn survivors. The BMS program includes 1) a multicenter longitudinal database describing the functional and psychosocial recovery of burn survivors; 2) site-specific burn-related research; and 3) a knowledge dissemination component directed toward patients and providers. Output from each BMS component was analyzed. Database structure, content, and access procedures are described. Publications using the database were identified and categorized to illustrate the content area of the work. Unused areas of the database were identified for future study. Publications related to site-specific projects were cataloged. The most frequently cited articles are summarized to illustrate the scope of these projects. The effectiveness of dissemination activities was measured by quantifying website hits and information downloads. There were 25 NIDILRR-supported publications that utilized the database. These articles covered topics related to psychological outcomes, functional outcomes, community reintegration, and burn demographics. There were 172 site-specific publications; highly cited articles demonstrate a wide scope of study. For information dissemination, visits to the BMS website quadrupled between 2013 and 2014, with 124,063 downloads of educational material in 2014. The NIDILRR BMS program has played a major role in defining the course of burn recovery, and making that information accessible to the general public. The accumulating information in the database serves as a rich resource to the burn community for future study. The BMS is a model for collaborative research that is multidisciplinary and outcome focused.

  19. Mammography status using patient self-reports and computerized radiology database.

    PubMed

    Thompson, B; Taylor, V; Goldberg, H; Mullen, M

    1999-10-01

    This study sought to compare the self-reported mammography use of low-income women attending an inner-city public hospital with a computerized hospital database for tracking mammography use. A survey of all age-eligible women using the hospital's internal medicine clinic was conducted, and responses were matched with the radiology database. We examined concordance between the two data sources. Concordance between self-report and the database was high (82%) for "ever had a mammogram at the hospital," but low (58%) when comparing the self-reported last mammogram with the information contained in the database. Disagreements existed between self-reports and the database. Because we included a picture of a woman having a mammogram to ensure that respondents knew exactly what the procedure entailed, it is possible that the women's responses were accurate, raising the concern that the discrepancies lie in the database. Physicians and staff must ensure that they understand the full history of a woman's experience with mammography before recommending for or against the procedure.
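    The concordance figures above are simple percent agreement between two binary data sources. A minimal sketch, with illustrative data rather than the study's records:

```python
def percent_agreement(self_report, database):
    """Share of women whose self-report matches the database record."""
    matches = sum(a == b for a, b in zip(self_report, database))
    return 100.0 * matches / len(self_report)

# Hypothetical answers to "ever had a mammogram at the hospital".
self_report = ["yes", "yes", "no", "yes", "no"]
database    = ["yes", "no",  "no", "yes", "no"]
print(percent_agreement(self_report, database))  # 80.0
```

    Percent agreement alone ignores chance agreement; studies comparing raters or sources often also report Cohen's kappa for that reason.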

  20. Procedural volume, cost, and reimbursement of outpatient incisional hernia repair: implications for payers and providers.

    PubMed

    Song, Chao; Liu, Emelline; Tackett, Scott; Shi, Lizheng; Marcus, Daniel

    2017-06-01

    This analysis aimed to evaluate trends in the volumes and costs of primary elective incisional ventral hernia repairs (IVHRs) and investigated the potential cost implications of moving procedures from inpatient to outpatient settings. A time series study was conducted using the Premier Hospital Perspective® Database (Premier database) for elective IVHRs identified by International Classification of Diseases, Ninth Revision, Clinical Modification codes. IVHR procedure volumes and costs were determined for inpatient, outpatient, minimally invasive surgery (MIS), and open procedures from January 2008 to June 2015. Initial visit costs were inflation-adjusted to 2015 US dollars. Median costs were used to analyze variation by site of care and payer. Quantile regression on median costs was conducted in covariate-adjusted models. The cost impact of potential outpatient migration was estimated from a Medicare perspective. During the study period, the trend for outpatient procedures in obese and non-obese populations increased. Inpatient and outpatient MIS procedures experienced steady growth in adoption over their open counterparts. Overall median costs increased over time, and inpatient costs were often double outpatient costs. An economic model demonstrated that a 5% shift of inpatient procedures to outpatient MIS procedures could produce a cost surplus of approximately US$1.8 million for providers, or a cost-saving impact of US$1.7 million from the Centers for Medicare & Medicaid Services perspective. The study was limited by the information in the Premier database: no data were available for IVHR cases performed in free-standing ambulatory surgery centers or federal healthcare facilities. The volumes and costs of outpatient IVHRs and MIS procedures increased from January 2008 to June 2015. Median costs were significantly higher for inpatients than outpatients, and the difference was particularly evident for obese patients.
A substantial cost difference between inpatient and outpatient MIS cases indicated a financial benefit for shifting from inpatient to outpatient MIS.
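    The kind of shift model described above is back-of-the-envelope arithmetic: move a fraction of inpatient cases to the cheaper outpatient setting and multiply by the cost difference. All numbers below are illustrative placeholders, not the study's figures.

```python
# Hypothetical inputs to a site-of-care shift model.
inpatient_volume = 20_000        # annual inpatient IVHR procedures
shift_fraction = 0.05            # 5% moved to outpatient MIS
inpatient_cost = 12_000.0        # median payer cost per inpatient case
outpatient_mis_cost = 6_000.0    # median payer cost per outpatient MIS case

shifted = inpatient_volume * shift_fraction
payer_savings = shifted * (inpatient_cost - outpatient_mis_cost)
print(f"{shifted:.0f} cases shifted, payer saves ${payer_savings:,.0f}")
```

    The same arithmetic yields a provider-side surplus when outpatient reimbursement exceeds outpatient cost, which is how a shift can benefit both payer and provider.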

  1. [Role and management of cancer clinical database in the application of gastric cancer precision medicine].

    PubMed

    Li, Yuanfang; Zhou, Zhiwei

    2016-02-01

    Precision medicine is a new medical concept and model based on personalized medicine, rapid progress in genome sequencing technology, and the cross-application of bioinformatics and big-data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analyses of its characteristics, pathogenesis, and other core issues. A cancer clinical database is important for promoting the development of precision medicine, so close attention must be paid to its construction and management. The clinical database of the Sun Yat-sen University Cancer Center comprises a medical record database, a blood specimen bank, a tissue bank, and a medical imaging database. To ensure good data quality, the design and management of the database follow a strict standard operating procedure (SOP) model. Data sharing is an important way to advance medical research in the era of medical big data, and the construction and management of clinical databases must be continually strengthened and innovated.

  2. Using expert knowledge for test linking.

    PubMed

    Bolsinova, Maria; Hoijtink, Herbert; Vermeulen, Jorine Adinda; Béguin, Anton

    2017-12-01

    Linking and equating procedures are used to make the results of different test forms comparable. In cases where no assumption of randomly equivalent groups can be made, some form of linking design is used. In practice, the amount of data available to link the two tests is often very limited for logistical and security reasons, which affects the precision of linking procedures. This study proposes to enhance the quality of linking based on sparse data by using Bayesian methods that combine the information in the linking data with background information captured in informative prior distributions. We propose two methods for eliciting prior knowledge about the difference in difficulty of two tests from subject-matter experts and explain how these results can be used in the specification of priors. To illustrate the proposed methods and evaluate the quality of linking with and without informative priors, an empirical example of linking primary school mathematics tests is presented. The results suggest that informative priors can increase the precision of linking without decreasing its accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
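    The core idea above, combining an expert-elicited prior with a sparse-data estimate, can be sketched as a conjugate normal update on the difficulty difference between the two test forms. The numbers are illustrative; the paper's actual elicitation and IRT machinery are richer than this.

```python
def posterior(prior_mean, prior_sd, data_mean, data_sd):
    """Precision-weighted (conjugate normal) update for the
    difference in difficulty between two test forms."""
    w_prior = 1.0 / prior_sd**2   # precision of the expert prior
    w_data = 1.0 / data_sd**2     # precision of the linking-data estimate
    mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
    sd = (w_prior + w_data) ** -0.5
    return mean, sd

# Expert prior says form B is ~0.5 logits harder (sd 0.2);
# the sparse linking data suggest 0.3 (sd 0.4).
m, s = posterior(prior_mean=0.5, prior_sd=0.2, data_mean=0.3, data_sd=0.4)
print(round(m, 3), round(s, 3))  # 0.46 0.179
```

    Because the prior is more precise than the sparse data, the posterior sits closer to the expert value, and its standard deviation is smaller than either input, which is the precision gain the study reports.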

  3. Artificial intelligence techniques for modeling database user behavior

    NASA Technical Reports Server (NTRS)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering user access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization, and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  4. More than 95% completeness of reported procedures in the population-based Dutch Arthroplasty Register

    PubMed Central

    van Steenbergen, Liza N; Spooren, Anneke; van Rooden, Stephanie M; van Oosterhout, Frank J; Morrenhof, Jan W; Nelissen, Rob G H H

    2015-01-01

    Background and purpose: A complete and correct national arthroplasty register is indispensable for the quality of arthroplasty outcome studies. We evaluated the coverage, completeness, and validity of the Dutch Arthroplasty Register (LROI) for hip and knee arthroplasty. Patients and methods: The LROI is a nationwide population-based registry with information on joint arthroplasties in the Netherlands. Completeness of entered procedures was validated in 2 ways: (1) by comparison with the number of reimbursements for arthroplasty surgeries (Vektis database), and (2) by comparison with data from hospital information systems (HISs). The validity was examined by conducting checks on missing or incorrectly coded values in the LROI. Results: The LROI contains over 300,000 hip and knee arthroplasties performed since 2007. Coverage of all Dutch hospitals (n = 100) was reached in 2012. Completeness of registered procedures was 98% for hip arthroplasty and 96% for knee arthroplasty in 2012, based on Vektis data. Based on comparison with data from the HISs, completeness of registered procedures was 97% for primary total hip arthroplasty and 96% for primary knee arthroplasty in 2013. Completeness of revision arthroplasty was 88% for hips and 90% for knees in 2013. The proportion of missing or incorrectly coded values of variables was generally less than 0.5%, except for encrypted personal identity numbers (17% of which were missing) and ASA scores (10% of which were missing). Interpretation: The LROI now contains over 300,000 hip and knee arthroplasty procedures, with coverage of all hospitals. It has a good level of completeness (i.e. more than 95% for primary hip and knee arthroplasty procedures in 2012 and 2013) and the database has high validity. PMID:25758646

  5. GALT protein database, a bioinformatics resource for the management and analysis of structural features of a galactosemia-related protein and its mutants.

    PubMed

    d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna

    2009-06-01

    We describe the GALT-Prot database and its related web-based application, developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), which is involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the results of analyses of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes the structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analyzed with several bioinformatics programs in order to investigate the effect of the mutations. The web-based interface allows querying of the database, and several links are also provided in order to guarantee a high degree of integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can easily be adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.

  6. A DBMS architecture for global change research

    NASA Astrophysics Data System (ADS)

    Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.

    1993-08-01

    The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend database technology with an intrinsic class of operators, which is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographical information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.

  7. The National Landslide Database of Great Britain: Acquisition, communication and the role of social media

    NASA Astrophysics Data System (ADS)

    Pennington, Catherine; Freeborough, Katy; Dashwood, Claire; Dijkstra, Tom; Lawrie, Kenneth

    2015-11-01

    The British Geological Survey (BGS) is the national geological agency for Great Britain that provides geoscientific information to government, other institutions and the public. The National Landslide Database has been developed by the BGS and is the focus for national geohazard research for landslides in Great Britain. The history and structure of the geospatial database and associated Geographical Information System (GIS) are explained, along with the future developments of the database and its applications. The database is the most extensive source of information on landslides in Great Britain with over 17,000 records of landslide events to date, each documented as fully as possible for inland, coastal and artificial slopes. Data are gathered through a range of procedures, including: incorporation of other databases; automated trawling of current and historical scientific literature and media reports; new field- and desk-based mapping technologies with digital data capture, and using citizen science through social media and other online resources. This information is invaluable for directing the investigation, prevention and mitigation of areas of unstable ground in accordance with Government planning policy guidelines. The national landslide susceptibility map (GeoSure) and a national landslide domains map currently under development, as well as regional mapping campaigns, rely heavily on the information contained within the landslide database. Assessing susceptibility to landsliding requires knowledge of the distribution of failures, an understanding of causative factors, their spatial distribution and likely impacts, whilst understanding the frequency and types of landsliding present is integral to modelling how rainfall will influence the stability of a region. 
Communication of landslide data through the Natural Hazard Partnership (NHP) and Hazard Impact Model contributes to national hazard mitigation and disaster risk reduction with respect to weather and climate. Daily reports of landslide potential are published by BGS through the NHP partnership and data collected for the National Landslide Database are used widely for the creation of these assessments. The National Landslide Database is freely available via an online GIS and is used by a variety of stakeholders for research purposes.

  8. Metadata (MD)

    Treesearch

    Robert E. Keane

    2006-01-01

    The Metadata (MD) table in the FIREMON database is used to record any information about the sampling strategy or data collected using the FIREMON sampling procedures. The MD method records metadata pertaining to a group of FIREMON plots, such as all plots in a specific FIREMON project. FIREMON plots are linked to metadata using a unique metadata identifier that is...

  9. Student Assessment Pilot Project: Maricopa County Community College District JCEP Project #JZ-309, 1985-86.

    ERIC Educational Resources Information Center

    Abbott, Judith A.

    A summary is provided of the 1985-86 goals and accomplishments of Maricopa County Community College District's (MCCCD's) Student Assessment Pilot Project, which was conducted to develop a districtwide database of quantitative and qualitative information upon which decisions about policies, programs, and procedures related to assessment,…

  10. Turning Access into a web-enabled secure information system for clinical trials.

    PubMed

    Chen, Dongquan; Chen, Wei-Bang; Soong, Mayhue; Soong, Seng-Jaw; Orthner, Helmuth F

    2009-08-01

    Organizations with limited resources need to conduct clinical studies in a cost-effective but secure way. Clinical data residing in various individual databases need to be easily accessed and secured. Although widely available, digital certificates, encryption, and secure web servers have not been implemented as widely, partly because of a lack of understanding of needs and concerns over issues such as cost and difficulty of implementation. The objective of this study was to test the possibility of centralizing various databases and to demonstrate ways of offering an alternative to large-scale, comprehensive, and costly commercial products, especially for simple phase I and II trials, with reasonable convenience and security. We report a working procedure for transforming a standalone Access database into a secure web-based information system. For data collection and reporting purposes, we centralized several individual databases and developed and tested a secure web server using self-issued digital certificates. The system lacks audit trails, and the cost of development and maintenance may hinder its wide application. Nevertheless, clinical trial databases scattered across various departments of an institution can be centralized into a web-enabled secure information system. Limitations such as the lack of a calendar and audit trail can be partially addressed with additional programming. The centralized web system may provide an alternative to a comprehensive clinical trial management system.

  11. A pilot GIS database of active faults of Mt. Etna (Sicily): A tool for integrated hazard evaluation

    NASA Astrophysics Data System (ADS)

    Barreca, Giovanni; Bonforte, Alessandro; Neri, Marco

    2013-02-01

    A pilot GIS-based system has been implemented for the assessment and analysis of hazard related to active faults affecting the eastern and southern flanks of Mt. Etna. The system structure was developed in the ArcGis® environment and consists of different thematic datasets that include spatially-referenced arc-features and an associated database. Arc-type features, georeferenced into the WGS84 Ellipsoid UTM zone 33 Projection, represent the five main fault systems that develop in the analysed region. The backbone of the GIS-based system is the large amount of information collected from the literature and then stored and properly geocoded in a digital database. This consists of thirty-five alpha-numeric fields which include all fault parameters available from the literature, such as location, kinematics, landform, and slip rate. Although the system has been implemented according to the most common procedures used by GIS developers, the architecture and content of the database represent a pilot backbone for the digital storage of fault parameters, providing a powerful tool for modelling hazard related to the active tectonics of Mt. Etna. The database collects, organises and shares all currently available scientific information about the active faults of the volcano. Furthermore, thanks to the strong effort spent on defining the fields of the database, the structure proposed in this paper is open to the collection of further data coming from future improvements in the knowledge of the fault systems. By layering additional user-specific geographic information and managing the proposed database (topological querying), a great diversity of hazard and vulnerability maps can be produced by the user. This is a proposal for a backbone of a comprehensive geographical database of fault systems, universally applicable to other sites.

  12. Face-off: A new identification procedure for child eyewitnesses.

    PubMed

    Price, Heather L; Fitzgerald, Ryan J

    2016-09-01

    In 2 experiments, we introduce a new "face-off" procedure for child eyewitness identifications. The new procedure, which is premised on reducing the stimulus set size, was compared with the showup and simultaneous procedures in Experiment 1 and with modified versions of the simultaneous and elimination procedures in Experiment 2. Several benefits of the face-off procedure were observed: it was significantly more diagnostic than the showup procedure; it led to significantly more correct rejections of target-absent lineups than the simultaneous procedures in both experiments, and it led to greater information gain than the modified elimination and simultaneous procedures. The face-off procedure led to consistently more conservative responding than the simultaneous procedures in both experiments. Given the commonly cited concern that children are too lenient in their decision criteria for identification tasks, the face-off procedure may offer a concrete technique to reduce children's high choosing rates. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Monitoring and evaluating surgical care: defining perioperative mortality rate and standardising data collection.

    PubMed

    Palmqvist, Charlotta L; Ariyaratnam, Roshan; Watters, David A; Laing, Grant L; Stupart, Douglas; Hider, Phil; Ng-Kamstra, Joshua S; Wilson, Leona; Clarke, Damian L; Hagander, Lars; Greenberg, Sarah L M; Gruen, Russell L

    2015-04-27

    Case volume per 100 000 population and perioperative mortality rate (POMR) are key indicators to monitor and strengthen surgical services. However, comparisons of POMR have been restricted by absence of standardised approaches to when it is measured, the ideal denominator, need for risk adjustment, and whether data are available. We aimed to address these issues and recommend a minimum dataset by analysing four large mixed surgical datasets, two from well-resourced settings with sophisticated electronic patient information systems and two from resource-limited settings where clinicians maintain locally developed databases. We obtained data from the New Zealand (NZ) National Minimum Dataset, the Geelong Hospital patient management system in Australia, and purpose-built surgical databases in Pietermaritzburg, South Africa (PMZ) and Port Moresby, Papua New Guinea (PNG). Information was sought on inclusion and exclusion criteria, coding criteria, and completeness of patient identifiers, admission, procedure, discharge and death dates, operation details, urgency of admission, and American Society of Anesthesiologists (ASA) score. Date-related errors were defined as missing dates and impossible discrepancies. For every site, we then calculated the POMR, the effect of admission episodes or procedures as denominator, and the difference between in-hospital POMR and 30-day POMR. To determine the need for risk adjustment, we used univariate and multivariate logistic regression to assess the effect on relative POMR for each site of age, admission urgency, ASA score, and procedure type. 1 365 773 patient admissions involving 1 514 242 procedures were included, among which 8655 deaths were recorded within 30 days. Database inclusion and exclusion criteria differed substantially. NZ and Geelong records had less than 0·1% date-related errors and greater than 99·9% completeness. PMZ databases had 99·9% or greater completeness of all data except date-related items (94·0%). 
PNG had 99·9% or greater completeness for date of birth or age and admission date and operative procedure, but 80-83% completeness of patient identifiers and date related items. Coding of procedures was not standardised, and only NZ recorded ASA status and complete post-discharge mortality. In-hospital POMR range was 0·38% in NZ to 3·44% in PMZ, and in NZ it underestimated 30-day POMR by roughly a third. The difference in POMR by procedures instead of admission episodes as denominator ranged from 10% to 70%. Age older than 65 years and emergency admission had large independent effects on POMR, but relatively little effect in multivariate analysis on the relative odds of in-hospital death at each site. Hospitals can collect and provide data for case volume and POMR without sophisticated electronic information systems. POMR should initially be defined by in-hospital mortality because post-discharge deaths are not usually recorded, and with procedures as denominator because details allowing linkage of several operations within one patient's admission are not always present. Although age and admission urgency are independently associated with POMR, and ASA and case mix were not included, risk adjustment might not be essential because the relative odds between sites persisted. Standardisation of inclusion criteria and definitions is needed, as is attention to accuracy and completeness of dates of procedures, discharge and death. A one-page, paper-based form, or alternatively a simple electronic data collection form, containing a minimum dataset commenced in the operating theatre could facilitate this process.
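    The denominator question discussed above is plain arithmetic: POMR computed per admission episode differs from POMR computed per procedure whenever patients undergo several operations in one admission. The numbers below are illustrative, not the study's figures.

```python
# Hypothetical counts for one site over one year.
deaths_30d = 100
admissions = 10_000
procedures = 11_500   # some admissions involve more than one operation

pomr_by_admission = 100.0 * deaths_30d / admissions   # % per episode
pomr_by_procedure = 100.0 * deaths_30d / procedures   # % per procedure
relative_diff = (pomr_by_admission - pomr_by_procedure) / pomr_by_admission

print(f"{pomr_by_admission:.2f}% vs {pomr_by_procedure:.2f}% "
      f"({relative_diff:.0%} lower with procedures as denominator)")
```

    Because the procedure count is the larger denominator, the per-procedure POMR is always the smaller figure; the study found this gap ranged from 10% to 70% across its four sites.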

  14. Web based information on clinical toxicology for the United Kingdom: uptake and utilization of TOXBASE in 2000

    PubMed Central

    Bateman, D Nicholas; Good, Alison M; Kelly, Catherine A; Laing, William J

    2002-01-01

    Aims: To examine the use and uptake of TOXBASE, an Internet database for point-of-care provision of poisons information in the United Kingdom, during its first calendar year of web-based access. Methods: Interrogation of the database software to examine: use by different types of user and geographical origin; profile of ingredient and product access; time of access to the system; profile of access to other parts of the database. Results: Registered users of the system increased in the first full year of operation (1224 new users) and usage of the system increased to 111 410 sessions with 190 223 product monograph accesses in 2000. Major users were hospitals, in particular accident and emergency departments. NHS Direct, a public access information service staffed by nurses, also made increasing use of the system. Usage per head of population was highest in Northern Ireland and Scotland, and lowest in southern England. Ingredients accessed most frequently were similar in all four countries of the UK. Times of use of the system reflect clinical activity, with hospitals making many accesses during night-time hours. The most popular parts of the database other than poisons information were those dealing with childhood poisoning, information on decontamination procedures, teratology information and slang terms for drugs of abuse. Conclusions: This Internet system has been widely used in its first full year of operation. The provision of clinically relevant, up-to-date information at the point of delivery of patient care is now possible using this approach. It has wide implications for the provision of other types of therapeutic information in clinical areas. Web-based technology represents an opportunity for clinical pharmacologists to provide therapeutic information for clinical colleagues at the bedside. PMID:12100219

  15. Surgical procedures and their cost estimates among women with newly diagnosed endometriosis: a US database study.

    PubMed

    Fuldeore, M; Chwalisz, K; Marx, S; Wu, N; Boulanger, L; Ma, L; Lamothe, K

    2011-01-01

    This descriptive study assessed the rate and costs of surgical procedures among newly diagnosed endometriosis patients. Utilizing the Medstat MarketScan database, commercially insured women aged 18-45 with endometriosis newly diagnosed during 2006-2007 were identified. Each endometriosis patient was matched to four women without endometriosis (population controls) based on age and region of residence. Surgical procedures received during the 12 months post-diagnosis were assessed. Costs of surgical procedures were the amount paid by the insurance companies. This study identified 15,891 women with newly diagnosed endometriosis and 63,564 population controls. More than 65% of endometriosis patients received an endometriosis-related surgical procedure within 1 year of the initial diagnosis. The most common procedure was therapeutic laparoscopy (31.6%), followed by abdominal hysterectomy (22.1%) and vaginal hysterectomy (6.8%). Prevalence and type of surgery performed varied by patient age, including a hysterectomy rate of approximately 16% in patients younger than 35 and 37% among patients aged 35-45 years. Average costs ranged from $4,289 (standard deviation [SD]: $3,313) for diagnostic laparoscopy to $11,397 (SD: $8,749) for abdominal hysterectomy. Diagnosis of endometriosis cannot be validated against medical records, and information on the severity of endometriosis-related symptoms is not available in administrative claims data. Over 65% of patients had endometriosis-related surgical procedures, including hysterectomy, within 1 year of being diagnosed with endometriosis. The cost of surgical procedures related to endometriosis places a significant financial burden on the healthcare system.

  16. Evaluation of algorithms to identify incident cancer cases by using French health administrative databases.

    PubMed

    Ajrouche, Aya; Estellat, Candice; De Rycke, Yann; Tubach, Florence

    2017-08-01

    Administrative databases are increasingly being used in cancer observational studies. Identifying incident cancer in these databases is crucial. This study aimed to develop algorithms to estimate cancer incidence by using health administrative databases and to examine the accuracy of the algorithms in terms of national cancer incidence rates estimated from registries. We identified a cohort of 463 033 participants on 1 January 2012 in the Echantillon Généraliste des Bénéficiaires (EGB; a representative sample of the French healthcare insurance system). The EGB contains data on long-term chronic disease (LTD) status, reimbursed outpatient treatments and procedures, and hospitalizations (including discharge diagnoses, and costly medical procedures and drugs). After excluding cases of prevalent cancer, we applied 15 algorithms to estimate the cancer incidence rates separately for men and women in 2012 and compared them to the national cancer incidence rates estimated from French registries by indirect age and sex standardization. The most accurate algorithm for men combined information from LTD status, outpatient anticancer drugs, radiotherapy sessions and primary or related discharge diagnosis of cancer, although it underestimated the cancer incidence (standardized incidence ratio (SIR) 0.85 [0.80-0.90]). For women, the best algorithm used the same definition as the algorithm for men but restricted hospital discharge to only primary or related diagnosis with an additional inpatient procedure or drug reimbursement related to cancer, and gave comparable estimates to those from registries (SIR 1.00 [0.94-1.06]). The algorithms proposed could be used for cancer incidence monitoring and for future etiological cancer studies involving French healthcare databases. Copyright © 2017 John Wiley & Sons, Ltd.
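The comparison to registry rates rests on indirect age-sex standardisation: SIR = observed / expected, where the expected count applies reference (registry) rates to the cohort's stratum sizes. A minimal sketch, with made-up rates and stratum sizes rather than the study's data:

```python
# Sketch of the indirect age-sex standardisation used above to compare
# algorithm-derived incidence with registry rates: SIR = observed / expected,
# where "expected" applies registry (reference) incidence rates to the cohort's
# stratum sizes. All numbers below are made up for illustration.
ref_rates = {("M", "50-59"): 0.004, ("M", "60-69"): 0.009}    # cases per person-year
cohort_n  = {("M", "50-59"): 20_000, ("M", "60-69"): 10_000}  # persons at risk

observed = 150  # incident cases identified by the algorithm in the cohort
expected = sum(ref_rates[stratum] * n for stratum, n in cohort_n.items())
sir = observed / expected

print(f"expected={expected:.0f}, SIR={sir:.2f}")  # SIR < 1 => underestimation
```

An SIR below 1, as for the men's algorithm (0.85), means the algorithm finds fewer cases than the registry rates predict for that population.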

  17. [Explore method about post-marketing safety re-evaluation of Chinese patent medicines based on HIS database in real world].

    PubMed

    Yang, Wei; Xie, Yanming; Zhuang, Yan

    2011-10-01

    There are many kinds of traditional Chinese patent medicines used in clinical practice, and many adverse events have been reported by clinical professionals. The safety of Chinese patent medicines is among the foremost concerns of patients and physicians. At present, many researchers inside and outside China have studied methods for the post-marketing safety re-evaluation of Chinese medicines. However, studies that use data from hospital information systems (HIS) to re-evaluate the post-marketing safety of Chinese patent medicines are rare. The real-world HIS database is a good resource, rich in information, for researching medicine safety. This study planned to analyze HIS data selected from ten top general hospitals in Beijing, forming a large real-world HIS database with a capacity of 1 000 000 cases in total after a series of data cleaning and integration procedures. This study could be a new project that uses such information to evaluate traditional Chinese medicine safety based on an HIS database. A clear protocol has been completed as the first step of the whole study, as follows. First, separate each of the Chinese patent medicines in the total HIS database into its own database. Second, select related laboratory test indexes as the safety-evaluation outcomes, such as routine blood, urine and stool tests, conventional coagulation, liver function, kidney function and other tests. Third, use data-mining methods to analyze those selected safety outcomes that changed abnormally before and after use of the Chinese patent medicine. Finally, judge the relationship between those abnormal changes and the Chinese patent medicine. We hope this method can provide useful information to researchers interested in the safety evaluation of traditional Chinese medicine.

  18. New tools for discovery from old databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.P.

    1990-05-01

    Very large quantities of information have been accumulated as a result of petroleum exploration and the practice of petroleum geology. New and more powerful methods to build and analyze databases have been developed. The new tools must be tested, and, as quickly as possible, combined with traditional methods to the full advantage of currently limited funds in the search for new and extended hydrocarbon reserves. A recommended combined sequence is (1) database validating, (2) category separating, (3) machine learning, (4) graphic modeling, (5) database filtering, and (6) regression for predicting. To illustrate this procedure, a database from the Railroad Commission of Texas has been analyzed. Clusters of information have been identified to prevent apples and oranges problems from obscuring the conclusions. Artificial intelligence has checked the database for potentially invalid entries and has identified rules governing the relationship between factors, which can be numeric or nonnumeric (words), or both. Graphic 3-dimensional modeling has clarified relationships. Database filtering has physically separated the integral parts of the database, which can then be run through the sequence again, increasing the precision. Finally, regressions have been run on separated clusters giving equations, which can be used with confidence in making predictions. Advances in computer systems encourage the learning of much more from past records, and reduce the danger of prejudiced decisions. Soon there will be giant strides beyond current capabilities to the advantage of those who are ready for them.

  19. GIEMS-D3: A new long-term, dynamical, high-spatial resolution inundation extent dataset at global scale

    NASA Astrophysics Data System (ADS)

    Aires, Filipe; Miolane, Léo; Prigent, Catherine; Pham Duc, Binh; Papa, Fabrice; Fluet-Chouinard, Etienne; Lehner, Bernhard

    2017-04-01

    The Global Inundation Extent from Multi-Satellites (GIEMS) provides multi-year monthly variations of the global surface water extent at 25 km × 25 km resolution. It is derived from multiple satellite observations. Its spatial resolution is usually compatible with climate model outputs and with global land surface model grids but is clearly not adequate for local applications that require the characterization of small individual water bodies. There is today a strong demand for high-resolution inundation extent datasets, for a large variety of applications such as water management, regional hydrological modeling, or the analysis of mosquito-related diseases. A new procedure is introduced to downscale the GIEMS low-spatial-resolution inundation extents to a 3 arc-second (90 m) dataset. The methodology is based on topography and hydrography information from the HydroSHEDS database. A new floodability index is adopted and an innovative smoothing procedure is developed to ensure a smooth transition, in the high-resolution maps, between the low-resolution boxes from GIEMS. Topography information is relevant for natural hydrology environments controlled by elevation, but is more limited in human-modified basins. However, the proposed downscaling approach is compatible with forthcoming fusion with other more pertinent satellite information in these difficult regions. The resulting GIEMS-D3 database is the only high-spatial-resolution inundation database available globally at the monthly time scale over the 1993-2007 period. GIEMS-D3 is assessed by analyzing its spatial and temporal variability, and evaluated by comparisons to other independent satellite observations from visible (Google Earth and Landsat), infrared (MODIS) and active microwave (SAR) sensors.

  20. Environmental Factor(tm) system: RCRA hazardous waste handler information (on cd-rom). Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-04-01

    Environmental Factor(tm) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous wastes generation, management and minimization by companies who are large quantity generators, and (3) Data on the waste management practices of treatment, storage and disposal (TSD) facilities from the EPA Biennial Reporting System which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status and more; (2) View compliance information - dates of evaluation, violation, enforcement and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing and energy recovery; (4) Use owner/operator information and names, titles and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving and exporting. Hotline support is also available for no additional charge.

  1. An automated procedure to identify biomedical articles that contain cancer-associated gene variants.

    PubMed

    McDonald, Ryan; Scott Winters, R; Ankuda, Claire K; Murphy, Joan A; Rogers, Amy E; Pereira, Fernando; Greenblatt, Marc S; White, Peter S

    2006-09-01

    The proliferation of biomedical literature makes it increasingly difficult for researchers to find and manage relevant information. However, identifying research articles containing mutation data, a requisite first step in integrating large and complex mutation data sets, is currently tedious, time-consuming and imprecise. More effective mechanisms for identifying articles containing mutation information would be beneficial both for the curation of mutation databases and for individual researchers. We developed an automated method that uses information extraction, classifier, and relevance ranking techniques to determine the likelihood of MEDLINE abstracts containing information regarding genomic variation data suitable for inclusion in mutation databases. We targeted the CDKN2A (p16) gene and the procedure for document identification currently used by CDKN2A Database curators as a measure of feasibility. A set of abstracts was manually identified from a MEDLINE search as potentially containing specific CDKN2A mutation events. A subset of these abstracts was used as a training set for a maximum entropy classifier to identify text features distinguishing "relevant" from "not relevant" abstracts. Each document was represented as a set of indicative word, word pair, and entity tagger-derived genomic variation features. When applied to a test set of 200 candidate abstracts, the classifier predicted 88 articles as being relevant; of these, 29 of 32 manuscripts in which manual curation found CDKN2A sequence variants were positively predicted. Thus, the set of potentially useful articles that a manual curator would have to review was reduced by 56%, maintaining 91% recall (sensitivity) and more than doubling precision (positive predictive value). 
Subsequent expansion of the training set to 494 articles yielded similar precision and recall rates, and comparison of the original and expanded trials demonstrated that the average precision improved with the larger data set. Our results show that automated systems can effectively identify article subsets relevant to a given task and may prove to be powerful tools for the broader research community. This procedure can be readily adapted to any or all genes, organisms, or sets of documents. Published 2006 Wiley-Liss, Inc.
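The headline evaluation figures in this record follow from four counts reported in the abstract, and the arithmetic can be checked directly:

```python
# Evaluation arithmetic for the abstract-screening classifier described above:
# a test set of 200 candidate abstracts, 32 truly relevant, with the classifier
# flagging 88, of which 29 are truly relevant (all counts from the abstract).
candidates, relevant = 200, 32
predicted_pos, true_pos = 88, 29

recall = true_pos / relevant                     # sensitivity
precision = true_pos / predicted_pos             # positive predictive value
baseline_precision = relevant / candidates       # precision if reviewing everything
workload_reduction = 1 - predicted_pos / candidates

print(f"recall={recall:.0%}, precision={precision:.0%}, "
      f"baseline precision={baseline_precision:.0%}, "
      f"review set reduced by {workload_reduction:.0%}")
```

Precision rises from 16% to 33% (more than doubling), recall stays at 91%, and the set to review shrinks by 56%, matching the reported figures.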

  2. An Information System for European culture collections: the way forward.

    PubMed

    Casaregola, Serge; Vasilenko, Alexander; Romano, Paolo; Robert, Vincent; Ozerskaya, Svetlana; Kopf, Anna; Glöckner, Frank O; Smith, David

    2016-01-01

    Culture collections contain indispensable information about the microorganisms preserved in their repositories, such as taxonomical descriptions, origins, physiological and biochemical characteristics, bibliographic references, etc. However, information currently accessible in databases rarely adheres to common standard protocols. The resultant heterogeneity between culture collections, in terms of both content and format, notably hampers microorganism-based research and development (R&D). The optimized exploitation of these resources thus requires standardized, and simplified, access to the associated information. To this end, and in the interest of supporting R&D in the fields of agriculture, health and biotechnology, a pan-European distributed research infrastructure, MIRRI, including over 40 public culture collections and research institutes from 19 European countries, was established. A prime objective of MIRRI is to unite and provide universal access to the fragmented, and untapped, resources, information and expertise available in European public collections of microorganisms; a key component of which is to develop a dynamic Information System. For the first time, both culture collection curators as well as their users have been consulted and their feedback, concerning the needs and requirements for collection databases and data accessibility, utilised. Users primarily noted that databases were not interoperable, thus rendering a global search of multiple databases impossible. Unreliable or out-of-date and, in particular, non-homogenous, taxonomic information was also considered to be a major obstacle to searching microbial data efficiently. Moreover, complex searches are rarely possible in online databases thus limiting the extent of search queries. 
Curators also consider that overall harmonization, including Standard Operating Procedures, data structure, and software tools, is necessary to facilitate their work and to make high-quality data easily accessible to their users. Clearly, the needs of culture collection curators coincide with those of users on the crucial point of database interoperability. In this regard, and in order to design an appropriate Information System, important aspects on which the culture collection community should focus include: the interoperability of data sets with the ontologies to be used; setting best practice in data management; and the definition of an appropriate data standard.

  3. Measuring the effect of improvement in methodological techniques on data collection in the Gharbiah population-based cancer registry in Egypt: Implications for other Low- and Middle-Income Countries.

    PubMed

    Smith, Brittney L; Ramadan, Mohamed; Corley, Brittany; Hablas, Ahmed; Seifeldein, Ibrahim A; Soliman, Amr S

    2015-12-01

    The purpose of this study was to describe and quantify procedures and methods that maximized the efficiency of the Gharbiah Population-based Cancer Registry (GPCR), the only population-based cancer registry in Egypt. The procedures and measures included a locally-developed software program to translate names from Arabic to English, a new national ID number for demographic and occupational information, and linkage of cancer cases to new electronic mortality records of the Ministry of Health. Data were compiled from the 34,058 cases in the registry for the years 1999-2007. Cases and registry variables about demographic and clinical information were reviewed by year to assess trends associated with each new method or procedure during the study period. The introduction of the name translation software in conjunction with other demographic variables increased the identification of detected duplicates from 23.4% to 78.1%. Use of the national ID increased the proportion of cases with occupation information from 27% to 89%. Records with complete mortality information increased from 18% to 43%. The proportion of cases registered from death certificates only decreased from 9.8% to 4.7%. Overall, the study revealed that introducing and utilizing local and culture-specific methodological changes, software, and electronic non-cancer databases had a significant impact on data quality and completeness. This study may have translational implications for improving the quality of cancer registries in LMICs considering the emerging advances in electronic databases and utilization of health software and computerization of data. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Structural composite panel performance under long-term load

    Treesearch

    Theodore L. Laufenberg

    1988-01-01

    Information on the performance of wood-based structural composite panels under long-term load is currently needed to permit their use in engineered assemblies and systems. A broad assessment of the time-dependent properties of panels is critical for creating databases and models of the creep-rupture phenomenon that lead to reliability-based design procedures. This...

  5. Data-Mining Techniques in Detecting Factors Linked to Academic Achievement

    ERIC Educational Resources Information Center

    Martínez Abad, Fernando; Chaparro Caso López, Alicia A.

    2017-01-01

    In light of the emergence of statistical analysis techniques based on data mining in education sciences, and the potential they offer to detect non-trivial information in large databases, this paper presents a procedure used to detect factors linked to academic achievement in large-scale assessments. The study is based on a non-experimental,…

  6. Freely Accessible Chemical Database Resources of Compounds for in Silico Drug Discovery.

    PubMed

    Yang, JingFang; Wang, Di; Jia, Chenyang; Wang, Mengyao; Hao, GeFei; Yang, GuangFu

    2018-05-07

    In silico drug discovery has proved to be a solidly established component of early drug discovery. However, this task is hampered by limitations in the quantity and quality of compound databases for screening. In order to overcome these obstacles, freely accessible database resources of compounds have bloomed in recent years. Nevertheless, choosing appropriate tools to handle these freely accessible databases is crucial. To the best of our knowledge, this is the first systematic review on this issue. In this review, the advantages and drawbacks of chemical databases were analyzed and summarized, based on six categories of freely accessible chemical databases collected from the literature. Suggestions are provided on how, and under which conditions, the use of these databases is reasonable. Tools and procedures for building 3D structure chemical libraries were also introduced. In summary, we described the freely accessible chemical database resources for in silico drug discovery. In particular, the chemical information available for building chemical databases appears an attractive resource for drug design to alleviate experimental pressure. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  7. User's manual for the national water information system of the U.S. Geological Survey: Ground-water site-inventory system

    USGS Publications Warehouse

    ,

    2004-01-01

    The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying ground-water and sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.

  8. User's Manual for the National Water Information System of the U.S. Geological Survey: Ground-water site-inventory system

    USGS Publications Warehouse

    ,

    2005-01-01

    The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site- Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying groundwater and Sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.

  9. The CRAC cohort model: A computerized low cost registry of interventional cardiology with daily update and long-term follow-up.

    PubMed

    Rangé, G; Chassaing, S; Marcollet, P; Saint-Étienne, C; Dequenne, P; Goralski, M; Bardiére, P; Beverilli, F; Godillon, L; Sabine, B; Laure, C; Gautier, S; Hakim, R; Albert, F; Angoulvant, D; Grammatico-Guillon, L

    2018-05-01

    To assess the reliability and low cost of a computerized interventional cardiology (IC) registry to prospectively and systematically collect high-quality data for all consecutive coronary patients referred for a coronary angiogram and/or coronary angioplasty. Rigorous clinical practice assessment is a key factor to improve prognosis in IC. A prospective and permanent registry could achieve this goal but, presumably, at high cost and low data quality. One multicentric IC registry (CRAC registry), fully integrated into the usual coronary activity report software, started in the Centre-Val de Loire (CVL) French region in 2014. Quality assessment of the CRAC registry was conducted on five IC CathLabs of the CVL region, from January 1st to December 31st 2014. Quality of collected data was evaluated by measuring procedure exhaustivity (compared with data from the hospital information system), data completeness (quality controls) and data consistency (by checking complete medical charts as the gold standard). Cost per procedure (global registry operating cost/number of collected procedures) was also estimated. The CRAC model provided a high quality level, with 98.2% procedure exhaustivity, 99.6% data completeness and 89% data consistency. The operating cost per procedure was €14.70 ($16.51) for data collection and quality control, including ST-segment elevation myocardial infarction (STEMI) preadmission information and one-year follow-up after angioplasty. This integrated computerized IC registry led to the construction of an exhaustive, reliable and low-cost database, including all coronary patients entering participating IC centers in the CVL region. This solution will be developed in other French regions, setting up a national IC database for coronary patients in 2020: France PCI. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
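The three quality indicators and the cost figure are simple ratios. A sketch with assumed totals (the underlying counts are not given in the abstract; these are chosen to reproduce the reported percentages and the €14.70 figure):

```python
# Sketch of the registry quality and cost indicators described above.
# The totals are illustrative assumptions; only the resulting ratios
# (98.2%, 99.6%, €14.70) come from the abstract.
registry_procs, his_procs = 9_820, 10_000      # registry vs hospital info system
fields_filled, fields_total = 99_600, 100_000  # data fields completed
operating_cost_eur, n_procs = 144_354, 9_820   # global operating cost and volume

exhaustivity = registry_procs / his_procs          # procedure exhaustivity
completeness = fields_filled / fields_total        # data completeness
cost_per_procedure = operating_cost_eur / n_procs  # global cost / procedures

print(f"exhaustivity={exhaustivity:.1%}, completeness={completeness:.1%}, "
      f"cost=€{cost_per_procedure:.2f}")
```

Measuring exhaustivity against an independent source (the hospital information system) is what distinguishes it from completeness, which only checks fields within records the registry already holds.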

  10. An Initial Design of ISO 19152:2012 LADM Based Valuation and Taxation Data Model

    NASA Astrophysics Data System (ADS)

    Çağdaş, V.; Kara, A.; van Oosterom, P.; Lemmen, C.; Işıkdağ, Ü.; Kathmann, R.; Stubkjær, E.

    2016-10-01

    A fiscal registry or database is supposed to record geometric, legal, physical, economic, and environmental characteristics in relation to property units, which are subject to immovable property valuation and taxation. Apart from procedural standards, there is no internationally accepted data standard that defines the semantics of fiscal databases. The ISO 19152:2012 Land Administration Domain Model (LADM), as an international land administration standard, focuses on legal requirements, but considers specifications of external information systems, including valuation and taxation databases, to be out of scope. However, it provides a formalism which allows for an extension that responds to fiscal requirements. This paper introduces an initial version of a LADM Fiscal Extension Module for the specification of databases used in immovable property valuation and taxation. The extension module is designed to facilitate all stages of immovable property taxation, namely the identification of properties and taxpayers, assessment of properties through single or mass appraisal procedures, automatic generation of sales statistics, and the management of tax collection, dealing with arrears and appeals. It is expected that the initial version will be refined through further activities held by a possible joint working group under FIG Commission 7 (Cadastre and Land Management) and FIG Commission 9 (Valuation and the Management of Real Estate) in collaboration with other relevant international bodies.

  11. XV-15 Low-Noise Terminal Area Operations Testing

    NASA Technical Reports Server (NTRS)

    Edwards, B. D.

    1998-01-01

    Test procedures related to XV-15 noise tests conducted by NASA-Langley and Bell Helicopter Textron, Inc. are discussed. The tests, which took place during October and November 1995 near Waxahachie, Texas, documented the noise signature of the XV-15 tilt-rotor aircraft at a wide variety of flight conditions. The stated objectives were to (1) provide a comprehensive acoustic database for NASA and U.S. industry, (2) validate noise prediction methodologies, and (3) develop and demonstrate low-noise flight profiles. The test consisted of two distinct phases. Phase 1 provided an acoustic database for validating analytical noise prediction techniques; Phase 2 directly measured noise contour information at a broad range of operating profiles, with emphasis on minimizing 'approach' noise. This report is limited to a documentation of the test procedures, flight conditions, microphone locations, meteorological conditions, and test personnel used in the test. The acoustic results are not included.

  12. Database of Industrial Technological Information in Kanagawa : Networks for Technology Activities

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Shindo, Tadashi

    This system is a database that requires participation by its members, on the premise that all data in it are open. Aiming at free technological cooperation and exchange among industries, it was constructed by Kanagawa Prefecture in collaboration with enterprises located in it. The input data comprise 36 items, such as major product, special and advantageous technology, technology wanted for cooperation, and facility and equipment, which technologically characterize each enterprise. They are expressed in 2,000 characters and written in natural language, including kanji, except for some coded items. The 24 search items are accessed by natural language, so that, in addition to interactive searching procedures including menu-type searches, extensive searching is possible. The information service started in October 1986, covering data from 2,000 enterprises.

  13. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR INSTRUCTIONS FOR THE ADDITION OF INDIVIDUAL CLEANED NON SCANNED DATA BATCHES TO MASTER DATABASES (UA-D-27.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures involved in appending cleaned individual data batches to the master databases. This procedure applies to the Arizona NHEXAS project and the Border study. Keywords: data; database.

    The U.S.-Mexico Border Program is sponsored b...

  14. QA procedures and emissions from nonstandard sources in AQUIS, a PC-based emission inventory and air permit manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, A.E.; Tschanz, J.; Monarch, M.

    1996-05-01

    The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.
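The material-balance treatment mentioned in the abstract can be illustrated with a minimal sketch. The function name, units, and factors below are illustrative assumptions, not AQUIS's actual procedure, and (as the abstract notes) reactive materials such as polyurethane paints would need corrections beyond this simple balance.

```python
# Hedged sketch of a material-balance emission estimate of the kind AQUIS
# automates: mass of pollutant that neither stays in the product nor is
# recovered is assumed emitted. All names and numbers are illustrative.

def material_balance_emissions(gallons_used, density_lb_per_gal,
                               voc_weight_fraction, lb_recovered=0.0):
    """Estimate VOC emissions (lb) from solvent usage by simple mass balance."""
    voc_input = gallons_used * density_lb_per_gal * voc_weight_fraction
    # Emissions cannot be negative even if recovery records overshoot usage.
    return max(voc_input - lb_recovered, 0.0)

# Example: 100 gal of a 7.0 lb/gal coating at 45% VOC content, 50 lb captured
# by a control device.
emissions = material_balance_emissions(100, 7.0, 0.45, lb_recovered=50.0)
```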

  15. Contourlet textural features: improving the diagnosis of solitary pulmonary nodules in two dimensional CT images.

    PubMed

    Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua

    2014-01-01

    To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two dimensional CT images for the diagnosis of lung cancer. A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant images from 252 patients (150 male, 102 female). In addition, nineteen patient information categories, comprising seven demographic parameters and twelve morphological features, were also collected. A contourlet was used to extract fourteen types of textural features. These were then used to establish three support vector machine models. One comprised a database constructed of the nineteen collected patient information categories, another included contourlet textural features, and the third contained both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, Youden index, and F-measure used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Using the database containing both textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93, respectively. These results were higher than those derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83, respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85, respectively). Using SMOTE as a pre-processing procedure, a new balanced database was generated comprising 5,816 benign and 5,815 malignant ROIs, and accuracy reached 0.93. 
Our results indicate that the combined contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer.
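The assessment criteria listed in this abstract all derive from a binary confusion matrix; a minimal sketch (using a hypothetical confusion matrix, not the study's data) is:

```python
# Sensitivity, specificity, accuracy, precision, Youden index, and F-measure
# computed from true/false positive/negative counts. Counts are illustrative.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return standard binary-classification metrics as a dict."""
    sensitivity = tp / (tp + fn)            # true-positive rate (recall)
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)              # positive predictive value
    youden = sensitivity + specificity - 1  # Youden's J statistic
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "accuracy": accuracy,
        "precision": precision,
        "youden": youden,
        "f_measure": f_measure,
    }

# Illustrative confusion matrix (not the study's data):
m = diagnostic_metrics(tp=90, fp=30, tn=70, fn=10)
```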

  16. Genebanks: a comparison of eight proposed international genetic databases.

    PubMed

    Austin, Melissa A; Harding, Sarah; McElroy, Courtney

    2003-01-01

    To identify and compare population-based genetic databases, or "genebanks", that have been proposed in eight international locations between 1998 and 2002. A genebank can be defined as a stored collection of genetic samples in the form of blood or tissue, that can be linked with medical and genealogical or lifestyle information from a specific population, gathered using a process of generalized consent. Genebanks were identified by searching Medline and internet search engines with key words such as "genetic database" and "biobank" and by reviewing literature on previously identified databases such as the deCode project. Collection of genebank characteristics was by an electronic and literature search, augmented by correspondence with informed individuals. The proposed genebanks are located in Iceland, the United Kingdom, Estonia, Latvia, Sweden, Singapore, the Kingdom of Tonga, and Quebec, Canada. Comparisons of the genebanks were based on the following criteria: genebank location and description of purpose, role of government, commercial involvement, consent and confidentiality procedures, opposition to the genebank, and current progress. All of the groups proposing the genebanks plan to search for susceptibility genes for complex diseases while attempting to improve public health and medical care in the region and, in some cases, stimulating the local economy through expansion of the biotechnology sector. While all of the identified plans share these purposes, they differ in many aspects, including funding, subject participation, and organization. The balance of government and commercial involvement in the development of each project varies. Genetic samples and health information will be collected from participants and coded in all of the genebanks, but consent procedures range from presumed consent of the entire eligible population to recruitment of volunteers with informed consent. 
Issues regarding confidentiality and consent have resulted in opposition to some of the more publicized projects. None of the proposed databases are currently operational and at least one project was terminated due to opposition. Ambitious genebank projects have been proposed in numerous countries and provinces. The characteristics of the projects vary, but all intend to map genes for common diseases and hope to improve the health of the populations involved. The impact of these projects on understanding genetic susceptibility to disease will be increasingly apparent if the projects become operational. The ethical, legal, and social implications of the projects should be carefully considered during their development. Copyright 2003 S. Karger AG, Basel

  17. Intraoperative complications in pediatric neurosurgery: review of 1807 cases.

    PubMed

    van Lindert, Erik J; Arts, Sebastian; Blok, Laura M; Hendriks, Mark P; Tielens, Luc; van Bilsen, Martine; Delye, Hans

    2016-09-01

    OBJECTIVE Minimal literature exists on the intraoperative complication rate of pediatric neurosurgical procedures with respect to both surgical and anesthesiological complications. The aim of this study, therefore, was to establish intraoperative complication rates to provide patients and parents with information on which to base their informed consent and to establish a baseline for further targeted improvement of pediatric neurosurgical care. METHODS A clinical complication registration database comprising a consecutive cohort of all pediatric neurosurgical procedures carried out in a general neurosurgical department from January 1, 2004, until July 1, 2012, was analyzed. During the study period, 1807 procedures were performed on patients below the age of 17 years. RESULTS Sixty-four intraoperative complications occurred in 62 patients (3.5% of procedures). Intraoperative mortality was 0.17% (n = 3). Seventy-eight percent of the complications (n = 50) were related to the neurosurgical procedures, whereas 22% (n = 14) were due to anesthesiology. The highest intraoperative complication rates were for cerebrovascular surgery (7.7%) and tumor surgery (7.4%). The most frequently occurring complications were cerebrovascular complications (33%). CONCLUSIONS Intraoperative complications are not exceptional during pediatric neurosurgical procedures. Awareness of these complications is the first step in preventing them.

  18. [Genetic research with stored human tissue: a coding procedure with optimal use of information and protection of privacy].

    PubMed

    Schmidt, M K; van Leeuwen, F E; Klaren, H M; Tollenaar, R A; van 't Veer, L J

    2004-03-20

    To answer research questions concerning the course of disease and the optimal treatment of hereditary breast cancer, genetic typing together with the clinical and tumour characteristics of breast cancer patients are an important source of information. Part of the incidence of breast cancer can be explained by BRCA1 and BRCA2 germline mutations, which with current techniques can be retrospectively analysed in stored, paraffin-embedded tissue samples. In view of the implications of BRCA1- or BRCA2-carrier status for patients and other family members and the lack of clear legal regulations regarding the procedures to be followed when analysis is performed on historical material and no individual informed consent can be asked from the patients, an appropriate procedure for coding such data or rendering it anonymous is of great importance. By using the coding procedure described in this article, it becomes possible to follow and to work out in greater detail the guidelines of the code for 'Proper secondary use of human tissue' of the Federation of Biomedical Scientific Societies and to use these valuable databases again in the future.

  19. HUNT: launch of a full-length cDNA database from the Helix Research Institute.

    PubMed

    Yudate, H T; Suwa, M; Irie, R; Matsui, H; Nishikawa, T; Nakamura, Y; Yamaguchi, D; Peng, Z Z; Yamamoto, T; Nagai, K; Hayashi, K; Otsuki, T; Sugiyama, T; Ota, T; Suzuki, Y; Sugano, S; Isogai, T; Masuho, Y

    2001-01-01

    The Helix Research Institute (HRI) in Japan is releasing 4356 HUman Novel Transcripts and related information in the newly established HUNT database. The institute is a joint research project principally funded by the Japanese Ministry of International Trade and Industry, and the clones were sequenced in the governmental New Energy and Industrial Technology Development Organization (NEDO) Human cDNA Sequencing Project. The HUNT database contains an extensive amount of annotation from advanced analysis and represents an essential bioinformatics contribution towards understanding gene function. The HRI human cDNA clones were obtained from full-length enriched cDNA libraries constructed with the oligo-capping method and have resulted in novel full-length cDNA sequences. A large fraction has little similarity to any protein of known function, and to obtain clues about possible function we have developed original analysis procedures. Any putative function deduced here can be validated or refuted by complementary analysis results. The user can also extract information from specific categories such as PROSITE patterns, PFAM domains, PSORT localization, transmembrane helices, and clones with GENIUS structure assignments. The HUNT database can be accessed at http://www.hri.co.jp/HUNT.

  20. Biological agents database in the armed forces.

    PubMed

    Niemcewicz, Marcin; Kocik, Janusz; Bielecka, Anna; Wierciński, Michał

    2014-10-01

    Rapid detection and identification of the biological agent during both natural and deliberate outbreaks is crucial for implementing appropriate control measures and procedures to mitigate the spread of disease. Determining the pathogen's etiology may not only support epidemiological investigation and the safety of human beings, but also enhance forensic efforts in pathogen tracing, collection of evidence, and correct inference. The article presents the objectives of the Biological Agents Database, which was developed for the Ministry of National Defense of the Republic of Poland under the European Defence Agency framework. The Biological Agents Database is an electronic catalogue of genetic markers of highly dangerous pathogens and biological agents of weapons-of-mass-destruction concern, which provides full identification of biological threats emerging in Poland and in the locations where Polish troops are active. The Biological Agents Database is a supportive tool used for tracing biological agents' origin as well as for rapid identification of the agent causing a disease of unknown etiology. It also provides support in diagnosis, analysis, response, and the exchange of information between the institutions that use it. Therefore, it can be used not only for military purposes, but also in a civilian environment.

  1. Resources for global risk assessment: the International Toxicity Estimates for Risk (ITER) and Risk Information Exchange (RiskIE) databases.

    PubMed

    Wullenweber, Andrea; Kroner, Oliver; Kohrman, Melissa; Maier, Andrew; Dourson, Michael; Rak, Andrew; Wexler, Philip; Tomljanovic, Chuck

    2008-11-15

    The rate of chemical synthesis and use has outpaced the development of risk values and the resolution of risk assessment methodology questions. In addition, available risk values derived by different organizations may vary due to scientific judgments, mission of the organization, or use of more recently published data. Further, each organization derives values for a unique chemical list so it can be challenging to locate data on a given chemical. Two Internet resources are available to address these issues. First, the International Toxicity Estimates for Risk (ITER) database (www.tera.org/iter) provides chronic human health risk assessment data from a variety of organizations worldwide in a side-by-side format, explains differences in risk values derived by different organizations, and links directly to each organization's website for more detailed information. It is also the only database that includes risk information from independent parties whose risk values have undergone independent peer review. Second, the Risk Information Exchange (RiskIE) is a database of in progress chemical risk assessment work, and includes non-chemical information related to human health risk assessment, such as training modules, white papers and risk documents. RiskIE is available at http://www.allianceforrisk.org/RiskIE.htm, and will join ITER on National Library of Medicine's TOXNET (http://toxnet.nlm.nih.gov/). Together, ITER and RiskIE provide risk assessors essential tools for easily identifying and comparing available risk data, for sharing in progress assessments, and for enhancing interaction among risk assessment groups to decrease duplication of effort and to harmonize risk assessment procedures across organizations.

  2. Icelandic. Decision of the Supreme Court on the protection of privacy with regard to the processing of Health Sector Databases. Attorney at Law vs The State of Iceland.

    PubMed

    2004-01-01

    Ms R appealed for a decision by the Court to overturn the refusal of the Medical Director of Health to her request that health information in medical records pertaining to her deceased father should not be entered into the Health Sector Database. Furthermore, she called for recognition of her right to prohibit the transfer of such information into a database. Article 8 of Act No 139/1998 on a Health Sector Database provides for the right of patients to refuse permission, by notification to the Medical Director of Health, for information concerning them to be entered into the Health Sector Database. The Court concluded that R could not exercise this right acting as a substitute of her deceased father, but it was recognised that she might, on the basis of her right to protection of privacy, have an interest in preventing the transfer of health data concerning her father into the database, as information could be inferred from such data relating to the hereditary characteristics of her father which might also apply to herself. It was revealed in the course of proceedings that extensive information concerning people's health is entered into medical records, e.g. medical treatment, life-style and social conditions, employment and family circumstances, together with a detailed identification of the person that the information concerns. It was recognised as unequivocal that the provisions of Paragraph 1 of Article 71 of the Constitution applied to such information and guaranteed to every person the right to protection of privacy in this respect. The Court concluded that the opinion of the District Court, which, inter alia, was based on the opinion of an assessor, to the effect that so-called one-way encryption could be carried out in such a secure manner that it would be virtually impossible to read the encrypted data, had not been refuted. It was noted, however, that Act No. 
139/1998 provides no details as to what information from medical records is required to be encrypted in this manner prior to transfer into the database or whether certain information contained in the medical records will not be transferred into the database. The documents of the case indicate that only the identity number of the patient would be encrypted in the database, and that names, both those of the patient and his relatives, as well as the precise address, would be omitted. It is obvious that information on these items is not the only information appearing in the medical records which could, in certain cases, unequivocally identify the person concerned. Act No. 139/1998 also provides for authorisation to the licensee to process information from the medical records transferred into the database. The Act stipulates that certain specified public entities must approve procedures and process methods and monitor all queries and processing of information in the database. However, there is no clear definition of what type of queries will be directed to the database or in what form the replies to such queries will appear. The Court concluded that even though individual provisions of Act No 139/1998 repeatedly stipulate that health information in the Health Sector Database should be non-personally identifiable, it is far from adequately ensured under statutory law that this stated objective will be achieved. In light of the obligations imposed on the legislature by Paragraph 1 of Article 71 of the Constitution, the Court concluded that various forms of monitoring of the creation and operation of the database are no substitute in this respect without foundation in definite statutory norms. In light of these circumstances, and taking into account the principles of Icelandic law concerning the confidentiality and protection of privacy, the Court concluded that the right of R in this matter must be recognised, and her court claims, therefore, upheld.

  3. Group Connotation in the Analysis of the Images in Motion Used in Television Departments

    ERIC Educational Resources Information Center

    Caldera-Serrano, Jorge

    2010-01-01

    This paper describes a procedure to manage connotations, so that they may be identified in the document databases of television channels. The system is based on Ranganathan's facets, as this is the best tool to describe actions--i.e. the units analysed in television--which enable the identification of the connoted information by introducing or…

  4. Relational databases: a transparent framework for encouraging biology students to think informatically.

    PubMed

    Rice, Michael; Gladstone, William; Weir, Michael

    2004-01-01

    We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a custom algorithm using Drosophila cDNA transcripts and genomic DNA and supports a set of procedures for analyzing splice-site sequence space. A generic Web interface permits the execution of the procedures with a variety of parameter settings and also supports custom structured query language queries. Moreover, new analytical procedures can be added by updating special metatables in the database without altering the Web interface. The database provides a powerful setting for students to develop informatic thinking skills.
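The metatable idea described above, in which new analytical procedures are added as data rather than as interface changes, can be sketched with SQLite. The schema, table names, and stored query below are illustrative assumptions, not the Wesleyan database's actual design:

```python
# Hedged sketch: analytical procedures stored as rows in a metatable, so a new
# analysis can be registered by inserting a row instead of changing the code.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE splice_sites "
            "(gene TEXT, position INTEGER, site_type TEXT, score REAL)")
cur.executemany("INSERT INTO splice_sites VALUES (?, ?, ?, ?)", [
    ("dpp", 1200, "donor", 0.91),     # hypothetical example rows
    ("dpp", 1950, "acceptor", 0.85),
    ("en",  400,  "donor", 0.77),
])

# Metatable: each row names a procedure and stores the SQL that implements it.
cur.execute("CREATE TABLE procedures (name TEXT PRIMARY KEY, sql_text TEXT)")
cur.execute("INSERT INTO procedures VALUES (?, ?)",
            ("mean_donor_score",
             "SELECT AVG(score) FROM splice_sites WHERE site_type = 'donor'"))

def run_procedure(name):
    """Look up a stored analysis by name and execute it."""
    (sql_text,) = cur.execute(
        "SELECT sql_text FROM procedures WHERE name = ?", (name,)).fetchone()
    return cur.execute(sql_text).fetchone()[0]

result = run_procedure("mean_donor_score")  # mean of 0.91 and 0.77
```

A web front end built over such a metatable never needs to change when a new procedure row is inserted, which is the transparency the abstract highlights.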

  5. Relational Databases: A Transparent Framework for Encouraging Biology Students To Think Informatically

    PubMed Central

    2004-01-01

    We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a custom algorithm using Drosophila cDNA transcripts and genomic DNA and supports a set of procedures for analyzing splice-site sequence space. A generic Web interface permits the execution of the procedures with a variety of parameter settings and also supports custom structured query language queries. Moreover, new analytical procedures can be added by updating special metatables in the database without altering the Web interface. The database provides a powerful setting for students to develop informatic thinking skills. PMID:15592597

  6. Creating of Central Geospatial Database of the Slovak Republic and Procedures of its Revision

    NASA Astrophysics Data System (ADS)

    Miškolci, M.; Šafář, V.; Šrámková, R.

    2016-06-01

    The article describes the creation of an initial three-dimensional geodatabase, from planning and design through the determination of technological and manufacturing processes to the practical use of the Central Geospatial Database (CGD; the official name in Slovak is Centrálna Priestorová Databáza, CPD), and briefly describes the procedures for its revision. CGD ensures proper collection, processing, storing, transfer, and display of digital geospatial information. CGD is used by the Ministry of Defense (MoD) for defense and crisis-management tasks and by the Integrated Rescue System. For military personnel, CGD runs on the MoD intranet; for users outside the MoD it is transformed into ZbGIS (the Primary Geodatabase of the Slovak Republic) and runs on a public web site. CGD is a global set of geospatial information: a vector computer model that completely covers the entire territory of Slovakia. The seamless CGD is created by digitizing the real world using photogrammetric stereoscopic methods and measurements of object properties. The basic vector model of CGD (from photogrammetric processing) is then taken out to the field for inspection and additional gathering of object properties across the whole mapping area. Finally, real-world objects are spatially modeled as entities of a three-dimensional database. CGD gives the opportunity to know the territory completely in all three spatial dimensions. Every entity in CGD has its time of collection recorded, which allows users to assess the timeliness of the information. CGD can be utilized for geographical analysis, geo-referencing, and cartographic purposes, as well as various special-purpose mapping, and has the ambition to cover the needs of not only the MoD but to become a reference model for the national geographical infrastructure.

  7. EPA Facility Registry Service (FRS): TRI

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Toxic Release Inventory (TRI) System. TRI is a publicly available EPA database reported annually by certain covered industry groups, as well as federal facilities. It contains information about more than 650 toxic chemicals that are being used, manufactured, treated, transported, or released into the environment, and includes information about waste management and pollution prevention activities. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to TRI facilities once the TRI data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.

  8. Idea and implementation studies of populating TOPO250 component with the data from TOPO10 - generalization of geographic information in the BDG database. (Polish Title: Koncepcja i studium implementacji procesu zasilania komponentu TOPO250 danymi TOPO10 - generalizacja informacji geograficznej w bazie danych BDG )

    NASA Astrophysics Data System (ADS)

    Olszewski, R.; Pillich-Kolipińska, A.; Fiedukowicz, A.

    2013-12-01

    Implementation of the INSPIRE Directive in Poland requires not only legal transposition but also the development of a number of technological solutions. One such task, associated with the creation of the Spatial Information Infrastructure in Poland, is developing a complex model of the georeference database. Significant funding for the GBDOT project enables development of the national basic topographical database as a multiresolution database (MRDB). Effective implementation of this type of database requires developing procedures for the generalization of geographic information (generalization of the digital landscape model, DLM), which, treating the TOPO10 component as the only source for the creation of the TOPO250 component, will keep conceptual and classification consistency between those database elements. Carrying out this task requires implementing the system concept prepared previously for the Head Office of Geodesy and Cartography. Such a system will execute the generalization process using constraint-based modeling and keep topological relationships between objects as well as between object classes. Full implementation of the designed generalization system requires running comprehensive tests to calibrate it and to parameterize the generalization procedures (related to the character of the generalized area). Parameterization of this process will determine the criteria for selecting specific objects, the simplification algorithms, and the order of operations. Tests using generalization parameters differentiated according to the character of the area have now become the priority issue. Parameters are delivered to the system as XML files, which, with the help of a dedicated tool, are generated from spreadsheet (XLS) files filled in by the user. Using XLS files makes entering and modifying the parameters easier. 
Among the other elements defined by the external parametric files there are: criteria of object selection, metric parameters of generalization algorithms (e.g. simplification or aggregation) and the operations' sequence. Testing on the trial areas of diverse character will allow developing the rules of generalization process' realization, its parameterization with the proposed tool within the multiresolution reference database. The authors have attempted to develop a generalization process' parameterization for a number of different trial areas. The generalization of the results will contribute to the development of a holistic system of generalized reference data stored in the national geodetic and cartographic resources.

  9. An automated database case definition for serious bleeding related to oral anticoagulant use.

    PubMed

    Cunningham, Andrew; Stein, C Michael; Chung, Cecilia P; Daugherty, James R; Smalley, Walter E; Ray, Wayne A

    2011-06-01

    Bleeding complications are a serious adverse effect of medications that prevent abnormal blood clotting. To facilitate epidemiologic investigations of bleeding complications, we developed and validated an automated database case definition for bleeding-related hospitalizations. The case definition utilized information from an in-progress retrospective cohort study of warfarin-related bleeding in Tennessee Medicaid enrollees 30 years of age or older. It identified inpatient stays during the study period of January 1990 to December 2005 with diagnoses and/or procedures that indicated a current episode of bleeding. The definition was validated by medical record review for a sample of 236 hospitalizations. We reviewed 186 hospitalizations that had medical records with sufficient information for adjudication. Of these, 165 (89%, 95%CI: 83-92%) were clinically confirmed bleeding-related hospitalizations. An additional 19 hospitalizations (10%, 7-15%) were adjudicated as possibly bleeding-related. Of the 165 clinically confirmed bleeding-related hospitalizations, the automated database and clinical definitions had concordant anatomical sites (gastrointestinal, cerebral, genitourinary, other) for 163 (99%, 96-100%). For those hospitalizations with sufficient information to distinguish between upper/lower gastrointestinal bleeding, the concordance was 89% (76-96%) for upper gastrointestinal sites and 91% (77-97%) for lower gastrointestinal sites. A case definition for bleeding-related hospitalizations suitable for automated databases had a positive predictive value of between 89% and 99% and could distinguish specific bleeding sites. Copyright © 2011 John Wiley & Sons, Ltd.
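The reported positive predictive value and its confidence interval can be checked from the abstract's counts (165 confirmed of 186 adjudicated hospitalizations). The sketch below uses the Wilson score interval, one common choice; the paper's exact interval method is not stated in the abstract:

```python
# Positive predictive value with a Wilson score 95% confidence interval,
# reproducing the abstract's headline figures (PPV ~89%, 95% CI 83-92%).
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

ppv = 165 / 186                      # clinically confirmed / adjudicated
lo, hi = wilson_interval(165, 186)   # close to the reported 83-92% interval
```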

  10. The composite load spectra project

    NASA Technical Reports Server (NTRS)

    Newell, J. F.; Ho, H.; Kurth, R. E.

    1990-01-01

    Probabilistic methods and generic load models capable of simulating the load spectra that are induced in space propulsion system components are being developed. Four engine component types (the transfer ducts, the turbine blades, the liquid oxygen posts and the turbopump oxidizer discharge duct) were selected as representative hardware examples. The composite load spectra that simulate the probabilistic loads for these components are typically used as the input loads for a probabilistic structural analysis. The knowledge-based system approach used for the composite load spectra project provides an ideal environment for incremental development. The intelligent database paradigm employed in developing the expert system provides a smooth coupling between the numerical processing and the symbolic (information) processing. Large volumes of engine load information and engineering data are stored in database format and managed by a database management system. Numerical procedures for probabilistic load simulation and database management functions are controlled by rule modules. Rules were hard-wired as decision trees into rule modules to perform process control tasks. There are modules to retrieve load information and models. There are modules to select loads and models to carry out quick load calculations or make an input file for full duty-cycle time dependent load simulation. The composite load spectra load expert system implemented today is capable of performing intelligent rocket engine load spectra simulation. Further development of the expert system will provide tutorial capability for users to learn from it.

  11. Getting meaningful informed consent from older adults: a structured literature review of empirical research.

    PubMed

    Sugarman, J; McCrory, D C; Hubal, R C

    1998-04-01

    To perform a structured literature review of the published empirical research on informed consent with older adults in order to make recommendations to improve the informed consent process and to highlight areas needing further examination. Relevant literature was identified by searching electronic databases (AGELINE, BIOETHICSLINE, CancerLit, Ethics Index, Health, LegalTrac, MEDLINE, PAIS International, PsycInfo, and Sociofile). Studies were included if they were reports of primary research data about informed consent and, if patients or other subjects were used, older subjects were included in the sample. Data related to the aspect of informed consent under study (recruitment, decision-making capacity, voluntariness, disclosure of information, understanding of information, consent forms, authorization, and policies and procedures) were abstracted and entered into a specially designed database. Characterization of the population, age of subjects, setting, whether informed consent was being studied in the context of research or treatment, study design, the nature of outcome or dependent variables, independent variables (e.g., experimental conditions in a randomized controlled trial or patient/subject characteristics in a nonrandomized comparison), and results according to the aspect of informed consent under study. A total of 99 articles met all the inclusion criteria and posed 289 unique research questions covering a wide range of aspects of informed consent: recruitment (60); decision making capacity (21); voluntariness (6); disclosure (30); understanding (139); consent forms (7); authorization (11); policies (13); and other (2). In the secondary analyses of numerous studies, diminished understanding of informed consent information was associated with older age and fewer years of education. Older age was also sometimes associated with decreased participation in research. 
Studies of disclosure of informed consent information suggest strategies to improve understanding and include a variety of novel formats (e.g., simplified, storybook, video) and procedures (e.g., use of health educators, quizzing subjects, multiple disclosure sessions). A systematic review of the published literature on informed consent reveals evidence for impaired understanding of informed consent information in older subjects and those with less formal education. Effective strategies to improve the understanding of informed consent information should be considered when designing materials, forms, policies, and procedures for obtaining informed consent. Other than empirical research that has investigated disclosure and understanding of informed consent information, little systematic research has examined other aspects of the informed consent process. This deficit should be rectified to ensure that the rights and interests of patients and of human subjects who participate in research are adequately protected.

  12. Assessment of tissue allograft safety monitoring with administrative healthcare databases: a pilot project using Medicare data.

    PubMed

    Dhakal, Sanjaya; Burwen, Dale R; Polakowski, Laura L; Zinderman, Craig E; Wise, Robert P

    2014-03-01

    Assess whether Medicare data are useful for monitoring tissue allograft safety and utilization. We used health care claims (billing) data from 2007 for 35 million fee-for-service Medicare beneficiaries, a predominantly elderly population. Using search terms for transplant-related procedures, we generated lists of ICD-9-CM and CPT® codes and assessed the frequency of selected allograft procedures. Step 1 used inpatient data and ICD-9-CM procedure codes. Step 2 added non-institutional provider (e.g., physician) claims, outpatient institutional claims, and CPT codes. We assembled preliminary lists of diagnosis codes for infections after selected allograft procedures. Many ICD-9-CM codes were ambiguous as to whether the procedure involved an allograft. Among 1.3 million persons with a procedure ascertained using the list of ICD-9-CM codes, only 1,886 claims clearly involved an allograft. CPT codes enabled better ascertainment of some allograft procedures (over 17,000 persons had corneal transplants and over 2,700 had allograft skin transplants). For spinal fusion procedures, CPT codes improved specificity for allografts; of nearly 100,000 patients with ICD-9-CM codes for spinal fusions, more than 34,000 had CPT codes indicating allograft use. Monitoring infrequent events (infections) after infrequent exposures (tissue allografts) requires large study populations. A strength of the large Medicare databases is the substantial number of certain allograft procedures. Limitations include lack of clinical detail and donor information. Medicare data can potentially augment passive reporting systems and may be useful for monitoring tissue allograft safety and utilization where codes clearly identify allograft use and coding algorithms can effectively screen for infections.
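
    The two-step code screen described in this record, in which ICD-9-CM procedure codes alone are often ambiguous about allograft use and CPT codes serve as a second, more specific filter, can be sketched as below. The code values are placeholders for illustration, not the study's actual code lists.

```python
# Assumed, illustrative code lists (not the study's lists).
AMBIGUOUS_ICD9 = {"81.62", "81.63"}   # e.g. spinal fusion, graft type unclear
ALLOGRAFT_CPT = {"20931"}             # e.g. structural allograft add-on code

def classify_claim(icd9_codes: set, cpt_codes: set) -> str:
    """Step 1: ICD-9-CM flags a candidate procedure; step 2: CPT confirms allograft."""
    if not (icd9_codes & AMBIGUOUS_ICD9):
        return "no transplant-related procedure"
    if cpt_codes & ALLOGRAFT_CPT:
        return "allograft"
    return "graft type ambiguous"

claims = [
    ({"81.62"}, {"20931"}),   # fusion with allograft CPT code
    ({"81.62"}, set()),       # fusion, graft type not resolvable
    ({"47.09"}, set()),       # unrelated procedure
]
print([classify_claim(i, c) for i, c in claims])
```

    The sketch mirrors the study's finding: the ICD-9-CM list alone places many claims in the ambiguous middle bucket, and only the CPT pass resolves allograft use.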

  13. Educational Data Mining Application for Estimating Students Performance in Weka Environment

    NASA Astrophysics Data System (ADS)

    Gowri, G. Shiyamala; Thulasiram, Ramasamy; Amit Baburao, Mahindra

    2017-11-01

    Educational data mining (EDM) is a multi-disciplinary research area that combines artificial intelligence, statistical modeling, and data mining, applied to data generated by educational institutions. EDM uses computational approaches to interpret educational data in order to answer educational research questions. For an education system to stand out among those of other nations, its framework must be substantially redesigned, and data mining techniques can extract the hidden patterns and data in its information repositories. To summarize student performance from student credentials, we examine the application of data mining in academics. The Apriori procedure is applied to a student database to classify records into broad categories, and the K-means procedure is applied to the same data to cluster records into groups. The Apriori algorithm mines association rules, extracting similar patterns and their associations across sets of records drawn from academic information repositories. The parameters used in this study give more weight to psychological traits than to academic features. Information mining frameworks make undesirable student conduct clearly visible, so the algorithms can effectively profile students in any educational environment. The ultimate objective of the study is to flag whether a student is prone to violence.
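
    The Apriori step named in this record, finding attribute combinations that co-occur above a support threshold, can be sketched on toy student records. The records, attribute names, and the 0.5 support threshold are illustrative assumptions; a full Apriori run would also extend frequent pairs to larger itemsets and derive rules.

```python
from itertools import combinations
from collections import Counter

# Toy student records as sets of categorical attributes (assumed data).
records = [
    {"low_attendance", "high_stress", "fail"},
    {"low_attendance", "high_stress", "fail"},
    {"high_attendance", "low_stress", "pass"},
    {"low_attendance", "fail"},
]

# First Apriori pass: frequent attribute pairs with support >= 0.5.
pair_counts = Counter(
    pair for r in records for pair in combinations(sorted(r), 2)
)
frequent_pairs = {p for p, c in pair_counts.items() if c / len(records) >= 0.5}
print(sorted(frequent_pairs))
```

    On this toy data the pass surfaces the co-occurrence of low attendance, high stress, and failure, which is the kind of pattern the study mines before clustering the same records with K-means.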

  14. British Society of Interventional Radiology Iliac Artery Angioplasty-Stent Registry III

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uberoi, Raman, E-mail: raman.uberoi@orh.nhs.uk; Milburn, Simon; Moss, Jon

    2009-09-15

    The objective of this study was to audit current practice in iliac artery intervention in the United Kingdom. In 2001 the British Society of Interventional Radiology Iliac Artery Angioplasty-Stent (BIAS) III registry provided the first national database for iliac intervention. It recommended that data collection needed to continue in order to facilitate the dissemination of comparative data to individual units. BIAS III was designed to continue this work and has a simplified data set with an online submission form. Interventionalists were invited to complete a 3-page tick sheet for all iliac angioplasties and stents. Questions covered risk factors, procedural data, and outcome. Data for 2233 patients were submitted from 37 institutions over a 43-month period. Consultants performed 80% of the procedures, 62% of which were for claudication. Fifty-four percent of lesions were treated with stents and 25% of patients underwent bilateral intervention, resulting in a residual stenosis of <50% in 98%. Ninety-seven percent of procedures had no limb complication and there was a 98% inpatient survival rate. In conclusion, these figures provide an essential benchmark for both audit and patient information. National databases need to be expanded across the range of interventional procedures, and their collection made simple and, preferably, online.

  15. Development and application of a database of food ingredient fraud and economically motivated adulteration from 1980 to 2010.

    PubMed

    Moore, Jeffrey C; Spink, John; Lipp, Markus

    2012-04-01

    Food ingredient fraud and economically motivated adulteration are emerging risks, but a comprehensive compilation of information about known problematic ingredients and detection methods does not currently exist. The objectives of this research were to collect such information from publicly available articles in scholarly journals and general media, organize it into a database, and review and analyze the data to identify trends. The result is a database, to be published in the US Pharmacopeial Convention's Food Chemicals Codex, 8th edition, that includes 1305 records, 1000 of which have analytical methods, collected from 677 references. Olive oil, milk, honey, and saffron were the most common targets for adulteration reported in scholarly journals, and potentially harmful issues identified include spices diluted with lead chromate and lead tetraoxide, substitution of Chinese star anise with toxic Japanese star anise, and melamine adulteration of high protein content foods. High-performance liquid chromatography and infrared spectroscopy were the most common analytical detection procedures, and chemometrics data analysis was used in a large number of reports. Future expansion of this database will include additional publicly available articles published before 1980 and in other languages, as well as data outside the public domain. The authors recommend in-depth analyses of individual incidents. This report describes the development and application of a database of food ingredient fraud issues from publicly available references. The database provides baseline information and data useful to governments, agencies, and individual companies assessing the risks of specific products produced in specific regions as well as products distributed and sold in other regions. In addition, the report describes current analytical technologies for detecting food fraud and identifies trends and developments. 
© 2012 US Pharmacopeia. Journal of Food Science © 2012 Institute of Food Technologists®

  16. A cloud and radiation model-based algorithm for rainfall retrieval from SSM/I multispectral microwave measurements

    NASA Technical Reports Server (NTRS)

    Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.

    1992-01-01

    A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
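
    The hybrid scheme in this record amounts to a database lookup: a cloud model feeds a radiative transfer model to produce simulated (brightness temperature, rain rate) pairs, and retrieval finds the simulated profile closest to an observed multichannel brightness-temperature vector. The sketch below assumes a tiny synthetic database; the numbers are illustrations, not SSM/I values.

```python
# Assumed cloud-radiation model database: channel TBs (K) -> rain rate (mm/h).
database = [
    ((260.0, 255.0, 250.0), 0.0),    # clear-sky profile
    ((245.0, 235.0, 220.0), 5.0),    # light rain profile
    ((230.0, 215.0, 195.0), 15.0),   # heavy rain profile
]

def retrieve(observed):
    """Return the rain rate of the database profile nearest to the observation."""
    def dist2(tb):
        return sum((a - b) ** 2 for a, b in zip(tb, observed))
    _, rate = min(database, key=lambda entry: dist2(entry[0]))
    return rate

print(retrieve((244.0, 236.0, 221.0)))  # closest to the light-rain profile
```

    The actual algorithm builds an initial-guess composite profile statistically rather than taking a single nearest neighbour, but the database-matching idea is the same.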

  17. Requests for post-registration studies (PRS), patients follow-up in actual practice: Changes in the role of databases.

    PubMed

    Berdaï, Driss; Thomas-Delecourt, Florence; Szwarcensztein, Karine; d'Andon, Anne; Collignon, Cécile; Comet, Denis; Déal, Cécile; Dervaux, Benoît; Gaudin, Anne-Françoise; Lamarque-Garnier, Véronique; Lechat, Philippe; Marque, Sébastien; Maugendre, Philippe; Méchin, Hubert; Moore, Nicholas; Nachbaur, Gaëlle; Robain, Mathieu; Roussel, Christophe; Tanti, André; Thiessard, Frantz

    2018-02-01

    Early market access of health products is associated with a larger number of requests for information by the health authorities. Compared with these expectations, the growing expansion of health databases represents an opportunity for responding to questions raised by the authorities. The computerised nature of the health system provides numerous sources of data, and first and foremost medical/administrative databases such as the French National Inter-Scheme Health Insurance Information System (SNIIRAM) database. These databases, although developed for other purposes, have already been used for many years with regard to post-registration studies (PRS). The use thereof will continue to increase with the recent creation of the French National Health Data System (SNDS [2016 health system reform law]). At the same time, other databases are available in France, offering an illustration of "product use under actual practice conditions" by patients and health professionals (cohorts, specific registries, data warehouses, etc.). Based on a preliminary analysis of requests for PRS, approximately two-thirds appeared to have found at least a partial response in existing databases. Using these databases has a number of disadvantages, but also numerous advantages, which are listed. In order to facilitate access and optimise their use, it seemed important to draw up recommendations aiming to facilitate these developments and guarantee the conditions for their technical validity. The recommendations drawn up notably include the need for measures aiming to promote the visibility of research conducted on databases in the field of PRS. Moreover, it seemed worthwhile to promote the interoperability of health data warehouses, to make it possible to match information originating from field studies with information originating from databases, and to develop and share algorithms aiming to identify criteria of interest (proxies). 
Methodological documents, such as the French National Authority for Health (HAS) recommendations on "Les études post-inscription sur les technologies de santé (médicaments, dispositifs médicaux et actes). Principes et méthodes" [Post-registration studies on health technologies (medicinal products, medical devices and procedures). Principles and methods] should be updated to incorporate these developments. Copyright © 2018 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.

  18. Sources and performance criteria of uncertainty of reference measurement procedures.

    PubMed

    Mosca, Andrea; Paleari, Renata

    2018-05-29

    This article focuses on the currently available Reference Measurement Procedures (RMPs) for the determination of various analytes in laboratory medicine, and on the tools available to evaluate their performance in the laboratories that use them. A brief review of the RMPs was performed by investigating the Joint Committee for Traceability in Laboratory Medicine (JCTLM) database. To evaluate their performance, we examined the organization of three international ring trials: those regularly run by the IFCC External Quality Assessment Scheme for Reference Laboratories in Laboratory Medicine (RELA), by the Centers for Disease Control and Prevention (CDC) cholesterol network, and by the IFCC Network for HbA1c. Several RMPs are available through the JCTLM database, but the best way to collect information about the RMPs and their uncertainties is to consult the listed reference measurement service providers (RMS). This part of the database, and the requirements for being listed in it, are very helpful for assessing the expanded measurement uncertainty (MU) and the performance of RMPs in general. Worldwide, 17 RMS are listed in the database, and for most measurands more than one RMS is able to run the relevant RMPs, with similar expanded uncertainties. For α-amylase, for example, four providers offer their services with MU between 1.6 and 3.3%; in other cases (such as total cholesterol) the MU may span a broader range, from 0.02 to 3.6%. With regard to performance evaluation, the approaches are often heterogeneous, and it is difficult to compare the performance of laboratories running the same RMP for the same measurand if they are involved in more than one EQAS. The reference measurement services were created to help laboratory professionals and manufacturers implement correct metrological traceability, and the JCTLM database is the definitive source for all the important information needed to this end. 
Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
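
    An expanded relative uncertainty like the 1.6-3.3% quoted above for α-amylase providers is typically obtained by combining relative standard-uncertainty components in quadrature and applying a coverage factor k = 2 (approximately 95% coverage), per the usual GUM approach. The component values below are illustrative assumptions, not any provider's actual budget.

```python
import math

def expanded_relative_uncertainty(components_pct, k=2.0):
    """Combine relative standard uncertainties (in %) in quadrature, expand by k."""
    combined = math.sqrt(sum(u ** 2 for u in components_pct))
    return k * combined

# Assumed components: repeatability, calibration, volumetric (all in %).
u_rel = expanded_relative_uncertainty([0.8, 0.6, 0.5])
print(f"MU = {u_rel:.2f}%")
```

    With these assumed components the expanded MU comes out near 2.2%, inside the 1.6-3.3% range the record reports across providers.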

  19. PropBase Query Layer: a single portal to UK subsurface physical property databases

    NASA Astrophysics Data System (ADS)

    Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham

    2013-04-01

    Until recently, the delivery of geological information to industry and the public was achieved by geological mapping. Pervasively available computers now mean that 3D geological models can deliver realistic representations of the geometric location of geological units, represented as shells or volumes. The next phase of this process is to populate these with physical property data that describe subsurface heterogeneity and its associated uncertainty. Achieving this requires the capture and serving of physical, hydrological and other property information from diverse sources. The British Geological Survey (BGS) holds large volumes of subsurface property data, derived both from its own research data collection and from other, often commercially derived, data sources. These data can be voxelated into the models to demonstrate property variation within the subsurface geometry. All property data held by BGS have for many years been stored in relational databases to ensure their long-term continuity. However, these databases have, by necessity, complex structures: each contains positional reference data and model information, as well as metadata such as sample identification information and attributes that define the source and processing. Whilst this metadata is critical to assessing the analyses, it also complicates the study of variability in the property of interest and requires multiple queries across related datasets, making extraction of physical properties from these databases difficult. The PropBase Query Layer was therefore created to allow simplified aggregation and extraction of all related data, and to present complex data in simple, mostly denormalized tables that combine information from multiple databases into a single system. 
    The structure from each relational database is denormalized into a generalised structure, so that each dataset can be viewed together in a common format using a simple interface. Data are re-engineered to facilitate easy loading. The query layer structure comprises tables, procedures, functions, triggers, views and materialised views. The structure contains a main table, PRB_DATA, which holds all of the data with the following attribution:
    • a unique identifier
    • the data source
    • the unique identifier from the parent database, for traceability
    • the 3D location
    • the property type
    • the property value
    • the units
    • necessary qualifiers
    • precision information and an audit trail
    Data sources, property types and units are constrained by dictionaries, a key component of the structure which defines which properties and inheritance hierarchies are to be coded, and which also guides what is extracted from the structure and how. Data types served by the Query Layer include site-investigation-derived geotechnical data, hydrogeology datasets, regional geochemistry, and geophysical logs, as well as lithological and borehole metadata. The size and complexity of the data sets, with multiple parent structures, require a technically robust approach to keep the layer synchronised. This is achieved through Oracle procedures written in PL/SQL containing the logic required to carry out the data manipulation (inserts, updates, deletes) that keeps the layer synchronised with the underlying databases, either as regularly scheduled jobs (e.g. weekly or monthly) or invoked on demand. The PropBase Query Layer's implementation has enabled rapid data discovery, visualisation and interpretation of geological data with greater ease, simplifying the parametrisation of 3D model volumes and facilitating the study of intra-unit heterogeneity.
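
    The denormalization this record describes, flattening rows from separate source databases into one generic PRB_DATA-style table whose property types and units are constrained by a dictionary, can be sketched as follows. Table and column names, the dictionary entries, and the sample rows are assumptions for illustration; the real system uses Oracle PL/SQL rather than sqlite3.

```python
import sqlite3

PROPERTY_DICT = {"porosity": "%", "density": "g/cm3"}  # assumed dictionary

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE prb_data (
    id INTEGER PRIMARY KEY, source TEXT, parent_id TEXT,
    x REAL, y REAL, z REAL, property TEXT, value REAL, units TEXT)""")

def load(source, parent_id, xyz, prop, value):
    """Flatten one source-database row into the generic table, dictionary-checked."""
    if prop not in PROPERTY_DICT:
        raise ValueError(f"property {prop!r} not in dictionary")
    conn.execute(
        "INSERT INTO prb_data (source, parent_id, x, y, z, property, value, units) "
        "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (source, parent_id, *xyz, prop, value, PROPERTY_DICT[prop]))

load("geotech_db", "BH101/3", (451200.0, 278900.0, -12.5), "porosity", 18.4)
load("hydro_db", "W42", (452000.0, 279100.0, -30.0), "density", 2.65)

rows = conn.execute(
    "SELECT source, property, value, units FROM prb_data ORDER BY id").fetchall()
print(rows)
```

    The dictionary check is what keeps property names and units consistent across source databases, so a single query over the flattened table can span geotechnical, hydrogeological, and geochemical data at once.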

  20. Design and implementation of a portal for the medical equipment market: MEDICOM.

    PubMed

    Palamas, S; Kalivas, D; Panou-Diamandi, O; Zeelenberg, C; van Nimwegen, C

    2001-01-01

    The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. 
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support.
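
    The UMDNS-based product search described in this record, where each registered device carries a UMDNS term plus attribute values and a query filters on term and attribute constraints, can be sketched in a few lines. The device entries, attribute names, and URLs are invented for illustration, not MEDICOM data.

```python
# Assumed toy Short Description Database entries.
devices = [
    {"umdns": "infusion pump", "manufacturer": "AcmeMed",
     "attrs": {"channels": 2}, "url": "https://example.com/a"},
    {"umdns": "infusion pump", "manufacturer": "BetaCare",
     "attrs": {"channels": 1}, "url": "https://example.com/b"},
    {"umdns": "ecg monitor", "manufacturer": "AcmeMed",
     "attrs": {"leads": 12}, "url": "https://example.com/c"},
]

def search(umdns_term, **attr_filters):
    """Filter devices by UMDNS term and exact attribute-value constraints."""
    return [
        d for d in devices
        if d["umdns"] == umdns_term
        and all(d["attrs"].get(k) == v for k, v in attr_filters.items())
    ]

hits = search("infusion pump", channels=2)
print([d["manufacturer"] for d in hits])
```

    Restricting search to a controlled nomenclature is what lets the portal avoid the unrelated content a general-purpose search engine would return.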

  1. Design and Implementation of a Portal for the Medical Equipment Market: MEDICOM

    PubMed Central

    Kalivas, Dimitris; Panou-Diamandi, Ourania; Zeelenberg, Cees; van Nimwegen, Chris

    2001-01-01

    Background The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. Objective To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). Methods The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. 
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. Results The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. Conclusions The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support. PMID:11772547

  2. Geographic Information Systems and Web Page Development

    NASA Technical Reports Server (NTRS)

    Reynolds, Justin

    2004-01-01

    The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems (GIS). The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is essentially database management software applied to spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. At the outset, I was given goals and expectations from my branch and from my mentor with regard to the further implementation of GIS. Those goals are as follows: (1) Continue the development of GIS for the underground structures. (2) Extract and export annotated data from AutoCAD drawing files and construct a database (to serve as a prototype for future work). (3) Examine existing underground record drawings to determine existing and non-existing underground tanks. Once this data was collected and analyzed, I set out on the task of creating a user-friendly database that could be accessed by all members of the branch. It was important that the database be built using programs that most employees already possess, ruling out most AutoCAD-based viewers. 
Therefore, I set out to create an Access database published to the web, with Internet Explorer as the viewing platform. After some programming, it was possible to view AutoCAD files and other GIS-related applications in Internet Explorer, while providing the user with a variety of editing commands and setting options. I was also given the task of launching a divisional website using Macromedia Flash and other web-development programs.

  3. Environmental factor(tm) system: RCRA hazardous waste handler information (on CD-ROM). Data file

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-11-01

    Environmental Factor™ RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous wastes generation, management, and minimization by companies who are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action, or violation information, TSD status, generator and transporter status, and more. (2) View compliance information - dates of evaluation, violation, enforcement, and corrective action. (3) Look up facilities by waste processing categories of marketing, transporting, processing, and energy recovery. (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting. (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving, and exporting.

  4. Diagnostic Assessment of Troubleshooting Skill in an Intelligent Tutoring System

    DTIC Science & Technology

    1994-03-01

    the information that can be provided from studying gauges and indicators and conventional test equipment procedures. Experts are particularly adept at... uses the results of the strategy and action evaluator to update the student profile, represented as a network, using the ERGO (Noetic Systems, 1993...1990). Individualized tutoring using an intelligent fuzzy temporal relational database. International Journal of Man-Machine Studies, 409-429.

  5. The Internet as an Information Source for Environmental Chemicals--First Results of the Evaluation of the Meta-Database of Internet Resources.

    ERIC Educational Resources Information Center

    Voigt, Kristina; Benz, Joachim; Bruggemann, Rainer

    An evaluation approach using the mathematical method of the Hasse diagram technique is applied to 20 environmental and chemical Internet resources. The data for this evaluation procedure are taken from a metadatabase called DAIN (Metadatabase of Internet Resources for Environmental Chemicals), which was set up by the GSF Research Centre for…

  6. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR DEFINING WORKING DATABASES AND DATA ENTRY FORMS (HAND ENTRY) (UA-D-3.0)

    EPA Science Inventory

    The purpose of this SOP is to outline a standard approach to naming and defining variables, data types, and data entry forms. This procedure applies to all working databases created during the NHEXAS project and the "Border" study. Keywords: databases; standards.

    The National...
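    The variable-definition standard such an SOP describes can be pictured as a small data dictionary that drives table creation. The sketch below is illustrative only: the field names (hh_id, visit_dt, pb_dust), types, and the DDL helper are assumptions, not the actual NHEXAS conventions.

    ```python
    # Hypothetical working-database data dictionary: one entry per variable,
    # giving its standardized name, storage type, and human-readable label.
    data_dictionary = {
        "hh_id":    {"type": "TEXT",    "label": "Household identifier"},
        "visit_dt": {"type": "DATE",    "label": "Visit date"},
        "pb_dust":  {"type": "NUMERIC", "label": "Lead in dust"},
    }

    def ddl_for(table, dictionary):
        """Emit a CREATE TABLE statement from the dictionary entries."""
        cols = ", ".join(f"{name} {spec['type']}" for name, spec in dictionary.items())
        return f"CREATE TABLE {table} ({cols})"

    stmt = ddl_for("nhexas_working", data_dictionary)
    print(stmt)
    ```

    Keeping the dictionary as the single source of truth means hand-entry forms and working tables can be generated from the same definitions, which is the intent behind standardizing names and types.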

  7. Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking

    NASA Technical Reports Server (NTRS)

    Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward

    2011-01-01

    To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. 
This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk calculation procedure.

  8. EPA Facility Registry System (FRS): NCES

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry System (FRS) for the subset of facilities that link to the National Center for Education Statistics (NCES). The primary federal database for collecting and analyzing data related to education in the United States and other nations, NCES is located in the U.S. Department of Education, within the Institute of Education Sciences. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to NCES school facilities once the NCES data has been integrated into the FRS database. Additional information on FRS is available at the EPA website http://www.epa.gov/enviro/html/fii/index.html.

  9. One year of anaesthesia in France: A comprehensive survey based on the national medical information (PMSI) database. Part 1: In-hospital patients.

    PubMed

    Dadure, Christophe; Marie, Anaïs; Seguret, Fabienne; Capdevila, Xavier

    2015-08-01

    Anaesthesia has evolved in France since the last epidemiologic survey in 1996. The national database program for medical information systems (the PMSI) can be used to track specific knowledge concerning anaesthesia for a selected period of time. The goal of this study was to provide a contemporary epidemiological description of anaesthesia in France for the year 2010. The data concerning private or public hospital stays were collected from the national PMSI database. All surgical/medical institutions performing anaesthesia in France and the French Overseas Departments and Territories were queried concerning the number of anaesthesias, patient age, sex ratios, institution characteristics, hospitalization types, the duration of hospital stays, and the surgical procedures performed. In 2010, 11,323,630 anaesthesia procedures were performed during 8,568,630 hospital stays. We found that 9,544,326 (84.3%) anaesthetic procedures were performed in adults (> 18 years of age; excluding childbirth), 845,568 (7.5%) were related to childbirth, and 933,736 (8.2%) were acts in children (up to 18 years of age). The mean duration of hospital stay was 5.7±8.2 days. 56.5% of adults and 39.5% of children were managed as inpatient hospital stays. The male/female sex ratio and mean age were 42/58 and 54±19 years, respectively. In adults, anaesthesia was predominantly performed for abdominal surgery (24.5%), orthopaedics (16.7%), gynaecology (10.3%), ophthalmology (9.7%), and vascular surgery (7.1%). For paediatric populations, the main surgical activities were ear-nose-throat surgery (43.1%), orthopaedic surgery (15.1%), and urological surgery (12.8%). The number of anaesthesias performed in France has increased dramatically (by 42.7%) since the last major epidemiological survey. Anaesthesia in the 21st century has adapted to associated demographic changes: an older population with more comorbidities and fewer in-hospital procedures.
Copyright © 2015 Société française d’anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  10. A carcinogenic potency database of the standardized results of animal bioassays

    PubMed Central

    Gold, Lois Swirsky; Sawyer, Charles B.; Magaw, Renae; Backman, Georganne M.; De Veciana, Margarita; Levinson, Robert; Hooper, N. Kim; Havender, William R.; Bernstein, Leslie; Peto, Richard; Pike, Malcolm C.; Ames, Bruce N.

    1984-01-01

    The preceding paper described our numerical index of carcinogenic potency, the TD50 and the statistical procedures adopted for estimating it from experimental data. This paper presents the Carcinogenic Potency Database, which includes results of about 3000 long-term, chronic experiments of 770 test compounds. Part II is a discussion of the sources of our data, the rationale for the inclusion of particular experiments and particular target sites, and the conventions adopted in summarizing the literature. Part III is a guide to the plot of results presented in Part IV. A number of appendices are provided to facilitate use of the database. The plot includes information about chronic cancer tests in mammals, such as dose and other aspects of experimental protocol, histopathology and tumor incidence, TD50 and its statistical significance, dose response, author's opinion and literature reference. The plot readily permits comparisons of carcinogenic potency and many other aspects of cancer tests; it also provides quantitative information about negative tests. The range of carcinogenic potency is over 10 million-fold. PMID:6525996

  11. EPA Facility Registry Service (FRS): ICIS

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of facilities that link to the Integrated Compliance Information System (ICIS). When complete, ICIS will provide a database containing integrated enforcement and compliance information across most of EPA's programs. The vision for ICIS is to replace EPA's independent databases that contain enforcement data with a single repository for that information. Currently, ICIS contains all Federal Administrative and Judicial enforcement actions and a subset of the Permit Compliance System (PCS), which supports the National Pollutant Discharge Elimination System (NPDES). ICIS exchanges non-sensitive enforcement/compliance activities, non-sensitive formal enforcement actions, and NPDES information with FRS. This web feature service contains the enforcement/compliance activities and formal enforcement action related facilities; the NPDES facilities are contained in the PCS_NPDES web feature service. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities.

  12. An integrated approach to reservoir modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaldson, K.

    1993-08-01

    The purpose of this research is to evaluate the usefulness of the following procedural and analytical methods in investigating the heterogeneity of the oil reserve for the Mississippian Big Injun Sandstone of the Granny Creek field, Clay and Roane counties, West Virginia: (1) relational database, (2) two-dimensional cross sections, (3) true three-dimensional modeling, (4) geohistory analysis, (5) a rule-based expert system, and (6) geographical information systems. The large data set could not be effectively integrated and interpreted without this approach. A relational database was designed to fully integrate three- and four-dimensional data. The database provides an effective means for maintaining and manipulating the data. A two-dimensional cross section program was designed to correlate stratigraphy, depositional environments, porosity, permeability, and petrographic data. This flexible design allows for additional four-dimensional data. Dynamic Graphics™ ...

  13. A comparative study of six European databases of medically oriented Web resources.

    PubMed

    Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes

    2005-10-01

    The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.

  14. The regulatory use of the Local Lymph Node Assay for the notification of new chemicals in Europe.

    PubMed

    Angers-Loustau, Alexandre; Tosti, Luca; Casati, Silvia

    2011-08-01

    The regulatory use of the Local Lymph Node Assay (LLNA) for new chemicals registration was monitored by screening the New Chemicals Database (NCD), which was managed by the former European Chemicals Bureau (ECB) at the European Commission Joint Research Centre (JRC). The NCD centralised information for chemicals notified after 1981, for which toxicological information has been generated predominantly according to approved test methods. The database was searched to extract notifications for which the information for skin sensitisation labelling was based on results derived with the LLNA. The details of these records were extracted, pooled, and evaluated with regard to the extent of use of the LLNA over time, as well as to analyse the information retrieved on critical aspects of the procedure (e.g., strain and number of animals used, lymph node processing, solvent and doses selected, stimulation indices) and to assess their level of compliance with OECD Test Guideline 429. In addition, the accuracy of the reduced LLNA when applied to new chemicals was investigated. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Intelligent Data Granulation on Load: Improving Infobright's Knowledge Grid

    NASA Astrophysics Data System (ADS)

    Ślęzak, Dominik; Kowalski, Marcin

    One of the major aspects of Infobright's relational database technology is the automatic decomposition of each data table into Rough Rows, each consisting of 64K original rows. Rough Rows are automatically annotated by Knowledge Nodes that represent compact information about the rows' values. Query performance depends on the quality of Knowledge Nodes, i.e., their efficiency in minimizing access to the compressed portions of data stored on disk, according to the specific query optimization procedures. We show how to implement a mechanism that organizes the incoming data into Rough Rows that maximize the quality of the corresponding Knowledge Nodes. Given clear business-driven requirements, the implemented mechanism needs to be fully integrated with the data load process, causing no decrease in data load speed. The performance gain resulting from better data organization is illustrated by tests over our benchmark data. The differences between the proposed mechanism and some well-known procedures of database clustering or partitioning are discussed. The paper is a continuation of our patent application [22].
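    The Rough Row / Knowledge Node idea can be sketched in a few lines: rows are grouped into fixed-size packs, each pack carries a min/max summary, and a range query opens only the packs whose summary overlaps the query. This is an illustrative toy (a tiny pack size, a single integer column), not Infobright's implementation.

    ```python
    # Toy sketch of "rough row" pruning: packs of rows annotated with min/max
    # summaries ("knowledge nodes") let a range query skip whole packs.

    PACK_SIZE = 4  # Infobright uses 64K rows per Rough Row; 4 keeps the demo small

    def build_packs(values, pack_size=PACK_SIZE):
        """Split a column into packs and record min/max per pack."""
        packs = []
        for i in range(0, len(values), pack_size):
            chunk = values[i:i + pack_size]
            packs.append({"rows": chunk, "min": min(chunk), "max": max(chunk)})
        return packs

    def range_query(packs, lo, hi):
        """Return matching values, opening only packs whose summary overlaps [lo, hi]."""
        hits, packs_scanned = [], 0
        for pack in packs:
            if pack["max"] < lo or pack["min"] > hi:
                continue  # the summary proves no row in this pack can match
            packs_scanned += 1
            hits.extend(v for v in pack["rows"] if lo <= v <= hi)
        return hits, packs_scanned

    packs = build_packs([1, 2, 3, 4, 50, 51, 52, 53, 9, 8, 7, 6])
    hits, scanned = range_query(packs, 50, 60)
    print(hits, scanned)  # only one of the three packs had to be opened
    ```

    The paper's point is that load-time organization matters: the better similar values cluster into the same pack, the tighter the min/max summaries, and the more packs a query can skip.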

  16. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR GLOBAL CODING USED BY NHEXAS ARIZONA (HAND ENTRY) (UA-D-5.0)

    EPA Science Inventory

    The purpose of this SOP is to define the global coding scheme to be used in the working and master databases. This procedure applies to all of the databases used during the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; databases.

    The National Human Exposu...

  17. [Required Framework for the Collection of Real-life Data: An Example from University Eye Hospital Munich].

    PubMed

    Kortüm, Karsten; Kern, Christoph; Meyer, Gerhard; Priglinger, Siegfried; Hirneiß, Christoph

    2017-12-01

    Background The importance of evaluating real-life data is constantly increasing. Currently available computer systems better allow for analyses of data, as more and more data are available in digital form. Before a project for real-life data analysis is started, technical, staffing, legal, and data protection issues need to be addressed. In this manuscript, experiences made at the University Eye Hospital in Munich are shared. Materials and Methods Legal requirements, as found in laws and guidelines governing documentation and data privacy, are highlighted. Technical requirements for information technology infrastructure and software are defined. A survey of German eye hospitals investigating the current state of digitalization was conducted by the German Ophthalmological Society. Staff requirements are also outlined. Results A database comprising results of 330,801 patients was set up. It includes all diagnoses, procedures, clinical findings, and results from diagnostic devices. This database was approved by the local data protection officer. Fewer than half of the German eye hospitals (n = 21) that participated in the survey (n = 54) have complete electronic documentation. Fourteen institutions are completely paper-based, and the remainder of the hospitals use a mixed system. Conclusion In this work, we examined the framework required to develop a comprehensive database containing real-life data from clinics. In the future, such databases will become increasingly important as more and more innovations are made in decision support systems. The basis for this is comprehensive and well-curated databases. Georg Thieme Verlag KG Stuttgart · New York.

  18. Chemical and isotopic database of water and gas from hydrothermal systems with an emphasis for the western United States

    USGS Publications Warehouse

    Mariner, R.H.; Venezky, D.Y.; Hurwitz, S.

    2006-01-01

    Chemical and isotope data accumulated by two USGS projects (led by I. Barnes and R. Mariner) over a period of about 40 years can now be found using a basic web search or through an image search. The data are primarily chemical and isotopic analyses of waters (thermal, mineral, or fresh) and associated gas (free and/or dissolved) collected from hot springs, mineral springs, cold springs, geothermal wells, fumaroles, and gas seeps. Additional information is available about the collection methods and analysis procedures. The chemical and isotope data are stored in a MySQL database and accessed using PHP from a basic search form. Data can also be accessed using an open-source GIS called WorldKit. Additional information is available about WorldKit, including the files used to set up the site.
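    The kind of query a PHP search form issues against such a database can be sketched with a parameterized SQL statement. The real system uses MySQL; here the standard-library sqlite3 module stands in, and the table and column names (samples, site_name, water_type, temp_c) are assumptions for illustration only.

    ```python
    # Hypothetical search over a water-chemistry table, using parameterized SQL
    # (the same pattern a PHP front end would use against MySQL).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE samples (site_name TEXT, water_type TEXT, temp_c REAL)")
    conn.executemany(
        "INSERT INTO samples VALUES (?, ?, ?)",
        [("Norris Basin", "thermal", 85.0),
         ("Soda Spring", "mineral", 14.5),
         ("Cold Creek", "fresh", 8.0)],
    )

    def search(water_type, min_temp):
        """Find sites of a given water type at or above a minimum temperature."""
        cur = conn.execute(
            "SELECT site_name, temp_c FROM samples "
            "WHERE water_type = ? AND temp_c >= ? ORDER BY temp_c DESC",
            (water_type, min_temp),
        )
        return cur.fetchall()

    results = search("thermal", 50.0)
    print(results)  # [('Norris Basin', 85.0)]
    ```

    Binding user input through placeholders rather than string concatenation is what makes a public-facing search form of this kind safe against SQL injection.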

  19. Contourlet Textual Features: Improving the Diagnosis of Solitary Pulmonary Nodules in Two Dimensional CT Images

    PubMed Central

    Wang, Jingjing; Sun, Tao; Gao, Ni; Menon, Desmond Dev; Luo, Yanxia; Gao, Qi; Li, Xia; Wang, Wei; Zhu, Huiping; Lv, Pingxin; Liang, Zhigang; Tao, Lixin; Liu, Xiangtong; Guo, Xiuhua

    2014-01-01

    Objective To determine the value of contourlet textural features obtained from solitary pulmonary nodules in two-dimensional CT images used in diagnoses of lung cancer. Materials and Methods A total of 6,299 CT images were acquired from 336 patients, with 1,454 benign pulmonary nodule images from 84 patients (50 male, 34 female) and 4,845 malignant from 252 patients (150 male, 102 female). In addition, nineteen patient information categories, comprising seven demographic parameters and twelve morphological features, were collected. A contourlet was used to extract fourteen types of textural features. These were then used to establish three support vector machine models. One comprised a database constructed of the nineteen collected patient information categories, another included contourlet textural features, and the third contained both sets of information. Ten-fold cross-validation was used to evaluate the diagnosis results for the three databases, with sensitivity, specificity, accuracy, the area under the curve (AUC), precision, Youden index, and F-measure used as the assessment criteria. In addition, the synthetic minority over-sampling technique (SMOTE) was used to preprocess the unbalanced data. Results Using a database containing textural features and patient information, sensitivity, specificity, accuracy, AUC, precision, Youden index, and F-measure were: 0.95, 0.71, 0.89, 0.89, 0.92, 0.66, and 0.93, respectively. These results were higher than results derived using the database without textural features (0.82, 0.47, 0.74, 0.67, 0.84, 0.29, and 0.83, respectively) as well as the database comprising only textural features (0.81, 0.64, 0.67, 0.72, 0.88, 0.44, and 0.85, respectively). Using SMOTE as a pre-processing procedure, a new balanced database was generated, including 5,816 benign ROIs and 5,815 malignant ROIs, and accuracy was 0.93.
Conclusion Our results indicate that the combined contourlet textural features of solitary pulmonary nodules in CT images with patient profile information could potentially improve the diagnosis of lung cancer. PMID:25250576

  20. The database of the Nikolaev Astronomical Observatory as a unit of an international virtual observatory

    NASA Astrophysics Data System (ADS)

    Protsyuk, Yu.; Pinigin, G.; Shulga, A.

    2005-06-01

    Results of the development and organization of the digital database of the Nikolaev Astronomical Observatory (NAO) are presented. At present, three telescopes are connected to the local area network of NAO. All the data obtained, and the results of data processing, are entered into the common database of NAO. The daily average volume of new astronomical information obtained from the CCD instruments ranges from 300 MB up to 2 GB, depending on the purposes and conditions of observations. The overwhelming majority of the data are stored in the FITS format. Development and further improvement of storage standards and procedures for data handling and processing are being carried out. It is planned to create an astronomical web portal with interactive access to databases and telescopes. In the future, this resource may become part of an international virtual observatory. Prototype search tools using PHP and MySQL already exist, and efforts are being made to obtain additional Internet links.
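    The FITS format mentioned above stores metadata as fixed-width 80-character header "cards". The snippet below sketches that card layout in simplified form; it is not a full FITS writer, and the keywords shown are just the standard examples.

    ```python
    # Simplified sketch of FITS header cards: each card is exactly 80 characters,
    # with an 8-character keyword, "= ", a right-justified value, and a comment.
    def fits_card(keyword, value, comment=""):
        """Format one header card (simplified: string values and long comments
        are handled more strictly in the real FITS standard)."""
        card = f"{keyword:<8}= {value:>20}"
        if comment:
            card += f" / {comment}"
        return f"{card:<80}"[:80]

    cards = [
        fits_card("SIMPLE", "T", "conforms to FITS standard"),
        fits_card("BITPIX", "16", "bits per pixel"),
        fits_card("NAXIS", "2", "number of axes"),
    ]
    print(cards[0].rstrip())
    ```

    Because every card has the same fixed width, a header can be read or scanned without any delimiter parsing, which is one reason the format has remained the archival standard in astronomy.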

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brickstad, B.; Bergman, M.

    A computerized procedure has been developed that predicts the growth of an initial circumferential surface crack through a pipe and onward to failure. The crack growth mechanism can be either fatigue or stress corrosion. Complex crack shapes are taken into account, and for through-wall cracks, crack opening areas and leak rates are also calculated. The procedure is based on a large number of three-dimensional finite element calculations of cracked pipes. The results from these calculations are stored in a database from which the PC program, denoted LBBPIPE, reads all necessary information. In this paper, a sensitivity analysis is presented for cracked pipes subjected to both stress corrosion and vibration fatigue.

  2. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR INSTRUCTIONS FOR THE ADDITION OF INDIVIDUAL CLEANED NON SCANNED DATA BATCHES TO MASTER DATABASES (UA-D-27.0)

    EPA Science Inventory

    The purpose of this SOP is to define the procedures involved in appending cleaned individual data batches to the master databases. This procedure applies to the Arizona NHEXAS project and the "Border" study. Keywords: data; appending.

    The National Human Exposure Assessment Sur...
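    The batch-append workflow such an SOP governs can be sketched as: validate each cleaned record against the master schema, load the rows that pass, and hold back the rest for review. The field names and range check below are illustrative assumptions, not the actual NHEXAS rules.

    ```python
    # Hypothetical append of a cleaned data batch to a master table:
    # reject malformed or out-of-range rows instead of loading them.
    MASTER_FIELDS = ("sample_id", "analyte", "value")

    master = [{"sample_id": "AZ001", "analyte": "Pb", "value": 0.12}]

    def append_batch(master, batch):
        """Append valid rows to master; return the rows held back for review."""
        rejected = []
        for row in batch:
            if set(row) != set(MASTER_FIELDS) or row["value"] < 0:
                rejected.append(row)   # flag for cleanup rather than load
            else:
                master.append(row)
        return rejected

    batch = [
        {"sample_id": "AZ002", "analyte": "Pb", "value": 0.08},
        {"sample_id": "AZ003", "analyte": "Pb", "value": -1.0},  # fails range check
    ]
    bad = append_batch(master, batch)
    print(len(master), len(bad))  # master grew by one; one row held back
    ```

    Keeping rejected rows out of the master table, rather than silently correcting them, preserves the audit trail that batch-oriented SOPs of this kind are meant to guarantee.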

  3. The European general thoracic surgery database project.

    PubMed

    Falcoz, Pierre Emmanuel; Brunelli, Alessandro

    2014-05-01

    The European Society of Thoracic Surgeons (ESTS) Database is a free registry created by the ESTS in 2001. The current online version was launched in 2007. It currently runs on a Dendrite platform with extensive data security and frequent backups. The main features are a specialty-specific, procedure-specific, prospectively maintained, periodically audited, web-based electronic database, designed for quality control and performance monitoring, which allows for the collection of all general thoracic procedures. Data collection is the "backbone" of the ESTS database. It includes many risk factors, processes of care, and outcomes, which are specially designed for quality control and performance audit. Users can download and export their own data and use them for internal analyses and quality control audits. The ESTS database represents the gold standard of clinical data collection for European general thoracic surgery. Over the past years, the ESTS database has achieved many accomplishments. In particular, the database has hit two major milestones: it now includes more than 235 participating centers and 70,000 surgical procedures. The ESTS database is a snapshot of surgical practice that aims at improving patient care. In other words, data capture should become integral to routine patient care, with the final objective of improving quality of care within Europe.

  4. Psychology of plastic and reconstructive surgery: a systematic clinical review.

    PubMed

    Shridharani, Sachin M; Magarakis, Michael; Manson, Paul N; Rodriguez, Eduardo D

    2010-12-01

    The authors sought to review the various types of patients with psychological abnormalities who may present to the plastic surgeon and the psychological impact of various plastic surgery procedures on these patients. After systematically searching the Embase and PubMed databases and following further refinement (based on the authors' inclusion and exclusion criteria), the authors identified 65 studies. In addition, the authors felt that important information was contained in four textbooks, two press releases, and one Internet database. The inclusion criteria were studies that investigated the psychological outcomes, background, and personality types of patients seeking specific plastic surgery procedures. In addition, studies that addressed the impact of plastic surgery on patients' psychological status and quality of life were also included. The authors excluded studies with fewer than 30 patients, studies that did not pertain to the particular plastic surgery procedures, and studies that addressed psychological sequelae of revision operations. Narcissistic and histrionic personality disorders and body dysmorphic disorder are the three most common psychiatric conditions encountered in patients seeking cosmetic surgery. Overall, plastic surgery not only restores the appearance and function of the disfigured body unit but also alleviates psychological distress. Identifying the psychologically challenging patient before surgical intervention will allow the patient to obtain the appropriate psychological assistance and may result in a healthier individual with or without associated plastic surgery procedures.

  5. Aviation Safety Reporting System: Process and Procedures

    NASA Technical Reports Server (NTRS)

    Connell, Linda J.

    1997-01-01

    The Aviation Safety Reporting System (ASRS) was established in 1976 under an agreement between the Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA). This cooperative safety program invites pilots, air traffic controllers, flight attendants, maintenance personnel, and others to voluntarily report to NASA any aviation incident or safety hazard. The FAA provides most of the program funding. NASA administers the program, sets its policies in consultation with the FAA and aviation community, and receives the reports submitted to the program. The FAA offers those who use the ASRS program two important reporting guarantees: confidentiality and limited immunity. Reports sent to ASRS are held in strict confidence. More than 350,000 reports have been submitted since the program's beginning without a single reporter's identity being revealed. ASRS removes all personal names and other potentially identifying information before entering reports into its database. This system is a very successful, proof-of-concept for gathering safety data in order to provide timely information about safety issues. The ASRS information is crucial to aviation safety efforts both nationally and internationally. It can be utilized as the first step in safety by providing the direction and content to informed policies, procedures, and research, especially human factors. The ASRS process and procedures will be presented as one model of safety reporting feedback systems.

  6. Environmental Factor™ system: RCRA hazardous waste handler information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1999-03-01

    Environmental Factor™ RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management, and minimization by companies that are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status, and more; (2) View compliance information: dates of evaluation, violation, enforcement, and corrective action; (3) Look up facilities by waste processing categories of marketing, transporting, processing, and energy recovery; (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities, such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases and search and retrieval software on two CD-ROMs, an installation diskette, and a User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving, and exporting. Hotline support is also available at no additional charge.

  7. IPD-MHC 2.0: an improved inter-species database for the study of the major histocompatibility complex

    PubMed Central

    Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A.; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G.; Flicek, Paul; Bontrop, Ronald E.; Hammond, John A.; Marsh, Steven G. E.

    2017-01-01

    The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovines, salmonids and murids. These sequences are expertly curated and made publicly available through an open access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification, while incorporating new allele submissions. Here, we describe the latest database release, IPD-MHC 2.0, and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC group. PMID:27899604

  8. A spatial database for landslides in northern Bavaria: A methodological approach

    NASA Astrophysics Data System (ADS)

    Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit

    2018-04-01

    Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open-source software (PostgreSQL, PostGIS), ensuring good interoperability among its components and enabling further extensions with specific adaptations of self-developed software. WISL was also designed to communicate easily with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations on the local server. In its current state, WISL already enables extensive analysis and queries. This paper presents an example analysis of landslides in Oxfordian limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. Future enhancements of the WISL database will mainly extend its data holdings to increase research possibilities, as well as transfer the system to other regions and countries.

  9. Mining Claim Activity on Federal Land for the Period 1976 through 2003

    USGS Publications Warehouse

    Causey, J. Douglas

    2005-01-01

    Previous reports on mining claim records provided information and statistics (numbers of claims) using data from the U.S. Bureau of Land Management's (BLM) Mining Claim Recordation System. Since that time, BLM converted its mining claim data to the Legacy Rehost 2000 system (LR2000). This report describes a process to extract similar statistical data about mining claims from LR2000 data using different software and procedures than were used in the earlier work. A major difference between this process and the previous work is that every section that has a mining claim record is assigned a value; this is done by proportioning a claim among the sections in which it is recorded. Also, the mining claim data in this report include all BLM records, not just those for the western states. LR2000 mining claim database tables for the United States were provided by BLM in text format and imported into a Microsoft Access 2000 database in January 2004. Data from two tables in the BLM LR2000 database were summarized through a series of database queries to determine a number that represents active mining claims in each Public Land Survey (PLS) section for each of the years from 1976 to 2002. For most of the area, spatial databases are also provided. The spatial databases are configured to work only with the statistics provided in the non-spatial data files. They are suitable for geographic information system (GIS)-based regional assessments at a scale of 1:100,000 or smaller (for example, 1:250,000).
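
    The proportioning step described above can be sketched as follows; the data structure and section labels are hypothetical stand-ins for the LR2000 schema, assuming each claim contributes an equal share to every PLS section in which it is recorded.

```python
from collections import defaultdict

def proportion_claims(claims):
    """Distribute each claim equally among the PLS sections it is
    recorded in, so per-section totals sum to the number of claims.

    `claims` maps a claim ID to the list of sections it touches
    (an illustrative structure, not the report's actual schema).
    """
    totals = defaultdict(float)
    for claim_id, sections in claims.items():
        share = 1.0 / len(sections)   # equal share per section
        for sec in sections:
            totals[sec] += share
    return dict(totals)

claims = {
    "AMC001": ["T1N R2E sec 14"],
    "AMC002": ["T1N R2E sec 14", "T1N R2E sec 15"],
}
print(proportion_claims(claims))
# {'T1N R2E sec 14': 1.5, 'T1N R2E sec 15': 0.5}
```

Summing the per-section totals recovers the total number of claims, which is the property that keeps the per-section statistics consistent.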

  10. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    USGS Publications Warehouse

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet, respectively, at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced nearly as good results as a 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
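
    The 95 percent confidence figures quoted above follow the usual vertical-accuracy convention (as in the NSSDA standard) of scaling the root mean square error (RMSE) of check-point residuals by 1.96. A minimal sketch, using hypothetical residuals rather than the Sugar Creek Lake data:

```python
import math

def vertical_accuracy_95(errors):
    """NSSDA-style vertical accuracy: RMSE of the check-point
    residuals scaled by 1.96 (normal-theory 95 percent confidence).
    `errors` are surveyed-minus-modeled depths at QA points (feet)."""
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return 1.96 * rmse

# Hypothetical QA residuals in feet, not the report's data.
residuals = [0.2, -0.4, 0.3, -0.1, 0.5, -0.3]
print(round(vertical_accuracy_95(residuals), 2))
# 0.64
```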

  11. GeneSCF: a real-time based functional enrichment tool with support for multiple organisms.

    PubMed

    Subhash, Santhilal; Kanduri, Chandrasekhar

    2016-09-13

    High-throughput technologies such as ChIP-sequencing, RNA-sequencing, DNA sequencing and quantitative metabolomics generate a huge volume of data. Researchers often rely on functional enrichment tools to interpret the biological significance of the affected genes from these high-throughput studies. However, currently available functional enrichment tools need to be updated frequently to adapt to new entries from the functional database repositories. Hence there is a need for a simplified tool that can perform functional enrichment analysis using updated information directly from source databases such as KEGG, Reactome or Gene Ontology. In this study, we focused on designing a command-line tool called GeneSCF (Gene Set Clustering based on Functional annotations) that can predict the functionally relevant biological information for a set of genes in a real-time updated manner. It is designed to handle information from more than 4000 organisms from freely available prominent functional databases like KEGG, Reactome and Gene Ontology. We successfully employed our tool on two published datasets to predict the biologically relevant functional information. The core features of this tool were tested on Linux machines without the need to install additional dependencies. GeneSCF is more reliable than other enrichment tools because of its ability to use reference functional databases in real time to perform enrichment analysis. It is an easy-to-integrate tool with other pipelines available for downstream analysis of high-throughput data. More importantly, GeneSCF can run multiple gene lists simultaneously on different organisms, thereby saving time for users. Since the tool is designed to be ready to use, there is no need for any complex compilation and installation procedures.
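
    The abstract does not spell out GeneSCF's statistic, but functional enrichment tools of this kind typically score over-representation with a one-sided hypergeometric test. A stdlib-only sketch with hypothetical gene counts:

```python
from math import comb

def enrichment_pvalue(k, n, K, N):
    """One-sided hypergeometric (over-representation) test: the
    probability of seeing at least k genes from a term of size K in a
    query list of n genes drawn from a background of N genes."""
    upper = min(n, K)
    total = sum(comb(K, i) * comb(N - K, n - i) for i in range(k, upper + 1))
    return total / comb(N, n)

# Hypothetical counts: 8 of 50 query genes fall in a pathway with
# 100 members annotated in a 20000-gene background.
p = enrichment_pvalue(8, 50, 100, 20000)
print(f"p = {p:.2e}")
```

A real tool would apply this per term and correct the resulting p-values for multiple testing (e.g. Benjamini-Hochberg).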

  12. Roadmap for the development of the University of North Carolina at Chapel Hill Genitourinary OncoLogy Database--UNC GOLD.

    PubMed

    Gallagher, Sarah A; Smith, Angela B; Matthews, Jonathan E; Potter, Clarence W; Woods, Michael E; Raynor, Mathew; Wallen, Eric M; Rathmell, W Kimryn; Whang, Young E; Kim, William Y; Godley, Paul A; Chen, Ronald C; Wang, Andrew; You, Chaochen; Barocas, Daniel A; Pruthi, Raj S; Nielsen, Matthew E; Milowsky, Matthew I

    2014-01-01

    The management of genitourinary malignancies requires a multidisciplinary care team composed of urologists, medical oncologists, and radiation oncologists. A genitourinary (GU) oncology clinical database is an invaluable resource for patient care and research. Although electronic medical records provide a single web-based record used for clinical care, billing, and scheduling, information is typically stored in a discipline-specific manner and data extraction is often not applicable to a research setting. A GU oncology database may be used for the development of multidisciplinary treatment plans, analysis of disease-specific practice patterns, and identification of patients for research studies. Despite the potential utility, there are many important considerations that must be addressed when developing and implementing a discipline-specific database. The creation of the GU oncology database including prostate, bladder, and kidney cancers with the identification of necessary variables was facilitated by meetings of stakeholders in medical oncology, urology, and radiation oncology at the University of North Carolina (UNC) at Chapel Hill with a template data dictionary provided by the Department of Urologic Surgery at Vanderbilt University Medical Center. Utilizing Research Electronic Data Capture (REDCap, version 4.14.5), the UNC Genitourinary OncoLogy Database (UNC GOLD) was designed and implemented. The process of designing and implementing a discipline-specific clinical database requires many important considerations. The primary consideration is determining the relationship between the database and the Institutional Review Board (IRB) given the potential applications for both clinical and research uses. Several other necessary steps include ensuring information technology security and federal regulation compliance; determination of a core complete dataset; creation of standard operating procedures; standardizing entry of free text fields; use of data exports, queries, and de-identification strategies; inclusion of individual investigators' data; and strategies for prioritizing specific projects and data entry. A discipline-specific database requires buy-in from all stakeholders, meticulous development, and data entry resources to generate a unique platform for housing information that may be used for clinical care and research with IRB approval. The steps and issues identified in the development of UNC GOLD provide a process map for others interested in developing a GU oncology database. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Building a QC Database of Meteorological Data from NASA KSC and the United States Air Force's Eastern Range

    NASA Technical Reports Server (NTRS)

    Brenton, J. C.; Barbre, R. E.; Decker, R. K.; Orcutt, J. M.

    2018-01-01

    The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) provides atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located with the United States Air Force's Eastern Range (ER) at Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large datasets is ensuring that erroneous data are removed from databases, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures exist across all databases, resulting in QC databases with inconsistencies in variables, development methodologies, and periods of record. The goal of this activity is to build on previous efforts to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users in the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC procedures will be described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
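
    Concrete QC procedures are not detailed in the abstract, but two screens common to meteorological databases are a climatological range check and a step (spike) check. A minimal sketch with hypothetical tower data and thresholds:

```python
def qc_flags(values, lower, upper, max_step):
    """Two basic QC screens for a meteorological time series:
    a climatological range check and a step (spike) check.
    Returns a parallel list of flags: 'pass', 'range', or 'step'."""
    flags = []
    prev = None
    for v in values:
        if not (lower <= v <= upper):
            flags.append("range")
        elif prev is not None and abs(v - prev) > max_step:
            flags.append("step")
        else:
            flags.append("pass")
        prev = v   # the step check compares against the previous sample
    return flags

# Hypothetical tower temperatures (deg C) sampled every minute.
temps = [21.3, 21.5, 35.9, 21.6, -999.0]
print(qc_flags(temps, lower=-30.0, upper=50.0, max_step=5.0))
# ['pass', 'pass', 'step', 'step', 'range']
```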

  14. Stent thrombosis with bioabsorbable polymer drug-eluting stents: insights from the Food and Drug Administration database.

    PubMed

    Khan, Abdur R; Tripathi, Avnish; Farid, Talha A; Abaid, Bilal; Bhatt, Deepak L; Resar, Jon R; Flaherty, Michael P

    2017-11-01

    SYNERGY, a bioabsorbable polymer-based, everolimus-eluting stent (BP-DES), recently received regulatory approval in the USA for use in percutaneous coronary interventions. Yet, information on the safety of BP-DES in routine clinical practice is limited. Our aim was to compare the safety of the recently approved BP-DES with current durable polymer drug-eluting stents (DP-DES) by analyzing adverse events, namely stent thrombosis (ST), reported to the Manufacturer and User Facility Device Experience (MAUDE) database. The MAUDE database requires nationwide mandatory notification of adverse events involving devices approved for clinical use. This database was searched for adverse events reported between 1 October 2015 and 25 December 2016 that were encountered after placement of either BP-DES or DP-DES. Only adverse events with a comparable exposure period to the stents after the index procedure were included. Of all the adverse events reported, the event of interest was ST. A total of 951 adverse events were reported. ST accounted for 48 of the 951 events overall: 31 of 309 events involving BP-DES and 17 of 642 involving DP-DES (P=0.00001). Of the 31 ST events with BP-DES, 68% (21/31) occurred within 24 h of the index procedure and 52% (16/31) within 2 h. Our results raise the possibility of an increased risk of ST, particularly early ST (within 24 h), with the recently approved BP-DES. However, because of the inherent limitations of reporting within the MAUDE database, these data merely highlight a potential need for additional surveillance and randomized trials to further assess the safety of the bioabsorbable platform.
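
    The reported P=0.00001 for 31/309 versus 17/642 can be approximated with a standard two-proportion z-test; the paper's exact method is not stated, so this is an illustrative large-sample version using only the Python standard library:

```python
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing event proportions in two groups
    (a common large-sample alternative to an exact test)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)   # pooled proportion under H0
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return z, erfc(abs(z) / sqrt(2))   # two-sided p-value

# Reported counts: 31/309 ST events with BP-DES vs 17/642 with DP-DES.
z, p = two_proportion_z(31, 309, 17, 642)
print(f"z = {z:.2f}, p = {p:.1e}")
```

The resulting p-value is on the order of 10^-6, consistent in magnitude with the figure quoted in the abstract.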

  15. Chemical analyses of coal, coal-associated rocks and coal combustion products collected for the National Coal Quality Inventory

    USGS Publications Warehouse

    Hatch, Joseph R.; Bullock, John H.; Finkelman, Robert B.

    2006-01-01

    In 1999, the USGS initiated the National Coal Quality Inventory (NaCQI) project to address a need for quality information on coals that will be mined during the next 20-30 years. At the time this project was initiated, the publicly available USGS coal quality data was based on samples primarily collected and analyzed between 1973 and 1985. The primary objective of NaCQI was to create a database containing comprehensive, accurate and accessible chemical information on the quality of mined and prepared United States coals and their combustion byproducts. This objective was to be accomplished through maintaining the existing publicly available coal quality database, expanding the database through the acquisition of new samples from priority areas, and analysis of the samples using updated coal analytical chemistry procedures. Priorities for sampling include those areas where future sources of compliance coal are federally owned. This project was a cooperative effort between the U.S. Geological Survey (USGS), State geological surveys, universities, coal burning utilities, and the coal mining industry. Funding support came from the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE).

  16. New standards for reducing gravity data: The North American gravity database

    USGS Publications Warehouse

    Hinze, W. J.; Aiken, C.; Brozena, J.; Coakley, B.; Dater, D.; Flanagan, G.; Forsberg, R.; Hildenbrand, T.; Keller, Gordon R.; Kellogg, J.; Kucks, R.; Li, X.; Mainville, A.; Morin, R.; Pilkington, M.; Plouff, D.; Ravat, D.; Roman, D.; Urrutia-Fucugauchi, J.; Veronneau, M.; Webring, M.; Winester, D.

    2005-01-01

    The North American gravity database as well as databases from Canada, Mexico, and the United States are being revised to improve their coverage, versatility, and accuracy. An important part of this effort is revising procedures for calculating gravity anomalies, taking into account our enhanced computational power, improved terrain databases and datums, and increased interest in more accurately defining long-wavelength anomaly components. Users of the databases may note minor differences between previous and revised database values as a result of these procedures. Generally, the differences do not impact the interpretation of local anomalies but do improve regional anomaly studies. The most striking revision is the use of the internationally accepted terrestrial ellipsoid for the height datum of gravity stations rather than the conventionally used geoid or sea level. Principal facts of gravity observations and anomalies based on both revised and previous procedures together with germane metadata will be available on an interactive Web-based data system as well as from national agencies and data centers. The use of the revised procedures is encouraged for gravity data reduction because of the widespread use of the global positioning system in gravity fieldwork and the need for increased accuracy and precision of anomalies and consistency with North American and national databases. Anomalies based on the revised standards should be preceded by the adjective "ellipsoidal" to differentiate anomalies calculated using heights with respect to the ellipsoid from those based on conventional elevations referenced to the geoid. © 2005 Society of Exploration Geophysicists. All rights reserved.

  17. Dragon pulse information management system (DPIMS): A unique model-based approach to implementing domain agnostic system of systems and behaviors

    NASA Astrophysics Data System (ADS)

    Anderson, Thomas S.

    2016-05-01

    The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, network-certified by the U.S. Army within the DoD as the Dragon Pulse Information Management System. This network-available modeling environment supports modeling of models, in which models are configured using domain-relevant semantics, use network-available systems, sensors, databases, and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures and subject matter expert input. Three recent Army use cases are discussed: (a) an ISR system of systems; (b) modeling and simulation behavior validation; and (c) a networked digital library with behaviors.

  18. EPA Facility Registry Service (FRS): CERCLIS

    EPA Pesticide Factsheets

    This data provides location and attribute information on facilities regulated under the Comprehensive Environmental Responsibility Compensation and Liability Information System (CERCLIS) for an intranet web feature service. The data provided in this service are obtained from EPA's Facility Registry Service (FRS). The FRS is an integrated source of comprehensive (air, water, and waste) environmental information about facilities, sites or places. This service connects directly to the FRS database to provide this data as a feature service. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous verification and management procedures that incorporate information from program national systems, state master facility records, data collected from EPA's Central Data Exchange registrations, and data management personnel. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs.

  19. Resource utilization and national demographics of laryngotracheal trauma in children.

    PubMed

    McCormick, Michael E; Fissenden, Thomas M; Chun, Robert H; Lander, Lina; Shah, Rahul K

    2014-09-01

    Pediatric laryngotracheal trauma is rare but can carry considerable morbidity and health care resource expenditure. However, the true cost of these injuries has not been thoroughly investigated. To use a national administrative pediatric database to identify normative data on pediatric laryngotracheal trauma, specifically with regard to cost and resource utilization. Retrospective medical record review using the Kids' Inpatient Database (KID) 2009. Inclusion criteria were admissions with International Classification of Diseases, Ninth Revision, Clinical Modification, codes for fractures or open wounds of the larynx and trachea. Data analyzed included demographic information and admission characteristics, including length of stay, diagnoses, procedures performed, and total charges. A total of 106 admissions met the inclusion criteria. Patient mean (SE) age was 15.9 (0.45) years, and 79% were males. The mean (SE) length of stay (LOS) was 8.4 (1.1) days; more than 50% of patients had a LOS longer than 4 days. The mean (SE) number of diagnoses per patient was 6.9 (0.6); other traumatic injuries included pneumothorax (n = 18). More than 75% of patients underwent more than 2 procedures during their admission; 60.2% underwent a major operative procedure. The most common procedures performed were laryngoscopy (n = 54) and operative repair of the larynx and/or trachea (n = 32). Tracheostomy was performed in only 30 patients. The mean (SE) total charge was $90,879 ($11,419), and one-third of patients had total charges of more than $100,000. Pediatric laryngotracheal trauma remains a relatively rare clinical entity. These injuries primarily affect older children and are associated with long hospitalizations, multiple procedures, and high resource utilization.

  20. National Communicable Disease Surveillance System: A review on Information and Organizational Structures in Developed Countries.

    PubMed

    Bagherian, Hossein; Farahbakhsh, Mohammad; Rabiei, Reza; Moghaddasi, Hamid; Asadi, Farkhondeh

    2017-12-01

    To obtain the information necessary for managing communicable diseases, different countries have developed national communicable disease surveillance systems (NCDSS). Exploiting the lessons learned from countries leading in the development of surveillance systems provides a foundation for developing these systems in other countries. In this study, the information and organizational structures of NCDSS in developed countries were reviewed. The study reviewed publications on the organizational structure, content, and data flow of NCDSS in the United States of America (USA), Australia, and Germany that were published in English between 2000 and 2016. The publications were identified by searching the CINAHL, Science Direct, ProQuest, PubMed, and Google Scholar databases and the related databases in the selected countries. Thirty-four studies were investigated. All of the reviewed countries have implemented an NCDSS. In the majority of countries, the department of health (DoH) is responsible for managing this system. The reviewed countries have created a minimum data set for reporting communicable disease data and information. For developing an NCDSS, establishing coordinating centers, setting effective policies and procedures, providing appropriate communication infrastructures for data exchange, and defining a communicable diseases minimum data set are essential.

  1. Building the School Attendance Boundary Information System (SABINS): Collecting, Processing, and Modeling K to 12 Educational Geography

    PubMed Central

    Saporito, Salvatore; Van Riper, David; Wakchaure, Ashwini

    2017-01-01

    The School Attendance Boundary Information System is a social science data infrastructure project that assembles, processes, and distributes spatial data delineating K through 12th grade school attendance boundaries for thousands of school districts in the U.S. Although geography is a fundamental organizing feature of K to 12 education, until now school attendance boundary data have not been made readily available on a massive basis and in an easy-to-use format. The School Attendance Boundary Information System removes these barriers by linking spatial data delineating school attendance boundaries with tabular data describing the demographic characteristics of populations living within those boundaries. This paper explains why a comprehensive GIS database of K through 12 school attendance boundaries is valuable, how original spatial information delineating school attendance boundaries is collected from local agencies, and techniques for modeling and storing the data so they provide maximum flexibility to the user community. An important goal of this paper is to share the techniques used to assemble the SABINS database so that local and state agencies apply a standard set of procedures and models as they gather data for their regions. PMID:29151773

  2. Building the School Attendance Boundary Information System (SABINS): Collecting, Processing, and Modeling K to 12 Educational Geography.

    PubMed

    Saporito, Salvatore; Van Riper, David; Wakchaure, Ashwini

    2013-01-01

    The School Attendance Boundary Information System is a social science data infrastructure project that assembles, processes, and distributes spatial data delineating K through 12th grade school attendance boundaries for thousands of school districts in the U.S. Although geography is a fundamental organizing feature of K to 12 education, until now school attendance boundary data have not been made readily available on a massive basis and in an easy-to-use format. The School Attendance Boundary Information System removes these barriers by linking spatial data delineating school attendance boundaries with tabular data describing the demographic characteristics of populations living within those boundaries. This paper explains why a comprehensive GIS database of K through 12 school attendance boundaries is valuable, how original spatial information delineating school attendance boundaries is collected from local agencies, and techniques for modeling and storing the data so they provide maximum flexibility to the user community. An important goal of this paper is to share the techniques used to assemble the SABINS database so that local and state agencies apply a standard set of procedures and models as they gather data for their regions.

  3. Procedural Modeling for Rapid-Prototyping of Multiple Building Phases

    NASA Astrophysics Data System (ADS)

    Saldana, M.; Johanson, C.

    2013-02-01

    RomeLab is a multidisciplinary working group at UCLA that uses the city of Rome as a laboratory for the exploration of research approaches and dissemination practices centered on the intersection of space and time in antiquity. In this paper we present a multiplatform workflow for the rapid-prototyping of historical cityscapes through the use of geographic information systems, procedural modeling, and interactive game development. Our workflow begins by aggregating archaeological data in a GIS database. Next, 3D building models are generated from the ArcMap shapefiles in Esri CityEngine using procedural modeling techniques. A GIS-based terrain model is also adjusted in CityEngine to fit the building elevations. Finally, the terrain and city models are combined in Unity, a game engine which we used to produce web-based interactive environments which are linked to the GIS data using Keyhole Markup Language (KML). The goal of our workflow is to demonstrate that knowledge generated within a first-person virtual world experience can inform the evaluation of data derived from textual and archaeological sources, and vice versa.
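
    At the core of the CityEngine step above is rule-driven extrusion of GIS footprints into building volumes. A minimal sketch of that single operation in plain Python (the footprint and height below are hypothetical, not RomeLab data):

```python
def extrude_footprint(footprint, height):
    """Minimal procedural-modeling step: lift a 2-D building footprint
    (a list of (x, y) vertices) into a 3-D prism, the basic operation
    a CityEngine extrude rule performs on a shapefile polygon."""
    base = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    # One quad wall per footprint edge, wrapping around to the start.
    walls = [
        (base[i], base[(i + 1) % len(base)],
         top[(i + 1) % len(top)], top[i])
        for i in range(len(base))
    ]
    return {"base": base, "top": top, "walls": walls}

# Hypothetical 10 m x 6 m footprint extruded to a 7 m building.
model = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 7.0)
print(len(model["walls"]))  # 4
```

A real CGA rule would go on to split the walls into floors and facades; the extrusion shown here is only the first step of that grammar.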

  4. Serials Management by Microcomputer: The Potential of DBMS.

    ERIC Educational Resources Information Center

    Vogel, J. Thomas; Burns, Lynn W.

    1984-01-01

    Describes serials management at Philadelphia College of Textiles and Science library via a microcomputer, a file manager called PFS, and a relational database management system called dBase II. Check-in procedures, programing with dBase II, "static" and "active" databases, and claim procedures are discussed. Check-in forms are…

  5. Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States

    ERIC Educational Resources Information Center

    Ivan, Ion; Ciurea, Cristian; Pavel, Sorin

    2010-01-01

    The collaborative system with finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)

  6. Planned and ongoing projects (POP) database: development and results.

    PubMed

    Wild, Claudia; Erdös, Judit; Warmuth, Marisa; Hinterreiter, Gerda; Krämer, Peter; Chalon, Patrice

    2014-11-01

    The aim of this study was to present the development, structure and results of a database on planned and ongoing health technology assessment (HTA) projects (POP Database) in Europe. The POP Database (POP DB) was set up in an iterative process from a basic Excel sheet to a multifunctional electronic online database. The functionalities, such as the search terminology, the procedures to fill and update the database, the access rules to enter the database, as well as the maintenance roles, were defined in a multistep participatory feedback loop with EUnetHTA Partners. The POP Database has become an online database that hosts not only the titles and MeSH categorizations, but also some basic information on status and contact details about the listed projects of EUnetHTA Partners. Currently, it stores more than 1,200 planned, ongoing or recently published projects of forty-three EUnetHTA Partners from twenty-four countries. Because the POP Database aims to facilitate collaboration, it also provides a matching system to assist in identifying similar projects. Overall, more than 10 percent of the projects in the database are identical both in terms of pathology (indication or disease) and technology (drug, medical device, intervention). In addition, approximately 30 percent of the projects are similar, meaning that they have at least some overlap in content. Although the POP DB is successful concerning regular updates of most national HTA agencies within EUnetHTA, little is known about its actual effects on collaborations in Europe. Moreover, many non-nationally nominated HTA producing agencies neither have access to the POP DB nor can share their projects.
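
    The POP DB matching idea, "identical" when pathology and technology both match and "similar" on partial overlap, can be sketched with set operations; the project records and terms below are hypothetical, not actual EUnetHTA entries:

```python
def match_projects(projects):
    """Flag pairs of HTA projects as 'identical' when their term sets
    match exactly, or 'similar' when the sets merely overlap (a
    simplification of the POP DB matching system)."""
    pairs = []
    for i in range(len(projects)):
        for j in range(i + 1, len(projects)):
            a, b = projects[i], projects[j]
            if a["terms"] == b["terms"]:
                pairs.append((a["id"], b["id"], "identical"))
            elif a["terms"] & b["terms"]:
                pairs.append((a["id"], b["id"], "similar"))
    return pairs

# Hypothetical entries; MeSH-like terms stand in for real records.
projects = [
    {"id": "AT-01", "terms": {"melanoma", "ipilimumab"}},
    {"id": "BE-07", "terms": {"melanoma", "ipilimumab"}},
    {"id": "SE-03", "terms": {"melanoma", "vemurafenib"}},
]
print(match_projects(projects))
```

The real database distinguishes pathology terms from technology terms; collapsing both into one set keeps the sketch short while preserving the identical-versus-similar logic.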

  7. BloodSpot: a database of gene expression profiles and transcriptional programs for healthy and malignant haematopoiesis

    PubMed Central

    Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T.

    2016-01-01

    Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene-expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan–Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. BloodPool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. PMID:26507857
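
The Kaplan–Meier analysis mentioned here is the standard product-limit estimator; a textbook sketch (not the BloodSpot implementation) over (time, event) pairs:

```python
# Minimal Kaplan-Meier product-limit estimator. Input is a list of
# (time, event) pairs where event=1 is death and event=0 is censoring.
# This is a generic sketch, not BloodSpot's actual code.

def kaplan_meier(samples):
    """Return [(time, survival_probability)] at each death time."""
    # process deaths before censorings at tied times
    ordered = sorted(samples, key=lambda s: (s[0], -s[1]))
    n_at_risk = len(ordered)
    survival = 1.0
    curve = []
    for time, event in ordered:
        if event == 1:
            survival *= (n_at_risk - 1) / n_at_risk
            curve.append((time, survival))
        n_at_risk -= 1  # each subject leaves the risk set after its time
    return curve

print(kaplan_meier([(5, 1), (8, 0), (12, 1), (20, 0)]))
# [(5, 0.75), (12, 0.375)]
```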

  8. Estimation of the Past and Future Infrastructure Damage Due to the Permafrost Evolution Processes

    NASA Astrophysics Data System (ADS)

    Sergeev, D. O.; Chesnokova, I. V.; Morozova, A. V.

    2015-12-01

    Geocryological processes such as thermokarst, frost heaving and fracturing, icing, and thermal erosion are a source of immediate danger to structures. Economic losses during construction in permafrost areas are also linked to other geological processes that take on a specific character in cold regions: swamping, desertification, deflation, flooding, mudflows and landslides. Linear transport structures are the most vulnerable component of regional and national economies: because of their great length, they must cross landscapes with differing permafrost conditions that react differently to climate change. Climate warming favors thermokarst, whereas frost heaving is linked with climate cooling. As a result, a structure can encounter circumstances not anticipated in its construction project, and local engineering problems of structure exploitation grow into global risks for the sustainable development of regions. The authors developed a database of geocryological damage cases over the last twelve years across Russian territory. The spatial data carry an attribute table populated with published information from various permafrost conference proceedings. A preliminary GIS analysis of the gathered data showed a widespread territorial distribution of cases of negative consequences of geocryological process activity. Information about the maximum effect of geocryological processes was validated by detailed field investigation along the railways in the Yamal and Transbaikalia Regions. The authors expect to expand the database with similar data from other sectors of the Arctic, which is important for analyzing the regional, temporal and industrial tendencies of geocryological risk evolution. The obtained information could be used in insurance procedures and in decision-support information systems at different management levels. The investigation was completed with financial support from the Russian Foundation for Basic Research (Project #13-05-00462).

  9. Mining for Murder-Suicide: An Approach to Identifying Cases of Murder-Suicide in the National Violent Death Reporting System Restricted Access Database.

    PubMed

    McNally, Matthew R; Patton, Christina L; Fremouw, William J

    2016-01-01

    The National Violent Death Reporting System (NVDRS) is a United States Centers for Disease Control and Prevention (CDC) database of violent deaths from 2003 to the present. The NVDRS collects information from 32 states on several types of violent deaths, including suicides, homicides, homicides followed by suicides, and deaths resulting from child maltreatment or intimate partner violence, as well as legal intervention and accidental firearm deaths. Despite the availability of data from police narratives, medical examiner reports, and other sources, reliably finding the cases of murder-suicide in the NVDRS has proven problematic due to the lack of a unique code for murder-suicide incidents and outdated descriptions of case-finding procedures from previous researchers. By providing a description of the methods used to access the NVDRS and the coding procedures used to decipher these data, the authors seek to assist future researchers in correctly identifying cases of murder-suicide deaths while avoiding false positives. © 2015 American Academy of Forensic Sciences.

  10. Complications of Non-Operating Room Procedures: Outcomes From the National Anesthesia Clinical Outcomes Registry.

    PubMed

    Chang, Beverly; Kaye, Alan D; Diaz, James H; Westlake, Benjamin; Dutton, Richard P; Urman, Richard D

    2015-04-07

    This study examines the impact of procedural locations and types of anesthetics on patient outcomes in non-operating room anesthesia (NORA) locations. The National Anesthesia Clinical Outcomes Registry database was examined to compare OR to NORA anesthetic complications and patient demographics. The National Anesthesia Clinical Outcomes Registry database was examined for all patient procedures from 2010 to 2013. A total of 12,252,846 cases were analyzed, with 205 practices contributing information, representing 1494 facilities and 7767 physician providers. Cases were separated on the basis of procedure location, OR, or NORA. Subgroup analysis examined outcomes from specific subspecialties. Non-OR anesthesia procedures were performed on a higher percentage of patients older than 50 years (61.92% versus 55.56%, P < 0.0001). Monitored anesthesia care (MAC) (20.15%) and sedation (2.05%) were more common in NORA locations. The most common minor complications were postoperative nausea and vomiting (1.06%), inadequate pain control (1.01%), and hemodynamic instability (0.62%). The most common major complications were serious hemodynamic instability (0.10%) and upgrade of care (0.10%). There was a greater incidence of complications in cardiology and radiology locations. Overall mortality was higher in OR versus NORA (0.04% versus 0.02%, P < 0.0001). Subcategory analysis showed increased incidence of death in cardiology and radiology locations (0.05%). Non-OR anesthesia procedures have lower morbidity and mortality rates than OR procedures, contrary to some previously published studies. However, the increased complication rates in both the cardiology and radiology locations may need to be the target of future safety investigations. Providers must ensure proper monitoring of patients, and NORA locations need to be held to the same standard of care as the main operating room. Further studies need to identify at-risk patients and procedures that may predispose patients to complications.

  11. Interventional Procedures Outside of the Operating Room: Results From the National Anesthesia Clinical Outcomes Registry.

    PubMed

    Chang, Beverly; Kaye, Alan D; Diaz, James H; Westlake, Benjamin; Dutton, Richard P; Urman, Richard D

    2018-03-01

    This study examines the impact of procedural locations and types of anesthetics on patient outcomes in non-operating room anesthesia (NORA) locations. The National Anesthesia Clinical Outcomes Registry database was examined to compare OR to NORA anesthetic complications and patient demographics. The National Anesthesia Clinical Outcomes Registry database was examined for all patient procedures from 2010 to 2013. A total of 12,252,846 cases were analyzed, with 205 practices contributing information, representing 1494 facilities and 7767 physician providers. Cases were separated on the basis of procedure location, OR, or NORA. Subgroup analysis examined outcomes from specific subspecialties. NORA procedures were performed on a higher percentage of patients older than 50 years (61.92% versus 55.56%, P < 0.0001). Monitored anesthesia care (MAC) (20.15%) and sedation (2.05%) were more common in NORA locations. The most common minor complications were postoperative nausea and vomiting (1.06%), inadequate pain control (1.01%), and hemodynamic instability (0.62%). The most common major complications were serious hemodynamic instability (0.10%) and upgrade of care (0.10%). There was a greater incidence of complications in cardiology and radiology locations. Overall mortality was higher in OR versus NORA (0.04% versus 0.02%, P < 0.0001). Subcategory analysis showed increased incidence of death in cardiology and radiology locations (0.05%). NORA procedures have lower morbidity and mortality rates than OR procedures, contrary to some previously published studies. However, the increased complication rates in both the cardiology and radiology locations may need to be the target of future safety investigations. Providers must ensure proper monitoring of patients, and NORA locations need to be held to the same standard of care as the main operating room. Further studies need to identify at-risk patients and procedures that may predispose patients to complications.

  12. Building a Quality Controlled Database of Meteorological Data from NASA Kennedy Space Center and the United States Air Force's Eastern Range

    NASA Technical Reports Server (NTRS)

    Brenton, James C.; Barbre, Robert E., Jr.; Decker, Ryan K.; Orcutt, John M.

    2018-01-01

    The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large data sets is ensuring that erroneous data are removed from databases and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC databases that have inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt and grow the QC database. Details of the QC procedures will be described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
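
A QC pass of the kind described, flagging erroneous tower measurements before they enter the database, can be illustrated as follows; the thresholds are assumptions for illustration, not EV44's actual limits:

```python
# Illustrative QC pass over a wind-speed time series: flag out-of-range
# values and unphysical jumps (spikes). Thresholds below are assumed.

RANGE = (0.0, 75.0)   # plausible wind-speed bounds, m/s (assumed)
MAX_STEP = 15.0       # max plausible change between samples (assumed)

def qc_flags(series):
    """Return (index, reason) pairs for samples failing a check."""
    flags = []
    for i, v in enumerate(series):
        if not (RANGE[0] <= v <= RANGE[1]):
            flags.append((i, "out_of_range"))
        elif i > 0 and abs(v - series[i - 1]) > MAX_STEP:
            flags.append((i, "spike"))
    return flags

print(qc_flags([3.2, 4.1, 40.0, 4.4, -1.0]))
# [(2, 'spike'), (3, 'spike'), (4, 'out_of_range')]
```

Note that a spike flags both the jump up and the jump back down; a production QC chain would typically add persistence, consistency-between-towers, and climatology checks on top of this.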

  13. Racial/ethnic disparities in provision of dental procedures to children enrolled in Delta Dental insurance in Milwaukee, Wisconsin.

    PubMed

    Bhagavatula, Pradeep; Xiang, Qun; Eichmiller, Fredrick; Szabo, Aniko; Okunseri, Christopher

    2014-01-01

    Most studies on the provision of dental procedures have focused on Medicaid enrollees known to have inadequate access to dental care. Little information on private insurance enrollees exists. This study documents the rates of preventive, restorative, endodontic, and surgical dental procedures provided to children enrolled in Delta Dental of Wisconsin (DDWI) in Milwaukee. We analyzed DDWI claims data for Milwaukee children aged 0-18 years between 2002 and 2008. We linked the ZIP codes of enrollees to the 2000 U.S. Census information to derive racial/ethnic estimates in the different ZIP codes. We estimated the rates of preventive, restorative, endodontic, and surgical procedures provided to children in different racial/ethnic groups based on the population estimates derived from the U.S. Census data. Descriptive and multivariable analysis was done using Poisson regression modeling on dental procedures per year. Over the 7 years, a total of 266,380 enrollees in 46 ZIP codes were covered in the database. Approximately 64 percent, 44 percent, and 49 percent of White, African American, and Hispanic children, respectively, had at least one dental visit during the study period. The rates of preventive procedures increased up to the age of 9 years and decreased thereafter among children in all three racial groups included in the analysis. African American and Hispanic children received half as many preventive procedures as White children. Our study shows that substantial racial disparities may exist in the types of dental procedures that were received by children. © 2012 American Association of Public Health Dentistry.

  14. Consulting report on the NASA technology utilization network system

    NASA Technical Reports Server (NTRS)

    Hlava, Marjorie M. K.

    1992-01-01

    The purposes of this consulting effort are: (1) to evaluate the existing management and production procedures and workflow as they each relate to the successful development, utilization, and implementation of the NASA Technology Utilization Network System (TUNS) database; (2) to identify, as requested by the NASA Project Monitor, the strengths, weaknesses, areas of bottlenecking, and previously unaddressed problem areas affecting TUNS; (3) to recommend changes or modifications of existing procedures as necessary in order to effect corrections for the overall benefit of NASA TUNS database production, implementation, and utilization; and (4) to recommend the addition of alternative procedures, routines, and activities that will consolidate and facilitate the production, implementation, and utilization of the NASA TUNS database.

  15. 47 CFR 0.241 - Authority delegated.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...

  16. 47 CFR 0.241 - Authority delegated.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...

  17. 47 CFR 0.241 - Authority delegated.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...

  18. IPD-MHC 2.0: an improved inter-species database for the study of the major histocompatibility complex.

    PubMed

    Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G; Flicek, Paul; Bontrop, Ronald E; Hammond, John A; Marsh, Steven G E

    2017-01-04

    The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovins, salmonids and murids. These sequences are expertly curated and made publicly available through an open access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification, while incorporating new allele submissions. Here, we describe the latest database release, the IPD-MHC 2.0 and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC-group. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Correcting ligands, metabolites, and pathways

    PubMed Central

    Ott, Martin A; Vriend, Gert

    2006-01-01

    Background A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as data fields in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and visualization. It is freely available at provided that the copyright notice of all original data is cited. The database will be useful for querying and browsing biochemical pathways, and to obtain reference information for identifying compounds. However, these applications require that the underlying data be correct, and that is the focus of BioMeta. PMID:17132165
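
The reaction-balance check described for BioMeta amounts to comparing element counts on both sides of a reaction; a simplified sketch for plain formulas without parentheses:

```python
# Sketch of a stoichiometric balance check: parse simple molecular
# formulas (e.g. "C6H12O6", no parentheses or charges) and verify that
# element counts match on both sides. Not BioMeta's actual validator.

import re
from collections import Counter

def atoms(formula: str) -> Counter:
    """Count atoms of each element in a simple formula."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(n) if n else 1
    return counts

def balanced(lhs, rhs) -> bool:
    """True if both sides of the reaction have identical atom totals."""
    total = lambda side: sum((atoms(f) for f in side), Counter())
    return total(lhs) == total(rhs)

# Glucose fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2
print(balanced(["C6H12O6"], ["C2H5OH", "C2H5OH", "CO2", "CO2"]))  # True
```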

  20. Ultra-Structure database design methodology for managing systems biology data and analyses

    PubMed Central

    Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C

    2009-01-01

    Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
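
The core Ultra-Structure idea, behavior stored as data rows interpreted by a small, stable engine, can be illustrated with a toy rule table; the rule schema here is invented for illustration:

```python
# Toy illustration of the Ultra-Structure approach: behavior lives in
# data rows ("rules"), not in code. Changing what the system does means
# editing the rule table, not the interpreter below. Schema is invented.

RULES = [
    # (condition_field, condition_value, action)
    ("file_type", "mzML", "parse_spectra"),
    ("file_type", "gff3", "parse_annotations"),
    ("quality",   "low",  "quarantine"),
]

def apply_rules(record: dict) -> list:
    """Return the actions whose conditions match the record."""
    return [action for field, value, action in RULES
            if record.get(field) == value]

print(apply_rules({"file_type": "mzML", "quality": "low"}))
# ['parse_spectra', 'quarantine']
```

Adding support for a new data type means appending a row to `RULES` (in a real system, a database table), with no change to `apply_rules` itself, which is the flexibility the abstract describes.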

  1. A new approach to preserve privacy data mining based on fuzzy theory in numerical database

    NASA Astrophysics Data System (ADS)

    Cui, Run; Kim, Hyoung Joong

    2014-01-01

    With the rapid development of information techniques, data mining has become one of the most important tools for discovering deep associations among tuples in large-scale databases. Protecting private information is therefore a major challenge, especially during the data mining procedure. In this paper, a new method for privacy protection based on fuzzy theory is proposed. Traditional fuzzy approaches in this area apply fuzzification to the data without considering its readability. A new style of obscured data expression is introduced that provides more detail about the subsets without reducing readability, and a balance between privacy level and utility is adopted when forming suitable subgroups. An experiment shows that the approach supports classification without lowering accuracy. In the future, this approach can be adapted to data streams, given the low computational complexity of the fuzzy function, with a suitable modification.
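
The fuzzification step the abstract refers to can be sketched with triangular membership functions: an exact value (e.g. an age) is replaced by membership grades in overlapping fuzzy sets. The attribute and set boundaries below are illustrative assumptions, not the paper's actual parameters:

```python
# Fuzzification sketch: replace an exact numeric attribute with
# membership grades in overlapping fuzzy sets, obscuring the precise
# value while keeping the record human-readable. Boundaries are assumed.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

AGE_SETS = {"young": (0, 20, 40), "middle": (30, 45, 60), "old": (50, 70, 120)}

def fuzzify(age):
    """Return nonzero membership grades of `age` in each fuzzy set."""
    return {name: round(triangular(age, *abc), 2)
            for name, abc in AGE_SETS.items() if triangular(age, *abc) > 0}

print(fuzzify(35))  # {'young': 0.25, 'middle': 0.33}
```

A record fuzzified this way still supports classification (a learner can use the grade vector as features) without exposing the exact age.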

  2. Unified Database for Rejected Image Analysis Across Multiple Vendors in Radiography.

    PubMed

    Little, Kevin J; Reiser, Ingrid; Liu, Lili; Kinsey, Tiffany; Sánchez, Adrian A; Haas, Kateland; Mallory, Florence; Froman, Carmen; Lu, Zheng Feng

    2017-02-01

    Reject rate analysis has been part of radiography departments' quality control since the days of screen-film radiography. In the era of digital radiography, one might expect that reject rate analysis is easily facilitated because of readily available information produced by the modality during the examination procedure. Unfortunately, this is not always the case. The lack of an industry standard and the wide variety of system log entries and formats have made it difficult to implement a robust multivendor reject analysis program, and logs do not always include all relevant information. The increased use of digital detectors exacerbates this problem because of higher reject rates associated with digital radiography compared with computed radiography. In this article, the authors report on the development of a unified database for vendor-neutral reject analysis across multiple sites within an academic institution and share their experience from a team-based approach to reduce reject rates. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
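
The unification step such a vendor-neutral database needs, mapping each vendor's log fields onto a common record before computing reject rates, might be sketched as follows; the vendor field names are invented examples, not actual vendor formats:

```python
# Sketch of vendor-neutral reject analysis: per-vendor field maps
# translate heterogeneous log entries into one common record shape,
# after which a single reject-rate computation applies. Field names
# are invented for illustration.

FIELD_MAPS = {
    "vendorA": {"exam": "StudyDesc", "rejected": "Discarded"},
    "vendorB": {"exam": "procedure_name", "rejected": "is_reject"},
}

def normalize(vendor, entry):
    """Map a raw vendor log entry onto the common record schema."""
    m = FIELD_MAPS[vendor]
    return {"exam": entry[m["exam"]], "rejected": bool(entry[m["rejected"]])}

def reject_rate(records):
    return sum(r["rejected"] for r in records) / len(records)

logs = [normalize("vendorA", {"StudyDesc": "Chest PA", "Discarded": 1}),
        normalize("vendorB", {"procedure_name": "Chest PA", "is_reject": 0}),
        normalize("vendorB", {"procedure_name": "Knee AP", "is_reject": 0})]
print(round(reject_rate(logs), 3))  # 0.333
```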

  3. A DATABASE FOR TRACKING TOXICOGENOMIC SAMPLES AND PROCEDURES WITH GENOMIC, PROTEOMIC AND METABONOMIC COMPONENTS

    EPA Science Inventory

    A Database for Tracking Toxicogenomic Samples and Procedures with Genomic, Proteomic and Metabonomic Components
    Wenjun Bao1, Jennifer Fostel2, Michael D. Waters2, B. Alex Merrick2, Drew Ekman3, Mitchell Kostich4, Judith Schmid1, David Dix1
    Office of Research and Developmen...

  4. Rhinology and medical malpractice: An update of the medicolegal landscape of the last ten years.

    PubMed

    Tolisano, Anthony M; Justin, Grant A; Ruhl, Douglas S; Cable, Benjamin B

    2016-01-01

    Malpractice claims pertaining to rhinological procedures are a potentially important source of information that could be used to minimize the risk of future litigation and improve patient care. A retrospective review of a publicly available database containing jury verdicts and settlements. The LexisNexis Jury Verdicts and Settlements database was reviewed for all lawsuits and out-of-court adjudications related to the practice of rhinology. Data including patient demographics, type of surgery performed, plaintiff allegation, nature of injury, outcomes, and indemnities were collected and analyzed. Of 85 cases meeting inclusion criteria, 42 were decided by a jury and 43 were adjudicated out of court. Endoscopic sinus surgery was the most commonly litigated surgery. The plaintiff was favored when the eye was injured (P = 0.0196), but the defendant was favored when neuropsychological injuries (P = 0.0137) or recurrent/worsened symptoms (P = 0.0050) were cited. No difference was found when death or skull base injuries occurred. When lack of informed consent was an allegation, the defendant was favored (P = 0.0001). A payout was made in two-thirds of cases overall, but the defendant was favored in two-thirds of cases decided by a jury. Payments were significant for both out-of-court settlements ($1.3 million) and jury verdicts ($2 million). Endoscopic sinus surgery remains the most commonly litigated rhinology procedure and has the potential to result in large payouts. Meticulous dissection, recognition of complications, and documentation of informed consent remain paramount for providing optimal patient care. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  5. From the operating room to the courtroom: a comprehensive characterization of litigation related to facial plastic surgery procedures.

    PubMed

    Svider, Peter F; Keeley, Brieze R; Zumba, Osvaldo; Mauro, Andrew C; Setzen, Michael; Eloy, Jean Anderson

    2013-08-01

    Malpractice litigation has increased in recent decades, contributing to higher health-care costs. Characterization of complications leading to litigation is of special interest to practitioners of facial plastic surgery procedures because of the higher proportion of elective cases relative to other subspecialties. In this analysis, we comprehensively examine malpractice litigation in facial plastic surgery procedures and characterize factors important in determining legal responsibility, as this information may be of great interest and use to practitioners in several specialties. Retrospective analysis. The Westlaw legal database was examined for court records pertaining to facial plastic surgery procedures. The term "medical malpractice" was searched in combination with numerous procedures obtained from the American Academy of Facial Plastic and Reconstructive Surgery website. Of the 88 cases included, 62.5% were decided in the physician's favor, 9.1% were resolved with an out-of-court settlement, and 28.4% ended in a jury awarding damages for malpractice. The mean settlement was $577,437 and mean jury award was $352,341. The most litigated procedures were blepharoplasties and rhinoplasties. Alleged lack of informed consent was noted in 38.6% of cases; other common complaints were excessive scarring/disfigurement, functional considerations, and postoperative pain. This analysis characterized factors in determining legal responsibility in facial plastic surgery cases. Several factors were identified as potential targets for minimizing liability. Informed consent was the most reported entity in these malpractice suits. This finding emphasizes the importance of open communication between physicians and their patients regarding expectations as well as documentation of specific risks, benefits, and alternatives. © 2013 The American Laryngological, Rhinological, and Otological Society, Inc.

  6. Operative record using intraoperative digital data in neurosurgery.

    PubMed

    Houkin, K; Kuroda, S; Abe, H

    2000-01-01

    The purpose of this study was to develop a new method for more efficient and accurate operative records using intra-operative digital data in neurosurgery, including macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera and microscopic procedures were also recorded using a microdigital camera attached to an operating microscope. Operative records were then recorded digitally and filed in a computer using image retouch software and database base software. The time necessary for editing of the digital data and completing the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer and desktop publishing can be achieved using the intra-operative data, including intra-operative photographs.

  7. Using statistical process control to make data-based clinical decisions.

    PubMed

    Pfadt, A; Wheeler, D J

    1995-01-01

    Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior, as well as the course of treatment, to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability, based on the writings of Walter Shewhart and W. Edwards Deming in the area of industrial quality control, helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help to achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.
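
The control charts mentioned above can be computed directly; for an individuals (XmR) chart, the standard limits are the mean plus or minus 2.66 times the average moving range, and points outside the limits signal special-cause variation:

```python
# Shewhart individuals (XmR) control chart sketch: control limits are
# mean +/- 2.66 * average moving range; points outside are flagged as
# special-cause variation.

def xmr_limits(data):
    """Return (lower, upper) control limits for an individuals chart."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def out_of_control(data):
    """Indices of points outside the control limits."""
    lo, hi = xmr_limits(data)
    return [i for i, x in enumerate(data) if x < lo or x > hi]

# e.g. daily counts of a target behavior, with one unusual day
data = [12, 14, 13, 15, 14, 13, 30, 14, 12]
print(out_of_control(data))  # [6]
```

In clinical use, a flagged point prompts a search for an assignable cause (a medication change, a staffing change) rather than a reaction to ordinary day-to-day variability.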

  8. One approach to design of speech emotion database

    NASA Astrophysics Data System (ADS)

    Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav

    2016-05-01

    This article describes a system for evaluating the credibility of recordings with emotional content. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems, which are designed to detect human emotions in the voice. Information about a speaker's emotional state is useful to security forces and emergency call services. Personnel in action (soldiers, police officers, and firefighters) are often exposed to stress, and information about their emotional state, carried in the voice, helps a dispatcher adapt commands during an intervention. Emergency call agents must likewise recognize the mental state of the caller to adjust the course of the conversation; in this case, evaluation of the psychological state is the key factor for a successful intervention. A quality database of sound recordings is essential for creating such systems. Quality databases exist, such as the Berlin Database of Emotional Speech or Humaine, but actors created these databases in an audio studio, meaning the recordings contain simulated rather than genuine emotions. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks; another, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article. The results describe the advantages and applicability of the developed method.

  9. Reporting of HIV-infected pregnant women: estimates from a Brazilian study.

    PubMed

    Domingues, Rosa Maria Soares Madeira; Saraceni, Valéria; Leal, Maria do Carmo

    2018-01-01

    To estimate the coverage of the reporting of cases of HIV-infected pregnant women, to estimate the increase in the coverage of the reporting with the routine search of data in other Brazilian health information systems, and to identify missed opportunities for identification of HIV-infected pregnant women in Brazilian maternity hospitals. This is a descriptive study on the linkage of Brazilian databases with primary data from the "Nascer no Brasil" study and secondary database collection from national health information systems. The "Nascer no Brasil" is a national-based study carried out in 2011-2012 with 23,894 pregnant women, which identified HIV-infected pregnant women using prenatal and medical records. We searched for cases of HIV-infected pregnant women identified in the "Nascer no Brasil" study in the Information System of Notifiable Diseases, the Control System for Laboratory Tests of the National CD4+/CD8+ Lymphocyte Count and HIV Viral Load Network, and the Logistics Control System for Medications. We used the OpenRecLink software for the linkage of databases. We estimated the notification coverage, with the respective confidence interval, of the evaluated Brazilian health information systems. We estimated the coverage of the reporting of HIV-infected pregnant women in the Information System of Notifiable Diseases as 57.1% (95%CI 42.9-70.2), and we located 89.3% of the HIV-infected pregnant women (95%CI 81.2-94.2) in some of the Brazilian health information systems researched. The search in other national health information systems would result in an increase of 57.1% of the reported cases. We identified no missed opportunities for the diagnosis of HIV+ in pregnant women in the maternity hospitals evaluated by the "Nascer no Brasil" study. 
The routine search for information in other Brazilian health information systems, a procedure carried out by the Ministry of Health for cases of AIDS in adults and children, should be adopted for cases of HIV in pregnancy.
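
The record-linkage step described above can be illustrated with a minimal deterministic sketch. OpenRecLink itself supports probabilistic linkage; the simplified matching rule, field names, and records below are hypothetical and only show the general idea of locating one database's cases in another.

```python
import unicodedata

def normalize(name):
    """Strip accents, case, and extra spaces so that name variants match."""
    ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return " ".join(ascii_name.lower().split())

def link(study_records, registry_records):
    """Deterministically link on (normalized name, birth date); returns
    matched id pairs and the study ids left unmatched."""
    index = {(normalize(r["name"]), r["birth"]): r["id"] for r in registry_records}
    matches, unmatched = [], []
    for rec in study_records:
        key = (normalize(rec["name"]), rec["birth"])
        if key in index:
            matches.append((rec["id"], index[key]))
        else:
            unmatched.append(rec["id"])
    return matches, unmatched

# Invented records standing in for the study and surveillance databases
study = [{"id": "S1", "name": "Maria  Souza", "birth": "1985-03-02"},
         {"id": "S2", "name": "Ana Lima", "birth": "1990-07-11"}]
registry = [{"id": "R9", "name": "MARIA SOUZA", "birth": "1985-03-02"}]
```

Cases found in any linked system raise the estimated reporting coverage; the unmatched remainder corresponds to cases absent from the searched systems.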

  10. Data collection procedures for the Software Engineering Laboratory (SEL) database

    NASA Technical Reports Server (NTRS)

    Heller, Gerard; Valett, Jon; Wild, Mary

    1992-01-01

    This document is a guidebook to collecting software engineering data on software development and maintenance efforts, as practiced in the Software Engineering Laboratory (SEL). It supersedes the document entitled Data Collection Procedures for the Rehosted SEL Database, number SEL-87-008 in the SEL series, which was published in October 1987. It presents procedures to be followed on software development and maintenance projects in the Flight Dynamics Division (FDD) of Goddard Space Flight Center (GSFC) for collecting data in support of SEL software engineering research activities. These procedures include detailed instructions for the completion and submission of SEL data collection forms.

  11. Pediatric reduction mammaplasty: A retrospective analysis of the Kids' Inpatient Database (KID).

    PubMed

    Soleimani, Tahereh; Evans, Tyler A; Sood, Rajiv; Hadad, Ivan; Socas, Juan; Flores, Roberto L; Tholpady, Sunil S

    2015-09-01

    Pediatric breast reduction mammaplasty is a procedure commonly performed in children suffering from excess breast tissue, back pain, and social anxiety. Minimal information exists regarding demographics, epidemiology, and complications in adolescents. As health care reform progresses, investigating the socioeconomic and patient-related factors affecting cost and operative outcomes is essential. The Kids' Inpatient Database (KID) was used from 2000 to 2009. Patients aged 20 years and younger with an International Classification of Diseases, 9th Revision diagnosis code for macromastia and a procedure code for reduction mammaplasty were included. Demographic data, including age, sex, payer mix, and location, were collected. Significant independent variables associated with complications and duration of stay were identified with bivariate and multiple regression analysis. A total of 1,345 patients between the ages of 12 and 20 were evaluated. The majority of patients were white (64%), from zip codes in the highest income quartile (36%), and privately insured (75%). Overall comorbidity and complication rates were 30% and 3.2%, respectively. Duration of stay was associated with race, income quartile, insurance type, the presence of complications, and hospital type. African-American race, Medicaid coverage, lower income, and private investor-owned hospitals were predictive of greater hospital charges. In this large retrospective database analysis, pediatric reduction mammaplasty had a relatively low early complication rate and short duration of stay. Discrepancies in complications, total charges, and duration of stay were associated with race, location, and socioeconomic status. Although demonstrably safe, this is the first study demonstrating the negative effect of race and socioeconomic status on a completely elective procedure involving children. These results demonstrate the intricate association between socioeconomic and patient-related factors influencing overall outcomes in the pediatric population. 
Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Malodorous consequences: what comprises negligence in anosmia litigation?

    PubMed

    Svider, Peter F; Mauro, Andrew C; Eloy, Jean Anderson; Setzen, Michael; Carron, Michael A; Folbe, Adam J

    2014-03-01

    Our objectives were to evaluate factors raised in malpractice litigation in which plaintiffs alleged that physician negligence led to olfactory dysfunction. We analyzed publicly available federal and court records using Westlaw, a widely used computerized legal database. Pertinent jury verdicts and settlements were comprehensively examined for alleged causes of malpractice (including procedures for iatrogenic causes), defendant specialty, patient demographics, and other factors raised in legal proceedings. Of 25 malpractice proceedings meeting inclusion criteria, 60.0% were resolved for the defendant, 12.0% were settled, and 28.0% had jury-awarded damages. Median payments were significant ($300,000 and $412,500 for settlements and awards, respectively). Otolaryngologists were the most frequently named defendants (68.0%), with the majority of iatrogenic cases (55.0%) related to rhinologic procedures. Associated medical events accompanying anosmia included dysgeusia, cerebrospinal fluid leaks, and meningitis. Other alleged factors included requiring additional surgery (80.0%), unnecessary procedures (47.4% of iatrogenic procedural cases), untimely diagnosis leading to anosmia (44.0%), inadequate informed consent (35.0%), dysgeusia (56.0%), and psychological sequelae (24.0%). Olfactory dysfunction can adversely affect quality of life and thus is a potential area for malpractice litigation. This is particularly true for iatrogenic causes of anosmia, especially following rhinologic procedures. Settlements and damages awarded were considerable, making an understanding of the factors detailed in this analysis of paramount importance for the practicing otolaryngologist. This analysis reinforces the importance of explicitly including anosmia in a comprehensive informed consent process for any rhinologic procedure. © 2013 ARS-AAOA, LLC.

  13. Data Extraction and Management in Networks of Observational Health Care Databases for Scientific Research: A Comparison of EU-ADR, OMOP, Mini-Sentinel and MATRICE Strategies

    PubMed Central

    Gini, Rosa; Schuemie, Martijn; Brown, Jeffrey; Ryan, Patrick; Vacchi, Edoardo; Coppola, Massimo; Cazzola, Walter; Coloma, Preciosa; Berni, Roberto; Diallo, Gayo; Oliveira, José Luis; Avillach, Paul; Trifirò, Gianluca; Rijnbeek, Peter; Bellentani, Mariadonata; van Der Lei, Johan; Klazinga, Niek; Sturkenboom, Miriam

    2016-01-01

    Introduction: We see increased use of existing observational data in order to achieve fast and transparent production of empirical evidence in health care research. Multiple databases are often used to increase power, to assess rare exposures or outcomes, or to study diverse populations. For privacy and sociological reasons, original data on individual subjects can’t be shared, requiring a distributed network approach where data processing is performed prior to data sharing. Case Descriptions and Variation Among Sites: We created a conceptual framework distinguishing three steps in local data processing: (1) data reorganization into a data structure common across the network; (2) derivation of study variables not present in original data; and (3) application of study design to transform longitudinal data into aggregated data sets for statistical analysis. We applied this framework to four case studies to identify similarities and differences in the United States and Europe: Exploring and Understanding Adverse Drug Reactions by Integrative Mining of Clinical Records and Biomedical Knowledge (EU-ADR), Observational Medical Outcomes Partnership (OMOP), the Food and Drug Administration’s (FDA’s) Mini-Sentinel, and the Italian network—the Integration of Content Management Information on the Territory of Patients with Complex Diseases or with Chronic Conditions (MATRICE). Findings: National networks (OMOP, Mini-Sentinel, MATRICE) all adopted shared procedures for local data reorganization. The multinational EU-ADR network needed locally defined procedures to reorganize its heterogeneous data into a common structure. Derivation of new data elements was centrally defined in all networks but the procedure was not shared in EU-ADR. Application of study design was a common and shared procedure in all the case studies. Computer procedures were embodied in different programming languages, including SAS, R, SQL, Java, and C++. 
    Conclusion: Using our conceptual framework we found several areas that would benefit from research to identify optimal standards for the production of empirical knowledge from existing databases. PMID:27014709
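
A minimal sketch of the three local processing steps in the framework above, assuming invented field names and rows; in a real network only the final aggregated counts would leave the site.

```python
from collections import Counter

def reorganize(raw_rows):
    """Step 1: map a site's native schema onto a common data structure."""
    return [{"patient": r["pid"], "drug": r["atc"], "event": r["dx"]}
            for r in raw_rows]

def derive(rows):
    """Step 2: derive a study variable not present in the original data
    (here, a flag for exposure to a hypothetical ATC drug class)."""
    for r in rows:
        r["exposed"] = r["drug"].startswith("N02")
    return rows

def apply_design(rows):
    """Step 3: collapse longitudinal records into the aggregated counts
    that are actually shared with the coordinating centre."""
    return Counter((r["exposed"], r["event"] is not None) for r in rows)

# Invented site data in a native schema
raw = [{"pid": 1, "atc": "N02BE01", "dx": "liver injury"},
       {"pid": 2, "atc": "C07AB02", "dx": None},
       {"pid": 3, "atc": "N02BA01", "dx": None}]
shared = apply_design(derive(reorganize(raw)))
```

The networks compared in the paper differ mainly in which of these three steps is defined centrally versus locally, not in the overall shape of the pipeline.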

  14. Ménière's Disease: A CHEER Database Study of Local and Regional Patient Encounter and Procedure Patterns.

    PubMed

    Crowson, Matthew G; Schulz, Kristine; Parham, Kourosh; Vambutas, Andrea; Witsell, David; Lee, Walter T; Shin, Jennifer J; Pynnonen, Melissa A; Nguyen-Huynh, Anh; Ryan, Sheila E; Langman, Alan

    2016-07-01

    (1) Integrate practice-based patient encounters using the Dartmouth Atlas Medicare database to understand practice treatments for Ménière's disease (MD). (2) Describe differences in the practice patterns between academic and community providers for MD. Practice-based research database review. CHEER (Creating Healthcare Excellence through Education and Research) network academic and community providers. MD patient data were identified with ICD-9 and CPT codes. Demographics, unique visits, and procedures per patient were tabulated. The Dartmouth Atlas of Health Care was used to reference regional health care utilization. Statistical analysis included 1-way analyses of variance, bivariate linear regression, and Student's t tests, with significance set at P < .05. A total of 2071 unique patients with MD were identified from 8 academic and 10 community otolaryngology-head and neck surgery provider centers nationally. Average age was 56.5 years; 63.9% were female; and 91.4% self-reported white ethnicity. There was an average of 3.2 visits per patient. Western providers had the highest average visits per patient. Midwest providers had the highest average procedures per patient. Community providers had more visits per site and per patient than did academic providers. Academic providers had significantly more operative procedures per site (P = .0002) when compared with community providers. Health care service areas with higher total Medicare reimbursements per enrollee did not report significantly more operative procedures being performed. This is the first practice-based clinical research database study to describe MD practice patterns. We demonstrate that academic otolaryngology-head and neck surgery providers perform significantly more operative procedures than do community providers for MD, and we validate these data with an independent Medicare spending database. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  15. Monitoring rationale, strategy, issues, and methods: UMRR-EMP LTRMP fish component

    USGS Publications Warehouse

    Ickes, Brian S.; Sauer, Jennifer S.; Rogala, James T.

    2014-01-01

    The Long Term Resource Monitoring Program (LTRMP), an element of the multiagency partnership Upper Mississippi River Restoration-Environmental Management Program, has been monitoring fishes in the Upper Mississippi River System (UMRS) for over two decades, using scientific and highly standardized methods. Today, the LTRMP’s data assets represent one of the world’s largest and most extensive datasets on a great river. Methods and procedures used over the past two decades have been documented and have proven key to obtaining data that are (a) scientifically valid, (b) comparable over time, and (c) comparable over space. These procedures manuals coordinate and standardize methods, procedures, and field behaviors in the execution of long-term monitoring, permitting informed management of important sources of error that are actually under program control. As LTRMP databases have matured in scope and accumulated more years' worth of data, their utility in research and management in the UMRS basin has increased notably. To maximize that utility, data users need to be aware not only of “how the data were collected,” as portrayed in the procedures manuals, but also of “why the data were collected in the way they were, at the scales they were, and in the manner that they were.” Whereas the procedures manuals contribute information as to “how” the data were gained, this document seeks to contribute information as to “why.” As such, this document is intended to be a companion to the procedures manuals. Herein, we present information on the rationale for monitoring nearly one-fifth of the entire North American freshwater fish fauna (representing the greatest freshwater fish diversity on the planet at temperate latitudes); the strategies employed and their reasoning; and discussions of issues associated with the sampling design itself, the data arising therefrom, and uses of those data in different contexts.

  16. Blurry-frame detection and shot segmentation in colonoscopy videos

    NASA Astrophysics Data System (ADS)

    Oh, JungHwan; Hwang, Sae; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2003-12-01

    Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step in content-based video analysis and retrieval, providing efficient access to important images and video segments in a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames and to segment the videos into shots based on their content. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry-frame detection and shot segmentation is extensible to videos captured from other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
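
A toy version of blurry-frame detection can be built on any sharpness score. The gradient-energy measure and the threshold below are illustrative stand-ins, not the paper's published features; blurry frames have weak edges and therefore score low.

```python
def sharpness(frame):
    """Mean squared horizontal + vertical intensity difference over a
    grayscale frame given as a 2-D list; a simple edge-energy proxy."""
    h, w = len(frame), len(frame[0])
    total = 0
    for y in range(h - 1):
        for x in range(w - 1):
            dx = frame[y][x + 1] - frame[y][x]
            dy = frame[y + 1][x] - frame[y][x]
            total += dx * dx + dy * dy
    return total / ((h - 1) * (w - 1))

def split_blurry(frames, threshold):
    """Partition frame indices into (clear, blurry) by thresholding."""
    clear = [i for i, f in enumerate(frames) if sharpness(f) >= threshold]
    blurry = [i for i, f in enumerate(frames) if sharpness(f) < threshold]
    return clear, blurry

# Tiny synthetic frames: a checkerboard (sharp) and a flat patch (blurry)
sharp_frame = [[(255 if (x + y) % 2 else 0) for x in range(8)] for y in range(8)]
flat_frame = [[128 for _ in range(8)] for _ in range(8)]
```

After blurry frames are removed, shot boundaries would be detected on the remaining sequence, which is the second stage of the scheme described above.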

  17. Developing an Approach to Prioritize River Restoration using Data Extracted from Flood Risk Information System Databases.

    NASA Astrophysics Data System (ADS)

    Vimal, S.; Tarboton, D. G.; Band, L. E.; Duncan, J. M.; Lovette, J. P.; Corzo, G.; Miles, B.

    2015-12-01

    Prioritizing river restoration requires information on river geometry. In many US states, detailed river geometry has been collected for floodplain mapping and is available in Flood Risk Information Systems (FRIS). In particular, North Carolina has developed, for its 100 counties, a database of numerous HEC-RAS models available through its Flood Risk Information System (FRIS). These models, which include over 260 variables, were developed and updated by numerous contractors. They contain detailed surveyed or LiDAR-derived cross-sections and modeled flood extents for different extreme-event return periods. In this work, data from over 4,700 HEC-RAS models were integrated and upscaled, using the detailed cross-section information and the 100-year modeled flood extents, to enable river restoration prioritization for the entire state of North Carolina. We developed procedures to extract geomorphic properties such as entrenchment ratio and incision ratio from these models. Entrenchment ratio quantifies the vertical containment of a river, and thereby its vulnerability to flooding, while incision ratio quantifies depth per unit width. A map of entrenchment ratio for the whole state was derived by linking the model results to a geodatabase. A ranking of highly entrenched counties was obtained, enabling prioritization for flood allowance and mitigation. The results were shared through HydroShare, with web maps developed for visualization using the Google Maps Engine API.
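
The entrenchment-ratio extraction can be sketched against a single surveyed cross-section. The definition used here, flood-prone width (at twice the maximum bankfull depth) divided by bankfull width, is the conventional geomorphic one; the cross-section coordinates and bankfull stage below are invented, not taken from the FRIS models.

```python
def width_at(xs, zs, stage):
    """Wetted top width at a water-surface elevation, with linear
    interpolation where the stage crosses the ground line."""
    width = 0.0
    for (x1, z1), (x2, z2) in zip(zip(xs, zs), zip(xs[1:], zs[1:])):
        if z1 >= stage and z2 >= stage:
            continue  # segment entirely above water
        if z1 < stage and z2 < stage:
            width += x2 - x1  # segment entirely submerged
        else:
            # segment crosses the stage; count the submerged fraction
            frac = (stage - min(z1, z2)) / abs(z2 - z1)
            width += frac * (x2 - x1)
    return width

def entrenchment_ratio(xs, zs, bankfull_stage):
    """Flood-prone width (stage at twice the max bankfull depth)
    divided by bankfull width."""
    thalweg = min(zs)
    floodprone_stage = thalweg + 2 * (bankfull_stage - thalweg)
    return width_at(xs, zs, floodprone_stage) / width_at(xs, zs, bankfull_stage)

# Invented cross-section: station (m) and ground elevation (m)
xs = [0, 10, 20, 30, 40, 50, 60]
zs = [10, 6, 0, 0, 6, 10, 10]
```

Low ratios indicate an entrenched, vertically contained channel; computed per cross-section and aggregated per county, this is the kind of quantity that fed the statewide ranking.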

  18. Automated Data Aggregation for Time-Series Analysis: Study Case on Anaesthesia Data Warehouse.

    PubMed

    Lamer, Antoine; Jeanne, Mathieu; Ficheur, Grégoire; Marcilly, Romaric

    2016-01-01

    Data stored in operational databases are not directly reusable. Aggregation modules are necessary to facilitate secondary use: they decrease the volume of data while increasing the amount of usable information. In this paper, we present four automated aggregation engines integrated into an anaesthesia data warehouse. Four clinical questions illustrate the use of these engines for various improvements in quality of care: duration of procedure, drug administration, and assessment of hypotension and its related treatment.
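
An aggregation engine of the kind described can be sketched as a resampling step plus a derived clinical flag; the 20-second MAP samples and the 65 mmHg hypotension threshold below are illustrative assumptions, not details from the paper.

```python
from collections import defaultdict

def aggregate_by_minute(samples):
    """Collapse (second, value) samples into per-minute means,
    shrinking data volume while adding a usable summary."""
    buckets = defaultdict(list)
    for t, v in samples:
        buckets[t // 60].append(v)
    return {m: sum(vs) / len(vs) for m, vs in sorted(buckets.items())}

def hypotension_minutes(per_minute, threshold=65):
    """Derived flag: minutes where mean arterial pressure stayed
    below an assumed 65 mmHg threshold."""
    return [m for m, v in per_minute.items() if v < threshold]

# Invented MAP samples recorded every 20 s during a procedure
samples = [(0, 80), (20, 78), (40, 79), (60, 62), (80, 60), (100, 63), (120, 75)]
```

The aggregated minutes, not the raw samples, are what a clinical query such as "duration of hypotension and its related treatment" would run against.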

  19. Activity-based costing in services: literature bibliometric review.

    PubMed

    Stefano, Nara Medianeira; Filho, Nelson Casarotto

    2013-12-01

    This article aims to structure a bibliographic portfolio on the application of the ABC method in services and to contribute to discussions within the scientific community. The methodology followed a three-stage procedure: planning, execution, and synthesis. The ProKnow-C process (Knowledge Development Process - Constructivist) was used in the execution stage. International databases (ISI Web of Knowledge and Scopus) were used to collect information. As a result, we obtained a bibliographic portfolio of 21 articles (with scientific recognition) dealing with the proposed theme.

  20. Database management systems for process safety.

    PubMed

    Early, William F

    2006-03-17

    Several elements of the process safety management (PSM) regulation require tracking and documentation of actions: process hazard analyses, management of change, process safety information, operating procedures, training, contractor safety programs, pre-startup safety reviews, incident investigations, emergency planning, and compliance audits. These elements can generate hundreds of items annually that require action and follow-up. This tracking and documentation is commonly identified as a failing in compliance audits, and it is difficult to manage through action lists, spreadsheets, or other tools that plant personnel can comfortably manipulate. This paper discusses the recent implementation of a database management system at a chemical plant and chronicles the improvements accomplished through the introduction of a customized system. The system as implemented modeled the normal plant workflows and provided simple, recognizable user interfaces for ease of use.
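
A tracking database of the kind the paper advocates can be sketched with a single table and one audit query; the schema, item texts, and dates below are invented, not taken from the system described.

```python
import sqlite3

# Minimal illustrative schema for PSM action items
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE action_items (
        id INTEGER PRIMARY KEY,
        element TEXT NOT NULL,        -- PSM element, e.g. 'MOC', 'PHA'
        description TEXT NOT NULL,
        due_date TEXT NOT NULL,       -- ISO date
        closed_date TEXT              -- NULL while the action is open
    )""")
conn.executemany(
    "INSERT INTO action_items (element, description, due_date, closed_date) "
    "VALUES (?, ?, ?, ?)",
    [("PHA", "Install relief valve on T-101", "2006-02-01", "2006-01-20"),
     ("MOC", "Update P&ID after pump swap", "2006-02-15", None),
     ("Training", "Refresher for unit operators", "2006-01-10", None)])

def overdue(conn, today):
    """Open items past their due date: the case that spreadsheets and
    action lists tend to lose track of before a compliance audit."""
    return conn.execute(
        "SELECT element, description FROM action_items "
        "WHERE closed_date IS NULL AND due_date < ? ORDER BY due_date",
        (today,)).fetchall()
```

A query like `overdue` makes the audit-critical state (open, past due) a first-class view rather than something reconstructed by hand.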

  1. Multimodal optical imaging database from tumour brain human tissue: endogenous fluorescence from glioma, metastasis and control tissues

    NASA Astrophysics Data System (ADS)

    Poulon, Fanny; Ibrahim, Ali; Zanello, Marc; Pallud, Johan; Varlet, Pascale; Malouki, Fatima; Abi Lahoud, Georges; Devaux, Bertrand; Abi Haidar, Darine

    2017-02-01

    Eliminating the time-consuming process of conventional biopsy would be a practical improvement, as would increasing the accuracy of tissue diagnosis and patient comfort. We address these needs by developing a multimodal nonlinear endomicroscope that allows real-time optical biopsies during surgical procedures. It will provide immediate information for diagnostic use without removal of tissue and will assist in choosing the optimal surgical strategy. The instrument will combine several means of contrast: nonlinear fluorescence, second-harmonic generation, reflectance, fluorescence lifetime, and spectral analysis. Multimodality is crucial for reliable and comprehensive analysis of tissue. In parallel with the instrumental development, we are improving our understanding of the endogenous fluorescence signal under the different modalities that will be implemented in the instrument. This endeavor will allow us to create a database of the optical signatures of diseased and control brain tissues. This proceeding presents preliminary results from this database for three types of tissue: cortex, metastasis, and glioblastoma.

  2. Achievable Rate Estimation of IEEE 802.11ad Visual Big-Data Uplink Access in Cloud-Enabled Surveillance Applications.

    PubMed

    Kim, Joongheon; Kim, Jong-Kook

    2016-01-01

    This paper addresses computation procedures for estimating the impact of interference on 60 GHz IEEE 802.11ad uplink access used to construct a visual big-data database from randomly deployed surveillance camera sensing devices. The large-scale visual information acquired from the surveillance camera devices will be used to organize the big-data database; this estimation is therefore essential for constructing a centralized cloud-enabled surveillance database. The performance estimation study captures the interference impact on the target cloud access points from multiple interference components generated by 60 GHz wireless transmissions from nearby surveillance camera devices to their associated cloud access points. Under this uplink interference scenario, the interference impact on the main wireless transmission, from a target surveillance camera device to its associated target cloud access point, is measured and estimated for a number of settings, taking into consideration 60 GHz radiation characteristics and antenna radiation pattern models.
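
The core of such an interference estimate can be sketched as a SINR computation with a free-space-style path model. Real 60 GHz analysis also involves oxygen absorption and directional antenna patterns, which this simplified sketch omits; all powers and distances below are invented.

```python
import math

def path_gain(distance_m, freq_ghz=60.0, exponent=2.0):
    """Free-space-style path gain (linear); a real 60 GHz model would
    also include oxygen absorption and antenna directivity."""
    wavelength = 0.3 / freq_ghz  # c / f, in metres
    return (wavelength / (4 * math.pi * distance_m)) ** exponent

def sinr_db(target_dist, interferer_dists, tx_power_w=0.1, noise_w=1e-12):
    """SINR at the access point: received target power over noise plus
    the summed powers of interfering camera uplinks."""
    signal = tx_power_w * path_gain(target_dist)
    interference = sum(tx_power_w * path_gain(d) for d in interferer_dists)
    return 10 * math.log10(signal / (noise_w + interference))
```

Sweeping the interferer distances in `sinr_db` reproduces the qualitative effect studied in the paper: nearby camera uplinks sharply degrade the achievable rate of the target link.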

  3. Intelligent data management

    NASA Technical Reports Server (NTRS)

    Campbell, William J.

    1985-01-01

    Intelligent data management is the concept of interfacing a user to a database management system through a value-added service that allows a full range of data management operations at a high level of abstraction using written human language. The development of such a system will be based on expert systems and related artificial intelligence technologies, and will allow the capture of procedural and relational knowledge about data management operations and the support of a user with such knowledge in an on-line, interactive manner. Such a system will have the following capabilities: (1) the ability to construct a model of the user's view of the database, based on the query syntax; (2) the ability to transform English queries and commands into database instructions and processes; (3) the ability to use heuristic knowledge to rapidly prune the data space in search processes; and (4) the ability to use an on-line explanation system to allow the user to understand what the system is doing and why. Additional information is given in outline form.

  4. SFINX-a drug-drug interaction database designed for clinical decision support systems.

    PubMed

    Böttiger, Ylva; Laine, Kari; Andersson, Marine L; Korhonen, Tuomas; Molin, Björn; Ovesjö, Marie-Louise; Tirkkonen, Tuire; Rane, Anders; Gustafsson, Lars L; Eiermann, Birgit

    2009-06-01

    The aim was to develop a drug-drug interaction database (SFINX) to be integrated into decision support systems or to be used in website solutions for clinical evaluation of interactions. Key elements such as substance properties and names, drug formulations, text structures and references were defined before development of the database. Standard operating procedures for literature searches, text writing rules and a classification system for clinical relevance and documentation level were determined. ATC codes, CAS numbers and country-specific codes for substances were identified and quality assured to ensure safe integration of SFINX into other data systems. Much effort was put into giving short and practical advice regarding clinically relevant drug-drug interactions. SFINX includes over 8,000 interaction pairs and is integrated into Swedish and Finnish computerised decision support systems. Over 31,000 physicians and pharmacists are receiving interaction alerts through SFINX. User feedback is collected for continuous improvement of the content. SFINX is a potentially valuable tool delivering instant information on drug interactions during prescribing and dispensing.
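
The pair-plus-classification structure described for SFINX can be sketched as a lookup keyed by unordered substance pairs; the substances, severity classes, and advice texts below are invented for illustration and are not SFINX content.

```python
# Toy interaction table in the spirit of SFINX's pair + clinical
# classification structure; all entries here are invented.
INTERACTIONS = {
    frozenset(["warfarin", "ibuprofen"]):
        {"class": "D", "advice": "Avoid combination: bleeding risk."},
    frozenset(["simvastatin", "clarithromycin"]):
        {"class": "D", "advice": "Pause statin during the antibiotic course."},
    frozenset(["metformin", "cimetidine"]):
        {"class": "C", "advice": "Consider dose adjustment."},
}

def check_prescription(drugs):
    """Return one alert per interacting pair on the prescription, as a
    decision-support system would at prescribing or dispensing time."""
    alerts = []
    drugs = [d.lower() for d in drugs]
    for i, a in enumerate(drugs):
        for b in drugs[i + 1:]:
            hit = INTERACTIONS.get(frozenset([a, b]))
            if hit:
                alerts.append((a, b, hit["class"], hit["advice"]))
    return alerts
```

Keying on unordered pairs means the alert fires regardless of the order in which the drugs appear on the prescription, which is the behavior a prescriber-facing system needs.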

  5. FIREDOC users manual, 3rd edition

    NASA Astrophysics Data System (ADS)

    Jason, Nora H.

    1993-12-01

    FIREDOC is the on-line bibliographic database reflecting the holdings (published reports, journal articles, conference proceedings, books, and audiovisual items) of the Fire Research Information Services (FRIS) at the Building and Fire Research Laboratory (BFRL), National Institute of Standards and Technology (NIST). This manual provides step-by-step procedures for entering and exiting the database via telecommunication lines, as well as a number of techniques for searching the database and processing the results. This Third Edition is necessitated by the change to a UNIX platform. The new computer allows faster response time when searching via a modem and, in addition, offers Internet accessibility. FIREDOC may be used with personal computers running DOS or Windows, with Macintosh computers, and with workstations. New sections are included on how to access the Internet and how to obtain references of interest. Appendix F, Quick Guide to Getting Started, will be useful to both modem and Internet users.

  6. Informed Consent for Interventional Radiology Procedures: A Survey Detailing Current European Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Dwyer, H.M.; Lyon, S.M.; Fotheringham, T.

    Purpose: Official recommendations for obtaining informed consent for interventional radiology procedures are that the patient gives their consent to the operator more than 24 hr prior to the procedure. This has significant implications for interventional radiology practice. The purpose of this study was to identify the proportion of European interventional radiologists who conform to these guidelines. Methods: A questionnaire was designed consisting of 12 questions on current working practice and opinions regarding informed consent. These questions related to where, when and by whom consent was obtained from the patient. Questions also related to the use of formal consent forms and written patient information leaflets. Respondents were asked whether they felt patients received adequate explanation regarding indications for intervention, the procedure, alternative treatment options and complications. The questionnaire was distributed to 786 European interventional radiologists who were members of interventional societies. The anonymous replies were then entered into a database and analyzed. Results: Two hundred and fifty-four (32.3%) questionnaires were returned. Institutions were classified as academic (56.7%), non-academic (40.5%) or private (2.8%). Depending on the procedure, in a significant proportion of patients consent was obtained in the outpatient department (22%), on the ward (65%) and in the radiology day case ward (25%), but in over half (56%) of patients consent or re-consent was obtained in the interventional suite. Fifty percent of respondents indicated that they obtain consent more than 24 hr before some procedures, in 42.9% consent is obtained on the morning of the procedure and 48.8% indicated that in some patients consent is obtained immediately before the procedure. We found that junior medical staff obtained consent in 58% of cases. 
Eighty-two percent of respondents do not use specific consent forms and 61% have patient information leaflets. The majority of respondents were satisfied with their level of explanation regarding indications for treatment (69.3%) and the procedure (78.7%). Fifty-nine percent felt patients understood alternative treatment options. Only 37.8% of radiologists document possible complications in the patient's chart. Comments from respondents indicated that there is insufficient time for radiologists to obtain consent in all patients. Suggestions to improve current local policies included developing the role of radiology nursing staff and the use of radiology outpatient clinics. Conclusions: More than 50% of respondents are unhappy with their policies for obtaining informed consent. Interventional societies have a role to play in advocating formal consent guidelines.

  7. Rapid Landslide Mapping by Means of Post-Event Polarimetric SAR Imagery

    NASA Astrophysics Data System (ADS)

    Plank, Simon; Martinis, Sandro; Twele, Andre

    2016-08-01

    Rapid mapping of landslides, quickly providing information about the extent of the affected area and the type and grade of damage, is crucial to enable fast crisis response. A review of the literature shows that most synthetic aperture radar (SAR) data-based landslide mapping procedures use change detection techniques. However, the required very high resolution (VHR) pre-event SAR imagery, acquired shortly before the landslide event, is commonly not available. Due to limitations in onboard disk space and downlink transmission rates, modern VHR SAR missions do not systematically cover the entire world. We present a fast and robust procedure for mapping landslides, based on change detection between freely available, systematically acquired pre-event optical data and post-event polarimetric SAR data.
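
    The core of a change-detection step like this can be illustrated with a simple log-ratio threshold between two co-registered intensity images. This is a generic sketch, not the authors' actual procedure (which combines optical pre-event data with polarimetric SAR features); the threshold value is an arbitrary assumption.

```python
import numpy as np

def landslide_change_mask(pre, post, threshold=1.5):
    """Flag pixels whose intensity changed strongly between acquisitions.

    pre, post: 2-D arrays of (linear) image intensities on a common grid.
    threshold: absolute log-ratio above which a pixel is marked as changed
               (an illustrative value, not a calibrated one).
    """
    eps = 1e-6  # avoid log(0) on zero-valued pixels
    log_ratio = np.log((post + eps) / (pre + eps))
    return np.abs(log_ratio) > threshold

# Toy example: a 3x3 scene where one pixel brightens sharply.
pre = np.ones((3, 3))
post = np.ones((3, 3))
post[1, 1] = 10.0  # simulated landslide scar
mask = landslide_change_mask(pre, post)
print(mask[1, 1], int(mask.sum()))  # the changed pixel is the only one flagged
```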

  8. Greater physician involvement improves coding outcomes in endobronchial ultrasound-guided transbronchial needle aspiration procedures.

    PubMed

    Pillai, Anilkumar; Medford, Andrew R L

    2013-01-01

    Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies, especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. This was a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. A total of 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures, and cross-checks were made against a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses, which can be significant, especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.
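
    The reconciliation described — prospectively notifying coders of every procedure and cross-checking against a procedural database — amounts to a set comparison between what was performed and what was coded. A minimal sketch, with hypothetical procedure identifiers:

```python
# Hypothetical identifiers; the study's actual database fields are not published.
performed = {"EBUS-001", "EBUS-002", "EBUS-003"}   # procedures logged by the physician
coded = {"EBUS-001", "EBUS-003"}                   # procedures recorded by the coding team

missed = performed - coded     # done but never coded -> lost revenue
spurious = coded - performed   # coded but not done  -> audit risk
print(sorted(missed), sorted(spurious))  # -> ['EBUS-002'] []
```

    Running such a comparison after each list closes is what keeps the coding-attributable revenue loss at zero.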

  9. Empirical study on neural network based predictive techniques for automatic number plate recognition

    NASA Astrophysics Data System (ADS)

    Shashidhara, M. S.; Indrakumar, S. S.

    2011-10-01

    The objective of this study is to provide an easy, accurate and effective technology for Bangalore city traffic control, based on image processing and laser beam techniques. The core concept is an automatic number plate recognition system. First, the number plate is recognized when a vehicle breaks the traffic rules at a signal. The registration record is then fetched automatically from the database of the RTO office. Next, the system sends a notice with penalty information to the vehicle owner's email address, and an SMS is sent to the vehicle owner. In this paper, we use cameras with zooming options and laser beams to obtain accurate pictures, and apply image processing techniques such as edge detection to locate the vehicle and the position of the number plate, and to read plates of several kinds: plain number plates, plates with additional information, and plates in different fonts. The database of the vehicle registration office is accessed to identify the name, address and other information associated with the vehicle number, and is updated to record the violation and penalty. A feed-forward artificial neural network is used for OCR. This procedure is particularly important for glyphs that are visually similar, such as '8' and '9', and results in training sets of between 25,000 and 40,000 samples. Overtraining of the neural network is prevented by Bayesian regularization. The neural network output value is set to 0.05 when the input is not the desired glyph, and 0.95 for correct input.
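
    The training-target scheme described (0.95 at the correct glyph's output, 0.05 elsewhere) can be sketched as follows; the glyph set and array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

GLYPHS = "0123456789"  # assumed digit-only output layer for plate OCR

def target_vector(glyph, on=0.95, off=0.05):
    """Training target for one glyph: 0.95 at the correct output unit, 0.05 elsewhere.

    Soft targets like these keep the sigmoid outputs away from saturation,
    which matters for visually similar glyphs such as '8' and '9'.
    """
    t = np.full(len(GLYPHS), off)
    t[GLYPHS.index(glyph)] = on
    return t

t8 = target_vector("8")
print(t8)
```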

  10. Design storm prediction and hydrologic modeling using a web-GIS approach on a free-software platform

    NASA Astrophysics Data System (ADS)

    Castrogiovanni, E. M.; La Loggia, G.; Noto, L. V.

    2005-09-01

    The aim of this work has been to implement a set of procedures to automate design storm prediction and the evaluation of the flood discharge associated with a selected risk level. For this purpose a Geographic Information System has been implemented using GRASS 5.0. One of the main components of the system is a georeferenced database of the highest-intensity rainfalls, for assigned durations, recorded in Sicily. This database contains the main characteristics of more than 250 raingauges, as well as the values of intense rainfall events recorded by these raingauges. These data are managed through the combined use of PostgreSQL and the GRASS-GIS 5.0 database facilities. Some of the best-known probability distributions have been implemented within the Geographic Information System in order to determine the point and/or areal rain values once duration and return period have been defined. The system also includes a hydrological module to compute the probable flow, for a selected risk level, at points chosen by the user. A peculiarity of the system is the possibility of querying the model through a web interface. The assumption is that the rising need for geographic information, together with the growing importance of public participation in the decision process, requires new forms of diffusion for territorial data. Furthermore, technicians as well as public administrators need customized and specialist data to support planning, particularly in emergencies. In this perspective a web interface has been developed for the hydrologic system, allowing remote users to access a centralized database and processing power without complex hardware/software infrastructures.
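
    One of the best-known distributions for annual-maximum rainfall is the Gumbel (EV1) law. A minimal sketch of a design-storm quantile for a chosen return period follows; the parameter values are hypothetical, and the paper does not state which distributions or parameters were actually implemented.

```python
import math

def gumbel_quantile(u, alpha, T):
    """Rainfall depth with return period T (years) under a Gumbel law.

    u: location parameter, alpha: scale parameter, both fitted to the
    annual-maximum series of a raingauge for a given duration.
    """
    F = 1.0 - 1.0 / T  # non-exceedance probability of the T-year event
    return u - alpha * math.log(-math.log(F))

# Hypothetical parameters for a 1-hour duration at one gauge (mm).
print(round(gumbel_quantile(u=30.0, alpha=8.0, T=100), 1))  # -> 66.8
```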

  11. Joint use of Disparate Data for the Surveillance of Zoonoses: A Feasibility Study for a One Health Approach in Germany.

    PubMed

    Wendt, A; Kreienbrock, L; Campe, A

    2016-11-01

    Zoonotic diseases concern human and animal populations and are transmitted between both humans and animals. Nevertheless, surveillance data on zoonoses are collected separately for the most part in different databases for either humans or animals. Bearing in mind the concept of One Health, it is assumed that a global view of these data might help to prevent and control zoonotic diseases. In following this approach, we wanted to determine which zoonotic data are routinely collected in Germany and whether these data could be integrated in a useful way to improve surveillance. Therefore, we conducted an inventory of the existing data collections and gathered information on possible One Health surveillance areas in Germany by approaching experts through a scoping survey, personal interviews and during a workshop. In matching the information between the status quo for existing data collections and the possible use cases for One Health surveillance, this study revealed that data integration is currently hindered by missing data, missing pathogen information or a lack of timeliness, depending on the surveillance purpose. Therefore, integrating the existing data would require substantial efforts and changes to adapt the collection procedures for routine databases. Nevertheless, during this study, we observed a need for different stakeholders from the human and animal health sectors to share information to improve the surveillance of zoonoses. Therefore, our findings suggest that before the data sets from different databases are integrated for joint analyses, the surveillance could be improved by the sharing of information and knowledge through a collaboration of stakeholders from different sectors and institutions. © 2016 Blackwell Verlag GmbH.

  12. Practice Benchmarking in the Age of Targeted Auditing

    PubMed Central

    Langdale, Ryan P.; Holland, Ben F.

    2012-01-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists. PMID:23598847

  13. Practice benchmarking in the age of targeted auditing.

    PubMed

    Langdale, Ryan P; Holland, Ben F

    2012-11-01

    The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists.
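
    The summarisation step — rolling charges and denials up to the CPT-code level — can be sketched in a few lines; the codes and amounts below are fabricated for illustration, not drawn from the Medicare Part B file.

```python
from collections import defaultdict

# Toy claim lines: (CPT code, charge in dollars, denied?). Values are illustrative.
claims = [
    ("96413", 350.0, False),
    ("96413", 360.0, True),
    ("99214", 180.0, False),
    ("99214", 175.0, False),
    ("99214", 190.0, True),
]

totals = defaultdict(float)
denials = defaultdict(list)
for cpt, charge, denied in claims:
    totals[cpt] += charge
    denials[cpt].append(denied)

# Per-code benchmarks: total billed and the fraction of lines denied.
benchmarks = {cpt: {"total_charges": totals[cpt],
                    "denial_rate": sum(denials[cpt]) / len(denials[cpt])}
              for cpt in totals}
print(benchmarks)
```

    Comparing a practice's own per-code denial rates against such national benchmarks is what reveals outlier billing patterns before an auditor does.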

  14. The Human Transcript Database: A Catalogue of Full Length cDNA Inserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouck, John; McLeod, Michael; Worley, Kim

    1999-09-10

    The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that gave reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to web-based access and arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool, which combined information about protein domains with sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.

  15. Interdisciplinary Collaboration amongst Colleagues and between Initiatives with the Magnetics Information Consortium (MagIC) Database

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.; Jonestrask, L.; Shaar, R.

    2014-12-01

    Earth science grand challenges often require interdisciplinary and geographically distributed scientific collaboration to make significant progress. However, this organic collaboration between researchers, educators, and students only flourishes with the reduction or elimination of technological barriers. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the geo-, paleo-, and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples. MagIC is dedicated to facilitating scientific progress towards several highly multidisciplinary grand challenges and the MagIC Database team is currently beta testing a new MagIC Search Interface and API designed to be flexible enough for the incorporation of large heterogeneous datasets and for horizontal scalability to tens of millions of records and hundreds of requests per second. In an effort to reduce the barriers to effective collaboration, the search interface includes a simplified data model and upload procedure, support for online editing of datasets amongst team members, commenting by reviewers and colleagues, and automated contribution workflows and data retrieval through the API. This web application has been designed to generalize to other databases in MagIC's umbrella website (EarthRef.org) so the Geochemical Earth Reference Model (http://earthref.org/GERM/) portal, Seamount Biogeosciences Network (http://earthref.org/SBN/), EarthRef Digital Archive (http://earthref.org/ERDA/) and EarthRef Reference Database (http://earthref.org/ERR/) will benefit from its development.

  16. Surgical treatment of malrotation after infancy: a population-based study.

    PubMed

    Malek, Marcus M; Burd, Randall S

    2005-01-01

    Because malrotation most commonly presents in infants, treatment recommendations for older children (>1 year) have been based on data obtained from small case series. The purpose of this study was to use a large national database to determine the clinical significance of older children presenting with malrotation to develop treatment recommendations for this group. Records of children undergoing a Ladd's procedure were identified in the Kids' Inpatient Database, an administrative database that contains all pediatric discharges from 27 states during 2000. Patient characteristics, associated diagnoses, operations performed, and mortality were evaluated. Discharge weighting was used to obtain a national estimate of the number of children older than 1 year treated for malrotation. Two hundred nineteen older children (>1 and <18 years) undergoing a Ladd's procedure were identified in the database. One hundred sixty-four (75%) of these patients were admitted for treatment of malrotation, whereas most of the remaining 55 patients (25%) were admitted for another diagnosis and underwent a Ladd's procedure incidental to another abdominal operation. Seventy-five patients underwent a Ladd's procedure during an emergency admission. Thirty-one patients had volvulus or intestinal ischemia, 7 underwent intestinal resection, and 1 patient died. Based on case weightings, it was estimated that 362 older children underwent a Ladd's procedure for symptoms related to malrotation in 2000 in the United States (5.3 cases per million population). These findings provide support for performing a Ladd's procedure in older children with incidentally found malrotation to prevent the rare but potentially devastating complications of this anomaly.
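
    The discharge weighting used to scale the 219 sampled cases up to a national estimate is, at its core, a weighted count: each sampled record contributes its weight to the total. A toy sketch with made-up weights (the actual KID weights differ):

```python
# Each KID record carries a discharge weight reflecting how many national
# discharges it represents. The weights below are illustrative only.
records = [
    {"age": 5,  "weight": 1.6},
    {"age": 9,  "weight": 1.7},
    {"age": 14, "weight": 1.8},
]

# National estimate = sum of the weights of the sampled records.
national_estimate = sum(r["weight"] for r in records)
print(national_estimate)  # 3 sampled discharges scale up to ~5.1 nationally
```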

  17. Alaska IPASS database preparation manual.

    Treesearch

    P. McHugh; D. Olson; C. Schallau

    1989-01-01

    Describes the data, their sources, and the calibration procedures used in compiling a database for the Alaska IPASS (interactive policy analysis simulation system) model. Although this manual is for Alaska, it provides generic instructions for analysts preparing databases for other geographical areas.

  18. CDS - Database Administrator's Guide

    NASA Astrophysics Data System (ADS)

    Day, J. P.

    This guide aims to instruct the CDS database administrator in: o The CDS file system. o The CDS index files. o The procedure for assimilating a new CDS tape into the database. It is assumed that the administrator has read SUN/79.

  19. BloodSpot: a database of gene expression profiles and transcriptional programs for healthy and malignant haematopoiesis.

    PubMed

    Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T

    2016-01-04

    Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene-expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan-Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. Bloodpool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
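
    The Kaplan-Meier analysis shown alongside the expression plots rests on the product-limit estimator. A minimal, dependency-free sketch with toy survival data follows; BloodSpot's implementation is not published here, and a production estimator would also handle tied times and confidence intervals.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: event/censoring times; events: 1 = death observed, 0 = censored.
    Returns (time, S(t)) pairs at each observed death time.
    """
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e:  # a death shrinks the survival curve by (at_risk-1)/at_risk
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # both deaths and censorings leave the risk set
    return curve

# Toy survival data (months, death observed?) with no tied times.
curve = kaplan_meier([2, 4, 7, 10, 12], [1, 1, 0, 1, 0])
print(curve)
```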

  20. TOWARD THE DEVELOPMENT OF A CONSENSUS MATERIALS DATABASE FOR PRESSURE TECHNOLOGY APPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swindeman, Robert W; Ren, Weiju

    The ASME construction code books specify materials and fabrication procedures that are acceptable for pressure technology applications. However, with few exceptions, the materials properties provided in the ASME code books include no statistics or other information pertaining to material variability. Such information is central to the prediction and prevention of failure events. Many sources of materials data exist that provide variability information, but such sources do not necessarily represent a consensus of experts with respect to the reported trends. Such a need has been identified by the ASME Standards Technology, LLC and initial steps have been taken to address these needs; however, these steps are limited to project-specific applications only, such as the joint DOE-ASME project on materials for Generation IV nuclear reactors. In contrast to light-water reactor technology, the experience base for the Generation IV nuclear reactors is somewhat lacking and heavy reliance must be placed on model development and predictive capability. The database for model development is being assembled and includes existing code alloys such as alloy 800H and 9Cr-1Mo-V steel. Ownership and use rights are potential barriers that must be addressed.

  1. AutoFACT: An Automatic Functional Annotation and Classification Tool

    PubMed Central

    Koski, Liisa B; Gray, Michael W; Lang, B Franz; Burger, Gertraud

    2005-01-01

    Background Assignment of function to new molecular sequence data is an essential step in genomics projects. The usual process involves similarity searches of a given sequence against one or more databases, an arduous process for large datasets. Results We present AutoFACT, a fully automated and customizable annotation tool that assigns biologically informative functions to a sequence. Key features of this tool are that it (1) analyzes nucleotide and protein sequence data; (2) determines the most informative functional description by combining multiple BLAST reports from several user-selected databases; (3) assigns putative metabolic pathways, functional classes, enzyme classes, GeneOntology terms and locus names; and (4) generates output in HTML, text and GFF formats for the user's convenience. We have compared AutoFACT to four well-established annotation pipelines. The error rate of functional annotation is estimated to be only between 1–2%. Comparison of AutoFACT to the traditional top-BLAST-hit annotation method shows that our procedure increases the number of functionally informative annotations by approximately 50%. Conclusion AutoFACT will serve as a useful annotation tool for smaller sequencing groups lacking dedicated bioinformatics staff. It is implemented in PERL and runs on LINUX/UNIX platforms. AutoFACT is available at . PMID:15960857
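
    The idea of combining BLAST reports from several databases to choose the most informative description can be sketched as filtering out uninformative hit labels before ranking by score. The labels and scores below are invented, and AutoFACT's real heuristics are considerably more elaborate.

```python
# Labels commonly considered uninformative in annotation pipelines (assumed list).
UNINFORMATIVE = {"hypothetical protein", "unknown function", "unnamed protein product"}

def best_annotation(hits):
    """Pick the most informative description among pooled top hits.

    hits: list of (description, bitscore) tuples from several databases.
    Prefers informative descriptions; falls back to the best-scoring hit.
    """
    informative = [h for h in hits if h[0].lower() not in UNINFORMATIVE]
    pool = informative or hits
    return max(pool, key=lambda h: h[1])[0]

hits = [("hypothetical protein", 410.0),
        ("cytochrome c oxidase subunit I", 395.0)]
print(best_annotation(hits))  # -> cytochrome c oxidase subunit I
```

    Note how the slightly lower-scoring but informative hit wins — this is the behaviour that lifts annotation yield over the naive top-BLAST-hit method.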

  2. The Spanish national health care-associated infection surveillance network (INCLIMECC): data summary January 1997 through December 2006 adapted to the new National Healthcare Safety Network Procedure-associated module codes.

    PubMed

    Pérez, Cristina Díaz-Agero; Rodela, Ana Robustillo; Monge Jodrá, Vincente

    2009-12-01

    In 1997, a national standardized surveillance system (designated INCLIMECC [Indicadores Clínicos de Mejora Continua de la Calidad]) was established in Spain for health care-associated infection (HAI) in surgery patients, based on the National Nosocomial Infection Surveillance (NNIS) system. In 2005, in its procedure-associated module, the National Healthcare Safety Network (NHSN) inherited the NNIS program for surveillance of HAI in surgery patients and reorganized all surgical procedures. INCLIMECC actively monitors all patients referred to the surgical ward of each participating hospital. We present a summary of the data collected from January 1997 to December 2006 adapted to the new NHSN procedures. Surgical site infection (SSI) rates are provided by operative procedure and NNIS risk index category. Further quality indicators reported are surgical complications, length of stay, antimicrobial prophylaxis, mortality, readmission because of infection or other complication, and revision surgery. Because the ICD-9-CM surgery procedure code is included in each patient's record, we were able to reorganize our database avoiding the loss of extensive information, as has occurred with other systems.

  3. Evaluation of the performance of MP4-based procedures for a wide range of thermochemical and kinetic properties

    NASA Astrophysics Data System (ADS)

    Yu, Li-Juan; Wan, Wenchao; Karton, Amir

    2016-11-01

    We evaluate the performance of standard and modified MPn procedures for a wide set of thermochemical and kinetic properties, including atomization energies, structural isomerization energies, conformational energies, and reaction barrier heights. The reference data are obtained at the CCSD(T)/CBS level by means of the Wn thermochemical protocols. We find that none of the MPn-based procedures show acceptable performance for the challenging W4-11 and BH76 databases. For the other thermochemical/kinetic databases, the MP2.5 and MP3.5 procedures provide the most attractive accuracy-to-computational cost ratios. The MP2.5 procedure results in a weighted-total-root-mean-square deviation (WTRMSD) of 3.4 kJ/mol, whilst the computationally more expensive MP3.5 procedure results in a WTRMSD of 1.9 kJ/mol (the same WTRMSD obtained for the CCSD(T) method in conjunction with a triple-zeta basis set). We also assess the performance of the computationally economical CCSD(T)/CBS(MP2) method, which provides the best overall performance for all the considered databases, including W4-11 and BH76.
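
    The MP2.5 method referred to here is, in the literature, the midpoint between MP2 and MP3 — equivalently, MP2 plus the third-order correction scaled by one half. A one-line arithmetic sketch with hypothetical correlation energies:

```python
# E(MP2.5) = (E(MP2) + E(MP3)) / 2, i.e. the MP3 correction scaled by 0.5.
# The energies below are hypothetical values in hartree, not from the paper.
e_mp2, e_mp3 = -0.3502, -0.3610

e_mp25 = 0.5 * (e_mp2 + e_mp3)
print(round(e_mp25, 4))  # -> -0.3556
```

    MP3.5 is constructed analogously as the midpoint between MP3 and MP4, which is why both sit at attractive accuracy-to-cost ratios between their parent orders.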

  4. Groundwater modeling in integrated water resources management--visions for 2020.

    PubMed

    Refsgaard, Jens Christian; Højberg, Anker Lajer; Møller, Ingelise; Hansen, Martin; Søndergaard, Verner

    2010-01-01

    Groundwater modeling is undergoing a change from traditional stand-alone studies toward being an integrated part of holistic water resources management procedures. This is illustrated by the development in Denmark, where comprehensive national databases for geologic borehole data, groundwater-related geophysical data, geologic models, as well as a national groundwater-surface water model have been established and integrated to support water management. This has enhanced the benefits of using groundwater models. Based on insight gained from this Danish experience, a scientifically realistic scenario for the use of groundwater modeling in 2020 has been developed, in which groundwater models will be a part of sophisticated databases and modeling systems. The databases and numerical models will be seamlessly integrated, and the tasks of monitoring and modeling will be merged. Numerical models for atmospheric, surface water, and groundwater processes will be coupled in one integrated modeling system that can operate at a wide range of spatial scales. Furthermore, the management systems will be constructed with a focus on building credibility of model and data use among all stakeholders and on facilitating a learning process whereby data and models, as well as stakeholders' understanding of the system, are updated to currently available information. The key scientific challenges for achieving this are (1) developing new methodologies for integration of statistical and qualitative uncertainty; (2) mapping geological heterogeneity and developing scaling methodologies; (3) developing coupled model codes; and (4) developing integrated information systems, including quality assurance and uncertainty information that facilitate active stakeholder involvement and learning.

  5. NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR DATABASE TREE AND DATA SOURCES (UA-D-41.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the database storage organization, as well as describe the sources of data for each database used during the Arizona NHEXAS project and the "Border" study. Keywords: data; database; organization.

    The National Human Exposure Assessment Sur...

  6. Database Systems and Oracle: Experiences and Lessons Learned

    ERIC Educational Resources Information Center

    Dunn, Deborah

    2005-01-01

    In a tight job market, IT professionals with database experience are likely to be in great demand. Companies need database personnel who can help improve access to and security of data. The events of September 11 have increased business' awareness of the need for database security, backup, and recovery procedures. It is our responsibility to…

  7. Structuring osteosarcoma knowledge: an osteosarcoma-gene association database based on literature mining and manual annotation.

    PubMed

    Poos, Kathrin; Smida, Jan; Nathrath, Michaela; Maugg, Doris; Baumhoer, Daniel; Neumann, Anna; Korsching, Eberhard

    2014-01-01

    Osteosarcoma (OS) is the most common primary bone cancer exhibiting high genomic instability. This genomic instability affects multiple genes and microRNAs to a varying extent depending on patient and tumor subtype. Massive research is ongoing to identify genes including their gene products and microRNAs that correlate with disease progression and might be used as biomarkers for OS. However, the genomic complexity hampers the identification of reliable biomarkers. Up to now, clinico-pathological factors are the key determinants to guide prognosis and therapeutic treatments. Each day, new studies about OS are published and complicate the acquisition of information to support biomarker discovery and therapeutic improvements. Thus, it is necessary to provide a structured and annotated view on the current OS knowledge that is quick and easily accessible to researchers of the field. Therefore, we developed a publicly available database and Web interface that serves as a resource for OS-associated genes and microRNAs. Genes and microRNAs were collected using an automated dictionary-based gene recognition procedure followed by manual review and annotation by experts of the field. In total, 911 genes and 81 microRNAs related to 1331 PubMed abstracts were collected (last update: 29 October 2013). Users can evaluate genes and microRNAs according to their potential prognostic and therapeutic impact, the experimental procedures, the sample types, the biological contexts and microRNA target gene interactions. Additionally, a pathway enrichment analysis of the collected genes highlights different aspects of OS progression. OS requires pathways commonly deregulated in cancer but also features OS-specific alterations like deregulated osteoclast differentiation. To our knowledge, this is the first effort of an OS database containing manually reviewed and annotated up-to-date OS knowledge.
It might be a useful resource especially for the bone tumor research community, as specific information about genes or microRNAs is quick and easily accessible. Hence, this platform can support the ongoing OS research and biomarker discovery. Database URL: http://osteosarcoma-db.uni-muenster.de. © The Author(s) 2014. Published by Oxford University Press.
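
    At its simplest, dictionary-based gene recognition of the kind described reduces to matching a curated symbol list against tokenised abstract text. A toy sketch follows; the database's actual pipeline adds manual expert review and a far larger dictionary.

```python
import re

# A toy gene dictionary; the real resource was built from curated gene symbols.
GENE_DICT = {"TP53", "RB1", "MYC"}

def find_genes(abstract):
    """Dictionary-based recognition: report known symbols found as whole words."""
    # Tokens of 2+ uppercase letters/digits, bounded by word edges.
    tokens = set(re.findall(r"\b[A-Z0-9]{2,}\b", abstract))
    return sorted(tokens & GENE_DICT)

text = "Loss of TP53 and RB1 is frequent in osteosarcoma."
print(find_genes(text))  # -> ['RB1', 'TP53']
```

    Matches like these are what the experts then reviewed and annotated before a gene-abstract association entered the database.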

  8. Structuring osteosarcoma knowledge: an osteosarcoma-gene association database based on literature mining and manual annotation

    PubMed Central

    Poos, Kathrin; Smida, Jan; Nathrath, Michaela; Maugg, Doris; Baumhoer, Daniel; Neumann, Anna; Korsching, Eberhard

    2014-01-01

    Osteosarcoma (OS) is the most common primary bone cancer exhibiting high genomic instability. This genomic instability affects multiple genes and microRNAs to a varying extent depending on patient and tumor subtype. Massive research is ongoing to identify genes including their gene products and microRNAs that correlate with disease progression and might be used as biomarkers for OS. However, the genomic complexity hampers the identification of reliable biomarkers. Up to now, clinico-pathological factors are the key determinants to guide prognosis and therapeutic treatments. Each day, new studies about OS are published and complicate the acquisition of information to support biomarker discovery and therapeutic improvements. Thus, it is necessary to provide a structured and annotated view on the current OS knowledge that is quick and easily accessible to researchers of the field. Therefore, we developed a publicly available database and Web interface that serves as a resource for OS-associated genes and microRNAs. Genes and microRNAs were collected using an automated dictionary-based gene recognition procedure followed by manual review and annotation by experts of the field. In total, 911 genes and 81 microRNAs related to 1331 PubMed abstracts were collected (last update: 29 October 2013). Users can evaluate genes and microRNAs according to their potential prognostic and therapeutic impact, the experimental procedures, the sample types, the biological contexts and microRNA target gene interactions. Additionally, a pathway enrichment analysis of the collected genes highlights different aspects of OS progression. OS requires pathways commonly deregulated in cancer but also features OS-specific alterations like deregulated osteoclast differentiation. To our knowledge, this is the first effort of an OS database containing manually reviewed and annotated up-to-date OS knowledge.
It might be a useful resource especially for the bone tumor research community, as specific information about genes or microRNAs is quickly and easily accessible. Hence, this platform can support ongoing OS research and biomarker discovery. Database URL: http://osteosarcoma-db.uni-muenster.de PMID:24865352
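The automated dictionary-based gene recognition step described above can be sketched as a whole-word lookup over abstract text; the symbol list, PubMed IDs, and abstract snippets below are illustrative placeholders, not the curated OS dictionary:

```python
import re
from collections import defaultdict

def find_gene_mentions(abstracts, gene_symbols):
    """Dictionary-based recognition: scan each abstract for known gene
    symbols as whole words and record which abstracts mention which gene.
    In the pipeline described above, candidate hits would still go on to
    manual expert review."""
    # One word-boundary pattern per symbol (case-sensitive, since gene
    # symbols such as 'TP53' are conventionally uppercase).
    patterns = {g: re.compile(r"\b%s\b" % re.escape(g)) for g in gene_symbols}
    mentions = defaultdict(set)  # gene symbol -> set of abstract ids
    for pmid, text in abstracts.items():
        for gene, pat in patterns.items():
            if pat.search(text):
                mentions[gene].add(pmid)
    return mentions

# Toy example with hypothetical abstracts
abstracts = {
    "100": "TP53 mutations are frequent in osteosarcoma.",
    "101": "We profiled RB1 and TP53 in tumor samples.",
}
hits = find_gene_mentions(abstracts, ["TP53", "RB1", "MYC"])
```

Word-boundary matching avoids false hits inside longer tokens, which is why a manual review step remains necessary for ambiguous symbols.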

  9. Trends in Gender-affirming Surgery in Insured Patients in the United States

    PubMed Central

    Ives, Graham C.; Sluiter, Emily C.; Waljee, Jennifer F.; Yao, Tsung-Hung; Hu, Hsou Mei

    2018-01-01

    Background: An estimated 0.6% of the U.S. population identifies as transgender, and an increasing number of patients are presenting for gender-related medical and surgical services. Utilization of health care services, especially surgical services, by transgender patients is poorly understood beyond survey-based studies. In this article, our aim is twofold: first, we intend to demonstrate the utilization of datasets generated from insurance claims data as a means of analyzing gender-related health services, and second, we use this modality to provide basic demographic, utilization, and outcomes data about the insured transgender population. Methods: The Truven MarketScan Database, containing data from 2009 to 2015, was utilized, and a sample set was created using the Gender Identity Disorder diagnosis code. Basic demographic information and utilization of gender-affirming procedures were tabulated. Results: We identified 7,905 transgender patients, 1,047 of whom underwent surgical procedures from 2009 to 2015. Our demographic results were consistent with previous survey-based studies, suggesting transgender patients are on average young adults (average age = 29.8) and geographically diverse. The most common procedure from 2009 to 2015 was mastectomy. The complication rate across all gender-affirming procedures was 5.8%, with the highest rate of complications occurring with phalloplasty. There was a marked year-by-year increase in utilization of surgical services. Conclusion: Transgender care and gender-confirming surgery are an increasing component of health care in the United States. The data contained in existing databases can provide demographic, utilization, and outcomes data relevant to providers caring for the transgender patient population. PMID:29876180

  10. Human Connectome Project Informatics: quality control, database services, and data visualization

    PubMed Central

    Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.

    2013-01-01

    The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591
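One widely used quantitative head-motion measure in QC pipelines of this kind is framewise displacement: the summed absolute frame-to-frame change of the six rigid-body realignment parameters, with rotations converted to arc length on a 50 mm sphere. The sketch below follows that standard definition and is not the HCP's exact implementation:

```python
def framewise_displacement(motion_params, head_radius_mm=50.0):
    """Compute framewise displacement (FD) per frame from rigid-body
    realignment parameters.

    motion_params: list of (tx, ty, tz, rx, ry, rz) per frame, with
    translations in mm and rotations in radians.  Rotations are converted
    to millimetres of displacement on a sphere of the given radius.
    """
    fd = [0.0]  # FD is undefined for the first frame; 0 by convention
    for prev, cur in zip(motion_params, motion_params[1:]):
        d = 0.0
        for i in range(3):            # translations (mm)
            d += abs(cur[i] - prev[i])
        for i in range(3, 6):         # rotations (rad -> mm of arc)
            d += abs(cur[i] - prev[i]) * head_radius_mm
        fd.append(d)
    return fd

params = [
    (0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
    (0.1, 0.0, 0.0, 0.0, 0.0, 0.002),  # 0.1 mm shift plus a small rotation
]
fd = framewise_displacement(params)  # second frame: 0.1 + 0.002 * 50 = 0.2 mm
```

Frames whose FD exceeds a chosen threshold can then be flagged or censored in downstream analysis.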

  11. [Informatics support for risk assessment and identification of preventive measures in small and micro-enterprises: occupational hazard datasheets].

    PubMed

    de Merich, D; Forte, Giulia

    2011-01-01

    Risk assessment is the fundamental process of an enterprise's prevention system and is the principal mandatory provision contained in the Health and Safety Law (Legislative Decree 81/2008) amended by Legislative Decree 106/2009. In order to properly comply with this obligation also in small-sized enterprises, the appropriate regulatory bodies should provide the enterprises with standardized tools and methods for identifying, assessing and managing risks. To assist in particular small and micro-enterprises (SMEs) with risk assessment, by providing a flexible tool that can also be standardized in the form of a datasheet, that can be updated with more detailed information on the various work contexts in Italy. Official efforts to provide Italian SMEs with information may initially make use of the findings of research conducted by ISPESL over the past 20 years, thanks in part to cooperation with other institutions (Regions, INAIL-National Insurance Institute for Occupational Accidents and Diseases), which have led to the creation of an information system on prevention consisting of numerous databases, both statistical and documental ("National System of Surveillance on fatal and serious accidents", "National System of Surveillance on work-related diseases", "Sector hazard profiles" database, "Solutions and Best Practices" database, "Technical Guidelines" database, "Training packages for prevention professionals in enterprises" database). With regard to evaluation criteria applicable within the enterprise, the possibility of combining traditional and uniform areas of assessment (by sector or by risk factor) with assessments by job/occupation has become possible thanks to the cooperation agreement made in 2009 by ISPESL, the ILO (International Labour Organisation) of Geneva and IIOSH (Israel Institute for Occupational Health and Hygiene) regarding the creation of an international Database (HDODB) based on risk datasheets per occupation. 
The project sets out to assist small and micro-enterprises in particular with risk assessment, providing a flexible and standardized tool in the form of a datasheet that can be updated with more detailed information on the various work contexts in Italy. The model proposed by ISPESL selected the ILO's "Hazard Datasheet on Occupation" as an initial information tool to steer efforts to assess and manage hazards in small and micro-enterprises. In addition to being an internationally validated tool, the occupation datasheet has a very simple structure that is highly effective for communicating and updating information in relation to the local context. In keeping with the logic of supporting enterprises through a collaborative network among institutions, local supervisory services and social partners, standardised hazard assessment procedures should be, irrespective of any legal obligations, the preferred tools of an "updatable information system" capable of supporting the improvement of hazard assessment and management in enterprises.

  12. The National Eutrophication Survey: lake characteristics and historical nutrient concentrations

    NASA Astrophysics Data System (ADS)

    Stachelek, Joseph; Ford, Chanse; Kincaid, Dustin; King, Katelyn; Miller, Heather; Nagelkirk, Ryan

    2018-01-01

    Historical ecological surveys serve as a baseline and provide context for contemporary research, yet many of these records are not preserved in a way that ensures their long-term usability. The National Eutrophication Survey (NES) database is currently only available as scans of the original reports (PDF files) with no embedded character information. This limits its searchability, machine readability, and the ability of current and future scientists to systematically evaluate its contents. The NES data were collected by the US Environmental Protection Agency between 1972 and 1975 as part of an effort to investigate eutrophication in freshwater lakes and reservoirs. Although several studies have manually transcribed small portions of the database in support of specific studies, there have been no systematic attempts to transcribe and preserve the database in its entirety. Here we use a combination of automated optical character recognition and manual quality assurance procedures to make these data available for analysis. The performance of the optical character recognition protocol was found to be linked to variation in the quality (clarity) of the original documents. For each of the four archival scanned reports, our quality assurance protocol found an error rate between 5.9% and 17%. The goal of our approach was to strike a balance between efficiency and data quality by combining entry of data by hand with digital transcription technologies. The finished database contains information on the physical characteristics, hydrology, and water quality of about 800 lakes in the contiguous US (Stachelek et al. (2017), https://doi.org/10.5063/F1639MVD). Ultimately, this database could be combined with more recent studies to generate meta-analyses of water quality trends and spatial variation across the continental US.
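The manual quality-assurance step (comparing OCR output against hand-verified values and reporting a per-report error rate) can be sketched as follows; the record and field names are hypothetical:

```python
def qa_error_rate(ocr_records, verified_records):
    """Estimate the OCR error rate for one scanned report by comparing
    transcribed values against a manually verified sample of the same
    records.  Returns the fraction of checked fields that mismatch."""
    checked = 0
    errors = 0
    for key, verified in verified_records.items():
        if key not in ocr_records:
            continue
        for field, true_value in verified.items():
            checked += 1
            if ocr_records[key].get(field) != true_value:
                errors += 1
    return errors / checked if checked else 0.0

# Toy example: the letter 'O' misread for the digit '0' in one field
ocr = {"lake_001": {"total_p_ugL": "42", "depth_m": "3.1"},
       "lake_002": {"total_p_ugL": "1O5", "depth_m": "7.0"}}
truth = {"lake_001": {"total_p_ugL": "42", "depth_m": "3.1"},
         "lake_002": {"total_p_ugL": "105", "depth_m": "7.0"}}
rate = qa_error_rate(ocr, truth)  # 1 of 4 checked fields wrong -> 0.25
```

Running such a check per report is what yields the per-document error rates (5.9% to 17%) quoted above.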

  13. A database system for characterization of munitions items in conventional ammunition demilitarization stockpiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chun, K.C.; Chiu, S.Y.; Ditmars, J.D.

    1994-05-01

    The MIDAS (Munition Items Disposition Action System) database system is an electronic data management system capable of storage and retrieval of information on the detailed structures and material compositions of munitions items designated for demilitarization. The types of such munitions range from bulk propellants and small arms to projectiles and cluster bombs. The database system is also capable of processing data on the quantities of inert, PEP (propellant, explosives and pyrotechnics) and packaging materials associated with munitions, components, or parts, and the quantities of chemical compounds associated with parts made of PEP materials. Development of the MIDAS database system has been undertaken by the US Army to support disposition of unwanted ammunition stockpiles. The inventory of such stockpiles currently includes several thousand items, which total tens of thousands of tons, and is still growing. Providing systematic procedures for disposing of all unwanted conventional munitions is the mission of the MIDAS Demilitarization Program. To carry out this mission, all munitions listed in the Single Manager for Conventional Ammunition inventory must be characterized, and alternatives for resource recovery and recycling and/or disposal of munitions in the demilitarization inventory must be identified.

  14. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body such as the colon at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on the imaging fiber bundle are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information in a captured image and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own sets of images without making the source data available, which has limited their usage and adaptability. A database of pixelated images is a current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. The sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and will be of great help for researchers working on comb structure removal algorithms.
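A minimal way to generate a synthetic pixelated image of the kind the database contains is to let every pixel take the source value at its nearest fiber core, a crude approximation of the bundle's honeycomb structure; the core layout and test pattern here are toy placeholders, not the database's generation method:

```python
def pixelate(image, cores):
    """Simulate fiber-bundle pixelation: every pixel takes the source
    image value at its nearest fiber core, mimicking the comb structure
    of an imaging fiber bundle.  image is a 2D list of intensities;
    cores is a list of (row, col) core centres."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # brute-force nearest core (fine for small toy images)
            cr, cc = min(cores, key=lambda p: (p[0] - r) ** 2 + (p[1] - c) ** 2)
            out[r][c] = image[cr][cc]
    return out

# 4x4 gradient test pattern sampled through two "fiber cores"
img = [[r * 4 + c for c in range(4)] for r in range(4)]
res = pixelate(img, [(0, 0), (3, 3)])
```

A real bundle has thousands of cores on a roughly hexagonal lattice; dense core layouts approach the source image, which is what comb-removal algorithms try to recover.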

  15. 16 CFR 1102.6 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions... Product Safety Information Database. (2) Commission or CPSC means the Consumer Product Safety Commission... Information Database, also referred to as the Database, means the database on the safety of consumer products...

  16. Using Biblio-Link...For Those Other Databases.

    ERIC Educational Resources Information Center

    Joy, Albert

    1989-01-01

    Sidebar describes the use of the Biblio-Link software packages to download citations from online databases and convert them into a form that can be automatically uploaded into a Pro-Cite database. An example of this procedure using DIALOG2 is given. (CLB)

  17. The Design of Integrated Information System for High Voltage Metering Lab

    NASA Astrophysics Data System (ADS)

    Ma, Yan; Yang, Yi; Xu, Guangke; Gu, Chao; Zou, Lida; Yang, Feng

    2018-01-01

    With the development of the smart grid, intelligent and information-based management of the high-voltage metering lab becomes increasingly urgent. In this paper we design an integrated information system that automates the entire workflow, from accepting instruments and running experiments to generating reports, signing reports, and handling instrument claims. By creating a database of all calibrated instruments, using two-dimensional codes, integrating report templates in advance, and establishing bookmarks and online transmission of electronic signatures, manual procedures are largely reduced. These techniques simplify the complex process of account management and report transmission. After more than a year of operation, work efficiency has improved by about forty percent on average, and accuracy and data reliability are much higher as well.

  18. Qualifying information on deaths and serious injuries caused by road traffic in five Brazilian capitals using record linkage.

    PubMed

    Mandacaru, Polyana Maria Pimenta; Andrade, Ana Lucia; Rocha, Marli Souza; Aguiar, Fernanda Pinheiro; Nogueira, Maria Sueli M; Girodo, Anne Marielle; Pedrosa, Ana Amélia Galas; Oliveira, Vera Lídia Alves de; Alves, Marta Maria Malheiros; Paixão, Lúcia Maria Miana M; Malta, Deborah Carvalho; Silva, Marta Maria Alves; Morais Neto, Otaliba Libanio de

    2017-09-01

    Road traffic crashes (RTC) are an important public health problem, accounting for 1.2 million deaths per year worldwide. In Brazil, approximately 40,000 deaths caused by RTC occur every year, with different trends in the Federal Units. However, these figures may be even greater if health databases are linked to police records. In addition, the linkage procedure would make it possible to qualify information from the health and police databases, improving the quality of the data regarding underlying cause of death, cause of injury in hospital records, and injury severity. This study linked different data sources to measure the numbers of deaths and serious injuries and to estimate the percentage of corrections regarding the underlying cause of death, cause of injury, and the injury severity of victims in matched pairs from record linkage in five representative state capitals of the five macro-regions of Brazil. This cross-sectional, population-based study used data from the Hospital Information System (HIS), Mortality Information System (MIS), and Police Road Traffic database of Belo Horizonte, Campo Grande, Curitiba, Palmas, and Teresina, for the year 2013 for Teresina, and 2012 for the other capitals. RecLink III was used to perform probabilistic record linkage by identifying matched pairs to calculate the global correction percentage of the underlying cause of death, the circumstance that caused the road traffic injury, and the injury severity of the victims in the police database. There was a change in the cause of injury in the HIS, with an overall percentage of correction estimated at 24.4% for Belo Horizonte, 96.9% for Campo Grande, 100.0% for Palmas, and 33.2% for Teresina. The overall percentages of correction of the underlying cause of death in the MIS were 29.9%, 11.9%, 4.2%, and 33.5% for Belo Horizonte, Campo Grande, Curitiba, and Teresina, respectively. 
The correction of the classification of injury severity in the police database was 100.0% for Belo Horizonte and Teresina, 48.0% for Campo Grande, and 51.4% for Palmas after linkage with the hospital database. The linkage between the mortality and police databases found percentages of correction of 29.5%, 52.3%, 4.4%, 74.3%, and 72.9% for Belo Horizonte, Campo Grande, Palmas, Curitiba, and Teresina, respectively, in the police records. The results showed the importance of linking records of the health and police databases for estimating the quality of data on road traffic injuries and the victims in the five capital cities studied. The true causes of death and degrees of severity of the injuries caused by RTC are underestimated in the absence of integration of health and police databases. Thus, it is necessary to define national rules and standards of integration between health and traffic databases at the national and state levels in Brazil. Copyright © 2017 Elsevier Ltd. All rights reserved.
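Probabilistic record linkage of the kind performed with RecLink III is commonly scored in the Fellegi-Sunter style: field agreements add weight, disagreements subtract it, and pairs scoring above a threshold are treated as matches. A sketch with hypothetical records, fields, and weights (not the study's actual configuration):

```python
def match_score(rec_a, rec_b, weights):
    """Fellegi-Sunter-style linkage score: sum agreement weights for
    fields that match and disagreement penalties for fields that don't.
    A pair whose total exceeds a chosen threshold is treated as a
    candidate match for review."""
    score = 0.0
    for field, (agree_w, disagree_w) in weights.items():
        if rec_a.get(field) and rec_a.get(field) == rec_b.get(field):
            score += agree_w
        else:
            score += disagree_w
    return score

# Hypothetical hospital and police records with illustrative field weights
weights = {"name": (4.0, -2.0), "birth_date": (5.0, -3.0), "crash_date": (3.0, -1.0)}
hospital = {"name": "J SILVA", "birth_date": "1985-03-02", "crash_date": "2012-06-10"}
police   = {"name": "J SILVA", "birth_date": "1985-03-02", "crash_date": "2012-06-11"}
score = match_score(hospital, police, weights)  # 4 + 5 - 1 = 8.0
is_match = score >= 6.0
```

It is the matched pairs produced this way that allow one database (e.g., hospital injury severity) to correct fields in the other (e.g., the police classification).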

  19. Automated quantitative assessment of proteins' biological function in protein knowledge bases.

    PubMed

    Mayr, Gabriele; Lepperdinger, Günter; Lackner, Peter

    2008-01-01

    Primary protein sequence data are archived in databases together with information regarding corresponding biological functions. In this respect, UniProt/Swiss-Prot is currently the most comprehensive collection, and it is routinely cross-examined when trying to unravel the biological role of hypothetical proteins. Bioscientists frequently extract single entries and further evaluate them on a subjective basis. In the absence of a standardized procedure for scoring the existing knowledge regarding individual proteins, we here report a computer-assisted method, which we applied to score the present knowledge about any given Swiss-Prot entry. Applying this quantitative score allows proteins to be compared not only with respect to their sequence but also with respect to how comprehensively their function is documented. pfs analysis may also be applied for quality control of individual entries or for database management in order to rank entry listings.
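The paper's scoring scheme is not detailed in the abstract, but the general idea of a quantitative knowledge score over a database entry can be sketched as a weighted count of annotations per category; the categories, weights, and entry below are purely illustrative and do not reproduce the pfs definition:

```python
def knowledge_score(entry, weights):
    """Toy quantitative score of how much is known about a protein entry:
    a weighted count of its annotations by category.  Higher scores mean
    more comprehensively documented function."""
    return sum(weights.get(cat, 0) * len(items)
               for cat, items in entry.get("annotations", {}).items())

# Illustrative weights and a mocked-up entry (not real Swiss-Prot content)
weights = {"function": 3, "pathway": 2, "publication": 1}
entry = {"id": "P04637",
         "annotations": {"function": ["apoptosis", "cell cycle"],
                         "pathway": ["p53 signaling"],
                         "publication": ["pmid:1", "pmid:2", "pmid:3"]}}
score = knowledge_score(entry, weights)  # 3*2 + 2*1 + 1*3 = 11
```

Ranking entries by such a score is what enables the quality-control and listing-management uses mentioned above.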

  20. Usefulness of Canadian Public Health Insurance Administrative Databases to Assess Breast and Ovarian Cancer Screening Imaging Technologies for BRCA1/2 Mutation Carriers.

    PubMed

    Larouche, Geneviève; Chiquette, Jocelyne; Plante, Marie; Pelletier, Sylvie; Simard, Jacques; Dorval, Michel

    2016-11-01

    In Canada, recommendations for clinical management of hereditary breast and ovarian cancer among individuals carrying a deleterious BRCA1 or BRCA2 mutation have been available since 2007. Eight years later, very little is known about the uptake of screening and risk-reduction measures in this population. Because Canada's public health care system falls under provincial jurisdictions, using provincial health care administrative databases appears a valuable option to assess management of BRCA1/2 mutation carriers. The objective was to explore the usefulness of public health insurance administrative databases in British Columbia, Ontario, and Quebec to assess management after BRCA1/2 genetic testing. Official public health insurance documents were considered potentially useful if they had specific procedure codes and pertained to procedures performed in the public and private health care systems. All 3 administrative databases have specific procedure codes for mammography and breast ultrasounds. Only Quebec and Ontario have a specific procedure code for breast magnetic resonance imaging. It is impossible to assess, on an individual basis, the frequency of other screening exams, with the exception of CA-125 testing in British Columbia. Screenings done in private practice are excluded from the administrative databases unless covered by special agreements for reimbursement, such as all breast imaging exams in Ontario and mammograms in British Columbia and Quebec. There are no specific procedure codes for risk-reduction surgeries for breast and ovarian cancer. Population-based assessment of breast and ovarian cancer risk management strategies other than mammographic screening, using only administrative data, is currently challenging in the 3 Canadian provinces studied. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.

  1. mirEX: a platform for comparative exploration of plant pri-miRNA expression data.

    PubMed

    Bielewicz, Dawid; Dolata, Jakub; Zielezinski, Andrzej; Alaba, Sylwia; Szarzynska, Bogna; Szczesniak, Michal W; Jarmolowski, Artur; Szweykowska-Kulinska, Zofia; Karlowski, Wojciech M

    2012-01-01

    mirEX is a comprehensive platform for comparative analysis of primary microRNA expression data. RT-qPCR-based gene expression profiles are stored in a universal and expandable database scheme and wrapped by an intuitive user-friendly interface. A new way of accessing gene expression data in mirEX includes a simple mouse operated querying system and dynamic graphs for data mining analyses. In contrast to other publicly available databases, the mirEX interface allows a simultaneous comparison of expression levels between various microRNA genes in diverse organs and developmental stages. Currently, mirEX integrates information about the expression profile of 190 Arabidopsis thaliana pri-miRNAs in seven different developmental stages: seeds, seedlings and various organs of mature plants. Additionally, by providing RNA structural models, publicly available deep sequencing results, experimental procedure details and careful selection of auxiliary data in the form of web links, mirEX can function as a one-stop solution for Arabidopsis microRNA information. A web-based mirEX interface can be accessed at http://bioinfo.amu.edu.pl/mirex.

  2. Review of sampling, sample and data collection procedures in nursing research--An example of research on ethical climate as perceived by nurses.

    PubMed

    Suhonen, Riitta; Stolt, Minna; Katajisto, Jouko; Leino-Kilpi, Helena

    2015-12-01

    To report a review of the quality of sampling, sample and data collection procedures in empirical nursing research on ethical climate where nurses were informants. Surveys are needed to obtain generalisable information about topics sensitive to nursing. The methodological quality of the studies is of key concern, especially the description of sampling and data collection procedures. Methodological literature review. Using the electronic MEDLINE database, empirical nursing research articles focusing on ethical climate were accessed in 2013 (earliest-22 November 2013). Using the search terms 'ethical' AND ('climate*' OR 'environment*') AND ('nurse*' OR 'nursing'), 376 citations were retrieved. Based on a four-phase retrieval process, 26 studies were included in the detailed analysis. The sampling method was reported in 58% of the studies, and it was random in a minority of them (26%). The identification of the target sample and its size (92%) was reported, whereas justification for the sample size was less often given. In over two-thirds (69%) of the studies with an identifiable response rate, it was below 75%. A variety of data collection procedures were used, with a large amount of missing data about the details of who distributed, recruited and collected the questionnaires. Methods to increase response rates were seldom described. Discussion about nonresponse, representativeness of the sample and generalisability of the results was missing in many studies. This review highlights the methodological challenges and developments that need to be considered in ensuring the use of valid information in developing health care through research findings. © 2015 Nordic College of Caring Science.

  3. Hysteroscopic morcellation: review of the manufacturer and user facility device experience (MAUDE) database.

    PubMed

    Haber, Karina; Hawkins, Eleanor; Levie, Mark; Chudnoff, Scott

    2015-01-01

    To investigate the number and type of adverse events associated with hysteroscopic morcellation of intrauterine disease. Systematic review of the Manufacturer and User Facility Device Experience (MAUDE) database from 2005 to June 2014 (Canadian Task Force classification III). Women undergoing hysteroscopic surgery for removal of intrauterine polyps or myomas with use of a reciprocating morcellator. The MAUDE database was searched for the key words "Hysteroscope," "Hysteroscopic reciprocating morcellator," "Interlace," "MyoSure," "Smith & Nephew," and "TRUCLEAR" to identify reported incidents of device malfunction, injury, or death. A total of 119 adverse events were analyzed. Reports were reviewed individually and categorized by date of occurrence, type of morcellation device, type of complication, and a brief description. Each company was contacted to provide an estimate of the number of procedures performed or units sold to date. From 2005 to June 2014, 119 adverse events were reported to the MAUDE database. On the basis of severity, adverse events were categorized as major or minor complications. Major events included intubation/admission to an intensive care unit (n = 14), bowel damage (n = 12), hysterectomy (n = 6), and death (n = 2). Minor events included uterine perforation requiring no other treatment (n = 29), device failure (n = 25), uncomplicated fluid overload (n = 19), postoperative bleeding controlled using noninvasive measures (n = 6), and pelvic infection (n = 4). These events were then categorized according to manufacturer. The number of adverse events reported to the MAUDE database was divided by the total units sold as a surrogate for the estimated number of procedures performed. Understanding the limitations of the numbers used as numerator and denominator, we concluded that adverse events complicated hysteroscopic morcellation in <0.1% of cases. 
The suction-based, mechanical energy, rotating tubular cutting system was developed to overcome adverse events that occur during traditional resectoscopy. On the basis of acknowledged limited information from the MAUDE database, it seems that life-threatening complications such as fluid overload, uterine perforation, and bleeding do occur with hysteroscopic morcellation but less frequently than with traditional electrocautery. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.
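The surrogate rate computation described above (reported adverse events divided by units sold as a proxy for procedures performed) is simple division; the counts and units-sold figure below are hypothetical, not the MAUDE data:

```python
def adverse_event_rate(events_by_type, units_sold):
    """Surrogate complication rate: total reported adverse events divided
    by units sold, used as a proxy for the number of procedures when the
    true denominator is unknown."""
    total_events = sum(events_by_type.values())
    return total_events / units_sold

# Hypothetical counts and a hypothetical units-sold estimate
events = {"major": 30, "minor": 80}
rate = adverse_event_rate(events, 150000)
below_threshold = rate < 0.001  # i.e. fewer than 0.1% of cases
```

Since reporting to MAUDE is voluntary and units sold overstates procedures only loosely, such a rate is an order-of-magnitude estimate at best, which the authors acknowledge.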

  4. Prevalence of Diabetes Mellitus in the Surgical Population of the University of Puerto Rico Affiliated Hospitals: A Study using the Surgery Database.

    PubMed

    Cruz, Norma I; Santiago, Elvis; Abdul-Hadi, Anwar

    2016-09-01

    To evaluate the prevalence of diabetes mellitus in the surgical population of the University of Puerto Rico (UPR)-affiliated hospitals. We examined all the surgical cases that were entered into the Surgical Database from April 1, 2014 through September 30, 2014. This database collects patient and procedural information from different surgical services of various UPR-affiliated hospitals (the University District Hospital, the University Pediatric Hospital, the UPR Carolina Hospital, the Dr. Isaac Gonzalez Oncologic Hospital, the PR Cardiovascular Center [thoracic service], the Pavia Hospital [colorectal service], and the Auxilio Mutuo Hospital [colorectal and oncological services]). The prevalence of diabetes mellitus (types 1 and 2 combined) was estimated, and the nondiabetic and diabetic groups were compared. The difference between groups was evaluated using a Chi2 test, Student's t-test, or ANOVA, whichever was appropriate, with a p-value of less than 0.05 being considered significant. Information from 2,603 surgical patients was available. The mean age of the group was 49 (±23) years. The gender distribution indicated that 56% were women and 44% were men. Diabetes was present in 21% of the surgical population, increasing to 40% in patients aged 65 and over. The surgical procedures most frequently required by diabetic patients were in the categories of general surgery (36%), colorectal surgery (22%), vascular surgery (16%) and oncologic surgery (14%). Complications (5%, diabetic group vs. 2%, nondiabetic group; p < 0.05) and postoperative mortality (2%, diabetic group vs. 0.2%, nondiabetic group; p < 0.05) were significantly higher in the diabetic group than in the nondiabetic group. Our surgical population has a high prevalence of diabetes, and these diabetic patients showed higher complication and mortality rates from surgery than did the non-diabetic patients. 
Surgeons must consider the specific needs of these diabetic patients in order to provide optimal care.
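The group comparisons above rely on a chi-square test; for a 2x2 table (complication by diabetic status) the Pearson statistic has a closed form. The counts below are hypothetical and chosen only to illustrate the computation, not the study's data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
        [[a, b],
         [c, d]]
    without continuity correction."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical: complications in 25/500 diabetic vs 40/2000 non-diabetic patients
chi2 = chi_square_2x2(25, 475, 40, 1960)
significant = chi2 > 3.841  # 5% critical value for 1 degree of freedom
```

With these toy counts the statistic far exceeds the critical value, mirroring the kind of significant difference (p < 0.05) reported between the diabetic and nondiabetic groups.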

  5. Alignment of high-throughput sequencing data inside in-memory databases.

    PubMed

    Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias

    2014-01-01

    In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer-supported DNA analysis is still an intensive, time-consuming task. In this paper we explore the potential of a new in-memory database technology by using SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures containing exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is high potential within the new in-memory concepts, leading to further developments of DNA analysis procedures in the future.
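The exact-search step compared across HANA and MySQL can be illustrated outside SQL as exact substring matching of reads against a reference; this plain-Python sketch stands in for the stored procedures, which are not reproduced in the abstract:

```python
def exact_align(reads, reference):
    """Exact-match read alignment: report every 0-based position in the
    reference where each read occurs verbatim.  Mirrors the logic of the
    exact-search stored procedures, but in Python rather than SQL."""
    hits = {}
    for read in reads:
        positions = []
        start = reference.find(read)
        while start != -1:
            positions.append(start)
            # resume one past the last hit so overlapping matches are found
            start = reference.find(read, start + 1)
        hits[read] = positions
    return hits

# Toy reference standing in for GRCh37
reference = "ACGTACGTTACGA"
aln = exact_align(["ACGT", "TTA", "GGG"], reference)
# {'ACGT': [0, 4], 'TTA': [7], 'GGG': []}
```

Production aligners such as BWA avoid this linear scan by indexing the reference (Burrows-Wheeler transform plus FM-index), which is what makes genome-scale exact search tractable.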

  6. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database but overestimates the mixing process in that case.
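The optimal estimator of a subgrid term given a set of input parameters is the conditional expectation E[target | parameters], which in practice is estimated by binning the training data; a one-dimensional sketch with synthetic data standing in for the filtered DNS database:

```python
def optimal_estimator_1d(x, y, n_bins):
    """Estimate E[y | x] by partitioning the range of x into equal-width
    bins and averaging y within each bin.  The residual variance around
    this estimator is the irreducible error of any model built on x,
    which is how candidate parameter sets are ranked."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for xi, yi in zip(x, y):
        b = min(int((xi - lo) / width), n_bins - 1)  # clamp the max point
        sums[b] += yi
        counts[b] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

# Synthetic 'database': y is a deterministic function of x, so the
# binned conditional mean should track y closely.
xs = [i / 100.0 for i in range(100)]
ys = [2.0 * xi for xi in xs]
est = optimal_estimator_1d(xs, ys, 10)
```

With several input parameters the same construction extends to multi-dimensional bins, and the parameter set minimizing the residual variance is the one worth feeding to the ANN.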

  7. 49 CFR 630.1 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.1 Purpose. The purpose of this part is to prescribe requirements and procedures necessary for compliance with the National Transit Database Reporting System and...

  8. 49 CFR 630.1 - Purpose.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.1 Purpose. The purpose of this part is to prescribe requirements and procedures necessary for compliance with the National Transit Database Reporting System and...

  9. 49 CFR 630.1 - Purpose.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.1 Purpose. The purpose of this part is to prescribe requirements and procedures necessary for compliance with the National Transit Database Reporting System and...

  10. 49 CFR 630.1 - Purpose.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.1 Purpose. The purpose of this part is to prescribe requirements and procedures necessary for compliance with the National Transit Database Reporting System and...

  11. 49 CFR 630.1 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.1 Purpose. The purpose of this part is to prescribe requirements and procedures necessary for compliance with the National Transit Database Reporting System and...

  12. Evaluation of personal digital assistant drug information databases for the managed care pharmacist.

    PubMed

    Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W

    2003-01-01

    Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced under 200 US dollars often have limited memory capacity; the user must therefore choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from each product's Web site, vendor Web sites, and our own experience. Some of these databases have attractive auxiliary features such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.

  13. Reliability-based econometrics of aerospace structural systems: Design criteria and test options. Ph.D. Thesis - Georgia Inst. of Tech.

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1974-01-01

    The design criteria and test options for aerospace structural reliability were investigated. A decision methodology was developed for selecting a combination of structural tests and structural design factors. The decision method involves the use of Bayesian statistics and statistical decision theory. Procedures are discussed for obtaining and updating data-based probabilistic strength distributions for aerospace structures when test information is available and for obtaining subjective distributions when data are not available. The techniques used in developing the distributions are explained.
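The Bayesian updating step described above can be sketched with a standard conjugate normal model: a subjective prior on mean structural strength is combined with test data of known scatter to give a posterior distribution. The numbers below are invented for illustration; the thesis's actual decision model, which also weighs test options and design factors, is more elaborate.

```python
def update_normal_mean(prior_mean, prior_var, data, data_var):
    """Conjugate update of a normal prior on the mean, with known data variance."""
    n = len(data)
    sample_mean = sum(data) / n
    post_prec = 1.0 / prior_var + n / data_var   # precisions (inverse variances) add
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / data_var)
    return post_mean, post_var

# Invented example: subjective prior on strength ~ N(100, 25);
# three structural tests observed with known scatter (variance 16).
post_mean, post_var = update_normal_mean(100.0, 25.0, [110.0, 108.0, 112.0], 16.0)
print(post_mean, post_var)  # posterior pulled toward the test data, variance reduced
```

The posterior mean lands between the subjective prior and the sample mean, weighted by their precisions, which is exactly the mechanism for updating a data-based strength distribution as tests accumulate.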

  14. Standard methods for sampling North American freshwater fishes

    USGS Publications Warehouse

    Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.

    2009-01-01

    This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing cold and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. Provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent transfer of invasive species while sampling.

  15. U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--STANDARD OPERATING PROCEDURE FOR DATABASE TREE AND DATA SOURCES (UA-D-41.0)

    EPA Science Inventory

    The purpose of this SOP is to describe the database storage organization, and to describe the sources of data for each database used during the Arizona NHEXAS project and the Border study. Keywords: data; database; organization.

    The U.S.-Mexico Border Program is sponsored by t...

  16. Bowel resection for severe endometriosis: an Australian series of 177 cases.

    PubMed

    Wills, Hannah J; Reid, Geoffrey D; Cooper, Michael J W; Tsaltas, Jim; Morgan, Matthew; Woods, Rodney J

    2009-08-01

    Colorectal resection for severe endometriosis has been increasingly described in the literature over the last 20 years. We describe the experiences of three gynaecological surgeons who perform radical surgery for colorectal endometriosis. The records of the three surgeons were reviewed, and relevant information was extracted and compiled into a database. One hundred and seventy-seven women were identified as having undergone surgery between February 1997 and October 2007. The primary reason for presentation was pain in the majority of women (79%). Eighty-one segmental resections were performed, along with 71 disc excisions, ten appendicectomies and multiple procedures in ten women. The majority of procedures (81.4%) were performed laparoscopically. Histology confirmed the presence of disease in 98.3% of cases. A further 124 procedures to remove other sites of endometriosis were conducted, along with an additional 44 procedures not primarily for endometriosis. A total of 16 unintended events occurred. Our study adds to the growing body of literature describing colorectal resection for severe endometriosis. Overall, the surgery appeared to be well tolerated, demonstrating the role for this surgery.

  17. Dissemination of Periodontal Pathogens in the Bloodstream after Periodontal Procedures: A Systematic Review

    PubMed Central

    Horliana, Anna Carolina Ratto Tempestini; Chambrone, Leandro; Foz, Adriana Moura; Artese, Hilana Paula Carillo; Rabelo, Mariana de Sousa; Pannuti, Cláudio Mendes; Romito, Giuseppe Alexandre

    2014-01-01

    Background To date, there is no compilation of evidence-based information associating bacteremia with periodontal procedures. This systematic review aims to assess the magnitude, duration, prevalence and nature of bacteremia caused by periodontal procedures. Study Design Systematic review. Types of Studies Reviewed The MEDLINE, EMBASE and LILACS databases were searched in duplicate through August 2013 without language restriction. Observational studies were included if blood samples were collected before, during or after periodontal procedures in patients with periodontitis. Methodological quality was assessed in duplicate using the modified Newcastle-Ottawa scale (NOS). Results The search strategy identified 509 potentially eligible articles, of which nine were included. Only four studies demonstrated high methodological quality, whereas five were of medium or low methodological quality. The study characteristics were considered too heterogeneous to conduct a meta-analysis. Among 219 analyzed patients, 106 (49.4%) had positive bacteremia. The most frequent bacteria were S. viridans, A. actinomycetemcomitans, P. gingivalis, M. micros, and Streptococcus and Actinomyces species, although the identification methods of the microbiologic assays differed among studies. Clinical Implications Although half of the patients presented positive bacteremia after periodontal procedures, accurate results regarding the magnitude, duration and nature of the bacteremia could not be confidently assessed. PMID:24870125

  18. [A web-based integrated clinical database for laryngeal cancer].

    PubMed

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

    To establish an integrated database for laryngeal cancer that provides an information platform for clinical and fundamental research on laryngeal cancer and meets the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. The database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with the existing electronic medical records system to avoid information silos. Furthermore, the database forms integrate laryngeal cancer specialist characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma offers comprehensive specialist information, strong expandability and high technical feasibility, and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.
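The paper names Apache+PHP+MySQL as its stack but does not publish its schema. As a hedged sketch, the kind of linkage it describes between clinical records and tumor genetic information might look like the following; all table and column names are invented, and SQLite stands in for MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical tables: one row per patient, plus linked clinical and genetic data.
cur.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE clinical_record (
    patient_id INTEGER REFERENCES patient(id),
    tnm_stage TEXT, treatment TEXT);
CREATE TABLE tumor_genetics (
    patient_id INTEGER REFERENCES patient(id),
    gene TEXT, variant TEXT);
""")
cur.execute("INSERT INTO patient VALUES (1, 'case-001')")
cur.execute("INSERT INTO clinical_record VALUES (1, 'T2N0M0', 'laser surgery')")
cur.execute("INSERT INTO tumor_genetics VALUES (1, 'TP53', 'p.R175H')")

# An integrated query joining the clinical and genetic views of one patient.
row = cur.execute("""
    SELECT p.name, c.tnm_stage, g.gene
    FROM patient p
    JOIN clinical_record c ON c.patient_id = p.id
    JOIN tumor_genetics g ON g.patient_id = p.id
""").fetchone()
print(row)  # ('case-001', 'T2N0M0', 'TP53')
```

Keeping specialist and genetic data in separate tables keyed to the patient is what lets such a system stay expandable while still answering integrated queries.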

  19. Surgical specialty procedures in rural surgery practices: implications for rural surgery training.

    PubMed

    Sticca, Robert P; Mullin, Brady C; Harris, Joel D; Hosford, Clint C

    2012-12-01

    Specialty procedures constitute one eighth of rural surgery practice. Currently, general surgeons intending to practice in rural hospitals may not get adequate training for specialty procedures, which they will be expected to perform. Better definition of these procedures will help guide rural surgery training. Current Procedural Terminology codes for all surgical procedures for 81% of North Dakota and South Dakota rural surgeons were entered into the Dakota Database for Rural Surgery. Specialty procedures were analyzed and compared with the Surgical Council on Resident Education curriculum to determine whether general surgery training is adequate preparation for rural surgery practice. The Dakota Database for Rural Surgery included 46,052 procedures, of which 5,666 (12.3%) were specialty procedures. Highest volume specialty categories included vascular, obstetrics and gynecology, orthopedics, cardiothoracic, urology, and otolaryngology. Common procedures in cardiothoracic and vascular surgery are taught in general surgical residency, while common procedures in obstetrics and gynecology, orthopedics, urology, and otolaryngology are usually not taught in general surgery training. Optimal training for rural surgery practice should include experience in specialty procedures in obstetrics and gynecology, orthopedics, urology, and otolaryngology. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. 76 FR 53912 - FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-30

    ...] FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code Names... replaced non- informative code names with descriptive identifiers on its public database of products that... on our public database with non-informative code names. After careful consideration of this matter...

  1. Computerized training management system

    DOEpatents

    Rice, H.B.; McNair, R.C.; White, K.; Maugeri, T.

    1998-08-04

    A Computerized Training Management System (CTMS) is disclosed for providing a procedurally defined process that is employed to develop accreditable performance-based training programs for job classifications that are sensitive to documented regulations and technical information. CTMS is a database that links the information needed to maintain a five-phase approach to training (analysis, design, development, implementation, and evaluation) independent of training program design. CTMS is designed using R-Base (trademark), an SQL-compliant software platform. Information is logically entered and linked in CTMS. Each task is linked directly to a performance objective, which, in turn, is linked directly to a learning objective; then, each enabling objective is linked to its respective test items. In addition, tasks, performance objectives, enabling objectives, and test items are linked to their associated reference documents. CTMS keeps all information up to date since it automatically sorts, files and links all data; CTMS includes keyword and reference document searches. 18 figs.

  2. Computerized training management system

    DOEpatents

    Rice, Harold B.; McNair, Robert C.; White, Kenneth; Maugeri, Terry

    1998-08-04

    A Computerized Training Management System (CTMS) for providing a procedurally defined process that is employed to develop accreditable performance-based training programs for job classifications that are sensitive to documented regulations and technical information. CTMS is a database that links the information needed to maintain a five-phase approach to training (analysis, design, development, implementation, and evaluation) independent of training program design. CTMS is designed using R-Base®, an SQL-compliant software platform. Information is logically entered and linked in CTMS. Each task is linked directly to a performance objective, which, in turn, is linked directly to a learning objective; then, each enabling objective is linked to its respective test items. In addition, tasks, performance objectives, enabling objectives, and test items are linked to their associated reference documents. CTMS keeps all information up to date since it automatically sorts, files and links all data; CTMS includes keyword and reference document searches.
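The linkage chain the patent describes (task to performance objective to enabling objective to test item) is a plain relational pattern. The sketch below uses SQLite rather than the R-Base platform named in the patent, and all table names and sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Each level of the training hierarchy references its parent by id.
cur.executescript("""
CREATE TABLE task (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE perf_obj (id INTEGER PRIMARY KEY, task_id INTEGER, text TEXT);
CREATE TABLE enab_obj (id INTEGER PRIMARY KEY, perf_id INTEGER, text TEXT);
CREATE TABLE test_item (id INTEGER PRIMARY KEY, enab_id INTEGER, text TEXT);
""")
cur.execute("INSERT INTO task VALUES (1, 'Isolate pump P-101')")
cur.execute("INSERT INTO perf_obj VALUES (1, 1, 'Perform lockout/tagout')")
cur.execute("INSERT INTO enab_obj VALUES (1, 1, 'List lockout steps')")
cur.execute("INSERT INTO test_item VALUES (1, 1, 'Q: state the lockout steps')")

# Follow the chain from a task down to its linked test items.
items = cur.execute("""
    SELECT ti.text FROM task t
    JOIN perf_obj po ON po.task_id = t.id
    JOIN enab_obj eo ON eo.perf_id = po.id
    JOIN test_item ti ON ti.enab_id = eo.id
    WHERE t.id = 1
""").fetchall()
print(items)
```

Because every row is linked by id rather than duplicated, updating one task automatically keeps its downstream objectives and test items consistent, which is the "sorts, files and links all data" behavior the abstract claims.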

  3. User’s Manual for the National Water Information System of the U.S. Geological Survey: Aggregate Water-Use Data System, Version 3.2

    USGS Publications Warehouse

    Nawyn, John P.; Sargent, B. Pierre; Hoopes, Barbara; Augenstein, Todd; Rowland, Kathleen M.; Barber, Nancy L.

    2017-10-06

    The Aggregate Water-Use Data System (AWUDS) is the database management system used to enter, store, and analyze state aggregate water-use data. It is part of the U.S. Geological Survey National Water Information System. AWUDS has a graphical user interface that facilitates data entry, revision, review, and approval. This document provides information on the basic functions of AWUDS and the steps for carrying out common tasks that are a part of compiling an aggregated dataset. Also included are explanations of terminology and descriptions of user-interface structure, procedures for using the AWUDS operations, and dataset-naming conventions. Information on water-use category definitions, data-collection methods, and data sources are found in the report “Guidelines for preparation of State water-use estimates,” available at https://pubs.er.usgs.gov/publication/ofr20171029.

  4. The Utilization of Rehabilitation in Patients with Hemophilia A in Taiwan: A Nationwide Population-Based Study

    PubMed Central

    Yang, Yao-Hsu; Chang, Chia-Hao; Chen, Chih-Cheng; Chen, Pau-Chung

    2016-01-01

    Introduction Rehabilitation plays an important role in the physical health of patients with hemophilia. However, comprehensive information regarding the utilization of rehabilitation for such patients remains scarce. Aim This population-based study aimed to examine the characteristics, trends, and most important factors affecting rehabilitation usage in patients with hemophilia A using a nationwide database in Taiwan. Methods Data from 777 patients with hemophilia A who were registered in the National Health Insurance Research Database between 1998 and 2008 were analyzed using SAS 9.0. Results Musculoskeletal or nervous system-related surgical procedures and clotting factor VIII concentrate costs were identified as factors affecting rehabilitation usage; musculoskeletal or nervous system-related surgical procedures (odds ratio = 3.788; P < 0.001) were the most important predictor of whether a patient with hemophilia A would use rehabilitation services. Joint disorders, arthropathies, bone and cartilage disorders, intracranial hemorrhage, and brain trauma were common diagnoses during rehabilitation use. The costs of physical therapy (physiotherapy) comprised the majority (71.2%) of rehabilitation therapy categories. Increasingly, rehabilitation therapy was performed at physician clinics. The total rehabilitation costs were <0.1% of the total annual medical costs. Conclusion Musculoskeletal or nervous system-related surgical procedures and increased use of clotting factor VIII concentrate affect the rehabilitation utilization of patients with hemophilia A the most. The findings in this study could help clinicians comprehensively understand the rehabilitation utilization of patients with hemophilia A. PMID:27690229

  5. Quantitative X-ray Map Analyser (Q-XRMA): A new GIS-based statistical approach to Mineral Image Analysis

    NASA Astrophysics Data System (ADS)

    Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino

    2018-06-01

    We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
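The calibration step rests on multiple linear regression against spot analyses used as internal standards. Below is a hedged pure-Python sketch that fits such a regression by solving the normal equations on invented counts-versus-concentration data; the real tool additionally handles inter-element interdependence and stoichiometric constraints.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_linear(X, y):
    """Least-squares coefficients via the normal equations X^T X beta = X^T y."""
    Xa = [[1.0] + row for row in X]  # prepend an intercept column
    p = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(p)]
    return solve(XtX, Xty)

# Invented internal standards: concentration = 0.5 + 2*counts_Fe + 0.3*counts_Mg.
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 0.5], [0.5, 4.0]]
y = [0.5 + 2.0 * fe + 0.3 * mg for fe, mg in X]
beta = fit_linear(X, y)
print(beta)  # recovers approximately [0.5, 2.0, 0.3]
```

With the coefficients in hand, every pixel's raw X-ray counts can be mapped to a calibrated concentration, which is how a classified element map becomes a quantitative one.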

  6. Trends Analysis of rhBMP Utilization in Single-Level Posterior Lumbar Interbody Fusion in the United States

    PubMed Central

    Lao, Lifeng; Cohen, Jeremiah R.; Brodke, Darrel S.; Youssef, Jim A.; Park, Jong-Beom; Yoon, S. Tim; Wang, Jeffrey C.; Meisel, Hans-Joerg

    2017-01-01

    Study Design: Retrospective study. Objectives: Recombinant human bone morphogenetic protein-2 (rhBMP-2) has been widely used in spinal fusion surgery, but there is little information on rhBMP-2 utilization in single-level posterior lumbar interbody fusion (PLIF). The purpose of our study was to evaluate the trends and demographics of rhBMP-2 utilization in single-level PLIF. Methods: Patients who underwent single-level PLIF from 2005 to 2011 were identified by searching ICD-9 diagnosis and procedure codes in the PearlDiver Patient Records Database, a national database of orthopedic insurance records. The year of procedure, age, gender, and region of the United States were recorded for each patient. Results were reported for each variable as the incidence of procedures identified per 100 000 patients searched in the database. Results: A total of 2735 patients had single-level PLIF. The average rate of single-level PLIF with rhBMP-2 maintained at a relatively stable level (28% to 31%) from 2005 to 2009, but decreased in 2010 (9.9%) and 2011 (11.8%). The overall incidence of single-level PLIF without rhBMP-2 (0.68 cases per 100 000 patients) was statistically higher (P < .01) compared to single-level PLIF with rhBMP-2 (0.21 cases per 100 000 patients). The average rate of single-level PLIF with rhBMP-2 utilization was the highest in West (30.1%), followed by Midwest (26.9%), South (20.5%), and Northeast (17.8%). The highest incidence of single-level PLIF with rhBMP-2 was observed in the age group <65 years (0.3 per 100 000 patients). Conclusions: To our knowledge, this is the first study to report on the demographics associated with rhBMP-2 use in single-level PLIF. There was a 3-fold increase in the rate of PLIF without rhBMP-2 compared to PLIF with rhBMP-2, with both procedures being mainly done in patients less than 65 years of age. PMID:28989840

  7. Trends Analysis of rhBMP Utilization in Single-Level Posterior Lumbar Interbody Fusion in the United States.

    PubMed

    Lao, Lifeng; Cohen, Jeremiah R; Buser, Zorica; Brodke, Darrel S; Youssef, Jim A; Park, Jong-Beom; Yoon, S Tim; Wang, Jeffrey C; Meisel, Hans-Joerg

    2017-10-01

    Retrospective study. Recombinant human bone morphogenetic protein-2 (rhBMP-2) has been widely used in spinal fusion surgery, but there is little information on rhBMP-2 utilization in single-level posterior lumbar interbody fusion (PLIF). The purpose of our study was to evaluate the trends and demographics of rhBMP-2 utilization in single-level PLIF. Patients who underwent single-level PLIF from 2005 to 2011 were identified by searching ICD-9 diagnosis and procedure codes in the PearlDiver Patient Records Database, a national database of orthopedic insurance records. The year of procedure, age, gender, and region of the United States were recorded for each patient. Results were reported for each variable as the incidence of procedures identified per 100 000 patients searched in the database. A total of 2735 patients had single-level PLIF. The average rate of single-level PLIF with rhBMP-2 maintained at a relatively stable level (28% to 31%) from 2005 to 2009, but decreased in 2010 (9.9%) and 2011 (11.8%). The overall incidence of single-level PLIF without rhBMP-2 (0.68 cases per 100 000 patients) was statistically higher ( P < .01) compared to single-level PLIF with rhBMP-2 (0.21 cases per 100 000 patients). The average rate of single-level PLIF with rhBMP-2 utilization was the highest in West (30.1%), followed by Midwest (26.9%), South (20.5%), and Northeast (17.8%). The highest incidence of single-level PLIF with rhBMP-2 was observed in the age group <65 years (0.3 per 100 000 patients). To our knowledge, this is the first study to report on the demographics associated with rhBMP-2 use in single-level PLIF. There was a 3-fold increase in the rate of PLIF without rhBMP-2 compared to PLIF with rhBMP-2, with both procedures being mainly done in patients less than 65 years of age.

  8. S2RSLDB: a comprehensive manually curated, internet-accessible database of the sigma-2 receptor selective ligands.

    PubMed

    Nastasi, Giovanni; Miceli, Carla; Pittalà, Valeria; Modica, Maria N; Prezzavento, Orazio; Romeo, Giuseppe; Rescifina, Antonio; Marrazzo, Agostino; Amata, Emanuele

    2017-01-01

    Sigma (σ) receptors are accepted as a particular receptor class consisting of two subtypes: sigma-1 (σ 1 ) and sigma-2 (σ 2 ). The two receptor subtypes have specific drug actions, pharmacological profiles and molecular characteristics. The σ 2 receptor is overexpressed in several tumor cell lines, and its ligands are currently under investigation for their role in tumor diagnosis and treatment. The σ 2 receptor structure has not been disclosed, and researchers rely on σ 2 receptor radioligand binding assay to understand the receptor's pharmacological behavior and design new lead compounds. Here we present the sigma-2 Receptor Selective Ligands Database (S2RSLDB) a manually curated database of the σ 2 receptor selective ligands containing more than 650 compounds. The database is built with chemical structure information, radioligand binding affinity data, computed physicochemical properties, and experimental radioligand binding procedures. The S2RSLDB is freely available online without account login and having a powerful search engine the user may build complex queries, sort tabulated results, generate color coded 2D and 3D graphs and download the data for additional screening. The collection here reported is extremely useful for the development of new ligands endowed of σ 2 receptor affinity, selectivity, and appropriate physicochemical properties. The database will be updated yearly and in the near future, an online submission form will be available to help with keeping the database widely spread in the research community and continually updated. The database is available at http://www.researchdsf.unict.it/S2RSLDB.

  9. Quality control of EUVE databases

    NASA Technical Reports Server (NTRS)

    John, L. M.; Drake, J.

    1992-01-01

    The publicly accessible databases for the Extreme Ultraviolet Explorer include: the EUVE Archive mailserver; the CEA ftp site; the EUVE Guest Observer Mailserver; and the Astronomical Data System node. The EUVE Performance Assurance team is responsible for verifying that these public EUVE databases are working properly, and that the public availability of EUVE data contained therein does not infringe any data rights which may have been assigned. In this poster, we describe the Quality Assurance (QA) procedures we have developed from the approach of QA as a service organization, thus reflecting the overall EUVE philosophy of Quality Assurance integrated into normal operating procedures, rather than imposed as an external, post facto, control mechanism.

  10. Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service

    NASA Astrophysics Data System (ADS)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2016-06-01

    Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. In order to provide such data quality information easily, we previously presented the architecture of a web service for the automation of quality control of spatial datasets running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, such as positional accuracy or completeness, the architecture permits the use of a reference dataset. However, this reference dataset is not ageless, since it suffers from the natural degradation over time inherent to geospatial features. In order to mitigate this problem we propose the Time Degradation & Updating Module, which applies assessed data as a tool to keep the reference database updated. The main idea is to use datasets sent to the quality evaluation service as a source of 'candidate data elements' for updating the reference database. After evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe the outcomes can be applied in the pursuit of a fully automatic online quality evaluation platform.
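The module's core idea, promoting candidate elements into the reference database only when they pass a quality threshold, can be sketched as a simple filter. The names, threshold, and quality measure below are invented placeholders, not the authors' WPS implementation:

```python
def update_reference(reference, candidates, min_quality=0.9):
    """Merge candidate elements into the reference when their quality suffices.

    `reference` maps feature id -> geometry; each candidate carries an
    assessed `quality` score in [0, 1] produced by the evaluation service.
    Returns the ids of the elements that were promoted.
    """
    promoted = []
    for cand in candidates:
        if cand["quality"] >= min_quality:
            reference[cand["id"]] = cand["geometry"]
            promoted.append(cand["id"])
    return promoted

reference = {"road-1": "LINESTRING(0 0, 1 1)"}
candidates = [
    {"id": "road-1", "geometry": "LINESTRING(0 0, 1 1.01)", "quality": 0.95},
    {"id": "road-2", "geometry": "LINESTRING(2 2, 3 3)", "quality": 0.6},
]
promoted = update_reference(reference, candidates)
print(promoted)  # only the high-quality element is promoted
```

The evaluation service thus doubles as a curation gate: datasets submitted for assessment feed the reference layer, but only through the quality filter.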

  11. Quality assessment and improvement of nationwide cancer registration system in Taiwan: a review.

    PubMed

    Chiang, Chun-Ju; You, San-Lin; Chen, Chien-Jen; Yang, Ya-Wen; Lo, Wei-Cheng; Lai, Mei-Shu

    2015-03-01

    Cancer registration provides core information for cancer surveillance and control. The population-based Taiwan Cancer Registry was implemented in 1979. After the Cancer Control Act was promulgated in 2003, the completeness (97%) and data quality of the cancer registry database reached an excellent level. Hospitals with 50 or more beds that provide outpatient and hospitalized cancer care are recruited to report 20 items of information on all newly diagnosed cancers to the central registry office (called the short-form database). The Taiwan Cancer Registry is organized and funded by the Ministry of Health and Welfare. The National Taiwan University has been contracted to operate the registry and has organized an advisory board to standardize the definitions of terminology, coding, and procedures of the registry's reporting system since 1996. To monitor cancer care patterns and evaluate cancer treatment outcomes, the central cancer registry was reformed in 2002 to include detailed items on the stage at diagnosis and the first course of treatment (called the long-form database). There are 80 hospitals, accounting for >90% of total cancer cases, involved in the long-form registration. The Taiwan Cancer Registry has run smoothly for more than 30 years, providing an essential foundation for academic research and cancer control policy in Taiwan. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Detection of co-eluted peptides using database search methods

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y; Kwok, Siwei; Wu, Wells W; Wang, Guanghui; Shen, Rong-Fong; Yu, Yi-Kuo

    2008-01-01

    Background Current experimental techniques, especially those applying liquid chromatography mass spectrometry, have made high-throughput proteomic studies possible. The increase in throughput, however, also raises concerns about the accuracy of identification or quantification. Most experimental procedures select in a given MS scan only a few of the most intense parent ions, each to be fragmented (MS2) separately; most other minor co-eluted peptides that have similar chromatographic retention times are ignored and their information lost. Results We have computationally investigated the possibility of enhancing information retrieval during a given LC/MS experiment by selecting the two or three most intense parent ions for simultaneous fragmentation. A set of spectra was created by superimposing a number of MS2 spectra, each of which can be identified with high confidence by all search methods tested, to mimic the spectra of co-eluted peptides. The generated convoluted spectra were used to evaluate the capability of several database search methods (SEQUEST, Mascot, X!Tandem, OMSSA, and RAId_DbS) in identifying true peptides from superimposed spectra of co-eluted peptides. We show that, using these simulated spectra, all the database search methods eventually gain in the number of true peptides identified by using the compound spectra of co-eluted peptides. Open peer review Reviewed by Vlad Petyuk (nominated by Arcady Mushegian), King Jordan and Shamil Sunyaev. For the full reviews, please go to the Reviewers' comments section. PMID:18597684
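The convoluted spectra were generated by superimposing MS2 spectra of individually identifiable peptides. A hedged sketch of such a superposition on toy peak lists (m/z mapped to intensity) follows; a tolerance-free merge on exact m/z values is used for simplicity, whereas real pipelines bin peaks within a mass tolerance:

```python
def superimpose(*spectra):
    """Merge several MS2 peak lists by summing intensities at equal m/z."""
    merged = {}
    for spectrum in spectra:
        for mz, intensity in spectrum.items():
            merged[mz] = merged.get(mz, 0.0) + intensity
    return merged

# Toy fragment spectra of two co-eluted peptides (m/z: intensity), invented values.
pep_a = {147.1: 100.0, 276.2: 80.0, 375.2: 60.0}
pep_b = {147.1: 40.0, 263.1: 90.0, 512.3: 70.0}
compound = superimpose(pep_a, pep_b)
print(sorted(compound.items()))
```

Feeding such compound spectra to a search engine tests whether it can still recover both constituent peptides, which is exactly the evaluation the study performs.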

  13. The use of DRG for identifying clinical trials centers with high recruitment potential: a feasibility study.

    PubMed

    Aegerter, Philippe; Bendersky, Noelle; Tran, Thi-Chien; Ropers, Jacques; Taright, Namik; Chatellier, Gilles

    2014-01-01

    Recruitment of large samples of patients is crucial to the evidence level and efficacy of clinical trials (CT). Clinical Trial Recruitment Support Systems (CTRSS) used to estimate patient recruitment are generally specific to particular Hospital Information Systems, and few have been evaluated on a large number of trials. Our aim was to assess, over a large number of CT, the usefulness of commonly available data such as Diagnosis Related Groups (DRG) databases for estimating potential recruitment. We used the DRG database of a large French multicenter medical institution (1.2 million inpatient stays and 400 new trials each year). Eligibility criteria of protocols were broken down into atomic entities (diagnosis, procedures, treatments...), then translated into codes and operators recorded in a standardized form. A program parsed the forms and generated requests on the DRG database. A large majority of selection criteria could be coded, and the final estimates of the number of eligible patients were close to the observed ones (median difference = 25). Such a system could be part of the feasibility evaluation and center selection process before the start of a clinical trial.
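
    The pipeline described above, with atomized criteria recorded as codes and operators on a standardized form and then parsed into database requests, might look like this sketch (table and column names are hypothetical; the paper does not publish its code):

```python
def build_recruitment_query(criteria):
    """Translate atomized eligibility criteria into a parameterized SQL
    count query. Each criterion is a (field, operator, value) triple,
    e.g. ('diagnosis', 'IN', ('C50.1', 'C50.2')) for ICD-coded
    diagnoses. The 'stays' table and its columns are illustrative."""
    clauses = []
    params = []
    for field, op, value in criteria:
        if op == 'IN':
            placeholders = ', '.join('?' for _ in value)
            clauses.append(f"{field} IN ({placeholders})")
            params.extend(value)
        else:  # comparison operators such as >=, <=, =
            clauses.append(f"{field} {op} ?")
            params.append(value)
    sql = ("SELECT COUNT(DISTINCT patient_id) FROM stays WHERE "
           + " AND ".join(clauses))
    return sql, params

sql, params = build_recruitment_query([
    ('diagnosis', 'IN', ('C50.1', 'C50.2')),
    ('age', '>=', 18),
])
```

    Executing the generated query against the DRG database then yields the estimated number of eligible patients for the protocol.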

  14. SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps

    NASA Astrophysics Data System (ADS)

    Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang

    2018-06-01

    SModelS is an automated tool for the interpretation of simplified model results from the LHC. It allows one to decompose models of new physics obeying a Z2 symmetry into simplified model components, and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only the cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow one to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ² calculations, extended information on the topology coverage, an extended database of experimental results, as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are handled in parallel. Detailed instructions for code usage are also provided.

  15. Tensor discriminant color space for face recognition.

    PubMed

    Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang

    2011-09-01

    Recent research efforts reveal that color may provide useful information for face recognition. The choice of color space generally differs across visual tasks; how can a color space be sought for the specific problem of face recognition? To address this question, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model, which preserves the underlying spatial structure of color images. With the definition of n-mode between-class and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. Experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, particularly on a complicated face database with various pose variations.

  16. Australia's continental-scale acoustic tracking database and its automated quality control process

    NASA Astrophysics Data System (ADS)

    Hoenner, Xavier; Huveneers, Charlie; Steckenreuter, Andre; Simpfendorfer, Colin; Tattersall, Katherine; Jaine, Fabrice; Atkins, Natalia; Babcock, Russ; Brodie, Stephanie; Burgess, Jonathan; Campbell, Hamish; Heupel, Michelle; Pasquer, Benedicte; Proctor, Roger; Taylor, Matthew D.; Udyawer, Vinay; Harcourt, Robert

    2018-01-01

    Our ability to predict species responses to environmental changes relies on accurate records of animal movement patterns. Continental-scale acoustic telemetry networks are increasingly being established worldwide, producing large volumes of information-rich geospatial data. During the last decade, the Integrated Marine Observing System's Animal Tracking Facility (IMOS ATF) established a permanent array of acoustic receivers around Australia. Simultaneously, IMOS developed a centralised national database to foster collaborative research across the user community and quantify individual behaviour across a broad range of taxa. Here we present the database and quality control procedures developed to collate 49.6 million valid detections from 1,891 receiving stations. This dataset consists of detections for 3,777 tags deployed on 117 marine species, with distances travelled ranging from a few to thousands of kilometres. Connectivity between regions was only made possible by the joint contribution of IMOS infrastructure and researcher-funded receivers. This dataset constitutes a valuable resource facilitating meta-analysis of animal movement, distributions, and habitat use, and is important for relating species distribution shifts with environmental covariates.
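
    A quality-control pass of this kind might be sketched as follows. The specific rule, discarding detections of a tag not confirmed at the same station within 24 hours, is a common heuristic against acoustic false detections and is an assumption here, not the IMOS ATF algorithm verbatim:

```python
from datetime import datetime, timedelta

def drop_lone_detections(detections, window=timedelta(hours=24)):
    """Keep a detection only if the same tag was heard at the same
    station again within `window`; isolated hits are typical of false
    detections caused by tag-code collisions."""
    by_key = {}
    for ts, tag, station in detections:
        by_key.setdefault((tag, station), []).append(ts)
    valid = []
    for (tag, station), times in by_key.items():
        times.sort()
        for i, t in enumerate(times):
            near_prev = i > 0 and t - times[i - 1] <= window
            near_next = i + 1 < len(times) and times[i + 1] - t <= window
            if near_prev or near_next:
                valid.append((t, tag, station))
    return sorted(valid)

# Tag and station identifiers below are made up for illustration.
detections = [
    (datetime(2017, 3, 1, 10, 0), 'A69-1601-123', 'NRS-01'),
    (datetime(2017, 3, 1, 18, 30), 'A69-1601-123', 'NRS-01'),  # confirms first
    (datetime(2017, 6, 9, 2, 0), 'A69-1601-123', 'NRS-07'),    # lone hit
]
kept = drop_lone_detections(detections)
```

    Rules of this shape run cheaply over the full national database, which is what makes automated QC at the scale of tens of millions of detections practical.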

  17. Programmed database system at the Chang Gung Craniofacial Center: part II--digitizing photographs.

    PubMed

    Chuang, Shiow-Shuh; Hung, Kai-Fong; de Villa, Glenda H; Chen, Philip K T; Lo, Lun-Jou; Chang, Sophia C N; Yu, Chung-Chih; Chen, Yu-Ray

    2003-07-01

    Archival tools for digital clinical images are only beginning to be developed, and those used in advertising do not fulfill clinical requirements. Storing a large number of conventional photographic slides requires considerable space and special conditions. In spite of special precautions, degradation of the slides still occurs; the most common form is the appearance of fungus flecks. With recent advances in digital technology, it is now possible to store voluminous numbers of photographs on a computer hard drive and keep them for a long time. A self-programmed interface was developed to integrate the database and image browser systems, which can build and locate needed archive files in a matter of seconds with the click of a button. The hardware and software required by this system are commercially available. There are 25,200 patients recorded in the database, involving 24,331 procedures. The image files cover 6,384 patients with 88,366 digital picture files. From 1999 through 2002, NT$400,000 was saved using the new system. Photographs can be managed with the integrated database and browser software for archiving, which allows labeling of individual photographs with demographic information and browsing. Digitized images are not only more efficient and economical than conventional slide images, but they also facilitate clinical studies.

  18. RaftProt: mammalian lipid raft proteome database.

    PubMed

    Shah, Anup; Chen, David; Boda, Akash R; Foster, Leonard J; Davis, Melissa J; Hill, Michelle M

    2015-01-01

    RaftProt (http://lipid-raft-database.di.uq.edu.au/) is a database of mammalian lipid raft-associated proteins as reported in high-throughput mass spectrometry studies. Lipid rafts are specialized membrane microdomains enriched in cholesterol and sphingolipids, thought to act as dynamic signalling and sorting platforms. Given their fundamental roles in cellular regulation, there is a plethora of information on the size, composition and regulation of these membrane microdomains, including a large number of proteomics studies. To facilitate the mining and analysis of published lipid raft proteomics studies, we have developed the searchable database RaftProt. In addition to browsing the studies, performing basic queries by protein and gene names, and searching experiments by cell, tissue and organism, we have implemented several advanced features to facilitate data mining. To address the issue of potential bias due to the biochemical preparation procedures used, we have captured the lipid raft preparation methods and implemented an advanced search option for methodology and sample treatment conditions, such as cholesterol depletion. Furthermore, we have identified a list of high-confidence proteins, and enabled searching restricted to this list of likely bona fide lipid raft proteins. Given the apparent biological importance of lipid rafts and their associated proteins, this database should constitute a key resource for the scientific community. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Secure searching of biomarkers through hybrid homomorphic encryption scheme.

    PubMed

    Kim, Miran; Song, Yongsoo; Cheon, Jung Hee

    2017-07-26

    As genome sequencing technology develops rapidly, there has lately been an increasing need to keep genomic data secure even when stored in the cloud and still used for research. We are interested in designing a protocol for the secure outsourcing of the matching problem on encrypted data. We propose an efficient method to securely search for a position matching the query data and extract some information at that position. After decryption, only a small number of comparisons with the query information need to be performed in the plaintext state. We apply this method to find a set of biomarkers in encrypted genomes. The key feature of our method is to encode a genomic database as a single element of a polynomial ring. Since our method requires only a single homomorphic multiplication of the hybrid scheme for query computation, it has an advantage over previous methods in parameter size, computation complexity, and communication cost. In particular, the extraction procedure not only prevents leakage of database information that has not been queried by the user but also halves the communication cost. We evaluate the performance of our method and verify that computation on large-scale personal data can be securely and practically outsourced to a cloud environment during data analysis. It takes about 3.9 s to search-and-extract the reference and alternate sequences at the queried position in a database of size 4M. Our solution for finding a set of biomarkers in DNA sequences shows that cryptographic techniques have progressed to the point where they can support real-world genome data analysis in a cloud environment.
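
    The core encoding idea, packing the whole database into one polynomial-ring element so that a single multiplication scores the query against every position at once, can be illustrated in plaintext (the real protocol performs this product homomorphically on encrypted coefficients; the alphabet encoding and verification step below are simplified assumptions):

```python
import numpy as np

def candidate_matches(db_seq, query):
    """Pack the database sequence into the coefficients of one
    polynomial, so that a single polynomial multiplication (here, a
    convolution with the reversed query) scores the query against
    every alignment position. Positions attaining the exact-match
    score are candidates, verified after decryption in the protocol."""
    code = {'A': 1, 'C': 2, 'G': 3, 'T': 4}
    d = np.array([code[c] for c in db_seq])
    q = np.array([code[c] for c in query])
    scores = np.convolve(d, q[::-1], mode='valid')  # inner product per offset
    best = int(np.sum(q * q))  # score reached on an exact match
    return [i for i, s in enumerate(scores) if s == best]

positions = candidate_matches('ACGTACGT', 'GTA')
```

    The point of the construction is that the expensive part, the product, happens once over the whole database rather than once per position.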

  20. Introducing laparoscopic total gastrectomy for gastric cancer in general practice: a retrospective cohort study based on a nationwide registry database in Japan.

    PubMed

    Kodera, Yasuhiro; Yoshida, Kazuhiro; Kumamaru, Hiraku; Kakeji, Yoshihiro; Hiki, Naoki; Etoh, Tsuyoshi; Honda, Michitaka; Miyata, Hiroaki; Yamashita, Yuichi; Seto, Yasuyuki; Kitano, Seigo; Konno, Hiroyuki

    2018-02-09

    Although laparoscopic total gastrectomy (LTG) is considered a technically demanding procedure with safety issues, it has been performed in several hospitals in Japan. Data from a nationwide web-based data entry system for surgical procedures (NCD), which started enrollment in 2011, are now available for analysis. A retrospective cohort study was conducted using data from 32,144 patients who underwent total gastrectomy and were registered in the NCD database between January 2012 and December 2013. Mortality and morbidities were compared between patients who received LTG and those who underwent open total gastrectomy (OTG) in the propensity score-matched Stage I cohort and Stage II-IV cohort. There was no significant difference in mortality rate between LTG and OTG in either cohort. Operating time was significantly longer in LTG, while blood loss was smaller. In the Stage I cohort, LTG, performed in 33.6% of the patients, was associated with a significantly shorter hospital stay but significantly higher incidence of readmission, reoperation, and anastomotic leakage (5.4% vs. 3.6%, p < 0.01). In the Stage II-IV cohort, LTG was performed in only 8.8% of the patients and was associated with a significantly higher incidence of leakage (5.7% vs. 3.6%, p < 0.02), although the hospital stay was shorter (15 days vs. 17 days, p < 0.001). LTG was introduced more cautiously than distal gastrectomy, but remained a technically demanding procedure as of 2013. This procedure should be performed only by well-trained and informed laparoscopic teams.

  1. Assessing the effects of changes in care commissioning guidelines at a tertiary centre in London on the provision of NHS-funded procedures of limited clinical effectiveness: an 11-year retrospective database analysis.

    PubMed

    Rahman, Shafiq; Langridge, Benjamin; Hachach-Haram, Nadine; Hansen, Esther; Bootle, Anna; Bystrzonowski, Nicola; Hamilton, Stephen; Mosahebi, Afshin

    2017-07-28

    The main objective of this study was to assess the impact of changes in care commissioning policies on National Health Service (NHS)-funded cosmetic procedures over an 11-year period at our centre. The setting was a tertiary care hospital in London regulated by the North Central London Hospitals NHS Trust care commissioning group. We included all 2087 patients logged in our database at the time of the study, but later excluded 61 from analysis due to insufficient information. The main outcome measures were the results of tribunal assessment for different cosmetic surgeries, which were either accepted, rejected or inconclusive based on the panel meeting. A total of 2087 patient requests were considered between 2004 and 2015, of which 715 (34%) were accepted, 1311 (63%) were declined and 61 (3%) had inconclusive results. The implementation of local care commissioning guidelines has reduced access to cosmetic surgeries: within this period, the proportion of procedures accepted fell from 36% in 2004 to 21% in 2015 (χ², p<0.05). Local guidance on procedures of limited clinical effectiveness is a useful, although not evidence-based, selection process to reduce access to cosmetic surgery in line with increasing financial constraints. However, patients with a physical impairment may not receive treatment in comparison with previous years, and this can have a negative impact on their quality of life. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  2. Newborn screening healthcare information system based on service-oriented architecture.

    PubMed

    Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Chien, Yin-Hsiu; Weng, Yung-Ching; Hsu, Kai-Ping; Chen, Chi-Huang; Tu, Chien-Ming; Wang, Zhenyu; Lai, Feipei

    2010-08-01

    In this paper, we established a newborn screening system under the HL7/Web Services frameworks. We rebuilt the NTUH Newborn Screening Laboratory's original standalone architecture, in which various heterogeneous systems operated individually, and restructured it into a distributed, Service-Oriented Architecture (SOA) platform for further integration and enhancement of sample collection, testing, diagnosis, evaluation, treatment and follow-up services, and screening database management, as well as collaboration and communication among hospitals; decision support and improved screening accuracy across the Taiwan neonatal systems are also addressed. In addition, the new system not only integrates the newborn screening procedures among phlebotomy clinics, referral hospitals, and the newborn screening center in Taiwan, but also introduces new models of screening procedures for the associated medical practitioners. Furthermore, it reduces the burden of manual operations, especially the reporting services that were heavily depended upon previously. The new system accelerates the whole procedure effectively and efficiently, and improves the accuracy and reliability of screening by ensuring quality control throughout processing.

  3. Reduction Mammoplasty: A Comparison Between Operations Performed by Plastic Surgery and General Surgery.

    PubMed

    Kordahi, Anthony M; Hoppe, Ian C; Lee, Edward S

    2015-01-01

    Reduction mammaplasty is a procedure often performed by plastic surgeons and, increasingly, by general surgeons. The question has been posed in both the general surgical and plastic surgical literature as to whether this procedure should remain the domain of surgical specialists. Some general surgeons are trained in breast reductions, whereas all plastic surgeons receive training in this procedure. The National Surgical Quality Improvement Project provides a unique opportunity to compare the 2 surgical specialties in an unbiased manner in terms of preoperative comorbidities and 30-day postoperative complications. The National Surgical Quality Improvement Project database was queried for the years 2005-2012. Patients were identified as having undergone a reduction mammaplasty by Current Procedural Terminology codes. Results were refined to include only females with an International Classification of Diseases, Ninth Revision, code of 611.1 (hypertrophy of breasts). Information was collected regarding age, surgical specialty performing the procedure, body mass index, and other preoperative variables. The outcomes examined were superficial surgical site infection, deep surgical site infection, wound dehiscence, postoperative respiratory compromise, pulmonary embolism, deep vein thrombosis, perioperative transfusion, operative time, reintubation, reoperation, and length of hospital stay. During this time period, 6239 reduction mammaplasties were performed within the National Surgical Quality Improvement Project database: 339 by general surgery and 5900 by plastic surgery. No statistical differences were detected between the 2 groups with regard to superficial wound infections, deep wound infections, organ space infections, or wound dehiscence. No significant differences were noted between groups with regard to systemic postoperative complications. Patients undergoing a procedure by general surgery were more likely to experience a failure of skin flaps, necessitating a return to the operating room (P < .05). Operative time was longer in procedures performed by general surgery (P < .05). Several important differences appear to exist between reduction mammaplasties performed by general surgery and plastic surgery. Focused training in reduction mammaplasty appears to be beneficial to the patient. The limitations of this study include a lack of long-term follow-up with regard to aesthetic outcome, nipple malposition, nipple sensation, and late wound sequelae.

  4. Development of an open source laboratory information management system for 2-D gel electrophoresis-based proteomics workflow

    PubMed Central

    Morisawa, Hiraku; Hirota, Mikako; Toda, Tosifusa

    2006-01-01

    Background In the post-genome era, most research scientists working in the field of proteomics are confronted with difficulties in management of large volumes of data, which they are required to keep in formats suitable for subsequent data mining. Therefore, a well-developed open source laboratory information management system (LIMS) should be available for their proteomics research studies. Results We developed an open source LIMS appropriately customized for 2-D gel electrophoresis-based proteomics workflow. The main features of its design are compactness, flexibility and connectivity to public databases. It supports the handling of data imported from mass spectrometry software and 2-D gel image analysis software. The LIMS is equipped with the same input interface for 2-D gel information as a clickable map on public 2DPAGE databases. The LIMS allows researchers to follow their own experimental procedures by reviewing the illustrations of 2-D gel maps and well layouts on the digestion plates and MS sample plates. Conclusion Our new open source LIMS is now available as a basic model for proteome informatics, and is accessible for further improvement. We hope that many research scientists working in the field of proteomics will evaluate our LIMS and suggest ways in which it can be improved. PMID:17018156

  5. The economics of biobanking and pharmacogenetics databasing: the case of an adaptive platform on breast cancer.

    PubMed

    Huttin, Christine C; Liebman, Michael N

    2013-01-01

    This paper aims to discuss the economics of biobanking. Among the critical issues in evaluating the potential ROI for the creation of a biobank are: scale (e.g. local, national, international), centralized versus virtual/distributed organization, the degree of sample annotation/QC procedures, targeted end-users and uses, types of samples, and the potential characterization of both samples and annotations. The paper presents a review of cost models for an economic analysis of biobanking at its different steps: data collection (e.g. biospecimens in different types of sites), storage, transport and distribution, and information management for the different types of information (e.g. biological information such as cell, gene, and protein data). It also provides additional concepts for processing biospecimens from laboratory to clinical practice, and will help to identify how changing paradigms in translational medicine affect the economic modeling.

  6. Aquatic toxicity information retrieval data base: A technical support document. (Revised July 1992)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The AQUIRE (AQUatic toxicity Information REtrieval) database was established in 1981 by the United States Environmental Protection Agency (US EPA), Environmental Research Laboratory-Duluth (ERL-D). The purpose of AQUIRE is to provide quick access to a comprehensive, systematic, computerized compilation of aquatic toxic effects data. As of July 1992, AQUIRE consists of over 98,300 individual test results on computer file. These tests contain information for 5,500 chemicals and 2,300 organisms, extracted from over 6,300 publications. In addition, the ERL-D data file, prepared by the University of Wisconsin-Superior, is now included in AQUIRE. The data file consists of acute toxicity test results for the effects of 525 organic chemicals on fathead minnow. All AQUIRE data entries have been subjected to established quality assurance procedures.

  7. Tracheostomy decannulation methods and procedures in adults: a systematic scoping review protocol.

    PubMed

    Kutsukutsa, John; Mashamba-Thompson, Tivani Phosa; Saman, Yougan

    2017-12-04

    The indications for and the number of tracheostomy procedures have increased with advances in critical care, and studies indicate a likely continued increase in the number of tracheostomies. Despite the important benefits of a tracheostomy, its presence is associated with adverse health complications and lowered patient quality of life. Hence, it must be decannulated in a safe and effective manner as soon as it is no longer indicated. There is, however, no agreed universal standard of care for tracheostomy decannulation (TD) in adults. The aims of our study are to systematically map the literature on the decannulation process, reveal knowledge gaps and inform further research. The search strategy of this systematic scoping review will involve the following electronic databases: PubMed/MEDLINE, Google Scholar, Union Catalogue of Theses and Dissertations (UCTD) via SABINET Online, and WorldCat Dissertations and Theses via OCLC. Articles will also be found through the "Cited by" search as well as citations included in the reference lists of included articles. Studies from the databases will be title-screened and duplicates removed, followed by a parallel two-independent-reviewer screening of abstracts and then of the full articles of selected studies, both guided by eligibility criteria. We will extract data from the included studies and analyse the emerging themes, critically examining their relationship to the research question. The quality of the included studies will be determined with the Mixed Method Appraisal Tool (MMAT). We will use NVIVO version 10 to extract the relevant outcomes and perform thematic analysis of the studies. We anticipate finding studies that highlight evidence for, as well as the preference and acceptability of, TD methods and procedures. We hope to expose knowledge gaps and inform future research. Findings will be disseminated electronically, in print and through peer presentations, conferences and congresses.
    Our systematic review has been registered in PROSPERO: CRD42017072050.

  8. Comparison of Online Agricultural Information Services.

    ERIC Educational Resources Information Center

    Reneau, Fred; Patterson, Richard

    1984-01-01

    Outlines major online agricultural information services--agricultural databases, databases with agricultural services, educational databases in agriculture--noting services provided, access to the database, and costs. Benefits of online agricultural database sources (availability of agricultural marketing, weather, commodity prices, management…

  9. WMC Database Evaluation. Case Study Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palounek, Andrea P. T

    The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.

  10. MIPS: analysis and annotation of proteins from whole genomes

    PubMed Central

    Mewes, H. W.; Amid, C.; Arnold, R.; Frishman, D.; Güldener, U.; Mannhaupt, G.; Münsterkötter, M.; Pagel, P.; Strack, N.; Stümpflen, V.; Warfsmann, J.; Ruepp, A.

    2004-01-01

    The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein–protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de). PMID:14681354

  12. Indicators for the automated analysis of drug prescribing quality.

    PubMed

    Coste, J; Séné, B; Milstein, C; Bouée, S; Venot, A

    1998-01-01

    Irrational and inconsistent drug prescription has considerable impact on morbidity, mortality, health service utilization, and community burden. However, few studies have addressed the methodology of processing the information contained in drug orders to study the quality of drug prescriptions and prescriber behavior. We present a comprehensive set of quantitative indicators of drug prescription quality that can be derived from a drug order. These indicators were constructed using explicit a priori criteria that had previously been validated on the basis of scientific data. Automatic computation is straightforward using a relational database system, so that large sets of prescriptions can be processed with minimal human effort. We illustrate the feasibility and value of this approach using a large set of 23,000 prescriptions for several diseases, selected from a nationally representative prescriptions database. Our approach may find direct and wide application in the epidemiology of medical practice and in quality control procedures.
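
    Computing such indicators with a relational database system can be sketched as follows; the schema, drug names, and the redundancy indicator itself are illustrative assumptions, not the authors' validated criteria:

```python
import sqlite3

# One toy prescription-quality indicator: the number of prescriptions
# that pair two drugs of the same therapeutic class, a marker of
# redundant prescribing. Schema and data are made up for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE prescription_line (
    prescription_id   INTEGER,
    drug              TEXT,
    therapeutic_class TEXT
);
INSERT INTO prescription_line VALUES
    (1, 'ibuprofen',   'NSAID'),
    (1, 'ketoprofen',  'NSAID'),
    (2, 'amoxicillin', 'penicillin'),
    (2, 'paracetamol', 'analgesic');
""")
(redundant,) = con.execute("""
    SELECT COUNT(DISTINCT a.prescription_id)
    FROM prescription_line a
    JOIN prescription_line b
      ON  a.prescription_id   = b.prescription_id
      AND a.therapeutic_class = b.therapeutic_class
      AND a.drug < b.drug
""").fetchone()
```

    Because each indicator is a single query, a set of 23,000 prescriptions can be scored across many indicators without any manual review.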

  13. OVERSEER: An Expert System Monitor for the Psychiatric Hospital

    PubMed Central

    Bronzino, Joseph D.; Morelli, Ralph A.; Goethe, John W.

    1988-01-01

    In order to improve patient care, comply with regulatory guidelines and decrease potential liability, psychiatric hospitals and clinics have been searching for computer systems to monitor the management and treatment of patients. This paper describes OVERSEER, a knowledge-based system that monitors the treatment of psychiatric patients in real time. Based on procedures and protocols developed in the psychiatric setting, OVERSEER monitors the clinical database and issues alerts when standard clinical practices are not followed or when laboratory results or other clinical indicators are abnormal. Written in PROLOG, OVERSEER is designed to interface directly with the hospital's database and thereby utilizes all available pharmacy and laboratory data. Moreover, unlike the interactive expert systems developed for the psychiatric clinic, OVERSEER does not require extensive data entry by the clinician. Consequently, the chief benefit of OVERSEER's monitoring approach is the unobtrusive manner in which it evaluates treatment and patient responses and provides information regarding patient management.
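
    A monitoring loop of this kind reduces to evaluating rule predicates against the clinical record. The sketch below is in Python rather than OVERSEER's PROLOG, and the rule and its threshold are illustrative assumptions:

```python
def lithium_alert(record):
    """Fire when the latest serum lithium exceeds 1.2 mmol/L (the
    threshold here is illustrative, not OVERSEER's actual rule)."""
    levels = record.get('lithium_mmol_l', [])
    if levels and levels[-1] > 1.2:
        return f"lithium {levels[-1]} mmol/L above therapeutic range"
    return None

def monitor(record, rules):
    # Evaluate every rule against the clinical record; collect alerts.
    return [msg for rule in rules if (msg := rule(record)) is not None]

alerts = monitor({'lithium_mmol_l': [0.8, 1.5]}, [lithium_alert])
```

    Because the rules read directly from pharmacy and laboratory data already in the database, no additional clinician data entry is needed, which mirrors the unobtrusive design described above.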

  14. Safety and Procedural Success of Left Atrial Appendage Exclusion With the Lariat Device: A Systematic Review of Published Reports and Analytic Review of the FDA MAUDE Database.

    PubMed

    Chatterjee, Saurav; Herrmann, Howard C; Wilensky, Robert L; Hirshfeld, John; McCormick, Daniel; Frankel, David S; Yeh, Robert W; Armstrong, Ehrin J; Kumbhani, Dharam J; Giri, Jay

    2015-07-01

    The Lariat device has received US Food and Drug Administration (FDA) 510(k) clearance for soft-tissue approximation and is being widely used off-label for left atrial appendage (LAA) exclusion. A comprehensive analysis of safety and effectiveness has not been reported. To perform a systematic review of published literature to assess safety and procedural success, defined as successful closure of the LAA during the index procedure, of the Lariat device. We performed a formal analytic review of the FDA MAUDE (Manufacturer and User Facility Device Experience) database to compile adverse event reports from real-world practice with the Lariat. For the systematic review, PubMed, EMBASE, CINAHL, and the Cochrane Library were searched from January 2007 through August 2014 to identify all studies reporting use of the Lariat device in 3 or more patients. The FDA MAUDE database was queried for adverse event reports related to Lariat use. Data were abstracted in duplicate by 2 physician reviewers. Events from published literature were pooled using a generic inverse variance weighting with a random effects model. Cumulative and individual adverse events were also reported using the FDA MAUDE data set. Procedural adverse events and procedural success. In the systematic review, 5 reports of Lariat device use in 309 participants were identified. Specific complications weighted for inverse of variance of individual studies were urgent need for cardiac surgery (2.3%; 7 of 309 procedures) and death (0.3%; 1 of 309 procedures). Procedural success was 90.3% (279 of 309 procedures). In the FDA MAUDE database, there were 35 unique reports of adverse events with use of the Lariat device. Among these, we identified 5 adverse event reports that noted pericardial effusion and death and an additional 23 reported urgent cardiac surgery without mention of death.
This review of published reports and case reports identified risks of adverse events with off-label use of the Lariat device for LAA exclusion. Formal, controlled investigations into the safety and efficacy of the device for this indication are warranted.
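
    The pooling method the abstract names — generic inverse-variance weighting with a random-effects model — is commonly implemented via the DerSimonian-Laird estimator of between-study variance. A minimal sketch follows; the per-study rates and sample sizes are invented illustration data, not the Lariat studies:

```python
# Generic inverse-variance pooling with a DerSimonian-Laird random-effects model.
def pool_random_effects(estimates, variances):
    k = len(estimates)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    # DerSimonian-Laird between-study variance, floored at zero.
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    return sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)

# Illustrative complication rates (events / n) from three hypothetical studies.
ns = [100, 80, 129]
rates = [2 / 100, 3 / 80, 1 / 129]
variances = [r * (1 - r) / n for r, n in zip(rates, ns)]   # binomial variance
pooled = pool_random_effects(rates, variances)
print(f"pooled rate: {pooled:.3f}")
```

    When between-study heterogeneity (tau-squared) is zero, this reduces to the fixed-effect inverse-variance average; larger heterogeneity pulls the weights toward equality across studies.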

  15. Effects of urban microcellular environments on ray-tracing-based coverage predictions.

    PubMed

    Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing

    2016-09-01

    The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach to studying wave-propagation characteristics. In urban microcellular environments, the RT method depends heavily on detailed environmental information. The aim of this paper is to help select the appropriate level of accuracy required in building databases to achieve a good tradeoff between database costs and prediction accuracy. After reviewing the operating procedure of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts, namely, geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometric information (building layout) on RT-based coverage prediction, two different artificially erroneous maps are generated from the original digital map, and a systematic analysis is performed by comparing predictions based on the erroneous maps against measurements and against predictions based on the original digital map. To make the conclusions more persuasive, the influence of random errors on RMS delay spread results is investigated. Furthermore, given the effect of the electrical parameters on the accuracy of the RT model's predictions, the dielectric constant and conductivity of building materials are set to different values. The path loss and RMS delay spread under the same circumstances are simulated by the RT prediction model.

  16. Implications of electronic health record downtime: an analysis of patient safety event reports.

    PubMed

    Larsen, Ethan; Fong, Allan; Wernz, Christian; Ratwani, Raj M

    2018-02-01

    We sought to understand the types of clinical processes, such as image and medication ordering, that are disrupted during electronic health record (EHR) downtime periods by analyzing the narratives of patient safety event report data. From a database of 80 381 event reports, 76 reports were identified as explicitly describing a safety event associated with an EHR downtime period. These reports were analyzed and categorized based on a developed code book to identify the clinical processes that were impacted by downtime. We also examined whether downtime procedures were in place and followed. The reports were coded into categories related to their reported clinical process: Laboratory, Medication, Imaging, Registration, Patient Handoff, Documentation, History Viewing, Delay of Procedure, and General. A majority of reports (48.7%, n = 37) were associated with lab orders and results, followed by medication ordering and administration (14.5%, n = 11). Incidents commonly involved patient identification and communication of clinical information. A majority of reports (46%, n = 35) indicated that downtime procedures either were not followed or were not in place. Only 27.6% of incidents (n = 21) indicated that downtime procedures were successfully executed. Patient safety report data offer a lens into EHR downtime-related safety hazards. Important areas of risk during EHR downtime periods were patient identification and communication of clinical information; these should be a focus of downtime procedure planning to reduce safety hazards. EHR downtime events pose patient safety hazards, and we highlight critical areas for downtime procedure improvement. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
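
    The categorization step the abstract describes — assigning report narratives to clinical-process categories via a code book — can be sketched as keyword matching. The category names come from the abstract; the keywords themselves are invented for illustration and are not the study's code book:

```python
# Illustrative code book: category -> narrative keywords (invented examples).
CODE_BOOK = {
    "Laboratory": ["lab", "specimen", "result"],
    "Medication": ["medication", "dose", "pharmacy"],
    "Imaging": ["x-ray", "ct", "radiology"],
}

def categorize(narrative):
    # Assign every category whose keywords appear in the narrative;
    # fall back to "General" when nothing matches.
    text = narrative.lower()
    hits = [cat for cat, words in CODE_BOOK.items()
            if any(word in text for word in words)]
    return hits or ["General"]

print(categorize("Lab results unavailable during downtime"))  # ['Laboratory']
print(categorize("Patient transported to floor"))             # ['General']
```

    In the study itself, coding was done by human reviewers against a developed code book; an automated first pass like this is only a triage aid, not a replacement for that review.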

  17. Quality-assurance plan for groundwater activities, U.S. Geological Survey, Washington Water Science Center

    USGS Publications Warehouse

    Kozar, Mark D.; Kahle, Sue C.

    2013-01-01

    This report documents the standard procedures, policies, and field methods used by the U.S. Geological Survey’s (USGS) Washington Water Science Center staff for activities related to the collection, processing, analysis, storage, and publication of groundwater data. This groundwater quality-assurance plan changes through time to accommodate new methods and requirements developed by the Washington Water Science Center and the USGS Office of Groundwater. The plan is based largely on requirements and guidelines provided by the USGS Office of Groundwater, or the USGS Water Mission Area. Regular updates to this plan represent an integral part of the quality-assurance process. Because numerous policy memoranda have been issued by the Office of Groundwater since the previous groundwater quality assurance plan was written, this report is a substantial revision of the previous report, supplants it, and contains significant additional policies not covered in the previous report. This updated plan includes information related to the organization and responsibilities of USGS Washington Water Science Center staff, training, safety, project proposal development, project review procedures, data collection activities, data processing activities, report review procedures, and archiving of field data and interpretative information pertaining to groundwater flow models, borehole aquifer tests, and aquifer tests. 
Important updates from the previous groundwater quality assurance plan include: (1) procedures for documenting and archiving of groundwater flow models; (2) revisions to procedures and policies for the creation of sites in the Groundwater Site Inventory database; (3) adoption of new water-level forms to be used within the USGS Washington Water Science Center; (4) procedures for future creation of borehole geophysics, surface geophysics, and aquifer-test archives; and (5) use of the USGS Multi Optional Network Key Entry System software for entry of routine water-level data collected as part of long-term water-level monitoring networks.

  18. 48 CFR 22.1304 - Procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...-100A Report, the contracting officer may— (a) Query the Department of Labor's VETS-100 Database via the... proposed contractor represents that it has submitted the VETS-100 Report and is not listed in the database...

  19. 48 CFR 22.1304 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-100A Report, the contracting officer may— (a) Query the Department of Labor's VETS-100 Database via the... proposed contractor represents that it has submitted the VETS-100 Report and is not listed in the database...

  20. 48 CFR 22.1304 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-100A Report, the contracting officer may— (a) Query the Department of Labor's VETS-100 Database via the... proposed contractor represents that it has submitted the VETS-100 Report and is not listed in the database...

  1. A Public-Use, Full-Screen Interface for SPIRES Databases.

    ERIC Educational Resources Information Center

    Kriz, Harry M.

    This paper describes the techniques for implementing a full-screen, custom SPIRES interface for a public-use library database. The database-independent protocol that controls the system is described in detail. Source code for an entire working application using this interface is included. The protocol, with less than 170 lines of procedural code,…

  2. Three Library and Information Science Databases Revisited: Currency, Coverage and Overlap, Interindexing Consistency.

    ERIC Educational Resources Information Center

    Blackwell, Michael Lind

    This study evaluates the "Education Resources Information Center" (ERIC), "Library and Information Science Abstracts" (LISA), and "Library Literature" (LL) databases, determining how long the databases take to enter records (indexing delay), how much duplication of effort exists among the three databases (indexing…

  3. Graphical tool for navigation within the semantic network of the UMLS metathesaurus on a locally installed database.

    PubMed

    Frankewitsch, T; Prokosch, H U

    2000-01-01

    In information technology environments, knowledge is bound to structured vocabularies. Medical data dictionaries are necessary for uniquely describing findings such as diagnoses, procedures, or functions. We therefore decided to locally install a version of the Unified Medical Language System (UMLS) of the U.S. National Library of Medicine as a repository for defining entries of a medical multimedia database. Because the vocabulary must be extended with new concepts and with relations between existing concepts, a graphical tool for appending new items to the database has been developed. Although the database is an instance of a semantic network, focusing on a single entry makes it possible to reduce the net to a tree within that detail view. In graph-theoretical terms, nodes of concepts and nodes of knowledge are defined. The UMLS additionally offers the specification of sub-relations, which can be represented as well. Using this view, it is possible to manage these 1:n relations in a simple tree view. On this basis, an explorer-like graphical user interface has been realised to add new concepts and define new relationships between them and existing entries, adapting the UMLS for specific purposes such as describing medical multimedia objects.
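
    The core idea — reducing a semantic network to a tree once a single concept is in focus — amounts to extracting a spanning tree from the concept in question. A minimal sketch, using an invented toy network rather than actual UMLS concepts:

```python
from collections import deque

RELATIONS = {  # directed edges of a tiny toy semantic network
    "Disease": ["Diagnosis", "Finding"],
    "Diagnosis": ["Procedure"],
    "Finding": ["Procedure"],   # shared child: a DAG, not a tree
    "Procedure": [],
}

def as_tree(focus):
    # Breadth-first walk from the focus concept; each concept is attached to
    # the first parent that reaches it, which yields a spanning tree suitable
    # for an explorer-like tree view.
    tree, seen, queue = {}, {focus}, deque([focus])
    while queue:
        node = queue.popleft()
        tree[node] = []
        for child in RELATIONS.get(node, []):
            if child not in seen:
                seen.add(child)
                tree[node].append(child)
                queue.append(child)
    return tree

print(as_tree("Disease"))
```

    The reduction is lossy by design: a concept reachable through several relations is shown under only one parent, which is exactly what makes the 1:n tree view manageable in a GUI.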

  4. An Integrated Molecular Database on Indian Insects.

    PubMed

    Pratheepa, Maria; Venkatesan, Thiruvengadam; Gracy, Gandhi; Jalali, Sushil Kumar; Rangheswaran, Rajagopal; Antony, Jomin Cruz; Rai, Anil

    2018-01-01

    MOlecular Database on Indian Insects (MODII) is an online database linking several databases: Insect Pest Info, Insect Barcode Information System (IBIn), Insect Whole Genome sequence, Other Genomic Resources of the National Bureau of Agricultural Insect Resources (NBAIR), Whole Genome sequencing of Honey bee viruses, Insecticide resistance gene database, and Genomic tools. The database was developed with a holistic approach to collecting phenomic and genomic information on agriculturally important insects. This insect resource database is freely available online at http://cib.res.in/.

  5. Seismic Indexing System for Army Installations. Volume II. Seismic Hazard Priority-Ranking Procedure for Army Buildings: Basic Concept.

    DTIC Science & Technology

    1981-05-01

    factors that cause damage are discussed below. a. Architectural elements. Damage to architectural elements can result in both significant dollar losses...hazard priority-ranking procedure are: 1. To produce meaningful results which are as simple as possible, considering the existing databases. 2. To...minimize the amount of data required for meaningful results, i.e., the database should contain only the most fundamental building characteristics. 3. To

  6. NetIntel: A Database for Manipulation of Rich Social Network Data

    DTIC Science & Technology

    2005-03-03

    between entities in a social or organizational system. For most of its history, social network analysis has operated on a notion of a dataset - a clearly...and procedural), as well as stored procedure and trigger capabilities. For the current implementation, we have chosen the PostgreSQL [1] database. Of the...data and easy-to-use facilities for export of data into analysis tools as well as online browsing and data entry. References [1] Postgresql

  7. Development and evaluation of a web-based software for crash data collection, processing and analysis.

    PubMed

    Montella, Alfonso; Chiaradonna, Salvatore; Criscuolo, Giorgio; De Martino, Salvatore

    2017-02-05

    The first step in developing an effective safety management system is to create reliable crash databases, since the quality of decision making in road safety depends on the quality of the data on which decisions are based. Improving crash data is a worldwide priority, as highlighted in the Global Plan for the Decade of Action for Road Safety adopted by the United Nations, which recognizes that the overall goal of the plan will be attained by improving the quality of data collection at the national, regional, and global levels. Crash databases provide the basic information for effective highway safety efforts at any level of government, but a lack of uniformity among countries and among the different jurisdictions in the same country is observed. Several existing databases show significant drawbacks which hinder their effective use for safety analysis and improvement. Furthermore, modern technologies offer great potential for significant improvements of existing methods and procedures for crash data collection, processing, and analysis. To address these issues, in this paper we present the development and evaluation of a web-based, platform-independent software for crash data collection, processing, and analysis. The software is designed for mobile and desktop electronic devices and enables a guided and automated drafting of the crash report, assisting police officers both on-site and in the office. The software development was based on a detailed critical review of existing Australasian, EU, and U.S. crash databases and software, as well as on continuous consultation with the stakeholders. The evaluation was carried out by comparing the completeness, timeliness, and accuracy of crash data before and after the use of the software in the city of Vico Equense, in the south of Italy, showing significant advantages. The amount of collected information increased from 82 variables to 268 variables, i.e., a 227% increase.
The time saving was more than one hour per crash, i.e., a 36% reduction. The on-site data collection did not produce a time saving; however, this is a temporary weakness that is expected to disappear as officers become more acquainted with the software. The phase of evaluation, processing, and analysis carried out in the office was dramatically shortened, i.e., a 69% reduction. Another benefit was the standardization, which allowed fast and consistent data analysis and evaluation. Even if all these benefits are remarkable, the most valuable benefit of the new procedure was the reduction of police officers' mistakes during the manual operations of survey and data evaluation. Because of these benefits, the satisfaction questionnaires administered to the police officers after the testing phase showed very good acceptance of the procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  9. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  10. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  11. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  12. 47 CFR 69.120 - Line information database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...

  13. Building a QC Database of Meteorological Data From NASA KSC and the United States Air Force's Eastern Range

    NASA Technical Reports Server (NTRS)

    Brenton, James C.; Barbre, Robert E.; Orcutt, John M.; Decker, Ryan K.

    2018-01-01

    The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analyses in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER is one of the most heavily instrumented sites in the United States, measuring various atmospheric parameters on a continuous basis. An inherent challenge with the large databases that EV44 receives from the ER is ensuring that erroneous data are removed, and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures for all databases currently exist, resulting in QC'd databases with inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use these previous efforts by EV44 to develop a standardized set of QC procedures from which to build flags within the meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt, and grow the QC database. Details of the QC checks are described. The flagged data points will be plotted in a graphical user interface (GUI) as part of a manual confirmation that the flagged data do indeed need to be removed from the archive. As the launch rate increases with additional launch vehicle programs, more emphasis is being placed on continually updating and checking weather databases for data quality before use in launch vehicle design and certification analyses.
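
    The flag-then-confirm workflow described above can be sketched as a simple range check that marks suspect records rather than deleting them, leaving removal to manual confirmation in a GUI. The variable name and limits are invented for illustration, not EV44's actual QC criteria:

```python
# Illustrative QC gross-limit check: flag values outside a plausible range.
def flag_range(records, field, lo, hi):
    # Return the indices of suspect records; nothing is deleted here,
    # so a human can review the flags before any data are removed.
    return [i for i, rec in enumerate(records)
            if not (lo <= rec[field] <= hi)]

obs = [{"wind_speed_ms": 4.2}, {"wind_speed_ms": -1.0}, {"wind_speed_ms": 75.0}]
flags = flag_range(obs, "wind_speed_ms", 0.0, 60.0)
print(f"flagged indices: {flags}")
```

    Standardizing checks like this across all instruments is what removes the inconsistencies in variables, methodologies, and periods of record that the abstract describes.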

  14. The Protein Information Resource: an integrated public resource of functional annotation of proteins

    PubMed Central

    Wu, Cathy H.; Huang, Hongzhan; Arminski, Leslie; Castro-Alvear, Jorge; Chen, Yongxing; Hu, Zhang-Zhi; Ledley, Robert S.; Lewis, Kali C.; Mewes, Hans-Werner; Orcutt, Bruce C.; Suzek, Baris E.; Tsugita, Akira; Vinayaka, C. R.; Yeh, Lai-Su L.; Zhang, Jian; Barker, Winona C.

    2002-01-01

    The Protein Information Resource (PIR) serves as an integrated public resource of functional annotation of protein data to support genomic/proteomic research and scientific discovery. The PIR, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the PIR-International Protein Sequence Database (PSD), the major annotated protein sequence database in the public domain, containing about 250 000 proteins. To improve protein annotation and the coverage of experimentally validated data, a bibliography submission system is developed for scientists to submit, categorize and retrieve literature information. Comprehensive protein information is available from iProClass, which includes family classification at the superfamily, domain and motif levels, structural and functional features of proteins, as well as cross-references to over 40 biological databases. To provide timely and comprehensive protein data with source attribution, we have introduced a non-redundant reference protein database, PIR-NREF. The database consists of about 800 000 proteins collected from PIR-PSD, SWISS-PROT, TrEMBL, GenPept, RefSeq and PDB, with composite protein names and literature data. To promote database interoperability, we provide XML data distribution and open database schema, and adopt common ontologies. The PIR web site (http://pir.georgetown.edu/) features data mining and sequence analysis tools for information retrieval and functional identification of proteins based on both sequence and annotation information. The PIR databases and other files are also available by FTP (ftp://nbrfa.georgetown.edu/pir_databases). PMID:11752247

  15. The ChArMEx database

    NASA Astrophysics Data System (ADS)

    Ferré, Hélène; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Cloché, Sophie; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mière, Arnaud; Ramage, Karim; Vermeulen, Anne; Boulanger, Damien

    2015-04-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data, and modelling studies. ChArMEx scientists therefore produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, the distribution system, and services, such as facilitating the exchange of information and stimulating collaboration between researchers within the ChArMEx community and beyond. The database relies on a strong collaboration between the ICARE, IPSL, and OMP data centers and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. ChArMEx data, either produced or used by the project, are documented and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The website offers the usual but user-friendly functionalities: data catalog, user registration procedure, search tool to select and access data... The metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; the INSPIRE European Directive; the Global Change Master Directory Thesaurus). A Digital Object Identifier (DOI) assignment procedure automatically registers the datasets, in order to make them easier to access, cite, reuse, and verify. At present, the ChArMEx database contains about 120 datasets, including more than 80 in situ datasets (2012, 2013 and 2014 summer campaigns, the background monitoring station of Ersa...), 25 model output sets (dust model intercomparison, MEDCORDEX scenarios...), a high resolution emission inventory over the Mediterranean...
Many in situ datasets have been inserted into a relational database, in order to enable more accurate selection and download of different datasets in a shared format. Many dedicated satellite products (SEVIRI, TRMM, PARASOL...) are processed and will soon be accessible through the database website. In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx campaigns, a day-to-day chart display website has been developed and operated: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods. Every scientist is invited to visit the ChArMEx websites, to register, and to request data. Feel free to contact charmex-database@sedoo.fr with any questions.

  16. A Web Geographic Information System to share data and explorative analysis tools: The application to West Nile disease in the Mediterranean basin.

    PubMed

    Savini, Lara; Tora, Susanna; Di Lorenzo, Alessio; Cioci, Daniela; Monaco, Federica; Polci, Andrea; Orsini, Massimiliano; Calistri, Paolo; Conte, Annamaria

    2018-01-01

    In the last decades, an increasing number of West Nile disease cases has been observed in equines and humans in the Mediterranean basin, and surveillance systems have been set up in numerous countries to manage and control the disease. The collection, storage, and distribution of information on the spread of the disease become important for a shared intervention and control strategy. To this end, a Web Geographic Information System has been developed, making available disease data, climatic and environmental remotely sensed data, and full genome sequences of selected isolated strains. This paper describes the Disease Monitoring Dashboard (DMD) web system application, the tools available for preliminary analysis of climatic and environmental factors, and the other interactive tools for epidemiological analysis. WNV occurrence data are collected from multiple official and unofficial sources. Whole genome sequences and metadata of WNV strains are retrieved from public databases or generated in the framework of the Italian surveillance activities. Climatic and environmental data are provided by the NASA website. The Geographical Information System is composed of an Oracle 10g database and ESRI ArcGIS Server 10.03; the web mapping client application is developed with the ArcGIS API for JavaScript and the Phylocanvas library to facilitate and optimize the mash-up approach. ESRI ArcSDE 10.1 has been used to store spatial data. The DMD application is accessible through a generic web browser at https://netmed.izs.it/networkMediterraneo/. The system collects data through on-line forms and automated procedures and visualizes data as interactive graphs, maps, and tables. The spatial and temporal dynamic visualization of disease events is managed by a time slider that returns results on both the map and the epidemiological curve. Climatic and environmental data can be associated with cases through Python procedures and downloaded as Excel files.
The system compiles multiple datasets through user-friendly web tools; it integrates entomological, veterinary and human surveillance, molecular information on pathogens and environmental and climatic data. The principal result of the DMD development is the transfer and dissemination of knowledge and technologies to develop strategies for integrated prevention and control measures of animal and human diseases.

  17. [Epidemiological data for uterine fibroids in France in 2010-2012 in medical center--analysis from the French DRG-based information system (PMSI)].

    PubMed

    Fernandez, H; Chabbert-Buffet, N; Koskas, M; Nazac, A

    2014-10-01

    Uterine fibroids are a common disorder, responsible for menorrhagia/metrorrhagia and pelvic pain, and remain the leading reason for hysterectomy in France. Although the disorder is common, French epidemiological data are lacking. The objective of this study was to perform an epidemiological analysis using the medicalized information system program (PMSI). The diagnosis codes were selected from the 10th revision of the International Classification of Diseases. The medical procedures concerning uterine fibroids were selected (so-called listed procedures). A descriptive analysis was performed on hospital stays, patients' characteristics, and medical procedures (mean, standard deviation, median, range, quartiles). In 2012, 46,126 patients (median age: 46 years) were admitted to hospital (public or private) due to uterine fibroids, corresponding to 47,690 hospital stays (hospital stays for surgery: 32,397). A diagnosis of anemia was reported in approximately 8% of patients, and 7.1% of patients hospitalized in 2012 had already been hospitalized between 2004 and 2012. The median length of hospital stay was 4 days. In 2012, 16,070 hospital stays were reported for total or subtotal hysterectomy, 16,384 hospital stays for myomectomy, and 1376 hospital stays for embolization. In terms of management, among the 46,126 patients with uterine fibroids (principal or related diagnosis), 31,846 patients underwent a procedure listed in a surgical diagnosis-related group (DRG). In conclusion, the study updates the epidemiological data concerning uterine fibroid management in France for 2010-2012. Because the PMSI collects only partial epidemiological information, a dedicated epidemiological study is needed, either using health insurance databases or a purpose-built study. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  18. The integrated web service and genome database for agricultural plants with biotechnology information.

    PubMed

    Kim, Changkug; Park, Dongsuk; Seol, Youngjoo; Hahn, Jangho

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage.

  19. Institutional Review Board approval and innovation in urology: current practice and safety issues.

    PubMed

    Sundaram, Varun; Vemana, Goutham; Bhayani, Sam B

    2014-02-01

    To retrospectively review recent publications describing novel procedures/techniques, and describe the Institutional Review Board (IRB)/ethics approval process and potential ethical dilemmas in their reporting. We searched PubMed for papers about innovative or novel procedures/techniques between 2011 and August 2012. A query of titles/abstracts in the Journal of Urology, Journal of Endourology, European Urology, BJU International, and Urology identified relevant papers. These results were reviewed for human studies that described an innovative technique, procedure, approach, initial series, and/or used new technology. In all, 91 papers met criteria for inclusion; 25 from the Journal of Endourology, 14 from the Journal of Urology, nine from European Urology, 15 from the BJU International and 28 from Urology. IRB/ethics approval was given for an experimental procedure or database in 24% and 22%, respectively. IRB/ethics approval was not mentioned in 52.7% of studies. Published IRB/ethics approvals for innovative techniques are heterogeneous including database, retrospective, and prospective approvals. Given the concept that innovations are likely not in the legal or ethical standard of care, strong consideration should be given to obtaining IRB/ethics approval before the actual procedure, instead of approval to merely report database outcomes. © 2013 The Authors. BJU International © 2013 BJU International.

  20. 77 FR 21808 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-11

    ... and open source records and commercial database. EXEMPTIONS CLAIMED FOR THE SYSTEM: The Attorney... notification procedures, the record access procedures, the contesting record procedures, the record source..., confidential sources, and victims of crimes. The offenses and alleged offenses associated with the individuals...

  1. Engineering-Geological Data Model - The First Step to Build National Polish Standard for Multilevel Information Management

    NASA Astrophysics Data System (ADS)

    Ryżyński, Grzegorz; Nałęcz, Tomasz

    2016-10-01

    Efficient geological data management in Poland is necessary to support multilevel decision processes for government and local authorities in the areas of spatial planning, mineral resources, groundwater supply and the rational use of the subsurface. The vast amount of geological information gathered in the digital archives and databases of the Polish Geological Survey (PGS) is a basic resource for multi-scale national subsurface management. Data integration is the key factor enabling the development of GIS and web tools for decision makers; however, the main barrier to efficient geological information management is the heterogeneity of data in the resources of the Polish Geological Survey. The engineering-geological database is the first PGS thematic domain addressed in the overall data integration plan, and the solutions developed within this area will facilitate the creation of procedures and standards for multilevel data management in PGS. Twenty years of experience in delivering digital engineering-geological mapping at 1:10,000 scale, together with the acquisition and digitisation of archival geotechnical reports, allowed the compilation of a database of more than 300,000 engineering-geological boreholes, as well as a set of 10 thematic spatial layers (including a foundation conditions map, depth to the first groundwater level, bedrock level, and geohazards). Historically, the data were stored in desktop systems, resulting in multiple uncorrelated datasets. The need for a domain data model emerged, and an object-oriented (UML) modelling scheme was developed. The aim of this development was to merge all datasets into one centralised Oracle server and to prepare a unified spatial data structure for efficient web presentation and application development. The presented approach is a milestone toward the creation of a Polish national standard for engineering-geological information management.
The paper presents the approach and methodology of data unification, the harmonisation of thematic vocabularies, the assumptions and results of data modelling, and the process of integrating the domain model with the enterprise architecture implemented in PGS. Currently, there is no geological data standard in Poland. The lack of guidelines for borehole and spatial data management results in increasing data dispersion and a growing barrier to multilevel data management and the implementation of efficient decision-support tools. Building a national geological data standard would make geotechnical information accessible to institutions, universities, administration and research organisations and gather their data in the same unified digital form according to the presented data model. Such an approach is compliant with current digital trends and the idea of a Spatial Data Infrastructure. Efficient geological data management is essential to support sustainable development and economic growth, as it allows geological information to serve the idea of Smart Cities, deliver information for Building Information Management (BIM) and support modern spatial planning. The engineering-geological domain data model presented in the paper is a scalable solution, and future implementation of the developed procedures in other domains of PGS geological data is possible.
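    The kind of unified domain entity such a model centralises can be sketched as follows. This is a hypothetical illustration only: the field names and classes below are assumptions for the sake of example, not the actual PGS schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Borehole:
    """Hypothetical unified borehole record, illustrating the idea of merging
    previously uncorrelated desktop datasets into one domain model."""
    borehole_id: str
    x: float                              # easting in a national grid (assumed)
    y: float                              # northing in a national grid (assumed)
    depth_m: float                        # total borehole depth in metres
    first_groundwater_m: Optional[float]  # depth to first groundwater level, if reached
    foundation_class: str                 # value from a foundation-conditions thematic layer

b = Borehole("PGS-0001", 637500.0, 487200.0, 25.0, 4.2, "favourable")
assert b.depth_m == 25.0 and b.first_groundwater_m == 4.2
```

A single such structure, served from one central database, is what allows the thematic layers (groundwater depth, foundation conditions, geohazards) to be derived consistently rather than maintained as separate files.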

  2. 21 CFR 830.350 - Correction of information submitted to the Global Unique Device Identification Database.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Unique Device Identification Database. 830.350 Section 830.350 Food and Drugs FOOD AND DRUG... Global Unique Device Identification Database § 830.350 Correction of information submitted to the Global Unique Device Identification Database. (a) If FDA becomes aware that any information submitted to the...

  3. Design and Establishment of Quality Model of Fundamental Geographic Information Database

    NASA Astrophysics Data System (ADS)

    Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.

    2018-04-01

    In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. The quality elements, evaluation items and properties of the FGIDB are designed step by step on the basis of this quality model framework. Organically connected, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for specifying quality requirements and for evaluating the quality of the FGIDB, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for FGIDB quality evaluation technology.

  4. 12 CFR 1204.7 - Are there any exemptions from the Privacy Act?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... & Evaluative Files Database,” “FHFA-OIG Investigative & Evaluative MIS Database,” “FHFA-OIG Hotline Database... investigation or evaluation. (ii) From 5 U.S.C. 552a(d)(1), because release of investigative or evaluative... or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...

  5. 12 CFR 1204.7 - Are there any exemptions from the Privacy Act?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... & Evaluative Files Database,” “FHFA-OIG Investigative & Evaluative MIS Database,” “FHFA-OIG Hotline Database... investigation or evaluation. (ii) From 5 U.S.C. 552a(d)(1), because release of investigative or evaluative... or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...

  6. 12 CFR 1204.7 - Are there any exemptions from the Privacy Act?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... & Evaluative Files Database,” “FHFA-OIG Investigative & Evaluative MIS Database,” “FHFA-OIG Hotline Database... investigation or evaluation. (ii) From 5 U.S.C. 552a(d)(1), because release of investigative or evaluative... or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...

  7. Evaluation of consumer drug information databases.

    PubMed

    Choi, J A; Sullivan, J; Pankaskie, M; Brufsky, J

    1999-01-01

    To evaluate prescription drug information contained in six consumer drug information databases available on CD-ROM, and to make health care professionals aware of the information provided, so that they may appropriately recommend these databases for use by their patients. Observational study of six consumer drug information databases: The Corner Drug Store, Home Medical Advisor, Mayo Clinic Family Pharmacist, Medical Drug Reference, Mosby's Medical Encyclopedia, and PharmAssist. Information on 20 frequently prescribed drugs was evaluated in each database. The databases were ranked using a point-scale system based on primary and secondary assessment criteria. For the primary assessment, 20 categories of information based on those included in the 1998 edition of the USP DI Volume II, Advice for the Patient: Drug Information in Lay Language were evaluated for each of the 20 drugs, and each database could earn up to 400 points (for example, 1 point was awarded if the database mentioned a drug's mechanism of action). For the secondary assessment, the inclusion of 8 additional features that could enhance the utility of the databases was evaluated (for example, 1 point was awarded if the database contained a picture of the drug), and each database could earn up to 8 points. The results of the primary and secondary assessments, listed from highest to lowest number of points earned, are as follows: Primary assessment--Mayo Clinic Family Pharmacist (379), Medical Drug Reference (251), PharmAssist (176), Home Medical Advisor (113.5), The Corner Drug Store (98), and Mosby's Medical Encyclopedia (18.5); secondary assessment--The Mayo Clinic Family Pharmacist (8), The Corner Drug Store (5), Mosby's Medical Encyclopedia (5), Home Medical Advisor (4), Medical Drug Reference (4), and PharmAssist (3). 
The Mayo Clinic Family Pharmacist was the most accurate and complete source of prescription drug information based on the USP DI Volume II and would be an appropriate database for health care professionals to recommend to patients.
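    The two-tier point-scale ranking described above can be sketched as a simple tally. The database names are from the study, but the scoring inputs below are hypothetical, invented only to show how the 400-point and 8-point ceilings arise.

```python
def score_database(primary_hits, secondary_hits):
    """primary_hits: one set of covered USP DI information categories per drug
    (20 drugs x 20 categories = up to 400 points).
    secondary_hits: set of extra features present (up to 8 points)."""
    primary = sum(len(categories) for categories in primary_hits)
    secondary = len(secondary_hits)
    return primary, secondary

# Hypothetical database covering 15 of 20 categories for each of 20 drugs,
# with 5 of the 8 secondary features (feature names are illustrative).
primary, secondary = score_database(
    [set(range(15))] * 20,
    {"picture", "pronunciation", "search", "index", "glossary"},
)
assert (primary, secondary) == (300, 5)
```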

  8. The integrated web service and genome database for agricultural plants with biotechnology information

    PubMed Central

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015

  9. Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model

    DTIC Science & Technology

    2014-04-01

    time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest

  10. Freight Transportation Energy Use : Volume 3. Freight Network and Operations Database.

    DOT National Transportation Integrated Search

    1979-07-01

    The data sources, procedures, and assumptions used to generate the TSC national freight network and operations database are documented. National rail, highway, waterway, and pipeline networks are presented, and estimates of facility capacity, travel ...

  11. 48 CFR 22.1304 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-100A Report, the contracting officer may— (a) Query the Department of Labor's VETS-100 Database via the... the database. [66 FR 53488, Oct. 22, 2001, as amended at 71 FR 67779, Nov. 22, 2006; 75 FR 60251, Sept...

  12. 48 CFR 22.1304 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...-100A Report, the contracting officer may— (a) Query the Department of Labor's VETS-100 Database via the... the database. [66 FR 53488, Oct. 22, 2001, as amended at 71 FR 67779, Nov. 22, 2006; 75 FR 60251, Sept...

  13. Elsevier’s approach to the bioCADDIE 2016 Dataset Retrieval Challenge

    PubMed Central

    Scerri, Antony; Kuriakose, John; Deshmane, Amit Ajit; Stanger, Mark; Moore, Rebekah; Naik, Raj; de Waard, Anita

    2017-01-01

    Abstract We developed a two-stream, Apache Solr-based information retrieval system in response to the bioCADDIE 2016 Dataset Retrieval Challenge. One stream was based on the principle of word embeddings, the other was rooted in ontology based indexing. Despite encountering several issues in the data, the evaluation procedure and the technologies used, the system performed quite well. We provide some pointers towards future work: in particular, we suggest that more work in query expansion could benefit future biomedical search engines. Database URL: https://data.mendeley.com/datasets/zd9dxpyybg/1 PMID:29220454

  14. Ontology-oriented retrieval of putative microRNAs in Vitis vinifera via GrapeMiRNA: a web database of de novo predicted grape microRNAs.

    PubMed

    Lazzari, Barbara; Caprera, Andrea; Cestaro, Alessandro; Merelli, Ivan; Del Corvo, Marcello; Fontana, Paolo; Milanesi, Luciano; Velasco, Riccardo; Stella, Alessandra

    2009-06-29

    Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNA dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database, and a web interface was prepared to query the database and retrieve predictions of interest. The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added in the web interface also allow inspection of predictions according to gene ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.

  15. [Survey of Literature on the Development of an Evidence Database for Hospital-prepared Drugs in Japan].

    PubMed

    Momo, Kenji

    2018-01-01

    Hospital-prepared drugs (HP), known as In'Naiseizai in Japan, are custom-prepared formulations which offer medical professionals an alternative administration pathway by changing the formulation of existing drugs according to a patient's needs. Preparing HP is one of several roles of pharmacists in providing personalized medicine at hospitals in Japan. In 2012, the Japanese Society of Hospital Pharmacists provided guidelines for the appropriate use of hospital-prepared drugs. The following information was included in this guide: 1) documentation of the proper procedures, materials, prescription practices, etc., 2) required approval from the institutional review board for each HP according to risk-based classifications, and 3) assessment of the stability, efficacy, and safety of each HP. However, several problems persist for pharmacists trying to prepare or use HP appropriately; the most common is insufficient manpower to both assess and prepare these drugs during routine hospital work. To resolve this problem, we are developing an evidence database for HP based on surveys of the current literature. This database has been developed for 109 drugs to date. Data-driven assessment showed that stability had been evaluated for 52 of the 109 drugs examined (47.7%). Notably, only 6 of the 109 HP (5.5%) in the database had all three characteristics of "stability", "safety", and "efficacy". In conclusion, the application of this database will save manpower hours for hospital pharmacists in the preparation of HP. In the near future, we will make this database available to the wider medical community via the web or through the literature.

  16. A database of lotic invertebrate traits for North America

    USGS Publications Warehouse

    Vieira, Nicole K.M.; Poff, N. LeRoy; Carlisle, Daren M.; Moulton, Stephen R.; Koski, Marci L.; Kondratieff, Boris C.

    2006-01-01

    The assessment and study of stream communities may be enhanced if functional characteristics such as life-history, habitat preference, and reproductive strategy were more widely available for specific taxa. Species traits can be used to develop these functional indicators because many traits directly link functional roles of organisms with controlling environmental factors (for example, flow, substratum, temperature). In addition, some functional traits may not be constrained by taxonomy and are thus applicable at multiple spatial scales. Unfortunately, a comprehensive summary of traits for North American invertebrate taxa does not exist. Consequently, the U.S. Geological Survey's National Water-Quality Assessment Program in cooperation with Colorado State University compiled a database of traits for North American invertebrates. A total of 14,127 records for over 2,200 species, 1,165 genera, and 249 families have been entered into the database from 967 publications, texts and reports. Quality-assurance procedures indicated error rates of less than 3 percent in the data entry process. Species trait information was most complete for insect taxa. Traits describing resource acquisition and habitat preferences were most frequently reported, whereas those describing physiological tolerances and reproductive biology were the least frequently reported in the literature. The database is not exhaustive of the literature for North American invertebrates and is biased towards aquatic insects, but it represents a first attempt to compile traits in a web-accessible database. This report describes the database and discusses important decisions necessary for identifying ecologically relevant, environmentally sensitive, non-redundant, and statistically tractable traits for use in bioassessment programs.

  17. Design of a Multi Dimensional Database for the Archimed DataWarehouse.

    PubMed

    Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine

    2005-01-01

    The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases and data mining processes. This paper discusses three principal design aspects relative to the conception of the database of the data warehouse: 1) the granularity of the database, which refers to the level of detail or summarization of data, 2) the database model and architecture, describing how data will be presented to end users and how new data is integrated, 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.

  18. Validity of diagnoses, procedures, and laboratory data in Japanese administrative data.

    PubMed

    Yamana, Hayato; Moriwaki, Mutsuko; Horiguchi, Hiromasa; Kodan, Mariko; Fushimi, Kiyohide; Yasunaga, Hideo

    2017-10-01

    Validation of recorded data is a prerequisite for studies that utilize administrative databases. The present study evaluated the validity of diagnosis and procedure records in the Japanese Diagnosis Procedure Combination (DPC) data, along with laboratory test results in the newly introduced Standardized Structured Medical Record Information Exchange (SS-MIX) data. Between November 2015 and February 2016, we conducted chart reviews of 315 patients hospitalized between April 2014 and March 2015 in four middle-sized acute-care hospitals in Shizuoka, Kochi, Fukuoka, and Saga Prefectures and used them as reference standards. The sensitivity and specificity of the DPC data in identifying 16 diseases and 10 common procedures were calculated. The accuracy of the SS-MIX data for 13 laboratory test results was also examined. The specificity of diagnoses in the DPC data exceeded 96%, while the sensitivity was below 50% for seven diseases and variable across diseases. When limited to primary diagnoses, the sensitivity and specificity were 78.9% and 93.2%, respectively. The sensitivity of procedure records exceeded 90% for six procedures, and the specificity exceeded 90% for nine procedures. Agreement between the SS-MIX data and the chart reviews was above 95% for all 13 items. The validity of diagnosis and procedure records in the DPC data and laboratory results in the SS-MIX data was high in general, supporting their use in future studies. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
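    The validation metrics used in such studies, sensitivity and specificity of database records against chart review as the reference standard, reduce to a 2x2 table. A minimal sketch follows; the counts are hypothetical, not the study's data.

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Compute validity metrics for a database record against a reference standard.
    tp: recorded and truly present; fp: recorded but absent;
    fn: present but not recorded; tn: absent and not recorded."""
    sensitivity = tp / (tp + fn)  # share of true cases the database captured
    specificity = tn / (tn + fp)  # share of non-cases correctly not recorded
    return sensitivity, specificity

# Hypothetical counts for one diagnosis code across 315 reviewed charts.
sens, spec = sensitivity_specificity(tp=40, fp=5, fn=10, tn=260)
assert round(sens, 2) == 0.80
assert round(spec, 4) == 0.9811
```

The study's pattern of high specificity with variable sensitivity corresponds to few false positives (fp) but many missed recordings (fn).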

  19. Human Factors Considerations for Area Navigation Departure and Arrival Procedures

    NASA Technical Reports Server (NTRS)

    Barhydt, Richard; Adams, Catherine A.

    2006-01-01

    Area navigation (RNAV) procedures are being implemented in the United States and around the world as part of a transition to a performance-based navigation system. These procedures are providing significant benefits and have also caused some human factors issues to emerge. Under sponsorship from the Federal Aviation Administration (FAA), the National Aeronautics and Space Administration (NASA) has undertaken a project to document RNAV-related human factors issues and propose areas for further consideration. The component focusing on RNAV Departure and Arrival Procedures involved discussions with expert users, a literature review, and a focused review of the NASA Aviation Safety Reporting System (ASRS) database. Issues were found to include aspects of air traffic control and airline procedures, aircraft systems, and procedure design. Major findings suggest the need for specific instrument procedure design guidelines that consider the effects of human performance. Ongoing industry and government activities to address air-ground communication terminology, design improvements, and chart-database commonality are strongly encouraged. A review of factors contributing to RNAV in-service errors would likely lead to improved system design and operational performance.

  20. Preliminary Geologic Map of the Topanga 7.5' Quadrangle, Southern California: A Digital Database

    USGS Publications Warehouse

    Yerkes, R.F.; Campbell, R.H.

    1995-01-01

    INTRODUCTION This Open-File report is a digital geologic map database. This pamphlet serves to introduce and describe the digital data. There is no paper map included in the Open-File report. This digital map database is compiled from previously published sources combined with some new mapping and modifications in nomenclature. The geologic map database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U. S. Geological Survey. For detailed descriptions of the units, their stratigraphic relations and sources of geologic mapping consult Yerkes and Campbell (1994). More specific information about the units may be available in the original sources. The content and character of the database and methods of obtaining it are described herein. The geologic map database itself, consisting of three ARC coverages and one base layer, can be obtained over the Internet or by magnetic tape copy as described below. The processes of extracting the geologic map database from the tar file, and importing the ARC export coverages (procedure described herein), will result in the creation of an ARC workspace (directory) called 'topnga.' The database was compiled using ARC/INFO version 7.0.3, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991, Fitzgibbon, 1991, Wentworth and Fitzgibbon, 1991). It is stored in uncompressed ARC export format (ARC/INFO version 7.x) in a compressed UNIX tar (tape archive) file. The tar file was compressed with gzip, and may be uncompressed with gzip, which is available free of charge via the Internet from the gzip Home Page (http://w3.teaser.fr/~jlgailly/gzip). A tar utility is required to extract the database from the tar file. 
This utility is included in most UNIX systems, and can be obtained free of charge via the Internet from Internet Literacy's Common Internet File Formats Webpage (http://www.matisse.net/files/formats.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView (version 1.0 for Windows 3.1 to 3.11 is available for free from ESRI's web site: http://www.esri.com). This release differs from the original release in three respects. 1. Different base layer - The original digital database included separates clipped out of the Los Angeles 1:100,000 sheet; this release includes a vectorized scan of a scale-stable negative of the Topanga 7.5 minute quadrangle. 2. Map projection - The files in the original release were in polyconic projection; the projection used in this release is state plane, which allows for the tiling of adjacent quadrangles. 3. File compression - The files in the original release were compressed with UNIX compression; the files in this release are compressed with gzip.
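    The extraction step described above, uncompressing the gzipped tar file and locating the ARC/INFO export files, can be sketched in a few lines. The archive name `topnga.tar.gz` is an assumption for illustration; Python's `tarfile` module handles the gzip decompression and tar extraction in one step, so separate gzip and tar utilities are not needed.

```python
import tarfile

def list_arc_exports(archive_path, dest="."):
    """Extract a gzip-compressed tar archive and return the names of any
    ARC/INFO export files (.e00) it contains."""
    with tarfile.open(archive_path, mode="r:gz") as tar:  # "r:gz" = gzipped tar
        tar.extractall(path=dest)  # recreates the workspace directory, e.g. 'topnga'
        return [m.name for m in tar.getmembers() if m.name.endswith(".e00")]
```

The returned `.e00` files would then be imported as coverages inside ARC/INFO itself, as the report describes.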

  1. The development of digital library system for drug research information.

    PubMed

    Kim, H J; Kim, S R; Yoo, D S; Lee, S H; Suh, O K; Cho, J H; Shin, H T; Yoon, J P

    1998-01-01

    The sophistication of computer technology and of information transmission on the Internet has made various cyber information repositories available to information consumers. In the era of the information superhighway, the digital library, which can be accessed from remote sites at any time, is considered the prototype of the information repository. Using an object-oriented DBMS, the first model of a digital library for pharmaceutical researchers and related professionals in Korea has been developed. The database includes published research papers and researchers' personal information. For the research-paper database, 13 domestic journals were abstracted and scanned into full-text image files that can be viewed with Internet web browsers. A database of researchers' personal information was also developed and interlinked to the research-paper database. These databases will be continuously updated and will be combined with worldwide information as a unique digital library in the field of pharmacy.

  2. 75 FR 49489 - Establishment of a New System of Records for Personal Information Collected by the Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-13

    ... information. Access to any such database system is limited to system administrators, individuals responsible... during the certification process. The above information will be contained in one or more databases (such as Lotus Notes) that reside on servers in EPA offices. The database(s) may be specific to one...

  3. Informed consent comprehension in African research settings.

    PubMed

    Afolabi, Muhammed O; Okebe, Joseph U; McGrath, Nuala; Larson, Heidi J; Bojang, Kalifa; Chandramohan, Daniel

    2014-06-01

    Previous reviews on participants' comprehension of informed consent information have focused on developed countries. Experience has shown that ethical standards developed on Western values may not be appropriate for African settings where research concepts are unfamiliar. We undertook this review to describe how informed consent comprehension is defined and measured in African research settings. We conducted a comprehensive search involving five electronic databases: Medline, Embase, Global Health, EthxWeb and Bioethics Literature Database (BELIT). We also examined African Index Medicus and Google Scholar for relevant publications on informed consent comprehension in clinical studies conducted in sub-Saharan Africa. 29 studies satisfied the inclusion criteria; meta-analysis was possible in 21 studies. We further conducted a direct comparison of participants' comprehension on domains of informed consent in all eligible studies. Comprehension of key concepts of informed consent varies considerably from country to country and depends on the nature and complexity of the study. Meta-analysis showed that 47% of a total of 1633 participants across four studies demonstrated comprehension about randomisation (95% CI 13.9-80.9%). Similarly, 48% of 3946 participants in six studies had understanding about placebo (95% CI 19.0-77.5%), while only 30% of 753 participants in five studies understood the concept of therapeutic misconception (95% CI 4.6-66.7%). Measurement tools for informed consent comprehension were developed with little or no validation. Assessment of comprehension was carried out at variable times after disclosure of study information. No uniform definition of informed consent comprehension exists to form the basis for development of an appropriate tool to measure comprehension in African participants. Comprehension of key concepts of informed consent is poor among study participants across Africa. 
There is a vital need to develop a uniform definition of informed consent comprehension for low-literacy research settings in Africa. This will be an essential step towards developing appropriate tools that can adequately measure informed consent comprehension, which may in turn suggest measures to improve the informed consent procedure. © 2014 John Wiley & Sons Ltd.
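Pooled proportions with wide confidence intervals, such as those reported above, are typically produced by a random-effects meta-analysis of proportions. Below is a minimal sketch of DerSimonian-Laird pooling on the logit scale; the study-level counts are hypothetical, not the review's data:

```python
import math

def logit_pool(events_totals, z=1.96):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale; returns (pooled proportion, CI lower, CI upper)."""
    ys, vs = [], []
    for e, n in events_totals:
        e = min(max(e, 0.5), n - 0.5)   # continuity guard for 0 or n events
        p = e / n
        ys.append(math.log(p / (1 - p)))
        vs.append(1 / e + 1 / (n - e))  # variance of logit(p)
    w = [1 / v for v in vs]
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)   # between-study variance
    wstar = [1 / (v + tau2) for v in vs]
    mu = sum(wi * yi for wi, yi in zip(wstar, ys)) / sum(wstar)
    se = math.sqrt(1 / sum(wstar))
    expit = lambda x: 1 / (1 + math.exp(-x))
    return expit(mu), expit(mu - z * se), expit(mu + z * se)

# hypothetical study-level counts (comprehenders, participants)
pooled, lo, hi = logit_pool([(120, 400), (300, 500), (90, 300), (200, 433)])
print(f"pooled {pooled:.0%}, 95% CI {lo:.0%}-{hi:.0%}")
```

Large between-study variance (tau²) widens the pooled interval, which is why pooled estimates like 47% can carry intervals as wide as 13.9-80.9%.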

  4. Cybersecurity in healthcare: A systematic review of modern threats and trends.

    PubMed

    Kruse, Clemens Scott; Frederick, Benjamin; Jacobson, Taylor; Monticone, D Kyle

    2017-01-01

The adoption of healthcare technology is arduous, and it requires planning and implementation time. Healthcare organizations are vulnerable to modern cybersecurity threats because the industry has not kept pace with them. The objective of this systematic review is to identify cybersecurity trends, including ransomware, and to identify possible solutions by querying academic literature. The reviewers conducted three separate searches through the CINAHL, PubMed (MEDLINE), and Nursing and Allied Health Source (via ProQuest) databases. Using keywords with Boolean operators, database filters, and hand screening, we identified 31 articles that met the objective of the review. The analysis of the 31 articles showed that the healthcare industry lags behind in security. Like other industries, healthcare should clearly define cybersecurity duties; establish clear procedures for upgrading software and handling a data breach; use VLANs, deauthentication, and cloud-based computing; and train its users not to open suspicious code. The healthcare industry is a prime target for medical information theft because it lags behind other leading industries in securing vital data. It is imperative that time and funding are invested in maintaining and ensuring the protection of healthcare technology and the confidentiality of patient information from unauthorized access.

  5. NABIC marker database: A molecular markers information network of agricultural crops.

    PubMed

    Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk

    2013-01-01

In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed its molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker, and gene annotation. It provides 7,250 marker locations, 3,301 RSN marker properties, and 3,280 molecular marker annotation records for agricultural plants. Each molecular marker record provides information such as marker name, expressed sequence tag number, gene definition, and general marker information. This updated marker database provides useful information through a user-friendly web interface that assists in tracing new chromosome structures and gene positional functions using specific molecular markers. The database is freely available at http://nabic.rda.go.kr/gere/rice/molecularMarkers/

  6. 77 FR 24925 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    ... CES Personnel Information System database of NIFA. This database is updated annually from data provided by 1862 and 1890 land-grant universities. This database is maintained by the Agricultural Research... reviewer. NIFA maintains a database of potential reviewers. Information in the database is used to match...

  7. Challenges of molecular nutrition research 6: the nutritional phenotype database to store, share and evaluate nutritional systems biology studies

    PubMed Central

    Bouwman, Jildau; Dragsted, Lars O.; Drevon, Christian A.; Elliott, Ruan; de Groot, Philip; Kaput, Jim; Mathers, John C.; Müller, Michael; Pepping, Fre; Saito, Jahn; Scalbert, Augustin; Radonjic, Marijana; Rocca-Serra, Philippe; Travis, Anthony; Wopereis, Suzan; Evelo, Chris T.

    2010-01-01

The challenge of modern nutrition and health research is to identify food-based strategies promoting life-long optimal health and well-being. This research is complex because it exploits a multitude of bioactive compounds acting on an extensive network of interacting processes. Whereas nutrition research can profit enormously from the revolution in ‘omics’ technologies, it has discipline-specific requirements for analytical and bioinformatic procedures. In addition to measurements of the parameters of interest (measures of health), extensive description of the subjects of study and foods or diets consumed is central for describing the nutritional phenotype. We propose and pursue an infrastructural activity of constructing the “Nutritional Phenotype database” (dbNP). When fully developed, dbNP will be a research and collaboration tool and a publicly available data and knowledge repository. Creation and implementation of the dbNP will maximize benefits to the research community by enabling integration and interrogation of data from multiple studies, from different research groups, different countries and different omics levels. The dbNP is designed to facilitate storage of biologically relevant, pre-processed omics data, as well as study descriptive and study participant phenotype data. It is also important to enable the combination of this information at different levels (e.g. to facilitate linkage of data describing participant phenotype, genotype and food intake with information on study design and omics measurements, and to combine all of this with existing knowledge). The biological information stored in the database (i.e. genetics, transcriptomics, proteomics, biomarkers, metabolomics, functional assays, food intake and food composition) is tailored to nutrition research and embedded in an environment of standard procedures and protocols, annotations, modular data-basing, networking and integrated bioinformatics.
The dbNP is an evolving enterprise, which is only sustainable if it is accepted and adopted by the wider nutrition and health research community as an open-source, pre-competitive and publicly available resource where many partners can both contribute to and profit from its development. We introduce the Nutrigenomics Organisation (NuGO, http://www.nugo.org) as a membership association responsible for establishing and curating the dbNP. Within NuGO, all efforts related to dbNP (i.e. usage, coordination, integration, facilitation and maintenance) will be directed towards a sustainable and federated infrastructure. PMID:21052526

  8. Monitoring caustic injuries from emergency department databases using automatic keyword recognition software.

    PubMed

    Vignally, P; Fondi, G; Taggi, F; Pitidis, A

    2011-03-31

In Italy the European Union Injury Database reports the involvement of chemical products in 0.9% of home and leisure accidents. The Emergency Department registry on domestic accidents in Italy and the Poison Control Centres record that 90% of cases of exposure to toxic substances occur in the home. It is not rare for the effects of chemical agents to be observed in hospitals, with a high potential risk of damage - the rate of this cause of hospital admission is double the domestic injury average. The aim of this study was to monitor the effects of injuries caused by caustic agents in Italy using automatic free-text recognition in Emergency Department medical databases. We created a Stata software program to automatically identify caustic or corrosive injury cases using an agent-specific list of keywords. We focused attention on the procedure's sensitivity and specificity. Ten hospitals in six regions of Italy participated in the study. The program identified 112 cases of injury by caustic or corrosive agents. Checking the cases by quality controls (based on manual reading of ED reports), we assessed 99 cases as true positive, i.e. 88.4% of the patients were automatically recognized by the software as being affected by caustic substances (99% CI: 80.6%-96.2%), that is to say 0.59% (99% CI: 0.45%-0.76%) of the whole sample of home injuries, a value almost three times as high as that expected (p < 0.0001) from European codified information. False positives were 11.6% of the recognized cases (99% CI: 5.1%-21.5%). Our automatic procedure for caustic agent identification proved to have excellent product recognition capacity with an acceptable level of excess sensitivity. Contrary to our a priori hypothesis, the automatic recognition system provided a level of identification of agents possessing caustic effects that was significantly much greater than was predictable on the basis of the values from current codifications reported in the European Database.
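The keyword-recognition step described above can be sketched in a few lines. The study used a Stata program with an agent-specific keyword list; the sketch below uses Python, and the keywords, reports, and confirmed cases are invented purely for illustration:

```python
import re

# hypothetical keyword list; the study used an agent-specific list in Stata
KEYWORDS = ["caustic", "corrosive", "bleach", "ammonia", "lye",
            "drain cleaner", "sodium hydroxide", "hydrochloric acid"]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def flag_cases(records):
    """Return indices of free-text ED reports matching any keyword."""
    return [i for i, text in enumerate(records) if PATTERN.search(text)]

def positive_predictive_value(flagged, confirmed):
    """Share of automatically flagged cases confirmed by manual review."""
    tp = len(set(flagged) & set(confirmed))
    return tp / len(flagged) if flagged else 0.0

reports = [
    "ingestion of drain cleaner, oropharyngeal burns",
    "fall from ladder, wrist fracture",
    "splash of bleach to the eye during cleaning",
    "accidental exposure to ammonia fumes in kitchen",
]
flagged = flag_cases(reports)   # -> [0, 2, 3]
print(flagged, positive_predictive_value(flagged, confirmed=[0, 2, 3]))
```

The study's 88.4% figure corresponds to this positive predictive value computed against manual reading of the ED reports.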

  9. 78 FR 65293 - Collection of Information; Proposed Extension of Approval; Comment Request-Publicly Available...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... Extension of Approval; Comment Request--Publicly Available Consumer Product Safety Information Database... Publicly Available Consumer Product Safety Information Database. The Commission will consider all comments... intention to seek extension of approval of a collection of information for a database on the safety of...

  10. 78 FR 18232 - Amendment of VOR Federal Airway V-233, Springfield, IL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... it matches the information contained in the FAA's aeronautical database, matches the depiction on the... description did not match the airway information contained in the FAA's aeronautical database or the charted... information that should have been used. The FAA aeronautical database contains the correct radial information...

  11. Provincial Variation of Cochlear Implantation Surgical Volumes and Cost in Canada.

    PubMed

    Crowson, Matthew G; Chen, Joseph M; Tucci, Debara

    2017-01-01

Objectives To investigate provincial cochlear implantation (CI) annual volume and cost trends. Study Design Database analysis. Setting National surgical volume and cost database. Subjects and Methods Aggregate-level provincial CI volumes and cost data for adult and pediatric CI surgery from 2005 to 2014 were obtained from the Canadian Institute for Health Information. Population-level aging forecast estimates were obtained from the Ontario Ministry of Finance and Statistics Canada. Linear fit, analysis of variance, and Tukey's analyses were utilized to compare variances and means. Results The national volume of annual CI procedures is forecasted to increase by fewer than 30 per year (R² = 0.88). Ontario has the highest mean annual CI volume (282; 95% confidence interval, 258-308), followed by Alberta (92.0; 95% confidence interval, 66.3-118), both significantly higher than all other provinces (P < .05 for each). Ontario's annual CI procedure volume is forecasted to increase by fewer than 11 per year (R² = 0.62). Newfoundland and Nova Scotia have the highest number of CI procedures per 100,000 residents as compared with all other provinces (P < .05). Alberta, Newfoundland, and Manitoba have the highest estimated implantation cost of all provinces (P < .05). Conclusions Historical trends of CI forecast modest national volume growth. Potential bottlenecks include provincial funding and access to surgical expertise. The proportion of older adult patients who may benefit from a CI will rise, and there may be insufficient capacity to meet this need. Delayed access to CI for pediatric patients is also a concern, given recent reports of long wait times for CI surgery.
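The linear-fit forecasts above (an annual slope plus an R² value) amount to ordinary least squares on annual procedure volumes. A minimal sketch, using hypothetical volumes rather than CIHI figures:

```python
def linear_trend(years, volumes):
    """Ordinary least-squares fit of annual volumes against calendar year;
    returns (slope, intercept, R squared)."""
    n = len(years)
    xbar, ybar = sum(years) / n, sum(volumes) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, volumes))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(years, volumes))
    ss_tot = sum((y - ybar) ** 2 for y in volumes)
    return slope, intercept, 1 - ss_res / ss_tot

# hypothetical annual national CI volumes, not CIHI figures
years = list(range(2005, 2015))
volumes = [420, 455, 470, 510, 540, 555, 600, 620, 645, 680]
slope, intercept, r2 = linear_trend(years, volumes)
print(f"about {slope:.0f} added procedures per year, R^2 = {r2:.2f}")
```

Forecasting then just extrapolates `slope * year + intercept` to future years, which is why the reported R² qualifies how much trust the extrapolation deserves.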

  12. Compiling an Open Database of Dam Inundation Areas on the Irrawaddy, Salween, Mekong, and Red River Basins

    NASA Astrophysics Data System (ADS)

    Cutter, P. G.; Walcutt, A.; O'Neil-Dunne, J.; Geheb, K.; Troy, A.; Saah, D. S.; Ganz, D.

    2016-12-01

    Dam construction in mainland Southeast Asia has increased substantially in recent years with extensive regional impacts including alterations to water regimes, the loss and degradation of natural forests and biodiversity, and reductions in soil and water quality. The CGIAR Water Land Ecosystem program (WLE) and partners maintain a comprehensive database of locations and other data relating to existing, planned, and proposed dams in the region's major transboundary rivers spanning areas in Thailand, Cambodia, Laos, Vietnam, Myanmar, and China. A recent regional needs assessment and specific stakeholder requests revealed the need for a dataset reflecting the inundation areas of these dams for use in measuring impacts to river ecology, analyzing disaster risk, monitoring land cover and land use change, evaluating carbon emissions, and assessing the actual and potential impacts to communities. In conjunction with WLE and other partners, SERVIR-Mekong, a regional hub of the USAID and NASA-supported SERVIR program, formulated an explicit procedure to produce this dataset. The procedure includes leveraging data from OpenStreetMap and other sources, creating polygons based on surface water classification procedures achieved via Google Earth Engine, manual digitizing, and modeling of planned/proposed dams based on a DEM and the location and planned height of dams. A quality assurance step ensures that all polygons conform to spatial data quality standards agreed upon by a wide range of production partners. When complete, the dataset will be made publicly available to encourage greater understanding and more informed decisions related to the actual and potential impacts of dams in the region.
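The "model planned/proposed dams from a DEM and planned dam height" step can be illustrated with a simplified connected-region fill: starting from a seed cell on the reservoir side of the dam, collect contiguous cells whose elevation is at or below the planned crest elevation. The grid and elevations below are invented; the actual SERVIR-Mekong procedure also relies on surface-water classification in Google Earth Engine and manual digitizing:

```python
from collections import deque

def inundation_mask(dem, seed, crest_elev):
    """Flood-fill from a seed cell on the reservoir side of the dam,
    collecting contiguous cells at or below the planned crest elevation."""
    rows, cols = len(dem), len(dem[0])
    mask = [[False] * cols for _ in range(rows)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not mask[nr][nc] and dem[nr][nc] <= crest_elev):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# invented 3x4 DEM (metres); dam base at 88 m with a planned height of
# 10 m gives a crest elevation of 98 m
dem = [
    [95,  92,  90, 110],
    [96,  91,  89, 108],
    [99, 100,  88, 105],
]
mask = inundation_mask(dem, seed=(1, 2), crest_elev=98)
flooded = sum(cell for row in mask for cell in row)
print(flooded)  # -> 7 cells inundated
```

In practice the resulting mask would be vectorized into polygons and passed through the quality assurance step described above.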

  13. Social Communication Questionnaire scoring procedures for autism spectrum disorder and the prevalence of potential social communication disorder in ASD.

    PubMed

    Barnard-Brak, Lucy; Richman, David M; Chesnut, Steven Randall; Little, Todd D

    2016-12-01

    In analyzing data from the National Database for Autism Research, we utilized Mokken scaling techniques as a means of creating a more effective and efficient screening procedure for autism spectrum disorder (ASD) via the Social Communication Questionnaire (SCQ). With a sample of 1,040, approximately 80% (n = 827) of the sample were males while approximately 20% (n = 213) were females. In regard to ethnicity, approximately 68% of the sample were White/Caucasian, while 7% were African American, 16% were Hispanic, 4% were Asian, and 1% were Native American or American Indian. As the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) states that, "individuals with a well-established DSM-IV diagnosis of autistic disorder, Asperger's disorder, or pervasive developmental disorder not otherwise specified should be given the diagnosis of autism spectrum disorder," (American Psychiatric Association, 2013, p. 51), the primary labeling difference between the DSM-IV and the DSM-5 would appear to be in identifying social communication disorder as a newly introduced disorder in the DSM-5, which we discuss. Though school psychologists are not dependent on the DSM to the same extent as clinical psychologists to provide services, school psychology is invested in the effective and efficient assessment of ASD. The current study demonstrates how Mokken scaling procedures may be utilized with respect to ASD identification via the SCQ as well as providing information regarding the prevalence of potential social communication disorder as a new disorder and its discrimination with ASD. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
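Mokken scaling rests on Loevinger's scalability coefficient H, which compares observed Guttman errors against the number expected under item independence. A minimal sketch for dichotomous items (SCQ items are scored dichotomously), using toy response patterns rather than National Database for Autism Research data:

```python
from itertools import combinations

def scale_H(data):
    """Loevinger's H for dichotomous items (rows = respondents, columns =
    items): 1 minus observed Guttman errors over the errors expected under
    marginal independence. H >= 0.3 is the usual Mokken scalability cut-off."""
    n, k = len(data), len(data[0])
    p = [sum(row[j] for row in data) / n for j in range(k)]
    observed = expected = 0.0
    for a, b in combinations(range(k), 2):
        easy, hard = (a, b) if p[a] >= p[b] else (b, a)
        # Guttman error: the harder (less popular) item is endorsed
        # while the easier item is not
        observed += sum(1 for row in data if row[hard] == 1 and row[easy] == 0)
        expected += n * p[hard] * (1 - p[easy])
    return 1 - observed / expected

perfect = [[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 0, 0], [1, 1, 0]]
noisy = perfect + [[0, 0, 1]]          # one pattern violating the ordering
print(scale_H(perfect), scale_H(noisy))  # perfect Guttman data gives H = 1.0
```

Items with low pairwise H are the candidates for removal when building a shorter, more efficient screening scale of the kind the study describes.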

  14. Current status of cardiovascular surgery in Japan 2013 and 2014: A report based on the Japan Cardiovascular Surgery Database. 2: Congenital heart surgery.

    PubMed

    Hirata, Yasutaka; Hirahara, Norimichi; Murakami, Arata; Motomura, Noboru; Miyata, Hiroaki; Takamoto, Shinichi

    2018-01-01

    We analyzed the mortality and morbidity of congenital heart surgery in Japan using the Japan Cardiovascular Surgery Database (JCVSD). Data regarding congenital heart surgery performed between January 2013 and December 2014 were obtained from JCVSD. The 20 most frequent procedures were selected and the mortality rates and major morbidities were analyzed. The mortality rates of atrial septal defect repair and ventricular septal defect repair were less than 1%, and the mortality rates of tetralogy of Fallot repair, complete atrioventricular septal defect repair, bidirectional Glenn, and total cavopulmonary connection were less than 2%. The mortality rates of the Norwood procedure and total anomalous pulmonary venous connection repair were more than 10%. The rates of unplanned reoperation, pacemaker implantation, chylothorax, deep sternal infection, phrenic nerve injury, and neurological deficit were shown for each procedure. Using JCVSD, the national data for congenital heart surgery, including postoperative complications, were analyzed. Further improvements of the database and feedback for clinical practice are required.

  15. Database Management: Building, Changing and Using Databases. Collected Papers and Abstracts of the Mid-Year Meeting of the American Society for Information Science (15th, Portland, Oregon, May 1986).

    ERIC Educational Resources Information Center

    American Society for Information Science, Washington, DC.

    This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…

  16. French database of children and adolescents with Prader-Willi syndrome

    PubMed Central

    Molinas, Catherine; Cazals, Laurent; Diene, Gwenaelle; Glattard, Melanie; Arnaud, Catherine; Tauber, Maithe

    2008-01-01

Background: Prader-Willi syndrome (PWS) is a rare multisystem genetic disease leading to severe complications mainly related to obesity. We sorely lack information on the natural history of this complex disease and on the factors involved in its evolution and outcome. One of the objectives of the French reference centre for Prader-Willi syndrome, set up in 2004, was to build a database in order to make an inventory of Prader-Willi syndrome cases and initiate a national cohort study in the area covered by the centre. Description: The database includes medical data on children and adolescents with Prader-Willi syndrome, details about their management, socio-demographic data on their families, and psychological data and quality of life of the parents. The tools and organisation used to ensure data collection and data quality in accordance with good clinical practice procedures are discussed, and the main characteristics of our Prader-Willi population at inclusion are presented. Conclusion: This database, covering the clinical, psychological and social aspects of PWS, including familial psychological profiles and quality of life, will be a powerful tool for retrospective studies of this complex and multifactorial disease and could be a basis for the design of future prospective multicentric studies. The complete database and the Stata .do files are available to any researcher wishing to use them for non-commercial purposes and can be provided upon request to the corresponding author. PMID:18831731

  17. Quantitative Analysis of Gender Stereotypes and Information Aggregation in a National Election

    PubMed Central

    Tumminello, Michele; Miccichè, Salvatore; Varho, Jan; Piilo, Jyrki; Mantegna, Rosario N.

    2013-01-01

By analyzing a database of a questionnaire answered by a large majority of the candidates, elected and not, in a parliamentary election, we quantitatively verify that (i) female candidates on average present political profiles which are more compassionate and more concerned with social welfare issues than male candidates and (ii) the voting procedure acts as a process of information aggregation. Our results show that information aggregation proceeds along at least two distinct paths. In the first case candidates characterize themselves with a political profile aiming to describe the profile of the majority of voters. This is typically the case of candidates of political parties which are competing for the center of the various political dimensions. In the second case, candidates choose a political profile manifesting a clear difference from opposite political profiles endorsed by candidates of a political party positioned at the opposite extreme of some political dimension. PMID:23555606

  18. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.
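The decision-level fusion mentioned at the end of the abstract can be illustrated with a common score-level scheme: min-max normalize each modality's gallery scores and combine them with a weighted sum. The paper's exact fusion rule is not given here, so this is a generic sketch with invented match scores:

```python
def min_max_normalize(scores):
    """Rescale a list of match scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_and_identify(visual_scores, thermal_scores, w_visual=0.5):
    """Sum-rule score fusion: normalize each modality's gallery scores,
    combine with a weighted sum, and return the best-matching identity."""
    v = min_max_normalize(visual_scores)
    t = min_max_normalize(thermal_scores)
    fused = [w_visual * a + (1 - w_visual) * b for a, b in zip(v, t)]
    return max(range(len(fused)), key=fused.__getitem__), fused

# invented per-identity match scores from the two matchers
visual = [0.2, 0.9, 0.4]
thermal = [0.3, 0.6, 0.8]
best, fused = fuse_and_identify(visual, thermal)
print(best)  # -> identity 1 wins after fusion
```

Weighting lets the system lean on the thermal matcher when illumination degrades the visual scores, which is the complementarity the paper exploits.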

  19. User Generated Spatial Content Sources for Land Use/Land Cover Validation Purposes: Suitability Analysis and Integration Model

    NASA Astrophysics Data System (ADS)

    Estima, Jacinto Paulo Simoes

Traditional geographic information has been produced by mapping agencies and corporations, using highly skilled people as well as expensive precision equipment and procedures, in a very costly approach. The production of land use and land cover databases is just one example of such a traditional approach. On the other side, the amount of geographic information created and shared by citizens through the Web has increased exponentially during the last decade, resulting from the emergence and popularization of technologies such as Web 2.0, cloud computing, GPS, and smart phones, among others. Such a comprehensive amount of free geographic data may contain valuable information to extract, opening great possibilities to improve significantly the production of land use and land cover databases. In this thesis we explored the feasibility of using geographic data from different user generated spatial content initiatives in the process of land use and land cover database production. Data from Panoramio, Flickr and OpenStreetMap were explored in terms of their spatial and temporal distribution, and their distribution over the different land use and land cover classes. We then proposed a conceptual model to integrate data from suitable user generated spatial content initiatives, based on the dissimilarities identified among a comprehensive list of initiatives. Finally, we developed a prototype implementing the proposed integration model, which was then validated by using the prototype to solve four identified use cases. We concluded that data from user generated spatial content initiatives have great value but should be integrated to increase their potential. The possibility of integrating data from such initiatives in an integration model was demonstrated. Using the developed prototype, the relevance of the integration model was also demonstrated for different use cases.

  20. Evaluation models and criteria of the quality of hospital websites: a systematic review study

    PubMed Central

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-01-01

Introduction: Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. Methods: This research was a systematic review study. International databases such as Science Direct, Google Scholar, PubMed, ProQuest, Ovid, Elsevier, Springer, and EBSCO were searched, together with regional databases such as Magiran, Scientific Information Database, Persian Journal Citation Report (PJCR) and IranMedex. Suitable keywords including website, evaluation, and quality of website were used. Full-text papers related to the research were included. The criteria and sub-criteria for the evaluation of website quality were extracted and classified. Results: Various models and criteria for evaluating the quality of websites were identified. The WEB-Q-IM, Mile, Minerva, Seruni Luci, and Web-Qual models were among those designed. The criteria of accessibility, content and apparent features of the websites, the design procedure, the graphics applied in the website, and the page’s attractions were mentioned in the majority of studies. Conclusion: The criteria of accessibility, content, design method, security, and confidentiality of personal information are the essential criteria in the evaluation of all websites. It is suggested that ease of use, graphics, attractiveness and other apparent properties of websites be considered as sub-criteria of user-friendliness. Further, the criteria of speed and accessibility of the website should be considered as sub-criteria of efficiency. When determining the evaluation criteria for the quality of websites, attention to major differences in the specific features of any website is essential. PMID:28465807

  1. Evaluation models and criteria of the quality of hospital websites: a systematic review study.

    PubMed

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-02-01

Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. This research was a systematic review study. International databases such as Science Direct, Google Scholar, PubMed, ProQuest, Ovid, Elsevier, Springer, and EBSCO were searched, together with regional databases such as Magiran, Scientific Information Database, Persian Journal Citation Report (PJCR) and IranMedex. Suitable keywords including website, evaluation, and quality of website were used. Full-text papers related to the research were included. The criteria and sub-criteria for the evaluation of website quality were extracted and classified. Various models and criteria for evaluating the quality of websites were identified. The WEB-Q-IM, Mile, Minerva, Seruni Luci, and Web-Qual models were among those designed. The criteria of accessibility, content and apparent features of the websites, the design procedure, the graphics applied in the website, and the page's attractions were mentioned in the majority of studies. The criteria of accessibility, content, design method, security, and confidentiality of personal information are the essential criteria in the evaluation of all websites. It is suggested that ease of use, graphics, attractiveness and other apparent properties of websites be considered as sub-criteria of user-friendliness. Further, the criteria of speed and accessibility of the website should be considered as sub-criteria of efficiency. When determining the evaluation criteria for the quality of websites, attention to major differences in the specific features of any website is essential.

  2. Clinical Databases and Registries in Congenital and Pediatric Cardiac Surgery, Cardiology, Critical Care, and Anesthesiology Worldwide.

    PubMed

    Vener, David F; Gaies, Michael; Jacobs, Jeffrey P; Pasquali, Sara K

    2017-01-01

The growth in large-scale data management capabilities and the successful care of patients with congenital heart defects have coincidentally paralleled each other for the last three decades, and participation in multicenter congenital heart disease databases and registries is now a fundamental component of cardiac care. This manuscript attempts for the first time to consolidate in one location all of the relevant databases worldwide, including target populations, specialties, Web sites, and participation information. Since at least 1992, cardiac surgeons and cardiologists have been leveraging this burgeoning technology to create multi-institutional data collections addressing a variety of specialties within this field. Pediatric heart diseases are particularly well suited to this methodology because each individual care location has access to only a relatively limited number of diagnoses and procedures in any given calendar year. Combining multiple institutions' data therefore allows for a far more accurate contemporaneous assessment of treatment modalities and adverse outcomes. Additionally, the data can be used to develop outcome benchmarks by which individual institutions can measure their progress against the field as a whole and focus quality-improvement efforts in a more directed fashion, and clinical research efforts are increasingly being combined within existing data structures. Efforts are ongoing to support better collaboration and integration across data sets, to improve efficiency, to further the utility of the data collection infrastructure and information collected, and to enhance return on investment for participating institutions.

  3. Annual Review of Database Developments: 1993.

    ERIC Educational Resources Information Center

    Basch, Reva

    1993-01-01

    Reviews developments in the database industry for 1993. Topics addressed include scientific and technical information; environmental issues; social sciences; legal information; business and marketing; news services; documentation; databases and document delivery; electronic bulletin boards and the Internet; and information industry organizational…

  4. Mock-juror evaluations of traditional and ratings-based eyewitness identification evidence.

    PubMed

    Sauer, James D; Palmer, Matthew A; Brewer, Neil

    2017-08-01

    Compared to categorical identifications, culprit likelihood ratings (having the witness rate, for each lineup member, the likelihood that the individual is the culprit) provide a promising alternative for assessing a suspect's likely guilt. Four experiments addressed 2 broad questions about the use of culprit likelihood ratings evidence by mock-jurors. First, are mock-jurors receptive to noncategorical forms of identification evidence? Second, does the additional information provided by ratings (relating to discrimination) affect jurors' evaluations of the identification evidence? Experiments 1 and 1A manipulated confidence (90% vs. 50%) and discrimination (good, poor, no information) between participants. Evaluations were influenced by confidence, but not discrimination. However, a within-participant manipulation of discrimination (Experiment 2) demonstrated that evidence of good discrimination enhanced the persuasiveness of moderate levels of confidence, while poor discrimination reduced the persuasiveness of high levels of confidence. Thus, participants can interpret ratings-based evidence, but may not intuit the discrimination information when evaluating ratings for a single identification procedure. Providing detailed instructions about interpreting ratings produced clear discrimination effects when evaluating a single identification procedure (Experiment 3). Across 4 experiments, we found no evidence that mock-jurors perceived noncategorical identification evidence to be less informative than categorical evidence. However, jurors will likely benefit from instruction when interpreting ratings provided by a single witness. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Standardized Procedure Content and Data Structure Based on Human Factors Requirements for Computer-Based Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real-time data from plant status databases. Without the ability for logical operations, the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS.
    The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers, as well as the underlying data structure for such a CBPS. The objective of the research effort is to develop guidance on how to design both the user interface and the underlying schema. This paper will describe the results and insights gained from the research activities conducted to date.
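    The step-oriented XML data architecture described above can be sketched as follows. This is a hypothetical markup and parser, not the actual INL schema: the element names, `type` attribute values, and branching convention are all assumptions made for illustration. The point is that the step's attributes, not procedure-specific code, tell the CBPS whether to show an instruction, request a decision, or accept input.

```python
import xml.etree.ElementTree as ET

# Hypothetical procedure-step markup; element and attribute names are
# illustrative only, not the schema described in the paper.
STEP_XML = """
<procedure id="OP-101">
  <step id="1" type="instruction">
    <text>Verify pump P-2 is in standby.</text>
  </step>
  <step id="2" type="decision">
    <text>Is tank level above 40%?</text>
    <branch answer="yes" goto="4"/>
    <branch answer="no" goto="3"/>
  </step>
  <step id="3" type="input">
    <text>Record tank level.</text>
    <field name="tank_level" unit="%"/>
  </step>
</procedure>
"""

def load_steps(xml_text):
    """Parse steps; the 'type' attribute tells the CBPS what UI to render,
    and decision steps carry the logic for routing on the user's answer."""
    root = ET.fromstring(xml_text)
    steps = {}
    for step in root.findall("step"):
        steps[step.get("id")] = {
            "type": step.get("type"),
            "text": step.findtext("text"),
            "branches": {b.get("answer"): b.get("goto")
                         for b in step.findall("branch")},
        }
    return steps

steps = load_steps(STEP_XML)
print(steps["2"]["branches"])  # decision step routes on the user's answer
```

A procedure writer adds or reorders steps by editing the XML alone; the CBPS interpreter never changes, which is the reprogramming-free property the abstract calls for.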

  6. Spectroscopic data for an astronomy database

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Smith, Peter L.

    1995-01-01

    Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.

  7. Variations in data collection methods between national databases affect study results: a comparison of the nationwide inpatient sample and national surgical quality improvement program databases for lumbar spine fusion procedures.

    PubMed

    Bohl, Daniel D; Russo, Glenn S; Basques, Bryce A; Golinvaux, Nicholas S; Fu, Michael C; Long, William D; Grauer, Jonathan N

    2014-12-03

    There has been an increasing use of national databases to conduct orthopaedic research. Questions regarding the validity and consistency of these studies have not been fully addressed. The purpose of this study was to test for similarity in reported measures between two national databases commonly used for orthopaedic research. A retrospective cohort study of patients undergoing lumbar spinal fusion procedures during 2009 to 2011 was performed in two national databases: the Nationwide Inpatient Sample and the National Surgical Quality Improvement Program. Demographic characteristics, comorbidities, and inpatient adverse events were directly compared between databases. The total numbers of patients included were 144,098 from the Nationwide Inpatient Sample and 8434 from the National Surgical Quality Improvement Program. There were only small differences in demographic characteristics between the two databases. There were large differences between databases in the rates at which specific comorbidities were documented. Non-morbid obesity was documented at rates of 9.33% in the Nationwide Inpatient Sample and 36.93% in the National Surgical Quality Improvement Program (relative risk, 0.25; p < 0.05). Peripheral vascular disease was documented at rates of 2.35% in the Nationwide Inpatient Sample and 0.60% in the National Surgical Quality Improvement Program (relative risk, 3.89; p < 0.05). Similarly, there were large differences between databases in the rates at which specific inpatient adverse events were documented. Sepsis was documented at rates of 0.38% in the Nationwide Inpatient Sample and 0.81% in the National Surgical Quality Improvement Program (relative risk, 0.47; p < 0.05). Acute kidney injury was documented at rates of 1.79% in the Nationwide Inpatient Sample and 0.21% in the National Surgical Quality Improvement Program (relative risk, 8.54; p < 0.05). 
As database studies become more prevalent in orthopaedic surgery, authors, reviewers, and readers should view these studies with caution. This study shows that two commonly used databases can identify demographically similar patients undergoing a common orthopaedic procedure; however, the databases document markedly different rates of comorbidities and inpatient adverse events. The differences are likely the result of the very different mechanisms through which the databases collect their comorbidity and adverse event data. Findings highlight concerns regarding the validity of orthopaedic database research. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
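    The relative risks reported above are simple ratios of the documentation rates in the two databases. A minimal check, using the rounded percentages from the abstract (the published RRs were computed from unrounded counts, so the last digit can differ slightly):

```python
def relative_risk(rate_nis, rate_nsqip):
    """Relative risk of a condition being documented:
    NIS documentation rate divided by NSQIP documentation rate."""
    return rate_nis / rate_nsqip

# Rates (%) as reported in the abstract.
print(round(relative_risk(9.33, 36.93), 2))  # non-morbid obesity -> 0.25
print(round(relative_risk(0.38, 0.81), 2))   # sepsis -> 0.47
print(round(relative_risk(1.79, 0.21), 2))   # acute kidney injury
```

An RR far from 1 on the same patient population is exactly the signal the authors flag: the two databases are not measuring comorbidities and adverse events the same way.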

  8. An estimation of the cost per visit of nursing home care services.

    PubMed

    Ryu, Ho-Sihn

    2009-01-01

    Procedures used for analyzing the cost of providing home care nursing services through hospital-based home care agencies (HCAs) were the focus of this study. A cross-sectional descriptive study design was used to analyze the workload and caseload of 36 home care nurses from ten HCAs. In addition, information obtained from a national health insurance database, including 54,639 home care claim cases from a total of 185 HCAs during a 6-month period, was analyzed. The findings provide a foundation for improving the alternative home care billing and reimbursement system by using the actual amount of time invested in providing home care when calculating the cost of providing home care nursing services. Further, this study provides a procedure for calculating nursing service costs by analyzing actual data. The results have great potential for use in nursing service cost analysis methodology, which is an essential step in developing a policy for providing home care.
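    A time-based visit costing of the kind the abstract advocates can be sketched as below. This formula and all parameter values (hourly cost, overhead loading, the inclusion of travel time) are assumptions for illustration; the paper's actual costing model is not reproduced here.

```python
def cost_per_visit(hourly_cost, direct_minutes, travel_minutes, overhead_rate=0.3):
    """Hypothetical time-based visit cost: nurse time (direct care plus
    travel) valued at an hourly cost, with a proportional overhead loading.
    All parameters are illustrative, not figures from the study."""
    hours = (direct_minutes + travel_minutes) / 60
    return hourly_cost * hours * (1 + overhead_rate)

# A 60-minute visit with 30 minutes of travel, in arbitrary currency units.
print(round(cost_per_visit(30_000, 60, 30), 2))
```

The key design choice, matching the study's argument, is that cost scales with the actual time invested per visit rather than a flat per-visit fee.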

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vonach, H.; Tagesen, S.

    Starting with a discussion of the requirements and goals for high-quality general-purpose evaluations, the paper will describe the procedures chosen in our evaluation work for JEFF for producing new general evaluations with complete covariance information for all cross sections (file 3 data). Key problems essential for the goal of making the best possible use of the existing theoretical and experimental knowledge on neutron interactions with the respective nuclide will be addressed, especially the problem of assigning covariances to calculated cross sections, necessary checking procedures for all experimental data, and various possibilities to amend the experimental database beyond the obvious use of EXFOR data for the respective cross sections. In this respect, both the use of elemental cross sections in isotopic evaluations and the use of implicit cross-section data (that is, data which can be converted into cross sections by simple methods) will be discussed in some detail.

  10. A Web-GIS Procedure Based on Satellite Multi-Spectral and Airborne LiDAR Data to Map the Road Blockage Due to Seismic Damages of Built-Up Urban Areas

    NASA Astrophysics Data System (ADS)

    Costanzo, Antonio; Montuori, Antonio; Silva, Juan Pablo; Silvestri, Malvina; Musacchio, Massimo; Buongiorno, Maria Fabrizia; Stramondo, Salvatore

    2016-08-01

    In this work, a web-GIS procedure to map the risk of road blockage in urban environments through the combined use of space-borne and airborne remote sensing sensors is presented. The methodology concerns (1) the provision of a geo-database through the integration of space-borne multispectral images and airborne LiDAR data products; (2) the modeling of building vulnerability, based on the corresponding 3D geometry and construction time information; (3) the GIS-based mapping of road closure due to seismic-related building collapses, based on the building characteristic height and the width of the road. Experimental results, gathered for the Cosenza urban area, demonstrate the benefits of both the proposed approach and the GIS-based integration of multi-platform remote sensing sensors and techniques for seismic road assessment purposes.
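    A blockage rule of the kind described in step (3), relating building height to road width, can be sketched as below. The debris-extent factor and the minimum passable width are illustrative assumptions, not the calibrated values of the cited procedure:

```python
def road_blocked(building_height_m, road_width_m,
                 debris_factor=0.5, min_passable_m=3.5):
    """Hypothetical blockage rule: a collapsing facade deposits debris over a
    strip proportional to the building height; the road counts as blocked when
    the remaining carriageway falls below a minimum passable width.
    Parameter values are illustrative, not those of the cited study."""
    debris_extent = debris_factor * building_height_m
    return (road_width_m - debris_extent) < min_passable_m

print(road_blocked(15, 8))   # 7.5 m of debris leaves 0.5 m: blocked
print(road_blocked(6, 10))   # 3 m of debris leaves 7 m: passable
```

In the web-GIS, a rule like this would be evaluated per road segment against the heights of the adjacent buildings taken from the LiDAR-derived geo-database.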

  11. 16 CFR 1102.42 - Disclaimers.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...

  12. 16 CFR 1102.42 - Disclaimers.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...

  13. 16 CFR 1102.42 - Disclaimers.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Notice and Disclosure... of the contents of the Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database...

  14. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases

    PubMed Central

    2012-01-01

    Background Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, actually several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples, however the chromatographic profiles are often extremely different and it is impossible to compare them and reliably identify the paint binder. Results This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works-of-art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from hundreds of centuries old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. 
This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples. PMID:23050842

  15. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases.

    PubMed

    Lluveras-Tenorio, Anna; Mazurek, Joy; Restivo, Annalaura; Colombini, Maria Perla; Bonaduce, Ilaria

    2012-10-10

    Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, actually several other saccharide materials can be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples, however the chromatographic profiles are often extremely different and it is impossible to compare them and reliably identify the paint binder. This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works-of-art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from hundreds of centuries old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. 
This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of identifying polysaccharide binders in paint samples.
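    Comparing sugar profiles across the two procedures amounts to comparing compositions after normalization. A minimal sketch, with invented peak areas rather than the database's reference data, and Euclidean distance as one simple (assumed) similarity measure:

```python
import math

def normalize(profile):
    """Express each sugar as a percentage of the total chromatographic area,
    so profiles from different sample sizes become comparable."""
    total = sum(profile.values())
    return {sugar: 100 * area / total for sugar, area in profile.items()}

def profile_distance(p, q):
    """Euclidean distance between two normalized sugar profiles."""
    sugars = set(p) | set(q)
    return math.sqrt(sum((p.get(s, 0) - q.get(s, 0)) ** 2 for s in sugars))

# Illustrative peak areas for one reference gum, not real GCI/DCCI data.
gci = normalize({"arabinose": 40, "galactose": 35, "rhamnose": 15, "glucose": 10})
dcci = normalize({"arabinose": 42, "galactose": 33, "rhamnose": 14, "glucose": 11})
print(round(profile_distance(gci, dcci), 2))  # small distance: comparable profiles
```

A common database of reference profiles is useful precisely because such distances are only meaningful when both profiles were produced, or normalized, under comparable analytical procedures.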

  16. [Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].

    PubMed

    Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu

    2015-09-01

    To build a comprehensive database of laryngeal cancer-related genes and miRNAs by collecting and analyzing these molecules; unlike current biological information databases, which are complex and unwieldy in structure, this database focuses on genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the web server, MySQL for database design, and PHP for web design, a comprehensive database of laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables, and clinical information tables for patients with laryngeal cancer. The established database contained 207 laryngeal cancer-related genes, 243 proteins, and 26 miRNAs, together with detailed information such as mutations, methylations, differential expression, and the empirical references for laryngeal cancer-relevant molecules. The database can be accessed and operated via the Internet, supporting browsing and retrieval of the information, and is maintained and updated regularly. The database of laryngeal cancer-related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
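    The gene/miRNA table layout described above can be sketched in SQL. The abstract specifies MySQL and PHP; the sketch below uses Python's built-in sqlite3 purely for self-containment, and every table and column name is an assumption, not the actual schema:

```python
import sqlite3

# Minimal sketch of gene and miRNA tables like those the abstract describes.
# The real system uses MySQL/PHP; names and columns here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE gene (
    gene_id     INTEGER PRIMARY KEY,
    symbol      TEXT NOT NULL,
    mutation    TEXT,
    methylation TEXT,
    reference   TEXT
);
CREATE TABLE mirna (
    mirna_id    INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    target_gene INTEGER REFERENCES gene(gene_id)
);
""")
conn.execute("INSERT INTO gene (symbol, mutation) VALUES (?, ?)",
             ("TP53", "point mutation"))
conn.execute("INSERT INTO mirna (name, target_gene) VALUES (?, ?)",
             ("hsa-miR-21", 1))

# A query joining a gene to the miRNAs that target it -- the kind of
# browse/retrieve operation the web front end would expose.
row = conn.execute("""
    SELECT g.symbol, m.name FROM gene g
    JOIN mirna m ON m.target_gene = g.gene_id""").fetchone()
print(row)  # ('TP53', 'hsa-miR-21')
```

Keeping genes, proteins, miRNAs, and clinical records in separate linked tables is what lets the front end answer focused queries without the "complex and unwieldy" structure the authors criticize.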

  17. Biomine: predicting links between biological entities using network models of heterogeneous databases.

    PubMed

    Eronen, Lauri; Toivonen, Hannu

    2012-06-06

    Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. 
In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available.The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
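    One simple proximity measure over such a weighted graph is the best-path reliability: the maximum, over all paths between two nodes, of the product of edge weights. This is a simplified sketch in the spirit of Biomine's probabilistic proximity measures, not its actual algorithm; the toy graph and its reliabilities are invented.

```python
import heapq
import math

def best_path_proximity(graph, source, target):
    """Maximum product of edge reliabilities over any source-target path,
    computed as a shortest path under the -log(weight) transform (Dijkstra).
    A sketch of one plausible proximity measure, not Biomine's own."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return math.exp(-d)          # convert back to a probability-like score
        if d > dist.get(node, math.inf):
            continue
        for nbr, w in graph.get(node, []):
            nd = d - math.log(w)         # multiplying weights = adding -logs
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return 0.0                           # no connecting path

# Toy heterogeneous graph: (neighbor, edge reliability in (0, 1]).
g = {
    "GeneA": [("ProteinA", 0.9)],
    "ProteinA": [("ProteinB", 0.8)],
    "ProteinB": [("Disease1", 0.5)],
}
print(round(best_path_proximity(g, "GeneA", "Disease1"), 2))  # 0.9*0.8*0.5 = 0.36
```

Edge-type weighting, as in the paper's optimization procedure, would enter here by scaling each reliability according to the edge's type before the search.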

  18. Field Validation of Food Service Listings: A Comparison of Commercial and Online Geographic Information System Databases

    PubMed Central

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-01-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database, however, when matching criteria were more conservative, there were no observed differences in error between the databases. PMID:23066385

  19. Field validation of food service listings: a comparison of commercial and online geographic information system databases.

    PubMed

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-08-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database, however, when matching criteria were more conservative, there were no observed differences in error between the databases.
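    The positional-error percentiles above come from comparing each database coordinate against its GPS-measured location. A sketch of that computation, using the haversine great-circle distance and invented coordinate pairs (not the study's data):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def quartiles(errors):
    """25th/50th/75th percentiles of positional error (nearest-rank method)."""
    s = sorted(errors)
    return tuple(s[max(0, math.ceil(q * len(s)) - 1)] for q in (0.25, 0.5, 0.75))

# Hypothetical (GIS, GPS) coordinate pairs; illustrative only.
pairs = [((44.2310, -76.4860), (44.2311, -76.4861)),
         ((44.2400, -76.5000), (44.2402, -76.5003))]
errs = [haversine_m(a[0], a[1], b[0], b[1]) for a, b in pairs]
print([round(e, 1) for e in errs])  # per-place positional errors in metres
```

Applied to all matched food service places, `quartiles(errs)` yields figures of the kind reported above (25% within ~15 m, 50% within 25 m, 75% within 50 m).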

  20. [The Brazilian Hospital Information System and the acute myocardial infarction hospital care].

    PubMed

    Escosteguy, Claudia Caminha; Portela, Margareth Crisóstomo; Medronho, Roberto de Andrade; de Vasconcellos, Maurício Teixeira Leite

    2002-08-01

    To analyze the applicability of the Brazilian Unified Health System's national hospital database to evaluate the quality of acute myocardial infarction hospital care. It was evaluated 1,936 hospital admission forms having acute myocardial infarction (AMI) as primary diagnosis in the municipal district of Rio de Janeiro, Brazil, in 1997. Data was collected from the national hospital database. A stratified random sampling of 391 medical records was also evaluated. AMI diagnosis agreement followed the literature criteria. Variable accuracy analysis was performed using kappa index agreement. The quality of AMI diagnosis registered in hospital admission forms was satisfactory according to the gold standard of the literature. In general, the accuracy of the variables demographics (sex, age group), process (medical procedures and interventions), and outcome (hospital death) was satisfactory. The accuracy of demographics and outcome variables was higher than the one of process variables. Under registration of secondary diagnosis was high in the forms and it was the main limiting factor. Given the study findings and the widespread availability of the national hospital database, it is pertinent its use as an instrument in the evaluation of the quality of AMI medical care.
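    The kappa index used above measures agreement between two codings of the same records beyond what chance would produce. A minimal implementation of Cohen's kappa, with invented example codings (e.g. hospital death yes/no on ten records), not the study's data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical codings of the same records:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b) and a
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                    # observed
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)   # by chance
    return (po - pe) / (1 - pe)

# Illustrative codings: hospital admission forms vs. medical records.
forms  = ["y", "y", "n", "n", "n", "y", "n", "n", "y", "n"]
charts = ["y", "y", "n", "n", "y", "y", "n", "n", "y", "n"]
print(round(cohens_kappa(forms, charts), 2))  # 0.8: substantial agreement
```

In the study, kappa computed this way per variable is what supports the finding that demographic and outcome variables agreed better than process variables.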

  1. Fuel conditioning facility zone-to-zone transfer administrative controls.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C. L.

    2000-06-21

    The administrative controls associated with transferring containers from one criticality hazard control zone to another in the Argonne National Laboratory (ANL) Fuel Conditioning Facility (FCF) are described. FCF, located at the ANL-West site near Idaho Falls, Idaho, is used to remotely process spent sodium-bonded metallic fuel for disposition. The process involves nearly forty widely varying material forms and types, over fifty specific-use container types, and over thirty distinct zones where work activities occur. During 1999, over five thousand transfers from one zone to another were conducted. Limits are placed on mass, material form and type, and container types for each zone. All material and containers are tracked using the Mass Tracking System (MTG). The MTG uses an Oracle database and numerous applications to manage the database. The database stores information specific to the process, including material composition and mass, container identification number and mass, transfer history, and the operators involved in each transfer. The process is controlled using written procedures which specify the zone, containers, and material involved in a task. Transferring a container from one zone to another is called a zone-to-zone transfer (ZZT). ZZTs consist of four distinct phases: select, request, identify, and completion.
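    The per-zone limit check at the heart of such a control can be sketched as below. The zone names, container types, and mass limits are entirely hypothetical; FCF's actual criticality limits are not reproduced here.

```python
# Hypothetical zone limits for a zone-to-zone transfer (ZZT) check.
# Zones, container types, and masses are illustrative, not FCF's values.
ZONE_LIMITS = {
    "Z-12": {"max_mass_g": 500.0, "allowed_containers": {"can-A", "can-B"}},
    "Z-30": {"max_mass_g": 200.0, "allowed_containers": {"can-A"}},
}

def zzt_allowed(zone, container_type, container_mass_g, zone_current_mass_g):
    """Approve a transfer only if the container type is permitted in the
    destination zone and the zone's mass limit would not be exceeded."""
    limits = ZONE_LIMITS[zone]
    if container_type not in limits["allowed_containers"]:
        return False
    return zone_current_mass_g + container_mass_g <= limits["max_mass_g"]

print(zzt_allowed("Z-30", "can-A", 150.0, 40.0))  # 190 g <= 200 g limit
print(zzt_allowed("Z-30", "can-B", 10.0, 0.0))    # container type not allowed
```

In the described system this evaluation would run against the Oracle-backed tracking database during the request phase of the four-phase ZZT, before any container physically moves.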

  2. Establishment of an Italian chronic migraine database: a multicenter pilot study.

    PubMed

    Barbanti, Piero; Fofi, L; Cevoli, S; Torelli, P; Aurilia, C; Egeo, G; Grazzi, L; D'Amico, D; Manzoni, G C; Cortelli, P; Infarinato, F; Vanacore, N

    2018-05-01

    To optimize chronic migraine (CM) ascertainment and phenotype definition, provide adequate clinical management and health care procedures, and rationalize economic resource allocation, we performed an exploratory multicenter pilot study aimed at establishing a CM database, the first step toward developing a future Italian CM registry. We enrolled 63 consecutive CM patients in four tertiary headache centers, screened with face-to-face interviews using an ad hoc dedicated semi-structured questionnaire gathering detailed information on life-style, behavioral and socio-demographic factors, comorbidities, migraine features before and after chronicization, and healthcare resource use. Our pilot study provided useful insights, revealing that CM patients (1) presented in most cases with symptoms of peripheral trigeminal sensitization, a relatively unexpected feature that could be useful to unravel different CM endophenotypes and to predict responsiveness to trigeminal-targeted treatments; (2) had been frequently admitted to emergency departments; (3) had undergone, sometimes repeatedly, unnecessary or inappropriate investigations; (4) had rarely obtained illness benefit exemptions or disability allowances. We deem that the expansion of the database, shortly including many other Italian headache centers, will contribute to more precisely outlining CM endophenotypes, hence improving management, treatment, and economic resource allocation, ultimately reducing the CM burden on both patients and the health system.

  3. 48 CFR 2419.803-70 - Procedures for simplified acquisitions under the partnership agreement.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Business Administration Section (8)(a) Program 2419.803-70 Procedures for simplified acquisitions under the... are required. (2) The contracting officer will use the Central Contractor Registration (CCR) database...

  4. 48 CFR 2419.803-70 - Procedures for simplified acquisitions under the partnership agreement.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Business Administration Section (8)(a) Program 2419.803-70 Procedures for simplified acquisitions under the... are required. (2) The contracting officer will use the Central Contractor Registration (CCR) database...

  5. Onco-STS: a web-based laboratory information management system for sample and analysis tracking in oncogenomic experiments.

    PubMed

    Gavrielides, Mike; Furney, Simon J; Yates, Tim; Miller, Crispin J; Marais, Richard

    2014-01-01

    Whole genomes, whole exomes and transcriptomes of tumour samples are sequenced routinely to identify the drivers of cancer. The systematic sequencing and analysis of tumour samples, as well as other oncogenomic experiments, necessitates the tracking of relevant sample information throughout the investigative process. These meta-data of the sequencing and analysis procedures include information about the samples and projects as well as the sequencing centres, platforms, data locations, results locations, alignments, analysis specifications and further information relevant to the experiments. The current work presents a sample tracking system for oncogenomic studies (Onco-STS) to store these data and make them easily accessible to the researchers who work with the samples. The system is a web application, which includes a database and a front-end web page that allows the remote access, submission and updating of the sample data in the database. The web application development framework Grails was used for the development and implementation of the system. The resulting Onco-STS solution is efficient, secure and easy to use and is intended to replace the manual data handling of text records. Onco-STS allows simultaneous remote access to the system, making collaboration among researchers more effective. The system stores both information on the samples in oncogenomic studies and details of the analyses conducted on the resulting data. Onco-STS is based on open-source software, is easy to develop and can be modified according to a research group's needs. Hence it is suitable for laboratories that do not require a commercial system.

  6. 16 CFR 1102.24 - Designation of confidential information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...

  7. 16 CFR § 1102.42 - Disclaimers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...

  8. 49 CFR 535.8 - Reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... information. (2) Manufacturers must submit information electronically through the EPA database system as the... year 2012 the agencies are not prepared to receive information through the EPA database system... applications for certificates of conformity in accordance through the EPA database including both GHG emissions...

  9. MIPS: analysis and annotation of proteins from whole genomes in 2005

    PubMed Central

    Mewes, H. W.; Frishman, D.; Mayer, K. F. X.; Münsterkötter, M.; Noubibou, O.; Pagel, P.; Rattei, T.; Oesterheld, M.; Ruepp, A.; Stümpflen, V.

    2006-01-01

    The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system are maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information enabling the representation of complex information such as functional modules or networks Genome Research Environment System, (ii) the development of databases covering computable information such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix and the CABiNet network analysis framework and (iii) the compilation and manual annotation of information related to interactions such as protein–protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de). PMID:16381839

  10. MIPS: analysis and annotation of proteins from whole genomes in 2005.

    PubMed

    Mewes, H W; Frishman, D; Mayer, K F X; Münsterkötter, M; Noubibou, O; Pagel, P; Rattei, T; Oesterheld, M; Ruepp, A; Stümpflen, V

    2006-01-01

    The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system is maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene-centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines: (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information, enabling the representation of complex information such as functional modules or networks (Genome Research Environment System); (ii) the development of databases covering computable information, such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix, and the CABiNet network analysis framework; and (iii) the compilation and manual annotation of information related to interactions, such as protein-protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de).

  11. Automating Information Discovery Within the Invisible Web

    NASA Astrophysics Data System (ADS)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.

  12. Ten years' work on the World Organisation for Animal Health (OIE) Worldwide Animal Disease Notification System.

    PubMed

    Jebara, Karim Ben; Cáceres, Paula; Berlingieri, Francesco; Weber-Vintzel, Laure

    2012-12-01

    This article gives an overview of the World Organisation for Animal Health (OIE) Worldwide Animal Disease Notification System and highlights the major achievements of the past decade. It describes the different types of disease notification reports received and processed by the OIE. It also evaluates the three strategies implemented by the OIE in recent years to improve disease notification: the introduction and use of a secure online notification system, the World Animal Health Information System (WAHIS), and its database interface, the World Animal Health Information Database (WAHID); the implementation of active search and verification procedures for non-official information; and enhanced capacity building for animal disease notification to the OIE by Member Countries. The improvements are evidenced by the increasing number of reports submitted on an annual basis and the reduction in submission time, together with an improvement in the quality and quantity of the immediate notifications and follow-up reports, six-monthly reports, and annual reports submitted by Veterinary Authorities. In recent years, the OIE's notification system has provided a more sensitive and global early warning system. Consequently, there is greater knowledge of the worldwide distribution of animal diseases. As a result, it is possible to ensure better prevention and more accurate risk assessment and evaluation, diminishing the spread of known or newly emerging pathogens. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Fisher information framework for time series modeling

    NASA Astrophysics Data System (ADS)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time-independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction cases are examined employing time series obtained from (i) the Mackey-Glass delay-differential equation, (ii) one ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) one ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.
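
    As an illustrative aside, the delay-coordinate (Takens) embedding that underlies such prediction schemes can be sketched in a few lines. The `delay_embed` name and parameters below are hypothetical, and the paper's Fisher-information machinery is not reproduced here:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens-style delay embedding: row t is
    (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy usage: a short series embedded with dimension 3 and delay 2.
series = np.arange(10.0)
emb = delay_embed(series, dim=3, tau=2)
```

    The embedded vectors would then serve as the regressors on which a working hypothesis of the kind described is fitted.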

  14. Burnet Project

    PubMed Central

    Masellis, A.; Atiyeh, B.

    2009-01-01

    Summary The BurNet project, a pilot project of the Eumedis initiative, has become a reality. The Eumedis (EUro MEDiterranean Information Society) initiative is part of the MEDA programme of the EU to develop the Information Society in the Mediterranean area. In the health care sector, the objective of Eumedis is the deployment of network-based solutions to interconnect - using user-friendly and affordable solutions - the actors at all levels of the "health care system" of the Euro-Mediterranean region. The BurNet project interconnects 17 Burn Centres (BC) in the Mediterranean Area through an information network, both to standardize courses of action in the field of prevention, treatment, and functional and psychological rehabilitation of burn patients and to coordinate interactions between BC and emergency rooms in peripheral hospitals, using training/information activities and telemedicine to optimize first aid provided to burn patients before referral to a BC. Shared procedure protocols for prevention and the care and rehabilitation of patients, both at individual and mass level, will help to create an international specialized database and a Web-based teleconsultation system. PMID:21991176

  15. SORTEZ: a relational translator for NCBI's ASN.1 database.

    PubMed

    Hart, K W; Searls, D B; Overton, G C

    1994-07-01

    The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase) where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
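
    The nested-to-relational transformation that SORTEZ automates can be illustrated with a minimal sketch. The `flatten` helper and the sample record below are invented for illustration and do not reflect SORTEZ's actual schema rules or real ASN.1 handling:

```python
# Hypothetical sketch: flatten one nested dict (as a parsed ASN.1-like
# record might look) into rows accumulated per table.
def flatten(record, parent_table, parent_id, tables):
    """Scalars stay in the parent row; nested dicts and lists become
    rows in child tables carrying a `parent_id` foreign key."""
    row = {"id": parent_id}
    for key, value in record.items():
        if isinstance(value, dict):
            child_table = f"{parent_table}_{key}"
            child_id = len(tables.setdefault(child_table, [])) + 1
            flatten(value, child_table, child_id, tables)
            tables[child_table][-1]["parent_id"] = parent_id
        elif isinstance(value, list):
            child_table = f"{parent_table}_{key}"
            for item in value:
                child_id = len(tables.setdefault(child_table, [])) + 1
                flatten(item, child_table, child_id, tables)
                tables[child_table][-1]["parent_id"] = parent_id
        else:
            row[key] = value
    tables.setdefault(parent_table, []).append(row)
    return tables

# Toy record loosely shaped like a sequence entry.
seq = {
    "accession": "X01234",
    "organism": {"name": "E. coli", "taxid": 562},
    "refs": [{"pmid": 1}, {"pmid": 2}],
}
tables = flatten(seq, "sequence", 1, {})
```

    As the abstract notes, this structural step automates well; deciding good table boundaries and converting values is where domain expertise enters.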

  16. EPA Facility Registry Service (FRS): RCRA

    EPA Pesticide Factsheets

    This web feature service contains location and facility identification information from EPA's Facility Registry Service (FRS) for the subset of hazardous waste facilities that link to the Resource Conservation and Recovery Act Information System (RCRAInfo). EPA's comprehensive information system in support of the Resource Conservation and Recovery Act (RCRA) of 1976 and the Hazardous and Solid Waste Amendments (HSWA) of 1984, RCRAInfo tracks many types of information about generators, transporters, treaters, storers, and disposers of hazardous waste. FRS identifies and geospatially locates facilities, sites or places subject to environmental regulations or of environmental interest. Using vigorous verification and data management procedures, FRS integrates facility data from EPA's national program systems, other federal agencies, and State and tribal master facility records and provides EPA with a centrally managed, single source of comprehensive and authoritative information on facilities. This data set contains the subset of FRS integrated facilities that link to RCRAInfo hazardous waste facilities once the RCRAInfo data has been integrated into the FRS database. Additional information on FRS is available at the EPA website https://www.epa.gov/enviro/facility-registry-service-frs

  17. Development and Implementation of Kumamoto Technopolis Regional Database T-KIND

    NASA Astrophysics Data System (ADS)

    Onoue, Noriaki

    T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching information on technology, human resources, and industries necessary to realize Kumamoto Technopolis. It is composed of a coded database, an image database, and a LAN inside the techno-research park which is the center of R & D in the Technopolis. It constructs an online system by networking general-purpose computers, minicomputers, optical disk file systems, and so on, and provides the service through the public telephone line. Two databases are now available, covering enterprise information and human resource information. The former covers about 4,000 enterprises, and the latter about 2,000 persons.

  18. The IEO Data Center Management System: Tools for quality control, analysis and access marine data

    NASA Astrophysics Data System (ADS)

    Casas, Antonia; Garcia, Maria Jesus; Nikouline, Andrei

    2010-05-01

    Since 1994 the Data Centre of the Spanish Oceanographic Institute has been developing systems for archiving and quality control of oceanographic data. The work started in the frame of the European Marine Science & Technology Programme (MAST) when a consortium of several Mediterranean Data Centres began to work on the MEDATLAS project. Over the years, old software modules for MS DOS were rewritten, improved, and migrated to the Windows environment. Oceanographic data quality control now includes not only vertical profiles (mainly CTD and bottle observations) but also time series of currents and sea level observations. New powerful routines for analysis and graphic visualization were added. Data presented originally in ASCII format were recently organized in an open source MySQL database. Nowadays, the IEO, as part of the SeaDataNet Infrastructure, has designed and developed a new information system, consistent with the ISO 19115 and SeaDataNet standards, in order to manage the large and diverse marine data and information originated in Spain by different sources, and to interoperate with SeaDataNet. The system works with data stored in ASCII files (MEDATLAS, ODV) as well as data stored within the relational database. The components of the system are: 1. MEDATLAS Format and Quality Control. - QCDAMAR: Quality Control of Marine Data, the main set of tools for working with data presented as text files. Includes extended quality control (searching for duplicated cruises and profiles; checking date, position, ship velocity, constant profiles, spikes, density inversion, sounding, acceptable data, impossible regional values, ...) and input/output filters. - QCMareas: A set of procedures for the quality control of tide gauge data according to the standard international Sea Level Observing System. These procedures include checking for unexpected anomalies in the time series, interpolation, filtering, and computation of basic statistics and residuals. 2. DAMAR: A relational database (MySQL) designed to manage the wide variety of marine information, such as common vocabularies, catalogues (CSR & EDIOS), data, and metadata. 3. Other tools for analysis and data management. - Import_DB: Script to import data and metadata from the MEDATLAS ASCII files into the database. - SelDamar/Selavi: Interface with the database for local and web access. Allows selective retrievals applying criteria introduced by the user, such as geographical bounds, data responsible, cruises, platform, time periods, etc. Also includes statistical reference-value calculation and plotting of original and mean profiles together with vertical interpolation. - ExtractDAMAR: Script to extract data archived in ASCII files that meet the criteria of a user request made through the SelDamar interface and export them in ODV format, also performing a unit conversion.
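
    Two of the profile checks mentioned (spike detection and density inversion) can be sketched as follows. This follows the common GTSPP-style spike criterion; the function names and the threshold are chosen for illustration and are not taken from the IEO system:

```python
def spike_flags(values, threshold=3.0):
    """GTSPP-style spike test: flag point i when
    |v[i] - (v[i-1]+v[i+1])/2| - |(v[i-1]-v[i+1])/2| exceeds threshold."""
    flags = [False] * len(values)
    for i in range(1, len(values) - 1):
        test = abs(values[i] - (values[i - 1] + values[i + 1]) / 2.0) \
            - abs((values[i - 1] - values[i + 1]) / 2.0)
        if test > threshold:
            flags[i] = True
    return flags

def density_inversion_flags(density):
    """Flag depths where density decreases with depth, assuming the
    list is ordered from shallow to deep."""
    return [False] + [density[i] < density[i - 1] for i in range(1, len(density))]

# Toy temperature profile with an obvious spike at index 2.
temps = [15.0, 14.8, 25.0, 14.5, 14.3]
flags = spike_flags(temps)
```

    Real QC suites layer many such tests (date, position, ship velocity, regional ranges) and record a quality flag per value rather than a boolean.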

  19. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

    The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are provided and users’ information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users’ information needs are applied, and the rationale for the use of classifiers.

  20. Rhode Island Water Supply System Management Plan Database (WSSMP-Version 1.0)

    USGS Publications Warehouse

    Granato, Gregory E.

    2004-01-01

    In Rhode Island, the availability of water of sufficient quality and quantity to meet current and future environmental and economic needs is vital to life and the State's economy. Water suppliers, the Rhode Island Water Resources Board (RIWRB), and other State agencies responsible for water resources in Rhode Island need information about available resources, the water-supply infrastructure, and water use patterns. These decision makers need historical, current, and future water-resource information. In 1997, the State of Rhode Island formalized a system of Water Supply System Management Plans (WSSMPs) to characterize and document relevant water-supply information. All major water suppliers (those that obtain, transport, purchase, or sell more than 50 million gallons of water per year) are required to prepare, maintain, and carry out WSSMPs. An electronic database for this WSSMP information has been deemed necessary by the RIWRB for water suppliers and State agencies to consistently document, maintain, and interpret the information in these plans. Availability of WSSMP data in standard formats will allow water suppliers and State agencies to improve the understanding of water-supply systems and to plan for future needs or water-supply emergencies. In 2002, however, the Rhode Island General Assembly passed a law that classifies some of the WSSMP information as confidential to protect the water-supply infrastructure from potential terrorist threats. Therefore the WSSMP database was designed for an implementation method that will balance security concerns with the information needs of the RIWRB, suppliers, other State agencies, and the public. A WSSMP database was developed by the U.S. Geological Survey in cooperation with the RIWRB. The database was designed to catalog WSSMP information in a format that would accommodate synthesis of current and future information about Rhode Island's water-supply infrastructure. 
This report documents the design and implementation of the WSSMP database. All WSSMP information in the database is, ultimately, linked to the individual water suppliers and to a WSSMP 'cycle' (which is currently a 5-year planning cycle for compiling WSSMP information). The database file contains 172 tables - 47 data tables, 61 association tables, 61 domain tables, and 3 example import-link tables. This database is currently implemented in the Microsoft Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. Design documentation facilitates current use and potential modification for future use of the database. The structure of the WSSMP database file (WSSMPv01.mdb), a data dictionary file (WSSMPDD1.pdf), a detailed database-design diagram (WSSMPPL1.pdf), and this database-design report (OFR2004-1231.pdf) together document the design of the database. This report includes a discussion of each WSSMP data structure with an accompanying database-design diagram. Appendix 1 of this report is an index of the diagrams in the report and on the plate; this index is organized by table name in alphabetical order. Each of these products is included in digital format on the enclosed CD-ROM to facilitate use or modification of the database.
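
    The core linkage described above (every data table keyed back to a supplier and a planning cycle) might be sketched as follows. All table and column names are invented for illustration and are not the actual WSSMP schema:

```python
import sqlite3

# Hypothetical minimal schema: a data table keyed to a supplier and a
# 5-year planning cycle, queried with a two-way join.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE supplier (supplier_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE cycle (cycle_id INTEGER PRIMARY KEY, start_year INTEGER);
CREATE TABLE wssmp_data (
    data_id INTEGER PRIMARY KEY,
    supplier_id INTEGER REFERENCES supplier(supplier_id),
    cycle_id INTEGER REFERENCES cycle(cycle_id),
    annual_use_mgal REAL
);
""")
cur.execute("INSERT INTO supplier VALUES (1, 'Example Water District')")
cur.execute("INSERT INTO cycle VALUES (1, 2000)")
cur.execute("INSERT INTO wssmp_data VALUES (1, 1, 1, 75.0)")
row = cur.execute("""
    SELECT s.name, c.start_year, d.annual_use_mgal
    FROM wssmp_data d
    JOIN supplier s USING (supplier_id)
    JOIN cycle c USING (cycle_id)
""").fetchone()
```

    The association and domain tables mentioned in the report would sit alongside such data tables to normalize repeated values.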

  1. User assumptions about information retrieval systems: Ethical concerns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Froehlich, T.J.

    Information professionals, whether designers, intermediaries, database producers, or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current, and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in the forms of author names, journal titles, and indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter; etc. The truth is that few of these assumptions are valid in commercial or corporate or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should intervene where possible to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals or systems have an obligation to provide some framework or context for the information that users are accessing.

  2. Managing security and privacy concerns over data storage in healthcare research.

    PubMed

    Mackenzie, Isla S; Mantay, Brian J; McDonnell, Patrick G; Wei, Li; MacDonald, Thomas M

    2011-08-01

    Issues surrounding data security and privacy are of great importance when handling sensitive health-related data for research. The emphasis in the past has been on balancing the risks to individuals with the benefit to society of the use of databases for research. However, a new way of looking at such issues is that by optimising procedures and policies regarding security and privacy of data to the extent that there is no appreciable risk to the privacy of individuals, we can create a 'win-win' situation in which everyone benefits, and pharmacoepidemiological research can flourish with public support. We discuss holistic measures, involving both information technology and people, taken to improve the security and privacy of data storage. After an internal review, we commissioned an external audit by an independent consultant with a view to optimising our data storage and handling procedures. Improvements to our policies and procedures were implemented as a result of the audit. By optimising our storage of data, we hope to inspire public confidence and hence cooperation with the use of health care data in research. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Evaluation of Factors Influencing Accuracy of Principal Procedure Coding Based on ICD-9-CM: An Iranian Study

    PubMed Central

    Farzandipour, Mehrdad; Sheikhtaheri, Abbas

    2009-01-01

    To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
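
    The odds-ratio-with-confidence-interval analysis mentioned above can be sketched as follows. The counts are hypothetical, not the study's data, and the Woolf (log) method is one standard way to form the interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for the 2x2 table [[a, b], [c, d]] with an
    approximate 95% CI by the Woolf (log) method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: accurate vs. inaccurate codes with and without
# tabular-index rechecking (not the study's actual data).
or_, lo, hi = odds_ratio_ci(83, 17, 72, 28)
```

    An interval that excludes 1 would indicate a statistically significant association between the practice and coding accuracy.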

  4. Formal ontology for natural language processing and the integration of biomedical databases.

    PubMed

    Simon, Jonathan; Dos Santos, Mariana; Fielding, James; Smith, Barry

    2006-01-01

    The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/].

  5. A New Approach To Secure Federated Information Bases Using Agent Technology.

    ERIC Educational Resources Information Center

    Weippi, Edgar; Klug, Ludwig; Essmayr, Wolfgang

    2003-01-01

    Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…

  6. Database Changes (Post-Publication). ERIC Processing Manual, Section X.

    ERIC Educational Resources Information Center

    Brandhorst, Ted, Ed.

    The purpose of this section is to specify the procedure for making changes to the ERIC database after the data involved have been announced in the abstract journals RIE or CIJE. As a matter of general ERIC policy, a document or journal article is not re-announced or re-entered into the database as a new accession for the purpose of accomplishing a…

  7. Database for earthquake strong motion studies in Italy

    USGS Publications Warehouse

    Scasserra, G.; Stewart, J.P.; Kayen, R.E.; Lanzo, G.

    2009-01-01

    We describe an Italian database of strong ground motion recordings and databanks delineating conditions at the instrument sites and characteristics of the seismic sources. The strong motion database consists of 247 corrected recordings from 89 earthquakes and 101 recording stations. Uncorrected recordings were drawn from public web sites and processed on a record-by-record basis using a procedure utilized in the Next-Generation Attenuation (NGA) project to remove instrument resonances, minimize noise effects through low- and high-pass filtering, and baseline correction. The number of available uncorrected recordings was reduced by 52% (mostly because of S-triggers) to arrive at the 247 recordings in the database. The site databank includes for every recording site the surface geology, a measurement or estimate of average shear wave velocity in the upper 30 m (Vs30), and information on instrument housing. Of the 89 sites, 39 have on-site velocity measurements (17 of which were performed as part of this study using SASW techniques). For the remaining sites, we estimate Vs30 based on measurements on similar geologic conditions where available. Where no local velocity measurements are available, correlations with surface geology are used. Source parameters are drawn from databanks maintained (and recently updated) by Istituto Nazionale di Geofisica e Vulcanologia and include hypocenter location and magnitude for small events (M ≲ 5.5) and finite source parameters for larger events. © 2009 A.S. Elnashai & N.N. Ambraseys.

  8. Building Inventory Database on the Urban Scale Using GIS for Earthquake Risk Assessment

    NASA Astrophysics Data System (ADS)

    Kaplan, O.; Avdan, U.; Guney, Y.; Helvaci, C.

    2016-12-01

    The majority of existing buildings in most developing countries are not safe against earthquakes. Before a devastating earthquake, existing buildings need to be assessed and the vulnerable ones must be identified. Determining the seismic performance of existing buildings, which usually involves collecting the attributes of existing buildings, performing the analysis and the necessary queries, and producing the result maps, is a hard and complicated procedure that can be simplified with a Geographic Information System (GIS). The aim of this study is to produce a building inventory database using GIS for assessing the earthquake risk of existing buildings. In this paper, a building inventory database for 310 buildings located in Eskisehir, Turkey, was produced in order to assess the earthquake risk of the buildings. The results from this study show that 26% of the buildings have high earthquake risk, 33% have medium earthquake risk, and 41% have low earthquake risk. The produced building inventory database can be very useful, especially for governments, in dealing with the problem of identifying seismically vulnerable buildings in large existing building stocks. With the help of such methods, identifying the buildings which may collapse and cause life and property loss during a possible future earthquake will be quick, cheap, and reliable.
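
    A GIS attribute workflow of the kind described might bin buildings into risk classes roughly as follows. The score thresholds and sample records are illustrative assumptions, not the paper's method:

```python
# Hypothetical risk-binning step for a GIS building inventory
# (thresholds and records invented for illustration).
def risk_class(score):
    """Map a 0-100 vulnerability score to a coarse risk class."""
    if score >= 70:
        return "high"
    if score >= 40:
        return "medium"
    return "low"

# Toy attribute records as they might come out of a GIS attribute table.
buildings = [{"id": 1, "score": 82}, {"id": 2, "score": 55}, {"id": 3, "score": 20}]
classes = {b["id"]: risk_class(b["score"]) for b in buildings}
```

    In a real inventory the score itself would be derived from surveyed attributes (structural system, story count, irregularities) before the binning step produces the result maps.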

  9. A Database of Historical Information on Landslides and Floods in Italy

    NASA Astrophysics Data System (ADS)

    Guzzetti, F.; Tonelli, G.

    2003-04-01

    For the past 12 years we have maintained and updated a database of historical information on landslides and floods in Italy, known as the National Research Council's AVI (Damaged Urban Areas) Project archive. The database was originally designed to respond to a specific request of the Minister of Civil Protection, and was aimed at helping the regional assessment of landslide and flood risk in Italy. The database was first constructed in 1991-92 to cover the period 1917 to 1990. Information on damaging landslide and flood events was collected by searching archives, by screening thousands of newspaper issues, by reviewing the existing technical and scientific literature on landslides and floods in Italy, and by interviewing landslide and flood experts. The database was then updated chiefly through the analysis of hundreds of newspaper articles, and it now covers systematically the period 1917 to 1998, and non-systematically the periods 1900 to 1916 and 1999 to 2002. Non-systematic information on landslide and flood events older than the 20th century is also present in the database. The database currently contains information on more than 32,000 landslide events occurred at more than 25,700 sites, and on more than 28,800 flood events occurred at more than 15,600 sites. After a brief outline of the history and evolution of the AVI Project archive, we present and discuss: (a) the present structure of the database, including the hardware and software solutions adopted to maintain, manage, use and disseminate the information stored in the database, (b) the type and amount of information stored in the database, including an estimate of its completeness, and (c) examples of recent applications of the database, including a web-based GIS system to show the location of sites historically affected by landslides and floods, and an estimate of geo-hydrological (i.e., landslide and flood) risk in Italy based on the available historical information.

  10. Mining a human transcriptome database for Nrf2 modulators

    EPA Science Inventory

    Nuclear factor erythroid-2 related factor 2 (Nrf2) is a key transcription factor important in the protection against oxidative stress. We developed computational procedures to enable the identification of chemical, genetic and environmental modulators of Nrf2 in a large database ...

  11. 77 FR 66617 - HIT Policy and Standards Committees; Workgroup Application Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... Database AGENCY: Office of the National Coordinator for Health Information Technology, HHS. ACTION: Notice of New ONC HIT FACA Workgroup Application Database. The Office of the National Coordinator (ONC) has launched a new Health Information Technology Federal Advisory Committee Workgroup Application Database...

  12. Lasers and losers in the eyes of the law: liability for head and neck procedures.

    PubMed

    Svider, Peter F; Carron, Michael A; Zuliani, Giancarlo F; Eloy, Jean Anderson; Setzen, Michael; Folbe, Adam J

    2014-01-01

Although some have noted that malpractice litigation may be "plateauing," defensive medical practices are pervasive and account for a considerable proportion of the "indirect" costs that medicolegal issues contribute to our health care system. Accordingly, these trends have spurred considerable interest in characterizing factors that play a role in alleged medical negligence, along with outcomes and awards. To conduct a focused examination of malpractice litigation regarding laser procedures in the head and neck and to determine the reasons for initiating litigation as well as outcomes and awards. Retrospective analysis of the WestlawNext legal database, encompassing publicly available federal and state court records, to identify malpractice cases involving laser procedures in the head and neck. Outcomes, awards, defendant specialty, and other allegations. Most cases (28 [82%]) included in this analysis involved female plaintiffs. Of 34 cases, 19 (56%) were resolved with a defendant verdict. The median indemnity was $150 000, and dermatologists, otolaryngologists, and plastic surgeons were the most commonly named defendants. The most common procedures were performed for age-related changes, acne scarring, hair removal, and vascular lesions, although there were also several rhinologic and airway cases. Of all cases, 25 (74%) involved cutaneous procedures, and common allegations included permanent injury (24 cases [71%]), disfigurement/scarring (23 [68%]), inadequate informed consent (17 [50%]), unnecessary/inappropriate procedure (15 [44%]), and burns (11 [32%]). Noncutaneous procedures showed a trend toward higher median payments ($600 000 vs $103 000), although this comparison did not reach statistical significance (P = .09). Procedures using lasers represent a potential target for malpractice litigation should an adverse event occur. Although cutaneous/cosmetic procedures were noted among the cases in this analysis, as well as other head and neck interventions, otolaryngologists were more likely to be named as defendants in the latter category. Although cases had modest indemnities compared with prior analyses, the potential for significant awards was present. Incorporating the specific factors detailed in this analysis into the informed consent process may decrease liability. In addition, physicians and patients should have a comprehensive discussion regarding expectations as well as contingencies should adverse events occur.

  13. E-MSD: an integrated data resource for bioinformatics.

    PubMed

    Velankar, S; McNeil, P; Mittard-Runte, V; Suarez, A; Barrell, D; Apweiler, R; Henrick, K

    2005-01-01

The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of an up-to-date and well-maintained mapping between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of sequence family databases such as Pfam and InterPro with the structure-oriented databases SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, InterPro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the 'Structure Integration with Function, Taxonomy and Sequences (SIFTS)' initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group.

  14. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground-motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
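The table layout described (earthquake, station, component, record, and reference tables, plus a denormalized "flat file" extracted for attenuation-relation work) can be sketched in miniature. All table and column names below are illustrative assumptions, not the actual NGA East schema:

```python
# Hypothetical sketch of a CEUS-style ground-motion database layout and
# the extraction of one flat-file row per record. Not the real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE earthquake (eq_id INTEGER PRIMARY KEY, origin_time TEXT,
                         magnitude REAL, stress_drop_bar REAL);
CREATE TABLE station    (sta_id INTEGER PRIMARY KEY, name TEXT,
                         vs30_mps REAL, site_geology TEXT);
CREATE TABLE record     (rec_id INTEGER PRIMARY KEY,
                         eq_id INTEGER REFERENCES earthquake(eq_id),
                         sta_id INTEGER REFERENCES station(sta_id),
                         epicentral_dist_km REAL, pga_g REAL);
""")
cur.execute("INSERT INTO earthquake VALUES (1, '2005-06-02T11:35', 4.2, 150.0)")
cur.execute("INSERT INTO station VALUES (1, 'MPH', 760.0, 'rock')")
cur.execute("INSERT INTO record VALUES (1, 1, 1, 42.0, 0.012)")

# Flat-file extraction: join the normalized tables into one row per record,
# the shape typically used for attenuation-relation development.
cur.execute("""
SELECT e.magnitude, s.vs30_mps, r.epicentral_dist_km, r.pga_g
FROM record r JOIN earthquake e ON r.eq_id = e.eq_id
              JOIN station s    ON r.sta_id = s.sta_id
""")
flat = cur.fetchall()
print(flat)  # [(4.2, 760.0, 42.0, 0.012)]
```

The point of the join is that the database stays normalized for maintenance while analysts work from a single wide table.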

  15. 75 FR 29155 - Publicly Available Consumer Product Safety Information Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ...The Consumer Product Safety Commission (``Commission,'' ``CPSC,'' or ``we'') is issuing a notice of proposed rulemaking that would establish a publicly available consumer product safety information database (``database''). Section 212 of the Consumer Product Safety Improvement Act of 2008 (``CPSIA'') amended the Consumer Product Safety Act (``CPSA'') to require the Commission to establish and maintain a publicly available, searchable database on the safety of consumer products, and other products or substances regulated by the Commission. The proposed rule would interpret various statutory requirements pertaining to the information to be included in the database and also would establish provisions regarding submitting reports of harm; providing notice of reports of harm to manufacturers; publishing reports of harm and manufacturer comments in the database; and dealing with confidential and materially inaccurate information.

  16. The CIS Database: Occupational Health and Safety Information Online.

    ERIC Educational Resources Information Center

    Siegel, Herbert; Scurr, Erica

    1985-01-01

    Describes document acquisition, selection, indexing, and abstracting and discusses online searching of the CIS database, an online system produced by the International Occupational Safety and Health Information Centre. This database comprehensively covers information in the field of occupational health and safety. Sample searches and search…

  17. Resident Cosmetic Clinic: Practice Patterns, Safety, and Outcomes at an Academic Plastic Surgery Institution.

    PubMed

    Qureshi, Ali A; Parikh, Rajiv P; Myckatyn, Terence M; Tenenbaum, Marissa M

    2016-10-01

Comprehensive aesthetic surgery education is an integral part of plastic surgery residency training. Recently, the ACGME increased the minimum requirements for aesthetic procedures in residency. To expand aesthetic education and prepare residents for independent practice, our institution has supported a resident cosmetic clinic for over 25 years. To evaluate the safety of procedures performed through a resident clinic by comparing outcomes to benchmarked national aesthetic surgery outcomes, and to provide a model for resident clinics in academic plastic surgery institutions. We identified a consecutive cohort of patients who underwent procedures through our resident cosmetic clinic between 2010 and 2015. Major complications, as defined by the CosmetAssure database, were recorded and compared to published aesthetic surgery complication rates from the CosmetAssure database for outcomes benchmarking. Fisher's exact test was used to compare sample proportions. Two hundred and seventy-one new patients were evaluated, and 112 patients (41.3%) booked surgery for 175 different aesthetic procedures. There were 55 breast, 19 head and neck, and 101 trunk or extremity aesthetic procedures performed. The median numbers of preoperative and postoperative visits were 2 and 4, respectively, with a mean follow-up time of 35 weeks. There were 3 major complications (2 hematomas and 1 infection requiring IV antibiotics), for an overall complication rate of 1.7% compared with 2.0% for patients in the CosmetAssure database (P = .45). Surgical outcomes for procedures performed through a resident cosmetic clinic are comparable to national outcomes for aesthetic surgery procedures, suggesting this experience can enhance comprehensive aesthetic surgery education without compromising patient safety or quality of care.
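The comparison of sample proportions mentioned above (Fisher's exact test on a 2x2 table) can be sketched from first principles with the hypergeometric distribution. The benchmark counts below (200/10000 for a 2.0% rate) are invented for illustration, since the abstract reports only the rate:

```python
# Two-sided Fisher's exact test for a 2x2 table, built from math.comb.
# A sketch, not a replacement for a vetted statistics library.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """p-value for the table [[a, b], [c, d]]: sum the hypergeometric
    probabilities of all tables with the same margins that are no more
    likely than the observed one."""
    n = a + b + c + d
    r1, c1 = a + b, a + c          # fixed row/column margins
    def p(x):                      # hypergeometric probability of cell x
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    # small tolerance so exact float ties are counted as ties
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-12))

# Illustrative comparison: 3 complications in 175 clinic procedures vs a
# hypothetical benchmark sample of 200 in 10000 (2.0%).
p_value = fisher_exact_two_sided(3, 172, 200, 9800)
```

Because the observed 1.7% rate sits close to the 2.0% benchmark, the resulting p-value is large, consistent with the non-significant comparison the abstract reports.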

  18. Combining new technologies for effective collection development: a bibliometric study using CD-ROM and a database management program.

    PubMed Central

    Burnham, J F; Shearer, B S; Wall, J C

    1992-01-01

    Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology--specifically, CD-ROM databases and reprint file database management programs--new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424

  19. 15 years of monitoring occupational exposure to respirable dust and quartz within the European industrial minerals sector.

    PubMed

    Zilaout, Hicham; Vlaanderen, Jelle; Houba, Remko; Kromhout, Hans

    2017-07-01

In 2000, a prospective Dust Monitoring Program (DMP) was started in which measurements of workers' exposure to respirable dust and quartz are collected in member companies of the European Industrial Minerals Association (IMA-Europe). After 15 years, the resulting IMA-DMP database allows a detailed overview of exposure levels of respirable dust and quartz over time within this industrial sector. Our aim is to describe the IMA-DMP and the current state of the corresponding database, which continues to grow as the program proceeds. The future use of the database is also highlighted, including its utility for the industrial minerals producing sector. Exposure data are obtained following a common protocol comprising a standardized sampling strategy, standardized sampling and analytical methods, and a data management system. Following strict quality-control procedures, exposure data are added to a central database. The data comprise personal exposure measurements together with auxiliary information on work and other conditions during sampling. Currently, the IMA-DMP database consists of almost 28,000 personal measurements performed from 2000 until 2015, representing 29 half-yearly sampling campaigns. The exposure data have been collected from 160 different worksites owned by 35 industrial mineral companies in 23 European countries and cover approximately 5000 workers. The IMA-DMP database provides the European minerals sector with reliable data on workers' personal exposure to respirable dust and quartz. The database can be used as a powerful tool to address outstanding scientific issues on long-term exposure trends and exposure variability and, importantly, as a surveillance tool to evaluate exposure control measures. The database will be valuable for future epidemiological studies on respiratory health effects and will allow estimation of quantitative exposure-response relationships.

  20. [The 'Beijing clinical database' on severe acute respiratory syndrome patients: its design, process, quality control and evaluation].

    PubMed

    2004-04-01

To develop a large database on the clinical presentation, treatment, and prognosis of all clinically diagnosed severe acute respiratory syndrome (SARS) cases in Beijing during the 2003 "crisis," in order to conduct further clinical studies. The database was designed by specialists under the organization of the Beijing Commanding Center for SARS Treatment and Cure, and includes 686 data items in six sub-databases: primary medical-care seeking, vital signs, common symptoms and signs, treatment, laboratory and auxiliary tests, and cost. All hospitals that received SARS inpatients were involved in the project. Clinical data were transferred and coded by trained doctors, and data entry was carried out by trained nurses, according to a uniform protocol. A series of procedures was completed before the database was finalized, including programmed logic checking, digit-by-digit checks on a 5% random sample, data linkage for transferred cases, coding of characterized information, database structure standardization, case review by computer program according to the SARS Clinical Diagnosis Criteria issued by the Ministry of Health, and exclusion of unqualified patients. The database comprises 2148 probable SARS cases meeting the clinical diagnosis criteria, including 1291 with complete records. All cases and record-complete cases showed an almost identical distribution in sex, age, occupation, residence area, and time of onset. The completion rate of data was not significantly different between the two groups except for some items on primary medical-care seeking. Specifically, the data completion rate was 73%-100% for primary medical-care seeking, 90% for common symptoms and signs, 100% for treatment, 98% for temperature, 90% for pulse, 100% for outcomes, and 98% for in-hospital costs. The case coverage of the Beijing Clinical Database of SARS Patients was fairly complete. Cases with complete records can serve as excellent representatives of all cases, and the completeness of data was satisfactory for primary clinical items, allowing further clinical studies.
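Two of the quality-control steps described, programmed logic checking and drawing a 5% random sample for digit-by-digit verification, can be sketched as follows. Field names, value ranges, and the toy records are invented for illustration:

```python
# Hedged sketch of automated record QC: rule-based logic checks plus a
# 5% random sample for manual verification. All fields are hypothetical.
import random

records = [
    {"id": 1, "temperature_c": 38.5, "outcome": "recovered"},
    {"id": 2, "temperature_c": 52.0, "outcome": "recovered"},
]

def logic_check(rec):
    """Return a list of rule violations for one record."""
    errors = []
    if not 30.0 <= rec["temperature_c"] <= 45.0:  # plausible body temperature
        errors.append("temperature out of range")
    if rec["outcome"] not in {"recovered", "died", "transferred"}:
        errors.append("unknown outcome code")
    return errors

# Collect only the records that fail at least one rule.
flagged = {r["id"]: logic_check(r) for r in records if logic_check(r)}

# Draw a 5% random sample (at least one record) for manual checking.
random.seed(0)
sample = random.sample(records, max(1, round(0.05 * len(records))))
```

Running the checks flags record 2 for its implausible temperature while record 1 passes, mirroring how programmed logic checking isolates entries for review.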

  1. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  2. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  3. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  4. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  5. 48 CFR 804.1102 - Vendor Information Pages (VIP) Database.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...

  6. Alternative treatment technology information center computer database system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, D.

    1995-10-01

The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) the treatment technology database, which contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods, with the best literature as viewed by experts highlighted; (2) the treatability study database, which provides performance information on technologies to remove contaminants from wastewaters and soils, derived from treatability studies, and is available through ATTIC or separately on a disk that can be mailed; (3) the underground storage tank database, which presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions; and (4) the oil/chemical spill database, which provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). Users may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.

  7. COM1/348: Design and Implementation of a Portal for the Market of the Medical Equipment (MEDICOM)

    PubMed Central

    Palamas, S; Vlachos, I; Panou-Diamandi, O; Marinos, G; Kalivas, D; Zeelenberg, C; Nimwegen, C; Koutsouris, D

    1999-01-01

Introduction The MEDICOM system provides the electronic means for medical equipment manufacturers to communicate online with their customers, supporting the purchasing process and post-market surveillance. The MEDICOM service will be provided over the Internet by the MEDICOM Portal and by a set of distributed subsystems dedicated to handling structured information related to medical devices. There are three kinds of subsystems: the Hypermedia Medical Catalogue (HMC); the Virtual Medical Exhibition (VME), which contains information in the form of virtual models; and the Post Market Surveillance system (PMS). The Universal Medical Devices Nomenclature System (UMDNS) is used to register all products. This work was partially funded by the ESPRIT Project 25289 (MEDICOM). Methods The Portal provides the end-user interface, acts as the yellow pages for finding both products and providers, provides links to the providers' servers, implements system management, and supports subsystem database compatibility. The Portal hosts a database system composed of two parts: (a) the Common Database, which describes a set of encoded parameters (such as supported languages, geographic regions, UMDNS codes, etc.) common to all subsystems, and (b) the Short Description Database, which contains summarized descriptions of medical devices, including a text description, the manufacturer codes, the UMDNS code, attribute values, and links to the corresponding HTML pages of the HMC, VME, and PMS servers. The Portal provides the MEDICOM user interface, including services such as end-user profiling and registration, end-user query forms, creation and hosting of newsgroups, links to online libraries, end-user subscription to manufacturers' mailing lists, online information on the MEDICOM system, and special messages or advertisements from manufacturers. Results Platform independence and interoperability characterize the system design. A general-purpose RDBMS is used for the implementation of the databases. The end-user interface is implemented using HTML and Java applets, while the subsystem administration applications are developed in Java. The JDBC interface is used to provide database access to these applications. Communication between subsystems is implemented using CORBA objects, and Java servlets are used in subsystem servers for the activation of remote operations. Discussion In the second half of 1999, the MEDICOM Project will enter the phase of evaluation and pilot operation. The expected benefit of the MEDICOM system is the establishment of a globally accessible marketplace connecting providers and health care professionals. The latter will gain easy access to up-to-date, high-quality product information, while marketing procedures and after-sales support will become more efficient.

  8. Integrative neuroscience: the role of a standardized database.

    PubMed

    Gordon, E; Cooper, N; Rennie, C; Hermens, D; Williams, L M

    2005-04-01

Most brain-related databases bring together specialized information, with a growing number that include neuroimaging measures. This article outlines the potential use of, and insights from, the first entirely standardized and centralized database, which integrates information from neuroimaging measures (EEG, event-related potential (ERP), structural/functional MRI), arousal (skin conductance responses (SCRs), heart rate, respiration), neuropsychological and personality tests, genomics, and demographics: the Brain Resource International Database. It comprises data from over 2000 "normative" subjects and a growing number of patients with neurological and psychiatric illnesses, acquired from over 50 laboratories (in the USA, United Kingdom, Holland, South Africa, Israel, and Australia), all with identical equipment and experimental procedures. Three primary goals of this database are to quantify individual differences in normative brain function, to compare an individual's performance to their database peers, and to provide a robust normative framework for clinical assessment and treatment prediction. We present three example demonstrations in relation to these goals. First, we show how consistent age differences may be quantified when large subject numbers are available, using EEG and ERP data from nearly 2000 stringently screened normative subjects. Second, the use of a normalization technique provides a means to compare clinical subjects (50 ADHD subjects in this study) to the normative database with the effects of age and gender taken into account. Third, we show how a profile of EEG/ERP and autonomic measures potentially provides a means to predict treatment response in ADHD subjects. The example data consist of EEG under eyes-open and eyes-closed conditions and ERP data for auditory oddball, working memory, and Go-NoGo paradigms. Autonomic measures of skin conductance (tonic skin conductance level, SCL, and phasic skin conductance responses, SCRs) were acquired simultaneously with central EEG/ERP measures. The findings show that the power of large samples, tested using standardized protocols, allows for the quantification of individual differences that can subsequently be used to control such variation and to enhance the sensitivity and specificity of comparisons between normative and clinical groups. In terms of broader significance, the combination of size and multidimensional measures tapping the brain's core cognitive competencies may provide a normative, evidence-based framework for individually based assessments in "personalized medicine."
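The normalization idea described, comparing an individual to database peers with age and gender taken into account, reduces in its simplest form to a z-score against matched peers. The data, field names, and 5-year age band below are invented for illustration:

```python
# Minimal sketch of peer-matched normalization: z-score an individual's
# measure against normative entries of the same gender within an age band.
from statistics import mean, stdev

normative = [  # (age, gender, erp_amplitude_uv) - toy normative entries
    (25, "F", 10.0), (27, "F", 12.0), (26, "F", 11.0), (24, "F", 13.0),
    (55, "M", 7.0),  (57, "M", 8.0),
]

def z_against_peers(value, age, gender, band=5):
    """z-score of `value` relative to same-gender peers within `band` years."""
    peers = [v for (a, g, v) in normative if g == gender and abs(a - age) <= band]
    return (value - mean(peers)) / stdev(peers)

z = z_against_peers(14.0, 26, "F")  # compared against the four young-F peers
```

A large |z| flags the individual as atypical relative to their demographic peers, which is the basis for the clinical comparisons the database supports.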

  9. Potentials of Advanced Database Technology for Military Information Systems

    DTIC Science & Technology

    2001-04-01

UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010866. TITLE: Potentials of Advanced Database Technology for Military Information Systems. Sunil Choenni and Ben Bruggeman, National Aerospace Laboratory, NLR, P.O. Box 90502, 1006 BM Amsterdam. The paper considers the application of advanced information technology, including database technology, as underpinning for military information systems.

  10. The Dutch Hospital Standardised Mortality Ratio (HSMR) method and cardiac surgery: benchmarking in a national cohort using hospital administration data versus a clinical database

    PubMed Central

    Siregar, S; Pouw, M E; Moons, K G M; Versteegh, M I M; Bots, M L; van der Graaf, Y; Kalkman, C J; van Herwerden, L A; Groenwold, R H H

    2014-01-01

Objective To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database, and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking mortality across hospitals. Methods Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted from The Netherlands Association for Cardio-Thoracic Surgery database and the Hospital Discharge Registry. The number of cardiac surgery interventions was compared between both databases. The European System for Cardiac Operative Risk Evaluation and hospital standardised mortality ratio models were updated in the study population and compared using the C-statistic, calibration plots and the Brier score. Results The number of cardiac surgery interventions performed could not be assessed using the administrative database, as the intervention code was incorrect in 1.4–26.3% of cases, depending on the type of intervention. In 7.3% of cases no intervention code was registered. The updated administrative model was inferior to the updated clinical model with respect to discrimination (C-statistic of 0.77 vs 0.85, p<0.001) and calibration (Brier score of 2.8% vs 2.6%, p<0.001, maximum score 3.0%). Two average-performing hospitals according to the clinical model became outliers when benchmarking was performed using the administrative model. Conclusions In cardiac surgery, administrative data are less suitable than clinical data for the purpose of benchmarking. The use of either administrative or clinical risk-adjustment models can affect the outlier status of hospitals. Risk-adjustment models including procedure-specific clinical risk factors are recommended. PMID:24334377
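The two model-comparison measures used in this abstract are straightforward to compute: the Brier score is the mean squared error of the predicted probabilities, and the C-statistic is the probability that a randomly chosen death received a higher predicted risk than a randomly chosen survivor. The predictions below are toy values, not study data:

```python
# Sketch of the Brier score and C-statistic (concordance / AUC) on toy
# mortality predictions. Illustrative only.
def brier(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def c_statistic(probs, outcomes):
    """Fraction of (death, survivor) pairs ranked correctly; ties count 0.5."""
    pos = [p for p, y in zip(probs, outcomes) if y == 1]
    neg = [p for p, y in zip(probs, outcomes) if y == 0]
    pairs = [1.0 if pp > pn else 0.5 if pp == pn else 0.0
             for pp in pos for pn in neg]
    return sum(pairs) / len(pairs)

probs    = [0.9, 0.8, 0.3, 0.2, 0.1]   # hypothetical predicted risks
outcomes = [1,   1,   0,   1,   0]     # hypothetical observed deaths
b = brier(probs, outcomes)
c = c_statistic(probs, outcomes)
```

Lower Brier scores and higher C-statistics indicate better models, which is why the clinical model (0.85 vs 0.77 discrimination) was judged superior in the study.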

  11. 48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...

  12. 48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...

  13. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    ERIC Educational Resources Information Center

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-drive information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
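The approach this entry describes, retrieval implemented entirely through relational queries with a relevance-feedback step, can be illustrated in miniature. The schema, documents, and raw term-frequency scoring below are invented simplifications (the actual engine adheres to SQL-92 and uses proper weighting):

```python
# Toy database-driven retrieval: an inverted index in a relational table,
# ranking in SQL, and feedback that expands the query with terms from a
# document the user marked relevant. Illustrative, not the cited system.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posting (term TEXT, doc_id INTEGER, tf INTEGER);
INSERT INTO posting VALUES
  ('database', 1, 3), ('retrieval', 1, 1),
  ('database', 2, 1), ('feedback', 2, 2),
  ('parallel', 3, 2), ('database', 3, 1);
""")

def rank(terms):
    """Score documents by summed term frequency over the query terms."""
    placeholders = ",".join("?" * len(terms))
    return conn.execute(
        f"SELECT doc_id, SUM(tf) AS score FROM posting "
        f"WHERE term IN ({placeholders}) "
        f"GROUP BY doc_id ORDER BY score DESC", terms).fetchall()

first_pass = rank(["database"])          # doc 1 ranks first (tf = 3)
# Feedback: the user marks doc 2 relevant, so fold its terms into the query.
fed_back = [t for (t,) in conn.execute(
    "SELECT DISTINCT term FROM posting WHERE doc_id = 2")]
second_pass = rank(list({"database", *fed_back}))
```

After feedback, doc 2 rises because the expanded query now also matches its 'feedback' postings, which is the basic effect relevance feedback exploits.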

  14. 77 FR 47690 - 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-09

    ... DEPARTMENT OF STATE [Public Notice 7976] 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096.... Title of Information Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB...

  15. Integrated Primary Care Information Database (IPCI)

    Cancer.gov

The Integrated Primary Care Information Database is a longitudinal observational database created specifically for pharmacoepidemiological and pharmacoeconomic studies, including data from computer-based patient records supplied voluntarily by general practitioners.

  16. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool for the complete, structured download of information from relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
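
    The key to the memory savings claimed above is streaming: rows are written to the flat file as they come off the cursor, so only one row is ever held in memory. A minimal sketch of that idea (the table schema and element names here are hypothetical, and this is Python rather than YAdumper's Java):

    ```python
    import io
    import sqlite3
    from xml.sax.saxutils import escape

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE protein (id INTEGER, name TEXT)")
    con.executemany("INSERT INTO protein VALUES (?, ?)", [(1, "p53"), (2, "BRCA1")])

    def dump_table(con, table, out):
        cur = con.execute(f"SELECT * FROM {table}")
        cols = [d[0] for d in cur.description]  # column names drive the XML tags
        out.write(f"<{table}_list>\n")
        for row in cur:  # iterate the cursor: one row in memory at a time
            cells = "".join(
                f"<{col}>{escape(str(val))}</{col}>" for col, val in zip(cols, row)
            )
            out.write(f"  <{table}>{cells}</{table}>\n")
        out.write(f"</{table}_list>\n")

    buf = io.StringIO()
    dump_table(con, "protein", buf)
    print(buf.getvalue())
    ```

    Writing through a file object rather than building a DOM tree is what keeps memory flat regardless of table size.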

  17. Malpractice risk and cost are significantly reduced after tort reform.

    PubMed

    Stewart, Ronald M; Geoghegan, Kathy; Myers, John G; Sirinek, Kenneth R; Corneille, Michael G; Mueller, Deborah; Dent, Daniel L; Wolf, Steven E; Pruitt, Basil A

    2011-04-01

    Rising medical malpractice premiums have reached a crisis point in many areas of the United States. In 2003 the Texas legislature passed a comprehensive package of tort reform laws that included a cap at $250,000 on noneconomic damages in most medical malpractice cases. We hypothesized that tort reform laws significantly reduce the risk of malpractice lawsuit in an academic medical center. We compared malpractice prevalence, incidence, and liability costs before and after comprehensive state tort reform measures were implemented. Two prospectively maintained institutional databases were used to calculate and characterize malpractice risk: a surgical operation database and a risk management and malpractice database. Risk groups were divided into pretort reform (1992 to 2004) and post-tort reform groups (2004 to the present). Operative procedures were included for elective, urgent, and emergency general surgery procedures. During the study period, 98,513 general surgical procedures were performed. A total of 28 lawsuits (25 pre-reform, 3 postreform) were filed, naming general surgery faculty or residents. The prevalence of lawsuits filed/100,000 procedures performed is as follows: before reform, 40 lawsuits/100,000 procedures, and after reform, 8 lawsuits/100,000 procedures (p < 0.01, relative risk 0.21 [95% CI 0.063 to 0.62]). Virtually all of the liability and defense cost was in the pretort reform period: $595,000/year versus $515/year in the postreform group (p < 0.01). Implementation of comprehensive tort reform in Texas was associated with a significant decrease in the prevalence and cost of surgical malpractice lawsuits at one academic medical center. Copyright © 2011. Published by Elsevier Inc.
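
    The quoted rates and relative risk can be reproduced with a back-of-envelope calculation. The denominators below (roughly 62,500 pre-reform and 37,500 post-reform procedures) are inferred from the quoted per-100,000 rates, not stated in the abstract, and the confidence interval uses the standard log-RR normal approximation:

    ```python
    import math

    pre_suits, pre_procs = 25, 62_500      # assumed pre-reform denominator
    post_suits, post_procs = 3, 37_500     # assumed post-reform denominator

    pre_rate = pre_suits / pre_procs * 100_000    # lawsuits per 100,000 procedures
    post_rate = post_suits / post_procs * 100_000
    rr = (post_suits / post_procs) / (pre_suits / pre_procs)

    # 95% CI for the relative risk via the log-RR normal approximation
    se = math.sqrt(1/post_suits - 1/post_procs + 1/pre_suits - 1/pre_procs)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    print(f"{pre_rate:.0f} vs {post_rate:.0f} per 100,000; "
          f"RR {rr:.2f} (95% CI {lo:.3f}-{hi:.2f})")
    ```

    With these assumed denominators the calculation lands close to the reported RR 0.21 (95% CI 0.063 to 0.62); the small discrepancy reflects the guessed split of the 98,513 procedures.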

  18. The Danish Fracture Database can monitor quality of fracture-related surgery, surgeons' experience level and extent of supervision.

    PubMed

    Andersen, Morten Jon; Gromov, Kiril; Brix, Michael; Troelsen, Anders

    2014-06-01

The importance of supervision and of surgeons' level of experience in relation to patient outcome have been demonstrated in both hip fracture and arthroplasty surgery. The aim of this study was to describe the surgeons' experience level and the extent of supervision for: 1) fracture-related surgery in general; 2) the three most frequent primary operations and reoperations; and 3) primary operations during and outside regular working hours. A total of 9,767 surgical procedures were identified from the Danish Fracture Database (DFDB). Procedures were grouped based on the surgeons' level of experience, extent of supervision, type (primary, planned secondary or reoperation), classification (AO Müller), and whether they were performed during or outside regular hours. Interns and junior residents combined performed 46% of all procedures. A total of 90% of surgeries by interns were performed under supervision, whereas 32% of operations by junior residents were unsupervised. Supervision was absent in 14-16% and 22-33% of the three most frequent primary procedures and reoperations when performed by interns and junior residents, respectively. The proportion of unsupervised procedures by junior residents grew from 30% during to 40% (p < 0.001) outside regular hours. Interns and junior residents together performed almost half of all fracture-related surgery. The extent of supervision was generally high; however, a third of the primary procedures performed by junior residents were unsupervised. The extent of unsupervised surgery performed by junior residents was significantly higher outside regular hours. Funding: not relevant. Trial registration: the Danish Fracture Database ("Dansk Frakturdatabase") was approved by the Danish Data Protection Agency (ID: 01321).
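
    The 30% vs 40% comparison (p < 0.001) is a standard two-proportion test. The counts below are made up to be rate-compatible with the abstract (the actual group sizes are not given), purely to sketch the calculation behind such a p-value:

    ```python
    import math

    during_unsup, during_n = 600, 2000    # hypothetical: 30% unsupervised during regular hours
    outside_unsup, outside_n = 600, 1500  # hypothetical: 40% unsupervised outside regular hours

    p1, p2 = during_unsup / during_n, outside_unsup / outside_n
    pooled = (during_unsup + outside_unsup) / (during_n + outside_n)
    se = math.sqrt(pooled * (1 - pooled) * (1/during_n + 1/outside_n))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    print(f"z = {z:.2f}, p = {p_value:.2g}")
    ```

    At sample sizes of this order, a 10-percentage-point difference is comfortably below the p < 0.001 threshold the abstract reports.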

  19. Analysis of archaeological triacylglycerols by high resolution nanoESI, FT-ICR MS and IRMPD MS/MS: Application to 5th century BC-4th century AD oil lamps from Olbia (Ukraine)

    NASA Astrophysics Data System (ADS)

    Garnier, Nicolas; Rolando, Christian; Høtje, Jakob Munk; Tokarski, Caroline

    2009-07-01

This work presents the precise identification of triacylglycerols (TAGs) extracted from archaeological samples using a methodology based on nanoelectrospray and Fourier transform mass spectrometry. Identifying archaeological TAGs requires sample preparation protocols adapted to trace samples in an advanced state of degradation. More precisely, the proposed preparation procedure includes extraction of the lipid components from finely ground ceramic using a dichloromethane/methanol mixture with additional ultrasonication, and TAG purification by solid-phase extraction on a diol cartridge. On the analytical side, an "in-house" species-dependent TAG database was built from MS and InfraRed Multiphoton Dissociation (IRMPD) MS/MS spectra; several vegetable oils, dairy products and animal fats were studied. The high mass accuracy of the Fourier transform analyzer (Δm below 2.5 ppm) eases data interpretation and allows distinction between products of different origins. In detail, the IRMPD spectra of the lithiated TAGs reveal fragmentation reactions including loss of a free neutral fatty acid and loss of a fatty acid as an α,β-unsaturated moiety. Based on the developed preparation procedure and the constituted database, TAG extracts from 5th century BC to 4th century AD Olbia lamps were analyzed. The structural information obtained identifies bovine/ovine fats as the fuel used in these archaeological lamps.

  20. 76 FR 51869 - Privacy Act Implementation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-19

    ... & Evaluative Files Database,'' ``FHFA-OIG Investigative & Evaluative MIS Database,'' ``FHFA-OIG Hotline... or evaluative records to an individual who is the subject of an investigation or evaluation could... investigative or evaluative techniques and procedures. (iii) From 5 U.S.C. 552a(d)(2), because amendment or...
