Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, which is formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system. It also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation of FGIDB. The paper designs the quality elements, evaluation items and properties of the Fundamental Geographic Information Database step by step based on the quality model framework. Connected organically, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and evaluating the quality of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and construction of the standard system for quality evaluation technology of the Fundamental Geographic Information Database.
NASA Technical Reports Server (NTRS)
Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William
2010-01-01
A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.
Expert database system for quality control
NASA Astrophysics Data System (ADS)
Wang, Anne J.; Li, Zhi-Cheng
1993-09-01
There are more competitors today. Markets are not homogeneous; they are fragmented into increasingly focused niches requiring greater flexibility in the product mix, shorter manufacturing production runs, and, above all, higher quality. In this paper the authors identify a real-time expert system as a way to improve plantwide quality management. The quality control expert database system (QCEDS), by integrating knowledge of experts in operations, quality management, and computer systems, uses all information relevant to quality management, facts as well as rules, to determine if a product meets quality standards. Keywords: expert system, quality control, database
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Human Variome Project Quality Assessment Criteria for Variation Databases.
Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter
2016-06-01
Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease or diseases. These databases are widely considered the most reliable information source for a particular gene/protein/disease, but it should also be made clear that they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.
A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database.
ERIC Educational Resources Information Center
Anderson, Clarita S.
1991-01-01
Describes the incorporation of textile images into the University of Maryland Historic Textile Database by a computer user rather than a computer expert. Selection of a database management system is discussed, and PICTUREPOWER, a system that integrates photographic quality images with text and numeric information in databases, is described. (three…
High-throughput STR analysis for DNA database using direct PCR.
Sim, Jeong Eun; Park, Su Jeong; Lee, Han Chul; Kim, Se-Yong; Kim, Jong Yeol; Lee, Seung Hwan
2013-07-01
Since the Korean criminal DNA database was launched in 2010, we have focused on establishing an automated DNA database profiling system that analyzes short tandem repeat loci in a high-throughput and cost-effective manner. We established a DNA database profiling system without DNA purification using a direct PCR buffer system. The quality of the direct PCR procedure was compared with that of the conventional PCR system under their respective optimized conditions. The results revealed not only perfect concordance but also an excellent PCR success rate, good electropherogram quality, and an optimal intra-/inter-locus peak height ratio. In particular, the proportion of samples requiring DNA extraction due to direct PCR failure could be minimized to <3%. In conclusion, the newly developed direct PCR system can be adopted for automated DNA database profiling systems to replace or supplement conventional PCR systems in a time- and cost-saving manner. © 2013 American Academy of Forensic Sciences. Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.
Guidelines for establishing and maintaining construction quality databases.
DOT National Transportation Integrated Search
2006-11-01
The main objective of this study was to develop and present guidelines for State highway agencies (SHAs) in establishing and maintaining database systems geared towards construction quality issues for asphalt and concrete paving projects. To accompli...
R2 Water Quality Portal Monitoring Stations
The Water Quality Portal (WQP) provides an easy way to access data stored in various large water quality databases. The WQP provides various input parameters on the form, including location, site, sampling, and date parameters, to filter and customize the returned results. The WQP is a cooperative service sponsored by the United States Geological Survey (USGS), the Environmental Protection Agency (EPA), and the National Water Quality Monitoring Council (NWQMC) that integrates publicly available water quality data from the USGS National Water Information System (NWIS), the EPA STOrage and RETrieval (STORET) Data Warehouse, and the USDA ARS Sustaining The Earth's Watersheds - Agricultural Research Database System (STEWARDS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, D
Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader, and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using Microsoft Access and the Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically re-written at run time. The database also features embedded documentation, storage of documents produced during QA activities, and the ability to annotate all data within the database. Tests are defined in a set of tables that define test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.
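The table-driven scheme described above (tests defined in tables, SQL assembled at run time) is easy to picture with a small sketch. The actual implementation named in the abstract is Microsoft Access/JET with VBA; the Python + sqlite3 version below is only illustrative, and the table and column names (qa_test, qa_result, tolerance, and so on) are hypothetical.

```python
# Illustrative only: Python + sqlite3 instead of the Access/JET/VBA stack named
# in the abstract; qa_test / qa_result and their columns are hypothetical names.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE qa_test   (test_id INTEGER PRIMARY KEY, name TEXT, machine TEXT,
                        schedule TEXT,            -- 'daily', 'monthly', 'annual'
                        nominal REAL, tolerance REAL);
CREATE TABLE qa_result (test_id INTEGER, measured REAL, taken_on TEXT);
""")
con.executemany("INSERT INTO qa_test VALUES (?,?,?,?,?,?)",
                [(1, "Output constancy", "LINAC1", "daily",   1.000, 0.03),
                 (2, "Laser alignment",  "LINAC1", "monthly", 0.0,   2.0)])
con.executemany("INSERT INTO qa_result VALUES (?,?,?)",
                [(1, 1.012, "2016-03-01"), (1, 1.041, "2016-03-02"),
                 (2, 1.5,   "2016-03-05")])

def out_of_tolerance(schedule, machine=None):
    # The SQL string is assembled at run time, mirroring the abstract's
    # dynamically re-written queries.
    sql = ("SELECT t.machine, t.name, r.taken_on, r.measured "
           "FROM qa_result r JOIN qa_test t USING (test_id) "
           "WHERE t.schedule = ? AND ABS(r.measured - t.nominal) > t.tolerance")
    params = [schedule]
    if machine is not None:
        sql += " AND t.machine = ?"
        params.append(machine)
    return con.execute(sql, params).fetchall()

print(out_of_tolerance("daily"))   # -> [('LINAC1', 'Output constancy', '2016-03-02', 1.041)]
```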
Nørgaard, M; Johnsen, S P
2016-02-01
In Denmark, the need for monitoring of clinical quality and patient safety, with feedback to the clinical, administrative and political systems, has resulted in the establishment of a network of more than 60 publicly financed nationwide clinical quality databases. Although these databases are primarily devoted to monitoring and improving quality of care, their potential as data sources in clinical research is increasingly being recognized. In this review, we describe these databases focusing on their use as data sources for clinical research, including their strengths and weaknesses as well as future concerns and opportunities. The research potential of the clinical quality databases is substantial but has so far only been explored to a limited extent. Efforts related to technical, legal and financial challenges are needed in order to take full advantage of this potential. © 2016 The Association for the Publication of the Journal of Internal Medicine.
ERIC Educational Resources Information Center
Nworji, Alexander O.
2013-01-01
Most organizations spend millions of dollars due to the impact of improperly implemented database application systems as evidenced by poor data quality problems. The purpose of this quantitative study was to use, and extend, the technology acceptance model (TAM) to assess the impact of information quality and technical quality factors on database…
Implementation of Three Text to Speech Systems for Kurdish Language
NASA Astrophysics Data System (ADS)
Bahrampour, Anvar; Barkhoda, Wafa; Azami, Bahram Zahir
Nowadays, the concatenative method is used in most modern TTS systems to produce artificial speech. The most important challenge in this method is choosing an appropriate unit for creating the database. This unit must guarantee smooth, high-quality speech, and creating a database for it must be reasonable and inexpensive. For example, the syllable, phoneme, allophone, and diphone are appropriate units for all-purpose systems. In this paper, we implemented three synthesis systems for the Kurdish language based on syllables, allophones, and diphones, and compared their quality using subjective testing.
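A minimal sketch of the concatenative idea referred to above: units are looked up in a recorded inventory and joined with a short crossfade. The diphone names, inventory contents, and sample rate below are invented for illustration and are not taken from the paper's Kurdish systems.

```python
# Hypothetical inventory: unit name -> waveform; here filled with noise so the
# script runs stand-alone. Real units would be cut from recorded speech.
import numpy as np

SR = 16000                                           # sample rate in Hz (assumed)
inventory = {name: np.random.randn(1600) for name in ("s-a", "a-l", "l-aw")}

def synthesize(units, fade=80):
    """Concatenate recorded units with a short linear crossfade at each joint."""
    out = inventory[units[0]].copy()
    ramp = np.linspace(0.0, 1.0, fade)
    for name in units[1:]:
        nxt = inventory[name]
        out[-fade:] = out[-fade:] * (1.0 - ramp) + nxt[:fade] * ramp
        out = np.concatenate([out, nxt[fade:]])
    return out

wave = synthesize(["s-a", "a-l", "l-aw"])            # a made-up diphone sequence
print(wave.size, "samples,", round(wave.size / SR, 2), "seconds")
```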
A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network
NASA Astrophysics Data System (ADS)
Lussana, C.; Ranci, M.; Uboldi, F.
2012-04-01
In the operational context of a local weather service, data accessibility and quality related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: 1) providing quick access to data for operational activities and 2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of precipitation amount, temperature, wind, relative humidity, pressure, and global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and cross-validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) stack, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based PHP applications.
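The flavor of the ADQC checks can be sketched as follows. The real procedure runs as R scripts against the MySQL archive and uses an Optimal-Interpolation-based Spatial Consistency Test; the sketch below substitutes a crude leave-one-out comparison against nearby stations and uses invented station data and thresholds.

```python
# Invented station data; the range limits and spatial tolerance are assumptions.
import numpy as np

stations = {                    # id -> (x_km, y_km, hourly mean temperature, degC)
    "A": (0.0, 0.0,  4.2), "B": (5.0, 1.0,  4.6),
    "C": (2.0, 6.0,  3.9), "D": (8.0, 7.0, 15.0),   # D looks suspect
}

def range_check(t, lo=-40.0, hi=45.0):
    return lo <= t <= hi

def spatial_check(sid, radius_km=15.0, tol=5.0):
    # Crude stand-in for the Optimal-Interpolation-based Spatial Consistency Test:
    # compare the station against the mean of its neighbours, leaving it out.
    x0, y0, t0 = stations[sid]
    neighbours = [t for s, (x, y, t) in stations.items()
                  if s != sid and np.hypot(x - x0, y - y0) <= radius_km]
    return bool(neighbours) and abs(t0 - np.mean(neighbours)) <= tol

for sid, (_, _, t) in stations.items():
    print(sid, {"range": range_check(t), "spatial": spatial_check(sid)})
```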
Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana
2002-01-01
Computer readable databases have become an integral part of chemical research, from planning data acquisition to interpretation of the information generated. The databases available today are numerical, spectral and bibliographic. Data representation by different schemes--relational, hierarchical and object-oriented--is demonstrated. A quality index (QI) throws light on the quality of the data. The objective, prospects and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond a manageable number, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.
EPA U.S. Nine-region MARKAL DATABASE, DATABASE DOCUMENTATION
The evolution of the energy system in the United States is an important factor in future environmental outcomes including air quality and climate change. Given this, decision makers need to understand how a changing energy landscape will impact future air quality and contribute ...
An Autonomic Framework for Integrating Security and Quality of Service Support in Databases
ERIC Educational Resources Information Center
Alomari, Firas
2013-01-01
The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…
Kuhn, Stefan; Schlörer, Nils E
2015-08-01
With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is granted. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, and enables download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database. For the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics functionality for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann
2008-01-01
Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, is now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.
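Rolling the individual ratings named above (cost, product quality, delivery, audit) into one overall score can be as simple as a weighted average; the sketch below assumes a 0-100 scale and weights chosen only for illustration, not the actual SMS scoring scheme.

```python
# Weights and the 0-100 scale are illustrative assumptions, not the SMS scheme.
ratings = {"cost": 82, "product quality": 91, "delivery": 76, "audit": 88}
weights = {"cost": 0.25, "product quality": 0.35, "delivery": 0.25, "audit": 0.15}

overall = sum(ratings[k] * weights[k] for k in ratings) / sum(weights.values())
print(f"overall supplier rating: {overall:.1f}")   # weighted average on the assumed scale
```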
Data Auditor: Analyzing Data Quality Using Pattern Tableaux
NASA Astrophysics Data System (ADS)
Srivastava, Divesh
Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
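A rough sketch of the tableau idea: given a row-level constraint, group the tuples by patterns over chosen attributes and report, for each sufficiently large pattern, the fraction of tuples satisfying the constraint. The data, attributes, and thresholds below are invented; Data Auditor's actual tableau-generation algorithm is more sophisticated.

```python
# Invented monitoring rows; the constraint plays the role of the user-supplied
# predicate, and the tableau lists patterns with enough support plus their
# confidence (fraction of rows in that pattern satisfying the constraint).
from itertools import groupby

rows = [  # (router_model, region, poll_on_time)
    ("M1", "east", True), ("M1", "east", True), ("M1", "west", False),
    ("M2", "east", True), ("M2", "west", False), ("M2", "west", False),
]
constraint = lambda r: r[2]

def tableau(rows, pattern_of, min_support=2):
    rows = sorted(rows, key=pattern_of)
    result = []
    for pattern, group in groupby(rows, key=pattern_of):
        group = list(group)
        if len(group) >= min_support:
            confidence = sum(constraint(r) for r in group) / len(group)
            result.append((pattern, len(group), confidence))
    return result

for pattern, size, conf in tableau(rows, pattern_of=lambda r: (r[0], r[1])):
    print(pattern, f"support={size}", f"confidence={conf:.2f}")
```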
The Air Quality System (AQS) database contains measurements of air pollutant concentrations from throughout the United States and its territories. The measurements include both criteria air pollutants and hazardous air pollutants.
Haile, Michael; Anderson, Kim; Evans, Alex; Crawford, Angela
2012-01-01
In part 1 of this series, we outlined the rationale behind the development of a centralized electronic database used to maintain nonsterile compounding formulation records in the Mission Health System, which is a union of several independent hospitals and satellite and regional pharmacies that form the cornerstone of advanced medical care in several areas of western North Carolina. Hospital providers in many healthcare systems require compounded formulations to meet the needs of their patients (in particular, pediatric patients). Before a centralized electronic compounding database was implemented in the Mission Health System, each satellite or regional pharmacy affiliated with that system had a specific set of formulation records, but no standardized format for those records existed. In this article, we describe the quality control, database platform selection, description, implementation, and execution of our intranet database system, which is designed to maintain, manage, and disseminate nonsterile compounding formulation records in the hospitals and affiliated pharmacies of the Mission Health System. The objectives of that project were to standardize nonsterile compounding formulation records, create a centralized computerized database that would increase healthcare staff members' access to formulation records, establish beyond-use dates based on published stability studies, improve quality control, reduce the potential for medication errors related to compounding medications, and (ultimately) improve patient safety.
CRN5EXP: Expert system for statistical quality control
NASA Technical Reports Server (NTRS)
Hentea, Mariana
1991-01-01
The purpose of the Expert System CRN5EXP is to assist in checking the quality of the coils at two very important mills: Hot Rolling and Cold Rolling in a steel plant. The system interprets the statistical quality control charts, diagnoses and predicts the quality of the steel. Measurements of process control variables are recorded in a database, and sample statistics such as the mean and the range are computed and plotted on a control chart. The chart is analyzed through patterns using the C Language Integrated Production System (CLIPS) and a forward chaining technique to reach a conclusion about the causes of defects and to take management measures for the improvement of the quality control techniques. The Expert System combines the certainty factors associated with the process control variables to predict the quality of the steel. The paper presents the approach used to extract data from the database, the rationale for combining certainty factors, and the architecture and use of the Expert System. However, the interpretation of control chart patterns requires the human expert's knowledge and lends itself to expert system rules.
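The chart statistics mentioned above (subgroup means and ranges with control limits) look roughly like the sketch below; the A2, D3, D4 constants are the standard Shewhart values for subgroups of five, while the coil measurements are invented.

```python
# Invented coil thickness data (mm), five measurements per sampling period.
import numpy as np

A2, D3, D4 = 0.577, 0.0, 2.114          # Shewhart constants for subgroup size 5
subgroups = np.array([
    [2.01, 2.03, 1.99, 2.00, 2.02],
    [2.00, 1.98, 2.01, 2.02, 1.99],
    [2.00, 2.02, 1.99, 2.01, 2.01],
    [2.01, 1.99, 2.00, 2.02, 2.00],
    [2.06, 2.08, 2.05, 2.07, 2.04],     # shifted process
])

means = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbar, rbar = means.mean(), ranges.mean()
x_ucl, x_lcl = xbar + A2 * rbar, xbar - A2 * rbar
r_ucl, r_lcl = D4 * rbar, D3 * rbar

for i, (m, r) in enumerate(zip(means, ranges)):
    flagged = not (x_lcl <= m <= x_ucl) or not (r_lcl <= r <= r_ucl)
    print(f"subgroup {i}: mean={m:.3f} range={r:.3f} flagged={flagged}")
```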
Chesapeake Bay Program Water Quality Database
The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay Region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS a large number of Internet resources have been established. These resources include multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic searching tools for environmental information. Baseline monitoring data, summarized data and environmental indicators that document ecosystem status and trends, confirm linkages between water quality, habitat quality and abundance, and the distribution and integrity of biological populations are also available. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, providing users access to a suite of long- term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrates, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.
NASA Astrophysics Data System (ADS)
Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe
1999-07-01
In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serve to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED76 were established in the concept. They can be differentiated into object-related quality-assessment methods, following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods, with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation as well. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite platforms or aircraft platforms. To achieve the highest horizontal accuracy requirement stated in ICAO Annex 14 for runway centerlines (0.50 meters), at the present moment only images acquired from aircraft-based sensors can be used as source data. Still, ground reference by GCPs (Ground Control Points) is obligatory. A DEM (Digital Elevation Model) can be created automatically in the photogrammetric process. It can be used as a highly accurate elevation model for the airport area. The final verification of airport data is accomplished by independently surveyed runway and taxiway control points. The concept of generating airport data by means of remote sensing and photogrammetry was tested with the Stuttgart/Germany airport. The results proved that the final accuracy was within the accuracy specification defined by ICAO Annex 14.
Marchewka, Artur; Zurawski, Łukasz; Jednoróg, Katarzyna; Grabowska, Anna
2014-06-01
Selecting appropriate stimuli to induce emotional states is essential in affective research. Only a few standardized affective stimulus databases have been created for auditory, language, and visual materials. Numerous studies have extensively employed these databases using both behavioral and neuroimaging methods. However, some limitations of the existing databases have recently been reported, including limited numbers of stimuli in specific categories or poor picture quality of the visual stimuli. In the present article, we introduce the Nencki Affective Picture System (NAPS), which consists of 1,356 realistic, high-quality photographs that are divided into five categories (people, faces, animals, objects, and landscapes). Affective ratings were collected from 204 mostly European participants. The pictures were rated according to the valence, arousal, and approach-avoidance dimensions using computerized bipolar semantic slider scales. Normative ratings for the categories are presented for each dimension. Validation of the ratings was obtained by comparing them to ratings generated using the Self-Assessment Manikin and the International Affective Picture System. In addition, the physical properties of the photographs are reported, including luminance, contrast, and entropy. The new database, with accompanying ratings and image parameters, allows researchers to select a variety of visual stimulus materials specific to their experimental questions of interest. The NAPS system is freely accessible to the scientific community for noncommercial use by request at http://naps.nencki.gov.pl .
Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study
2015-01-16
evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords—NoSQL, distributed...technology, namely that of big data , software systems [1]. At the heart of big data systems are a collection of database technologies that are more...born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and
Moran, Jean M; Feng, Mary; Benedetti, Lisa A; Marsh, Robin; Griffith, Kent A; Matuszak, Martha M; Hess, Michael; McMullen, Matthew; Fisher, Jennifer H; Nurushev, Teamour; Grubb, Margaret; Gardner, Stephen; Nielsen, Daniel; Jagsi, Reshma; Hayman, James A; Pierce, Lori J
A database in which patient data are compiled allows analytic opportunities for continuous improvements in treatment quality and comparative effectiveness research. We describe the development of a novel, web-based system that supports the collection of complex radiation treatment planning information from centers that use diverse techniques, software, and hardware for radiation oncology care in a statewide quality collaborative, the Michigan Radiation Oncology Quality Consortium (MROQC). The MROQC database seeks to enable assessment of physician- and patient-reported outcomes and quality improvement as a function of treatment planning and delivery techniques for breast and lung cancer patients. We created tools to collect anonymized data based on all plans. The MROQC system representing 24 institutions has been successfully deployed in the state of Michigan. Since 2012, dose-volume histogram and Digital Imaging and Communications in Medicine-radiation therapy plan data and information on simulation, planning, and delivery techniques have been collected. Audits indicated >90% accurate data submission and spurred refinements to data collection methodology. This model web-based system captures detailed, high-quality radiation therapy dosimetry data along with patient- and physician-reported outcomes and clinical data for a radiation therapy collaborative quality initiative. The collaborative nature of the project has been integral to its success. Our methodology can be applied to setting up analogous consortiums and databases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Fazio, Simone; Garraín, Daniel; Mathieux, Fabrice; De la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda
2015-01-01
Under the framework of the European Platform on Life Cycle Assessment, the European Reference Life-Cycle Database (ELCD), developed by the Joint Research Centre of the European Commission, provides core Life Cycle Inventory (LCI) data from front-running EU-level business associations and other sources. The ELCD contains energy-related data on power and fuels. This study describes the methods to be used for the quality analysis of energy data for European markets (available in third-party LC databases and from authoritative sources) that are, or could be, used in the context of the ELCD. The methodology was developed and tested on the energy datasets most relevant for the EU context, derived from GaBi (the reference database used to derive datasets for the ELCD), Ecoinvent, E3 and GEMIS. The criteria for the database selection were based on the availability of EU-related data, the inclusion of comprehensive datasets on energy products and services, and the general approval of the LCA community. The proposed approach was based on the quality indicators developed within the International Reference Life Cycle Data System (ILCD) Handbook, further refined to facilitate their use in the analysis of energy systems. The overall Data Quality Rating (DQR) of the energy datasets can be calculated by summing up the quality ratings (ranging from 1 to 5, where 1 represents very good and 5 very poor quality) of each of the quality criteria indicators, divided by the total number of indicators considered. The quality of each dataset can be estimated for each indicator and then compared across the different databases/sources. The results can be used to highlight the weaknesses of each dataset and to guide further improvements that enhance data quality with regard to the established criteria. This paper describes the application of the methodology to two exemplary datasets, in order to show the potential of the methodological approach. The analysis helps LCA practitioners to evaluate the usefulness of the ELCD datasets for their purposes, and dataset developers and reviewers to derive information that will help improve the overall DQR of databases.
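The DQR computation stated above is a simple average, as the short sketch below shows; the indicator names loosely follow the ILCD quality criteria, and the scores are invented.

```python
# Indicator names loosely follow the ILCD quality criteria; scores are invented.
indicator_ratings = {
    "technological representativeness": 2,
    "geographical representativeness": 1,
    "time-related representativeness": 3,
    "completeness": 2,
    "precision/uncertainty": 4,
    "methodological appropriateness and consistency": 2,
}
dqr = sum(indicator_ratings.values()) / len(indicator_ratings)
print(f"DQR = {dqr:.2f}")   # (2 + 1 + 3 + 2 + 4 + 2) / 6 = 2.33
```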
Quality control of EUVE databases
NASA Technical Reports Server (NTRS)
John, L. M.; Drake, J.
1992-01-01
The publicly accessible databases for the Extreme Ultraviolet Explorer include: the EUVE Archive mailserver; the CEA ftp site; the EUVE Guest Observer Mailserver; and the Astronomical Data System node. The EUVE Performance Assurance team is responsible for verifying that these public EUVE databases are working properly, and that the public availability of EUVE data contained therein does not infringe any data rights which may have been assigned. In this poster, we describe the Quality Assurance (QA) procedures we have developed from the approach of QA as a service organization, thus reflecting the overall EUVE philosophy of Quality Assurance integrated into normal operating procedures, rather than imposed as an external, post facto, control mechanism.
Systematic review for geo-authentic Lonicerae Japonicae Flos.
Yang, Xingyue; Liu, Yali; Hou, Aijuan; Yang, Yang; Tian, Xin; He, Liyun
2017-06-01
In traditional Chinese medicine, Lonicerae Japonicae Flos is commonly used as an anti-inflammatory, antiviral, and antipyretic herbal medicine, and geo-authentic herbs are believed to present the highest quality among all samples from different regions. To discuss the current situation and trends of geo-authentic Lonicerae Japonicae Flos, we searched the Chinese Biomedicine Literature Database, Chinese Journal Full-text Database, Chinese Scientific Journal Full-text Database, Cochrane Central Register of Controlled Trials, Wanfang, and PubMed. We investigated all studies up to November 2015 pertaining to quality assessment, discrimination, pharmacological effects, planting or processing, or the ecological system of geo-authentic Lonicerae Japonicae Flos. Sixty-five studies, mainly discussing chemical fingerprinting, component analysis, planting and processing, discrimination between varieties, the ecological system, pharmacological effects, and safety, were systematically reviewed. By analyzing these studies, we found that the key points of geo-authentic Lonicerae Japonicae Flos research were quality and application. Further studies should focus on improving quality by selecting the most superior of all varieties and evaluating clinical effectiveness.
77 FR 45965 - Determination of Attainment for the Paul Spur/Douglas PM10
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-02
... plans, and based on the findings of our technical system audit report, ADEQ's monitoring network meets... to EPA's Air Quality System (AQS) database as quality-assured. Next, we reviewed the ambient PM10...
Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2017-07-11
The objective of this work was to provide easy access to reliable health information based on good quality research that will help health care professionals to learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population that clinicians provide high quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.
One approach to design of speech emotion database
NASA Astrophysics Data System (ADS)
Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav
2016-05-01
This article describes a system for evaluating the credibility of recordings with emotional character. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems. These systems are designed to detect human emotions in the speaker's voice. Information about the emotional state of a person is useful for the security forces and the emergency call service. Personnel in action (soldiers, police officers and firefighters) are often exposed to stress. Information about their emotional state (from the voice) helps the dispatcher to adapt control commands for the intervention procedure. Call agents of the emergency call service must recognize the mental state of the caller to adjust the mood of the conversation. In this case, the evaluation of the psychological state is the key factor for successful intervention. A quality database of sound recordings is essential for the creation of the mentioned systems. There are quality databases such as the Berlin Database of Emotional Speech or HUMAINE. Actors created these databases in an audio studio, which means that the recordings contain simulated emotions, not real ones. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks. Another one, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article. The results describe the advantages and applicability of the developed method.
Implementation of an open adoption research data management system for clinical studies.
Müller, Jan; Heiss, Kirsten Ingmar; Oberhoffer, Renate
2017-07-06
Research institutions need to manage multiple studies with individual data sets, processing rules and different permissions. So far, there is no standard technology that provides an easy to use environment to create databases and user interfaces for clinical trials or research studies. Therefore, various software solutions are being used, from custom software explicitly designed for a specific study, to cost-intensive commercial Clinical Trial Management Systems (CTMS), to very basic approaches with self-designed Microsoft® databases. The technology applied to conduct those studies varies tremendously from study to study, making it difficult to evaluate data across various studies (meta-analysis) and to keep a defined level of quality in database design, data processing, displaying and exporting. Furthermore, the systems being used to collect study data are often operated redundantly to systems used in patient care. As a consequence, data collection in studies is inefficient and data quality may suffer from unsynchronized datasets, non-normalized database scenarios and manually executed data transfers. With OpenCampus Research we implemented an open adoption software (OAS) solution on an open source basis, which provides a standard environment for state-of-the-art research database management at low cost.
Read Code quality assurance: from simple syntax to semantic stability.
Schulz, E B; Barrett, J W; Price, C
1998-01-01
As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with "business rules" declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short.
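A "business rule declared as an SQL statement" can be pictured as a SELECT that returns the rows violating the rule, so an empty result means the check passes. The sketch below uses Python + sqlite3 with a made-up two-table schema, not the actual Read Thesaurus editing database.

```python
# Hypothetical two-table schema; each rule is an SQL SELECT returning violations.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE concept (code TEXT PRIMARY KEY, status TEXT);
CREATE TABLE term    (code TEXT, term TEXT, is_preferred INTEGER);
""")
con.executemany("INSERT INTO concept VALUES (?,?)",
                [("H33..", "current"), ("G30..", "current")])
con.executemany("INSERT INTO term VALUES (?,?,?)",
                [("H33..", "Asthma", 1)])            # G30.. lacks a preferred term

RULES = {
    "every current concept has exactly one preferred term": """
        SELECT c.code
        FROM concept c
        LEFT JOIN term t ON t.code = c.code AND t.is_preferred = 1
        WHERE c.status = 'current'
        GROUP BY c.code
        HAVING COUNT(t.term) <> 1
    """,
}

for rule, sql in RULES.items():
    violations = con.execute(sql).fetchall()
    print(rule, "->", "OK" if not violations else f"violations: {violations}")
```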
The measurement of quality of care in the Veterans Health Administration.
Halpern, J
1996-03-01
The Veterans Health Administration (VHA) is committed to continual refinement of its system of quality measurement. The VHA organizational structure for quality measurement has three levels. At the national level, the Associate Chief Medical Director for Quality Management provides leadership, sets policy, furnishes measurement tools, develops and distributes measures of quality, and delivers educational programs. At the intermediate level, VHA has four regional offices with staff responsible for reviewing risk management data, investigating quality problems, and ensuring compliance with accreditation requirements. At the hospital level, staff reporting directly to the chief of staff or the hospital director are responsible for implementing VHA quality management policy. The Veterans Health Administration's philosophy of quality measurement recognizes the agency's moral imperative to provide America's veterans with care that meets accepted standards. Because the repair of faulty systems is more efficient than the identification of poor performers, VHA has integrated the techniques of total quality into a multifaceted improvement program that also includes the accreditation program and traditional quality assurance activities. VHA monitors its performance by maintaining adverse incident databases, conducting patient satisfaction surveys, contracting for external peer review of 50,000 records per year, and comparing process and outcome rates internally and when possible with external benchmarks. The near-term objectives of VHA include providing medical centers with a quality matrix that will permit local development of quality indicators, construction of a report card for VHA's customers, and implementing the Malcolm W. Baldrige system for quality improvement as the road map for systemwide continuous improvement. Other goals include providing greater access to data, creating a patient-centered database, providing real-time clinical decision support, and expanding the databases.
A database for spectral image quality
NASA Astrophysics Data System (ADS)
Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve
2015-01-01
We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, ...) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.
User’s manual to update the National Wildlife Refuge System Water Quality Information System (WQIS)
Chojnacki, Kimberly A.; Vishy, Chad J.; Hinck, Jo Ellen; Finger, Susan E.; Higgins, Michael J.; Kilbride, Kevin
2013-01-01
National Wildlife Refuges may have impaired water quality resulting from historic and current land uses, upstream sources, and aerial pollutant deposition. National Wildlife Refuge staff have limited time available to identify and evaluate potential water quality issues. As a result, water quality–related issues may not be resolved until a problem has already arisen. The National Wildlife Refuge System Water Quality Information System (WQIS) is a relational database developed for use by U.S. Fish and Wildlife Service staff to identify existing water quality issues on refuges in the United States. The WQIS database relies on a geospatial overlay analysis of data layers for ownership, streams and water quality. The WQIS provides summary statistics of 303(d) impaired waters and total maximum daily loads for the National Wildlife Refuge System at the national, regional, and refuge level. The WQIS allows U.S. Fish and Wildlife Service staff to be proactive in addressing water quality issues by identifying and understanding the current extent and nature of 303(d) impaired waters and subsequent total maximum daily loads. Water quality data are updated bi-annually, making it necessary to refresh the WQIS to maintain up-to-date information. This manual outlines the steps necessary to update the data and reports in the WQIS.
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
Clinical Databases for Chest Physicians.
Courtwright, Andrew M; Gabriel, Peter E
2018-04-01
A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health condition or exposure. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Coordinating Council. First Meeting: NASA/RECON database
NASA Technical Reports Server (NTRS)
1990-01-01
A Council of NASA Headquarters, American Institute of Aeronautics and Astronautics (AIAA), and the NASA Scientific and Technical Information (STI) Facility management met (1) to review and discuss issues of NASA concern, and (2) to promote new and better ways to collect and disseminate scientific and technical information. Topics mentioned for study and discussion at subsequent meetings included the pros and cons of transferring the NASA/RECON database to the commercial sector, the quality of the database, and developing ways to increase foreign acquisitions. The input systems at AIAA and the STI Facility were described. Also discussed were the proposed RECON II retrieval system, the transmittal of document orders received by the Facility and sent to AIAA, and the handling of multimedia input by the Departments of Defense and Commerce. A second meeting was scheduled for six weeks later to discuss database quality and international foreign input.
Hinton, W; Liyanage, H; McGovern, A; Liaw, S-T; Kuziemsky, C; Munro, N; de Lusignan, S
2017-08-01
Background: The Institute of Medicine framework defines six dimensions of quality for healthcare systems: (1) safety, (2) effectiveness, (3) patient centeredness, (4) timeliness of care, (5) efficiency, and (6) equity. Large health datasets provide an opportunity to assess quality in these areas. Objective: To perform an international comparison of the measurability of the delivery of these aims, in people with type 2 diabetes mellitus (T2DM) from large datasets. Method: We conducted a survey to assess healthcare outcomes data quality of existing databases and disseminated this through professional networks. We examined the data sources used to collect the data, frequency of data uploads, and data types used for identifying people with T2DM. We compared data completeness across the six areas of healthcare quality, using selected measures pertinent to T2DM management. Results: We received 14 responses from seven countries (Australia, Canada, Italy, the Netherlands, Norway, Portugal, Turkey and the UK). Most databases reported frequent data uploads and would be capable of near real time analysis of healthcare quality. The majority of recorded data related to safety (particularly medication adverse events) and treatment efficacy (glycaemic control and microvascular disease). Data potentially measuring equity was less well recorded. Recording levels were lowest for patient-centred care, timeliness of care, and system efficiency, with the majority of databases containing no data in these areas. Databases using primary care sources had higher data quality across all areas measured. Conclusion: Data quality could be improved particularly in the areas of patient-centred care, timeliness, and efficiency. Primary care derived datasets may be most suited to healthcare quality assessment. Georg Thieme Verlag KG Stuttgart.
Shoberg, Thomas G.; Stoddard, Paul R.
2013-01-01
The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
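The quality-control step described here (comparing stations of unknown quality against a surface interpolated from the trusted GPS-based survey) can be sketched as below. Inverse-distance weighting stands in for whatever gridding the authors actually used, and the coordinates, anomaly values, and tolerance are invented.

```python
# Invented coordinates and complete Bouguer anomaly values (mGal); inverse-
# distance weighting stands in for the interpolation actually used.
import numpy as np

trusted = np.array([            # x_km, y_km, complete Bouguer anomaly (mGal)
    [0.0, 0.0, -31.2], [4.0, 1.0, -30.5], [1.0, 5.0, -32.0], [5.0, 5.0, -30.9]])
unknown = np.array([[2.0, 2.0, -31.1], [3.0, 4.0, -25.0]])   # second looks suspect

def idw(x, y, pts, power=2.0):
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
    if np.any(d < 1e-9):                 # exact coincidence with a trusted station
        return float(pts[np.argmin(d), 2])
    w = 1.0 / d**power
    return float(np.sum(w * pts[:, 2]) / np.sum(w))

for x, y, g in unknown:
    predicted = idw(x, y, trusted)
    keep = abs(g - predicted) < 1.0      # tolerance in mGal, an assumption
    print(f"station ({x:.0f}, {y:.0f}): observed {g:+.1f}, predicted {predicted:+.1f}, keep={keep}")
```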
ERIC Educational Resources Information Center
Hughes, Norm
The Distance Education Center (DEC) of the University of Southern Queensland (Australia) has developed a unique materials database system which is used to monitor pre-production, design and development, production and post-production planning, scheduling, and distribution of all types of materials including courses offered only on the Internet. In…
Corbellini, Carlo; Andreoni, Bruno; Ansaloni, Luca; Sgroi, Giovanni; Martinotti, Mario; Scandroglio, Ildo; Carzaniga, Pierluigi; Longoni, Mauro; Foschi, Diego; Dionigi, Paolo; Morandi, Eugenio; Agnello, Mauro
2018-01-01
Measurement and monitoring of the quality of care using a core set of quality measures are increasing in health service research. Although administrative databases include limited clinical data, they offer an attractive source for quality measurement. The purpose of this study, therefore, was to evaluate the completeness of different administrative data sources compared to a clinical survey in evaluating rectal cancer cases. Between May 2012 and November 2014, a clinical survey was done on 498 Lombardy patients who had rectal cancer and underwent surgical resection. These collected data were compared with the information extracted from administrative sources including Hospital Discharge Dataset, drug database, daycare activity data, fee-exemption database, and regional screening program database. The agreement evaluation was performed using a set of 12 quality indicators. Patient complexity was a difficult indicator to measure for lack of clinical data. Preoperative staging was another suboptimal indicator due to the frequent missing administrative registration of tests performed. The agreement between the 2 data sources regarding chemoradiotherapy treatments was high. Screening detection, minimally invasive techniques, length of stay, and unpreventable readmissions were detected as reliable quality indicators. Postoperative morbidity could be a useful indicator but its agreement was lower, as expected. Healthcare administrative databases are large and real-time collected repositories of data useful in measuring quality in a healthcare system. Our investigation reveals that the reliability of indicators varies between them. Ideally, a combination of data from both sources could be used in order to improve usefulness of less reliable indicators.
Asadi, S S; Vuppala, Padmaja; Reddy, M Anji
2005-01-01
A preliminary survey of the area under Zone-III of MCH was undertaken to assess the ground water quality, demonstrate its spatial distribution and correlate it with the land use patterns using advanced techniques of remote sensing and geographical information system (GIS). Twenty-seven ground water samples were collected and their chemical analysis was done to form the attribute database. A water quality index was calculated from the measured parameters, based on which the study area was classified into five groups with respect to suitability of water for drinking purposes. Thematic maps viz., base map, road network, drainage and land use/land cover were prepared from IRS-1D PAN + LISS-III merged satellite imagery forming the spatial database. The attribute database was integrated with the spatial sampling locations map in Arc/Info and maps showing the spatial distribution of water quality parameters were prepared in ArcView. Results indicated that high concentrations of total dissolved solids (TDS), nitrates, fluorides and total hardness were observed in a few industrial and densely populated areas, indicating deteriorated water quality, while the other areas exhibited moderate to good water quality.
Glossary | STORET Legacy Data Center | US EPA
2014-06-06
The U.S. Environmental Protection Agency (EPA) maintains two data management systems containing water quality information for the nation's waters: the Legacy Data Center (LDC), and STORET. The LDC is a static, archived database and STORET is an operational system actively being populated with water quality data.
Organizations - I | STORET Legacy Data Center | US EPA
2007-05-16
The U.S. Environmental Protection Agency (EPA) maintains two data management systems containing water quality information for the nation's waters: the Legacy Data Center (LDC), and STORET. The LDC is a static, archived database and STORET is an operational system actively being populated with water quality data.
Glossary | STORET Legacy Data Center | US EPA
2011-02-14
The U.S. Environmental Protection Agency (EPA) maintains two data management systems containing water quality information for the nation's waters: the Legacy Data Center (LDC), and STORET. The LDC is a static, archived database and STORET is an operational system actively being populated with water quality data.
Contacts | STORET Legacy Data Center | US EPA
2007-05-16
The U.S. Environmental Protection Agency (EPA) maintains two data management systems containing water quality information for the nation's waters: the Legacy Data Center (LDC), and STORET. The LDC is a static, archived database and STORET is an operational system actively being populated with water quality data.
Databases as policy instruments. About extending networks as evidence-based policy.
de Bont, Antoinette; Stoevelaar, Herman; Bal, Roland
2007-12-07
This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Our results demonstrate that policy makers hardly used the databases, neither for cost control nor for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. The databases fulfill control functions that were formerly located within the policy realm. The databases facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative for centralized control on the basis of monitoring data.
A comprehensive clinical research database based on CDISC ODM and i2b2.
Meineke, Frank A; Stäubert, Sebastian; Löbe, Matthias; Winter, Alfred
2014-01-01
We present a working approach for a clinical research database as part of an archival information system. The CDISC ODM standard is the target format for clinical study data and research-relevant routine data, thus decoupling the data ingest process from the access layer. The presented research database is comprehensive as it covers annotating, mapping and curation of poorly annotated source data. Besides a conventional relational database, the medical data warehouse i2b2 serves as the main frontend for end users. The system we developed is suitable to support patient recruitment, cohort identification and quality assurance in daily routine.
Development of water environment information management and water pollution accident response system
NASA Astrophysics Data System (ADS)
Zhang, J.; Ruan, H.
2009-12-01
In recent years, many water pollution accidents have occurred alongside rapid economic development. In this study, a water environment information management and water pollution accident response system is developed based on geographic information system (GIS) techniques. The system integrates a spatial database, an attribute database, a hydraulic model, and a water quality model under a user-friendly interface in a GIS environment. The system runs on both Client/Server (C/S) and Browser/Server (B/S) platforms, which focus on modeling and inquiry, respectively. The system provides spatial and attribute data inquiry, water quality evaluation, statistics, water pollution accident response case management (e.g., opening a reservoir), and 2D and 3D visualization functions, and gives supporting information for decision-making on water pollution accident response. A polluted plume in the Huaihe River was selected to simulate the transport of pollutants.
A comprehensive global genotype-phenotype database for rare diseases.
Trujillano, Daniel; Oprea, Gabriela-Elena; Schmitz, Yvonne; Bertoli-Avella, Aida M; Abou Jamra, Rami; Rolfs, Arndt
2017-01-01
The ability to discover genetic variants in a patient runs far ahead of the ability to interpret them. Databases with accurate descriptions of the causal relationship between the variants and the phenotype are valuable since these are critical tools in clinical genetic diagnostics. Here, we introduce a comprehensive and global genotype-phenotype database focusing on rare diseases. This database (CentoMD®) is a browser-based tool that enables access to a comprehensive, independently curated system utilizing stringent high-quality criteria and a quickly growing repository of genetic and human phenotype ontology (HPO)-based clinical information. Its main goals are to aid the evaluation of genetic variants, to enhance the validity of the genetic analytical workflow, to increase the quality of genetic diagnoses, and to improve evaluation of treatment options for patients with hereditary diseases. The database software correlates clinical information from consented patients and probands of different geographical backgrounds with a large dataset of genetic variants and, when available, biomarker information. An automated follow-up tool is incorporated that informs all users whenever a variant classification has changed. These unique features fully embedded in a CLIA/CAP-accredited quality management system allow appropriate data quality and enhanced patient safety. More than 100,000 genetically screened individuals are documented in the database, resulting in more than 470 million variant detections. Approximately 57% of the clinically relevant and uncertain variants in the database are novel. Notably, 3% of the genetic variants identified and previously reported in the literature as being associated with a particular rare disease were reclassified, based on internal evidence, as clinically irrelevant. The database offers a comprehensive summary of the clinical validity and causality of detected gene variants with their associated phenotypes, and is a valuable tool for identifying new disease genes through the correlation of novel genetic variants with specific, well-defined phenotypes.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
..., tribal entities, environmental groups, academic institutions, industrial groups) use the ambient air... System (AQS) database. Quality assurance/quality control records and monitoring network documentation are...
Legacy STORET Level 5 | STORET Legacy Data Center | US ...
2007-05-16
The U.S. Environmental Protection Agency (EPA) maintains two data management systems containing water quality information for the nation's waters: the Legacy Data Center (LDC), and STORET. The LDC is a static, archived database and STORET is an operational system actively being populated with water quality data.
Participation in a national nursing outcomes database: monitoring outcomes over time.
Loan, Lori A; Patrician, Patricia A; McCarthy, Mary
2011-01-01
The current and future climates in health care require increased accountability of health care organizations for the quality of the care they provide. Never before in the history of health care in America has this focus on quality been so critical. The imperative to measure nursing's impact without fully developed and tested monitoring systems is a critical issue for nurse executives and managers alike. This article describes a project to measure nursing structure, process, and outcomes in the military health system, the Military Nursing Outcomes Database project. Here we review the effectiveness of this project in monitoring changes over time, in satisfying expectations of nurse leaders in participating hospitals, and in evaluating the potential budgetary impacts of such a system. We conclude that the Military Nursing Outcomes Database did meet the needs of a monitoring system that is sensitive to changes over time in outcomes, provides interpretable data for nurse leaders, and could result in cost benefits and patient care improvements in organizations.
Exploring the feasibility of traditional image querying tasks for industrial radiographs
NASA Astrophysics Data System (ADS)
Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.
2015-08-01
Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.
Smirani, Rawen; Truchetet, Marie-Elise; Poursac, Nicolas; Naveau, Adrien; Schaeverbeke, Thierry; Devillard, Raphaël
2018-06-01
Oropharyngeal features are frequent and often understated in the clinical treatment guidelines for systemic sclerosis, in spite of important consequences on comfort, esthetics, nutrition and daily life. The aim of this systematic review was to assess a correlation between the oropharyngeal manifestations of systemic sclerosis and patients' health-related quality of life. A systematic search was conducted using four databases [PubMed®, Cochrane Database®, Dentistry & Oral Sciences Source®, and SCOPUS®] up to January 2018, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Grey literature and hand search were also included. Study selection, risk of bias assessment (Newcastle-Ottawa scale) and data extraction were performed by two independent reviewers. The review protocol was registered on the PROSPERO database with the code CRD42018085994. From 375 screened studies, 6 cross-sectional studies were included in the systematic review. The total number of patients included per study ranged from 84 to 178. These studies reported a statistically significant association between oropharyngeal manifestations of systemic sclerosis (mainly assessed by maximal mouth opening and the Mouth Handicap in Systemic Sclerosis scale) and an impaired quality of life (measured by different scales). Studies were unequal concerning risk of bias, mostly because of low level of evidence, different recruiting sources of samples, and different scales to assess the quality of life. This systematic review demonstrates a correlation between oropharyngeal manifestations of systemic sclerosis and impaired quality of life, despite the low level of evidence of included studies. Large-scale studies are needed to provide stronger evidence of this association. This article is protected by copyright. All rights reserved.
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The wide spread of digital technology in the medical field has led to a demand for the high-quality, high-speed, and user-friendly digital image presentation system in the daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90GB RAID, and a database server workstation. A 100Mbps Ethernet LAN interconnects all the sub-systems. The Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size, and found them applicable to the actual medical diagnosis. They also appreciated short image switching time that contributed to smooth presentation. Thus, we confirmed that its characteristics met the requirements.
Organizing a breast cancer database: data management.
Yi, Min; Hunt, Kelly K
2016-06-01
Developing and organizing a breast cancer database can provide data and serve as valuable research tools for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Assuring that the data collection process does not contribute inaccuracies can help to assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting it by implementing high security levels. A properly designed database provides you with access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application.
Database Performance Monitoring for the Photovoltaic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Katherine A.
The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time indexed databases (currently csv format), performs a series of quality control tests defined by the user, and creates reports which include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML formatted reports can be sent via email or hosted on a website. To compare performance of several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
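The workflow described for DPM (load time-indexed CSV data, apply user-defined quality-control tests, emit an HTML summary report) can be illustrated with a small pandas sketch. This is a hypothetical illustration of that style of pipeline, not the Sandia code; the file name, column names, and test thresholds are assumptions.

```python
import pandas as pd

def run_qc(csv_path, value_columns, valid_range=(0.0, 1500.0),
           max_gap="30min", report_path="qc_report.html"):
    """Load a time-indexed CSV, apply simple QC tests, and write an HTML report.

    valid_range and max_gap stand in for user-defined test parameters.
    """
    df = pd.read_csv(csv_path, index_col=0, parse_dates=True)

    rows = []
    for col in value_columns:
        series = df[col]
        lo, hi = valid_range
        out_of_range = ((series < lo) | (series > hi)).sum()
        missing = series.isna().sum()
        # Flag timestamps where the gap to the previous sample exceeds max_gap
        gaps = (series.dropna().index.to_series().diff() > pd.Timedelta(max_gap)).sum()
        rows.append({
            "column": col,
            "n_samples": len(series),
            "missing": int(missing),
            "out_of_range": int(out_of_range),
            "large_gaps": int(gaps),
            "mean": series.mean(),
            "std": series.std(),
        })

    summary = pd.DataFrame(rows).set_index("column")
    summary.to_html(report_path)  # the report could then be emailed or hosted
    return summary

# Usage with assumed file layout:
# run_qc("pv_site_A.csv", ["dc_power", "poa_irradiance"])
```

Scheduling such a script once per day, as the abstract describes, would only require wrapping the call in a cron job or equivalent task scheduler.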
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yung, J; Stefan, W; Reeve, D
2015-06-15
Purpose: Phantom measurements allow for the performance of magnetic resonance (MR) systems to be evaluated. The American Association of Physicists in Medicine (AAPM) Report No. 100, Acceptance Testing and Quality Assurance Procedures for MR Imaging Facilities, the American College of Radiology (ACR) MR Accreditation Program MR phantom testing, and ACR MRI quality control (QC) program documents help to outline specific tests for establishing system performance baselines as well as system stability over time. Analyzing and processing tests from multiple systems can be time-consuming for medical physicists. Besides determining whether tests are within predetermined limits or criteria, monitoring longitudinal trends can also help prevent costly downtime of systems during clinical operation. In this work, a semi-automated QC program was developed to analyze and record measurements in a database that allowed for easy access to historical data. Methods: Image analysis was performed on 27 different MR systems of 1.5T and 3.0T field strengths from GE and Siemens manufacturers. Recommended measurements involved the ACR MRI Accreditation Phantom, spherical homogeneous phantoms, and a phantom with a uniform hole pattern. Measurements assessed geometric accuracy and linearity, position accuracy, image uniformity, signal, noise, ghosting, transmit gain, center frequency, and magnetic field drift. The program was designed with open source tools, employing Linux, Apache, the MySQL database and the Python programming language for the frontend and backend. Results: Processing time for each image is <2 seconds. Figures are produced to show the regions of interest (ROIs) used for analysis. Historical data can be reviewed to compare against previous years and to inspect for trends. Conclusion: An MRI quality assurance and QC program is necessary for maintaining high quality, ACR MRI Accredited MR programs. A reviewable database of phantom measurements assists medical physicists with processing and monitoring of large datasets. Longitudinal data can reveal trends that, although within passing criteria, indicate underlying system issues.
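The ROI-based measurements listed in the abstract (signal, noise, image uniformity, ghosting) can be sketched with simplified ACR-style formulas. The code below is an illustration of that kind of calculation, not the authors' program; the ROI placements, sizes, and the single-background SNR estimate are assumptions.

```python
import numpy as np

def roi_mean(img, center, radius):
    """Mean pixel value inside a circular region of interest."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return img[mask].mean()

def phantom_metrics(img, signal_roi, bg_rois, hot_roi, cold_roi):
    """Simplified ACR-style metrics from a single uniform phantom slice.

    signal_roi        : (center, radius) of the large central ROI
    bg_rois           : dict of 'top', 'bottom', 'left', 'right' background ROIs
    hot_roi, cold_roi : small ROIs at the brightest/darkest parts of the phantom
    """
    signal = roi_mean(img, *signal_roi)
    bg = {k: roi_mean(img, *v) for k, v in bg_rois.items()}

    # Percent integral uniformity: 100 * (1 - (high - low) / (high + low))
    high, low = roi_mean(img, *hot_roi), roi_mean(img, *cold_roi)
    piu = 100.0 * (1.0 - (high - low) / (high + low))

    # Percent signal ghosting: |(top + bottom) - (left + right)| / (2 * signal)
    ghosting = 100.0 * abs((bg["top"] + bg["bottom"]) -
                           (bg["left"] + bg["right"])) / (2.0 * signal)

    # Crude SNR estimate from the standard deviation of one background ROI
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    c, r = bg_rois["top"]
    noise_mask = (yy - c[0]) ** 2 + (xx - c[1]) ** 2 <= r ** 2
    snr = signal / img[noise_mask].std()

    return {"signal": signal, "PIU": piu, "ghosting_pct": ghosting, "SNR": snr}
```

Writing the returned dictionary to a database table per scan date would give the longitudinal record the abstract describes, ready for trend plots and limit checks.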
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson Khosah
2007-07-31
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project was conducted in two phases. Phase One included the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two involved the development of a platform for on-line data analysis. Phase Two included the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now technically completed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paige, Karen Schultz; Gomez, Penelope E.
This document describes the approach Waste and Environmental Services - Environmental Data and Analysis (WES-EDA) plans to take to resolve the issues presented in a recent audit of the WES-EDA Environmental Database relative to the RACER database. A majority of the issues discovered in the audit will be resolved in May 2011 when the WES-EDA Environmental Database, along with other LANL databases, is integrated and moved to a new vendor providing an Environmental Information Management (EIM) system that allows reporting capabilities for all users directly from the database. The EIM system will reside in a publicly accessible LANL cloud-based software system. When this transition occurs, the data quality, completeness, and access will change significantly. In the remainder of this document, this new structure will be referred to as the LANL Cloud System. In general, our plan is to address the issues brought up in this audit in three ways: (1) Data quality issues such as units and detection status, which impinge upon data usability, will be resolved as soon as possible so that data quality is maintained. (2) Issues requiring data cleanup, such as lookup tables, legacy data, locations, codes, and significant data discrepancies, will be addressed as resources permit. (3) Issues associated with data feed problems will be eliminated by the LANL Cloud System, because there will be no data feed. As discussed in the paragraph above, in the future the data will reside in a publicly accessible system. Note that report writers may choose to convert, adapt, or simplify the information they receive officially through our database, thereby introducing data discrepancies between the database and the public report. It is not always possible to incorporate and/or correct these errors when they occur. Issues in the audit will be discussed in the order in which they are presented in the audit report. Clarifications will also be noted, as the audit report was a draft document at the time of this response.
PERFORMANCE AUDITING OF A HUMAN AIR POLLUTION EXPOSURE SYSTEM FOR PM2.5
Databases derived from human health effects research play a vital role in setting environmental standards. An underlying assumption in using these databases for standard setting purposes is that they are of adequate quality. The performance auditing program described in this ma...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-14
... in EPA's Air Quality System (AQS) database. To account for missing data, the procedures found in... determination is based upon complete, certified, quality-assured ambient air quality monitoring data for the... proposing? II. What is the background for this proposed action? III. What is EPA's analysis of data for...
Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems
NASA Astrophysics Data System (ADS)
Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald
A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no possibility for a just measurement of the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today’s benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction of large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Main focus is to measure the adaptability of a database management system according to shifting workloads. We will give details on our design approach that uses sophisticated pattern analysis and data mining techniques.
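The benchmark's central idea, request mixes that drift over time as user behaviour shifts, can be sketched as a simple workload generator. The query classes, daily cycle, and weights below are illustrative assumptions and not the benchmark's actual model derived from the eLearning system.

```python
import math
import random

QUERY_CLASSES = ["browse_course", "search_forum", "submit_quiz", "admin_report"]

def workload(duration_s, base_rate=50.0, seed=42):
    """Yield (timestamp, query_class) pairs whose mix shifts over a daily cycle.

    base_rate is requests per second; the mix drifts between a read-heavy
    daytime profile and a write-heavier evening profile.
    """
    rng = random.Random(seed)
    t = 0.0
    while t < duration_s:
        phase = (t % 86_400) / 86_400                  # fraction of a simulated day
        daytime = 0.5 * (1 + math.sin(2 * math.pi * phase))  # 0 = night, 1 = midday
        weights = [
            0.5 + 0.3 * daytime,          # browsing dominates during the day
            0.2 + 0.1 * daytime,          # searches follow the same trend
            0.25 * (1 - daytime) + 0.05,  # quiz submissions cluster in the evening
            0.05,                         # admin reports as constant background
        ]
        yield t, rng.choices(QUERY_CLASSES, weights=weights, k=1)[0]
        # Exponential inter-arrival times approximate a Poisson request stream
        t += rng.expovariate(base_rate)

# Usage: replay the stream against the database under test
for ts, q in workload(duration_s=5):
    pass  # issue query q at (simulated) time ts
```

Measuring how quickly the system's self-tuning components adapt as the weights drift is then a matter of comparing response times before and after each shift, which is the kind of adaptability measurement the abstract argues standard peak-throughput benchmarks cannot provide.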
NASA Astrophysics Data System (ADS)
Sprintall, J.; Cowley, R.; Palmer, M. D.; Domingues, C. M.; Suzuki, T.; Ishii, M.; Boyer, T.; Goni, G. J.; Gouretski, V. V.; Macdonald, A. M.; Thresher, A.; Good, S. A.; Diggs, S. C.
2016-02-01
Historical ocean temperature profile observations provide a critical element for a host of ocean and climate research activities. These include providing initial conditions for seasonal-to-decadal prediction systems, evaluating past variations in sea level and Earth's energy imbalance, ocean state estimation for studying variability and change, and climate model evaluation and development. The International Quality controlled Ocean Database (IQuOD) initiative represents a community effort to create the most globally complete temperature profile dataset, with (intelligent) metadata and assigned uncertainties. With an internationally coordinated effort organized by oceanographers, with data and ocean instrumentation expertise, and in close consultation with end users (e.g., climate modelers), the IQuOD initiative will assess and maximize the potential of an irreplaceable collection of ocean temperature observations (tens of millions of profiles collected at a cost of tens of billions of dollars, since 1772) to fulfil the demand for a climate-quality global database that can be used with greater confidence in a vast range of climate change related research and services of societal benefit. Progress towards version 1 of the IQuOD database, ongoing and future work will be presented. More information on IQuOD is available at www.iquod.org.
Application of cloud database in the management of clinical data of patients with skin diseases.
Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning
2015-04-01
To evaluate the needs and applications of a cloud database in the daily practice of a dermatology department. The cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores of self-rating scales. The results were input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients were included and analyzed using the cloud database. The disease status, quality of life, and prognosis were obtained by statistical calculations. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.
Design and implementation of an audit trail in compliance with US regulations.
Jiang, Keyuan; Cao, Xiang
2011-10-01
Audit trails have been used widely to ensure quality of study data and have been implemented in computerized clinical trials data systems. Increasingly, there is a need to audit access to study participant identifiable information to provide assurance that study participant privacy is protected and confidentiality is maintained. In the United States, several federal regulations specify how the audit trail function should be implemented. To describe the development and implementation of a comprehensive audit trail system that meets the regulatory requirements of assuring data quality and integrity and protecting participant privacy and that is also easy to implement and maintain. The audit trail system was designed and developed after we examined regulatory requirements, data access methods, prevailing application architecture, and good security practices. Our comprehensive audit trail system was developed and implemented at the database level using a commercially available database management software product. It captures both data access and data changes with the correct user identifier. Documentation of access is initiated automatically in response to either data retrieval or data change at the database level. Currently, our system has been implemented only on one commercial database management system. Although our audit trail algorithm does not allow for logging aggregate operations, aggregation does not reveal sensitive private participant information. Careful consideration must be given to data items selected for monitoring because selection of all data items using our system can dramatically increase the requirements for computer disk space. Evaluating the criticality and sensitivity of individual data items selected can control the storage requirements for clinical trial audit trail records. Our audit trail system is capable of logging data access and data change operations to satisfy regulatory requirements. Our approach is applicable to virtually any data that can be stored in a relational database.
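The paper's key design decision, capturing data access and change at the database level rather than in application code, can be illustrated with a trigger-based change log. The sketch below uses SQLite purely so the example is self-contained; the authors implemented their system on a commercial database management system, and details such as the table names and how the user identifier is obtained (here a column the application fills in; in a server DBMS it would typically be a session variable) are assumptions.

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS lab_results (
    id INTEGER PRIMARY KEY,
    participant_id TEXT,
    value REAL,
    modified_by TEXT          -- application records the authenticated user here
);
CREATE TABLE IF NOT EXISTS audit_trail (
    audit_id    INTEGER PRIMARY KEY,
    table_name  TEXT,
    row_id      INTEGER,
    action      TEXT,
    changed_at  TEXT,
    changed_by  TEXT,
    old_value   REAL,
    new_value   REAL
);
-- Change capture happens at the database level, independent of the application
CREATE TRIGGER IF NOT EXISTS trg_lab_results_update
AFTER UPDATE ON lab_results
BEGIN
    INSERT INTO audit_trail(table_name, row_id, action, changed_at,
                            changed_by, old_value, new_value)
    VALUES ('lab_results', OLD.id, 'UPDATE', datetime('now'),
            NEW.modified_by, OLD.value, NEW.value);
END;
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
conn.execute("INSERT INTO lab_results(participant_id, value, modified_by) "
             "VALUES ('P-001', 5.4, 'nurse_a')")
conn.execute("UPDATE lab_results SET value = 5.9, modified_by = 'dr_b' WHERE id = 1")
for row in conn.execute("SELECT action, changed_at, changed_by, old_value, new_value "
                        "FROM audit_trail"):
    print(row)
```

Logging read access, as the article also requires, would need an additional mechanism (for example, views or stored procedures that write to the audit table on retrieval), since plain SELECT statements do not fire triggers.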
A review of data quality assessment methods for public health information systems.
Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping
2014-05-14
High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interviews and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users' concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process.
A Review of Data Quality Assessment Methods for Public Health Information Systems
Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping
2014-01-01
High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. Relevant studies were identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interviews and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users' concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process. PMID:24830450
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Jain, Anil K; Feng, Jianjiang
2011-01-01
Latent fingerprint identification is of critical importance to law enforcement agencies in identifying suspects: Latent fingerprints are inadvertent impressions left by fingers on surfaces of objects. While tremendous progress has been made in plain and rolled fingerprint matching, latent fingerprint matching continues to be a difficult problem. Poor quality of ridge impressions, small finger area, and large nonlinear distortion are the main difficulties in latent fingerprint matching compared to plain or rolled fingerprint matching. We propose a system for matching latent fingerprints found at crime scenes to rolled fingerprints enrolled in law enforcement databases. In addition to minutiae, we also use extended features, including singularity, ridge quality map, ridge flow map, ridge wavelength map, and skeleton. We tested our system by matching 258 latents in the NIST SD27 database against a background database of 29,257 rolled fingerprints obtained by combining the NIST SD4, SD14, and SD27 databases. The minutiae-based baseline rank-1 identification rate of 34.9 percent was improved to 74 percent when extended features were used. In order to evaluate the relative importance of each extended feature, these features were incrementally used in the order of their cost in marking by latent experts. The experimental results indicate that singularity, ridge quality map, and ridge flow map are the most effective features in improving the matching accuracy.
Quality Attribute-Guided Evaluation of NoSQL Databases: An Experience Report
2014-10-18
detailed technical evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study... big data software systems [Agarwal 2011]. Internet-born organizations such as Google and Amazon are at the cutting edge of this revolution... [Chang 2008], along with those of numerous other big data innovators, have made a variety of open source and commercial data management technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuszak, M; Anderson, C; Lee, C
Purpose: With electronic medical records, patient information for the treatment planning process has become disseminated across multiple applications with limited quality control and many associated failure modes. We present the development of a single application with a centralized database to manage the planning process. Methods: The system was designed to replace current functionalities of (i) static directives representing the physician intent for the prescription and planning goals, localization information for delivery, and other information, (ii) planning objective reports, (iii) localization and image guidance documents and (iv) the official radiation therapy prescription in the medical record. Using the Eclipse Scripting Application Programming Interface, a plug-in script with an associated domain-specific SQL Server database was created to manage the information in (i)–(iv). The system's user interface and database were designed by a team of physicians, clinical physicists, database experts, and software engineers to ensure usability and robustness for clinical use. Results: The resulting system has been fully integrated within the TPS via a custom script and database. Planning scenario templates, version control, approvals, and logic-based quality control allow this system to fully track and document the planning process as well as physician approval of tradeoffs while improving the consistency of the data. Multiple plans and prescriptions are supported along with non-traditional dose objectives and evaluation such as biologically corrected models, composite dose limits, and management of localization goals. User-specific custom views were developed for the attending physician review, physicist plan checks, treating therapists, and peer review in chart rounds. Conclusion: A method was developed to maintain cohesive information throughout the planning process within one integrated system by using a custom treatment planning management application that interfaces directly with the TPS. Future work includes quantifying the improvements in quality, safety and efficiency that are possible with the routine clinical use of this system. Supported in part by NIH-P01-CA-059827.
A novel processed food classification system applied to Australian food composition databases.
O'Halloran, S A; Lacy, K E; Grimes, C A; Woods, J; Campbell, K J; Nowson, C A
2017-08-01
The extent of food processing can affect the nutritional quality of foodstuffs. Categorising foods by the level of processing emphasises the differences in nutritional quality between foods within the same food group and is likely useful for determining dietary processed food consumption. The present study aimed to categorise foods within Australian food composition databases according to the level of food processing using a processed food classification system, as well as assess the variation in the levels of processing within food groups. A processed foods classification system was applied to food and beverage items contained within Australian Food and Nutrient (AUSNUT) 2007 (n = 3874) and AUSNUT 2011-13 (n = 5740). The proportions of Minimally Processed (MP), Processed Culinary Ingredients (PCI), Processed (P) and Ultra Processed (ULP) items by AUSNUT food group and the overall proportion of the four processed food categories across AUSNUT 2007 and AUSNUT 2011-13 were calculated. Across the food composition databases, the overall proportions of foods classified as MP, PCI, P and ULP were 27%, 3%, 26% and 44% for AUSNUT 2007 and 38%, 2%, 24% and 36% for AUSNUT 2011-13. Although there was wide variation in the classifications of food processing within the food groups, approximately one-third of foodstuffs were classified as ULP food items across both the 2007 and 2011-13 AUSNUT databases. This Australian processed food classification system will allow researchers to easily quantify the contribution of processed foods within the Australian food supply to assist in assessing the nutritional quality of the dietary intake of population groups. © 2017 The British Dietetic Association Ltd.
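The headline percentages in the abstract (shares of MP, PCI, P, and ULP items in each AUSNUT release) amount to a grouped proportion calculation. A minimal pandas sketch of that aggregation step, with hypothetical input frames whose counts are sized only to echo the reported shares:

```python
import pandas as pd

# Hypothetical classified food lists: one row per AUSNUT item with its
# processing category already assigned (MP, PCI, P, ULP).
ausnut_2007 = pd.DataFrame({"category": ["MP"] * 27 + ["PCI"] * 3 + ["P"] * 26 + ["ULP"] * 44})
ausnut_2011 = pd.DataFrame({"category": ["MP"] * 38 + ["PCI"] * 2 + ["P"] * 24 + ["ULP"] * 36})

def category_shares(items: pd.DataFrame) -> pd.Series:
    """Percentage of items in each processing category."""
    return (items["category"].value_counts(normalize=True) * 100).round(0)

summary = pd.DataFrame({
    "AUSNUT 2007": category_shares(ausnut_2007),
    "AUSNUT 2011-13": category_shares(ausnut_2011),
})
print(summary)
```

A per-food-group breakdown, as reported in the study, would simply add a `food_group` column and a `groupby("food_group")` before the same `value_counts` call.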
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-29
..., environmental groups, academic institutions, industrial groups) use the ambient air quality data for many... requested in this ICR submit these data electronically to the U.S. EPA's Air Quality System (AQS) database...
Selby, Luke V; Sjoberg, Daniel D; Cassella, Danielle; Sovel, Mindy; Weiser, Martin R; Sepkowitz, Kent; Jones, David R; Strong, Vivian E
2015-06-15
Surgical quality improvement requires accurate tracking and benchmarking of postoperative adverse events. We track surgical site infections (SSIs) with two systems: our in-house surgical secondary events (SSE) database and the National Surgical Quality Improvement Program (NSQIP). The SSE database, a modification of the Clavien-Dindo classification, categorizes SSIs by their anatomic site, whereas NSQIP categorizes them by their level. Our aim was to directly compare these different definitions. NSQIP and the SSE database entries for all surgeries performed in 2011 and 2012 were compared. To match NSQIP definitions, and while blinded to NSQIP results, entries in the SSE database were categorized as either incisional (superficial or deep) or organ space infections. These categorizations were compared with NSQIP records; agreement was assessed with Cohen kappa. The 5028 patients in our cohort had a 6.5% SSI rate in the SSE database and a 4% rate in NSQIP, with an overall agreement of 95% (kappa = 0.48, P < 0.0001). The rates of categorized infections were similarly well matched; incisional rates of 4.1% and 2.7% for the SSE database and NSQIP and organ space rates of 2.6% and 1.5%. Overall agreements were 96% (kappa = 0.36, P < 0.0001) and 98% (kappa = 0.55, P < 0.0001), respectively. Over 80% of cases recorded by the SSE database but not NSQIP did not meet NSQIP criteria. The SSE database is an accurate, real-time record of postoperative SSIs. Institutional databases that capture all surgical cases can be used in conjunction with NSQIP with excellent concordance. Copyright © 2015 Elsevier Inc. All rights reserved.
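The agreement statistics reported here (percent agreement and Cohen kappa between the SSE database and NSQIP) are computed from two per-patient indicator vectors. A minimal sketch with synthetic labels follows; it assumes scikit-learn is available and is not the authors' analysis code, and the simulated disagreement rate is an arbitrary illustration.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)

# Per-patient SSI indicators (1 = infection recorded) from the two systems.
# Synthetic data sized roughly like the cohort in the abstract.
sse_db = rng.binomial(1, 0.065, size=5028)
nsqip = sse_db.copy()
flip = rng.random(5028) < 0.04          # simulated disagreements between systems
nsqip[flip] = 1 - nsqip[flip]

agreement = (sse_db == nsqip).mean()
kappa = cohen_kappa_score(sse_db, nsqip)
print(f"overall agreement: {agreement:.1%}, Cohen kappa: {kappa:.2f}")
```

The same call applied to the incisional-only and organ-space-only indicator vectors would reproduce the category-level kappas quoted in the abstract.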
HIV quality report cards: impact of case-mix adjustment and statistical methods.
Ohl, Michael E; Richardson, Kelly K; Goto, Michihiko; Vaughan-Sarrazin, Mary; Schweizer, Marin L; Perencevich, Eli N
2014-10-15
There will be increasing pressure to publicly report and rank the performance of healthcare systems on human immunodeficiency virus (HIV) quality measures. To inform discussion of public reporting, we evaluated the influence of case-mix adjustment when ranking individual care systems on the viral control quality measure. We used data from the Veterans Health Administration (VHA) HIV Clinical Case Registry and administrative databases to estimate case-mix adjusted viral control for 91 local systems caring for 12,368 patients. We compared results using 2 adjustment methods, the observed-to-expected estimator and the risk-standardized ratio. Overall, 10,913 patients (88.2%) achieved viral control (viral load ≤400 copies/mL). Prior to case-mix adjustment, system-level viral control ranged from 51% to 100%. Seventeen (19%) systems were labeled as low outliers (performance significantly below the overall mean) and 11 (12%) as high outliers. Adjustment for case mix (patient demographics, comorbidity, CD4 nadir, time on therapy, and income from VHA administrative databases) reduced the number of low outliers by approximately one-third, but results differed by method. The adjustment model had moderate discrimination (c statistic = 0.66), suggesting potential for unadjusted risk when using administrative data to measure case mix. Case-mix adjustment affects rankings of care systems on the viral control quality measure. Given the sensitivity of rankings to the selection of case-mix adjustment methods, and the potential for unadjusted risk when using variables limited to current administrative databases, the HIV care community should explore optimal methods for case-mix adjustment before moving forward with public reporting. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
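Both adjustment approaches compared in the study start from a patient-level model of the probability of viral control. The sketch below shows only the observed-to-expected step, using a logistic regression on assumed covariate and column names; it illustrates the general method rather than the VHA analysis, and omits the uncertainty intervals needed to label outliers.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def observed_to_expected(df, covariates, outcome="viral_control", system="system_id"):
    """Per-system observed-to-expected (O/E) ratios for a binary quality measure.

    df must contain one row per patient with the outcome (0/1), the care-system
    identifier, and the case-mix covariates; all column names are assumptions.
    """
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[outcome])
    df = df.assign(expected=model.predict_proba(df[covariates])[:, 1])

    per_system = df.groupby(system).agg(
        observed=(outcome, "mean"),
        expected=("expected", "mean"),
        n=("expected", "size"),
    )
    per_system["o_to_e"] = per_system["observed"] / per_system["expected"]
    # Ratios well below 1 would flag candidate low outliers, subject to
    # confidence intervals that are omitted here for brevity.
    return per_system.sort_values("o_to_e")

# Usage with hypothetical columns:
# observed_to_expected(cohort,
#     covariates=["age", "comorbidity_score", "cd4_nadir", "years_on_therapy"])
```

A risk-standardized ratio would instead use a hierarchical model with a system-level random effect in the numerator, which is the main methodological difference the abstract alludes to when noting that the two methods rank systems differently.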
Dong, Yi; Fang, Kun; Wang, Xin; Chen, Shengdi; Liu, Xueyuan; Zhao, Yuwu; Guan, Yangtai; Cai, Dingfang; Li, Gang; Liu, Jianmin; Liu, Jianren; Zhuang, Jianhua; Wang, Panshi; Chen, Xin; Shen, Haipeng; Wang, David Z; Xian, Ying; Feng, Wuwei; Campbell, Bruce Cv; Parsons, Mark; Dong, Qiang
2018-07-01
Background: Several stroke outcome and quality control projects have demonstrated success in stroke care quality improvement through structured processes. However, Chinese health-care systems are challenged by overwhelming numbers of patients, limited resources, and large regional disparities. Aim: To improve the quality of stroke care and address regional disparities through process improvement. Method and design: The Shanghai Stroke Service System (4S) has been established as a regional network for stroke care quality improvement in the Shanghai metropolitan area. The 4S registry uses a web-based database that automatically extracts data from structured electronic medical records. Site-specific education and training programs will be designed and administered according to each site's baseline characteristics. Both acute reperfusion therapies (including thrombectomy and thrombolysis in the acute phase) and subsequent care are measured and monitored with feedback. The primary outcome is the difference in quality metrics (including the rate of thrombolysis in acute stroke and key performance indicators in secondary prevention) between baseline and post-intervention. Conclusions: The 4S system is a regional stroke network that monitors ongoing stroke care quality in Shanghai. This project will provide the opportunity to evaluate the spectrum of acute stroke care and design quality improvement processes for better stroke care. A regional stroke network model for quality improvement will be explored and might be expanded to other large cities in China. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifier: NCT02735226.
Irwin, Jodi A; Saunier, Jessica L; Strouss, Katharine M; Sturk, Kimberly A; Diegoli, Toni M; Just, Rebecca S; Coble, Michael D; Parson, Walther; Parsons, Thomas J
2007-06-01
In an effort to increase the quantity, breadth and availability of mtDNA databases suitable for forensic comparisons, we have developed a high-throughput process to generate approximately 5000 control region sequences per year from regional US populations, global populations from which the current US population is derived and global populations currently under-represented in available forensic databases. The system utilizes robotic instrumentation for all laboratory steps from pre-extraction through sequence detection, and a rigorous eight-step, multi-laboratory data review process with entirely electronic data transfer. Over the past 3 years, nearly 10,000 control region sequences have been generated using this approach. These data are being made publicly available and should further address the need for consistent, high-quality mtDNA databases for forensic testing.
An evaluation of information retrieval accuracy with simulated OCR output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, W.B.; Harding, S.M.; Taghva, K.
Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out evaluation using simulated OCR output on a variety of databases. The results show that high quality OCR devices have little effect on the accuracy of retrieval, but low quality devices used with databases of short documents can result in significant degradation.
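Simulating OCR output over clean text, as this evaluation did, can be approximated by injecting character-level substitutions and deletions at a chosen error rate before indexing. The following is a toy sketch; the error model and rates are assumptions and not the one used in the study.

```python
import random
import string

def simulate_ocr(text, char_error_rate=0.05, seed=0):
    """Corrupt clean text with random substitutions/deletions to mimic OCR noise.

    char_error_rate is the probability that any character is degraded; real OCR
    devices show confusion patterns (e.g. 'rn' read as 'm') that this toy model ignores.
    """
    rng = random.Random(seed)
    out = []
    for ch in text:
        if rng.random() < char_error_rate:
            if rng.random() < 0.5:
                continue                                     # deletion
            out.append(rng.choice(string.ascii_lowercase))   # substitution
        else:
            out.append(ch)
    return "".join(out)

clean = "optical character recognition errors and their effect on retrieval accuracy"
print(simulate_ocr(clean, char_error_rate=0.10))
# The degraded collection would then be indexed and queried with the same
# relevance judgements as the clean collection to measure the accuracy drop.
```

Running the same query set against the clean and degraded indexes, and comparing recall/precision, is the kind of controlled comparison that supports the abstract's conclusion about short documents being the most vulnerable.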
The Danish Cardiac Rehabilitation Database.
Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole
2016-01-01
The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). Hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected are related to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD. Mandatory registration of data at both patient level as well as program level is done on the database. DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to benefit patients.
An Integrated Decision Support System for Water Quality Management of Songhua River Basin
NASA Astrophysics Data System (ADS)
Zhang, Haiping; Yin, Qiuxiao; Chen, Ling
2010-11-01
In the Songhua River Basin of China, many water resource and water environment conflicts interact. A Decision Support System (DSS) for water quality management has been established for the Basin. The system is characterized by the incorporation of a numerical water quality model system into a conventional water quality management system, which usually consists of a geographic information system (GIS), WebGIS technology, a database system and network technology. The model system is built on DHI MIKE software and comprises a basin rainfall-runoff module, a basin pollution load evaluation module, a river hydrodynamic module and a river water quality module. The DSS provides a friendly graphical user interface that enables the rapid and transparent calculation of various water quality management scenarios, and also enables the convenient access and interpretation of the modeling results to assist decision-making.
[Data validation methods and discussion on Chinese materia medica resource survey].
Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing
2013-07-01
Since the beginning of the fourth national survey of the Chinese materia medica resources, 22 provinces have conducted pilot surveys. The survey teams have reported an immense volume of data, which places very high demands on the construction of the database system. In order to ensure data quality, it is necessary to check and validate the data in the database system. Data validation is an important method to ensure the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the fourth national survey of the Chinese materia medica resources database system, and further improves the design ideas and procedures of data validation. The purpose of this study is to promote the smooth progress of the survey work.
TopoCad - A unified system for geospatial data and services
NASA Astrophysics Data System (ADS)
Felus, Y. A.; Sagi, Y.; Regev, R.; Keinan, E.
2013-10-01
"E-government" is a leading trend in public sector activities in recent years. The Survey of Israel set as a vision to provide all of its services and datasets online. The TopoCad system is the latest software tool developed in order to unify a number of services and databases into one on-line and user friendly system. The TopoCad system is based on Web 1.0 technology; hence the customer is only a consumer of data. All data and services are accessible for the surveyors and geo-information professional in an easy and comfortable way. The future lies in Web 2.0 and Web 3.0 technologies through which professionals can upload their own data for quality control and future assimilation with the national database. A key issue in the development of this complex system was to implement a simple and easy (comfortable) user experience (UX). The user interface employs natural language dialog box in order to understand the user requirements. The system then links spatial data with alpha-numeric data in a flawless manner. The operation of the TopoCad requires no user guide or training. It is intuitive and self-taught. The system utilizes semantic engines and machine understanding technologies to link records from diverse databases in a meaningful way. Thus, the next generation of TopoCad will include five main modules: users and projects information, coordinates transformations and calculations services, geospatial data quality control, linking governmental systems and databases, smart forms and applications. The article describes the first stage of the TopoCad system and gives an overview of its future development.
Modernized Techniques for Dealing with Quality Data and Derived Products
NASA Astrophysics Data System (ADS)
Neiswender, C.; Miller, S. P.; Clark, D.
2008-12-01
"I just want a picture of the ocean floor in this area" is expressed all too often by researchers, educators, and students in the marine geosciences. As more sophisticated systems are developed to handle data collection and processing, the demand for quality data, and standardized products continues to grow. Data management is an invisible bridge between science and researchers/educators. The SIOExplorer digital library presents more than 50 years of ocean-going research. Prior to publication, all data is checked for quality using standardized criterion developed for each data stream. Despite the evolution of data formats and processing systems, SIOExplorer continues to present derived products in well- established formats. Standardized products are published for each cruise, and include a cruise report, MGD77 merged data, multi-beam flipbook, and underway profiles. Creation of these products is made possible by processing scripts, which continue to change with ever-evolving data formats. We continue to explore the potential of database-enabled creation of standardized products, such as the metadata-rich MGD77 header file. Database-enabled, automated processing produces standards-compliant metadata for each data and derived product. Metadata facilitates discovery and interpretation of published products. This descriptive information is stored both in an ASCII file, and a searchable digital library database. SIOExplorer's underlying technology allows focused search and retrieval of data and products. For example, users can initiate a search of only multi-beam data, which includes data-specific parameters. This customization is made possible with a synthesis of database, XML, and PHP technology. The combination of standardized products and digital library technology puts quality data and derived products in the hands of scientists. Interoperable systems enable distribution these published resources using technology such as web services. By developing modernized strategies to deal with data, Scripps Institution of Oceanography is able to produce and distribute well-formed, and quality-tested derived products, which aid research, understanding, and education.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-24
... recorded in EPA's Air Quality System (AQS) database. To account for missing data, the procedures found in... three-year period and then adjusts for missing data. In short, if the three-year average expected... ambient air quality monitoring data for the 2001-2003 monitoring period showing that the area had an...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Frank T. Alex
2007-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-eighth month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
2006-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-second month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase 1, which is currently in progress and will take twelve months to complete, will include the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. In Phase 2, which will be completed in the second year of the project, a platform for on-line data analysis will be developed. Phase 2 will include the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its eleventh month of Phase 1 development activities.
Metnitz, P G; Laback, P; Popow, C; Laback, O; Lenz, K; Hiesmayr, M
1995-01-01
Patient Data Management Systems (PDMS) for ICUs collect, present and store clinical data. Various purposes, such as quality control and research, make analysis of these digitally stored data desirable. The aim of the Intensive Care Data Evaluation project (ICDEV) was to provide a database tool for the analysis of data recorded at various ICUs at the University Clinics of the Vienna General Hospital, where two different PDMSs are used: CareVue 9000 (Hewlett Packard, Andover, USA) at two ICUs (one medical and one neonatal) and PICIS Chart+ (PICIS, Paris, France) at one cardiothoracic ICU. CONCEPT AND METHODS: The development began with a clinically oriented analysis of the data collected in a PDMS at an ICU. After defining the database structure, we established a client-server based database system under Microsoft Windows NT and developed a user-friendly data querying application using Microsoft Visual C++ and Visual Basic. ICDEV was successfully installed at three different ICUs; adjustments to the different PDMS configurations were made within a few days. The database structure we developed enables a powerful query concept, representing an 'expert question compiler' that may help answer almost any clinical question. Several program modules facilitate queries at the patient, group and unit level. Results from ICDEV queries are automatically transferred to Microsoft Excel for display (in the form of configurable tables and graphs) and further processing. The ICDEV concept is configurable for adjustment to different intensive care information systems and can be used to support computerized quality control. However, as long as no sufficient artifact recognition or data validation software exists for automatically recorded patient data, the reliability of these data and their use for computer-assisted quality control remain unclear and should be studied further.
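A minimal sketch of the kind of group-level query such a system might compile, using SQLite via Python's standard library. The table and column names are assumptions for illustration, not the actual ICDEV schema or its query compiler.

```python
# Minimal sketch of a group-level query over PDMS-style observations.
# Table and column names are illustrative assumptions, not the ICDEV schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE observations (
    patient_id INTEGER, unit TEXT, parameter TEXT,
    value REAL, recorded_at TEXT
);
INSERT INTO observations VALUES
    (1, 'ICU-1', 'heart_rate', 92, '1995-01-01 10:00'),
    (1, 'ICU-1', 'heart_rate', 101, '1995-01-01 11:00'),
    (2, 'ICU-1', 'heart_rate', 77, '1995-01-01 10:30'),
    (3, 'ICU-2', 'heart_rate', 110, '1995-01-01 10:15');
""")

# Group-level question: mean heart rate per unit for a given period.
rows = conn.execute("""
    SELECT unit, COUNT(DISTINCT patient_id) AS patients, AVG(value) AS mean_hr
    FROM observations
    WHERE parameter = 'heart_rate'
      AND recorded_at BETWEEN '1995-01-01 00:00' AND '1995-01-01 23:59'
    GROUP BY unit
""").fetchall()

for unit, patients, mean_hr in rows:
    print(f"{unit}: {patients} patients, mean HR {mean_hr:.1f}")
```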
Space Launch System Ascent Static Aerodynamic Database Development
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T.; Bennett, David W.; Blevins, John A.; Erickson, Gary E.; Favaregh, Noah M.; Houlden, Heather P.; Tomek, William G.
2014-01-01
This paper describes the wind tunnel testing work and data analysis required to characterize the static aerodynamic environment of the ascent portion of flight of NASA's Space Launch System (SLS). Scaled models of the SLS have been tested in transonic and supersonic wind tunnels to gather the high-fidelity data that are used to build aerodynamic databases. A detailed description of the wind tunnel test that was conducted to produce the latest version of the database is presented, and a representative set of aerodynamic data is shown. The wind tunnel data quality remains very high; however, some concerns about wall interference effects at transonic Mach numbers are also discussed. Post-processing and analysis of the wind tunnel dataset are crucial for the development of a formal ascent aerodynamics database.
Hadley, Heidi K.
2000-01-01
Selected nitrogen and phosphorus (nutrient), suspended-sediment and total suspended-solids surface-water data were compiled from January 1980 through December 1995 within the Great Salt Lake Basins National Water-Quality Assessment study unit, which extends from southeastern Idaho to west-central Utah and from Great Salt Lake to the Wasatch and western Uinta Mountains. The data were retrieved from the U.S. Geological Survey National Water Information System and the State of Utah, Department of Environmental Quality, Division of Water Quality database. The Division of Water Quality database includes data that are submitted to the U.S. Environmental Protection Agency STOrage and RETrieval system. Water-quality data included in this report were selected for surface-water sites (rivers, streams, and canals) that had three or more nutrient, suspended-sediment, or total suspended-solids analyses. Also, 33 percent or more of the measurements at a site had to include discharge, and, for non-U.S. Geological Survey sites, there had to be 2 or more years of data. Ancillary data for parameters such as water temperature, pH, specific conductance, streamflow (discharge), dissolved oxygen, biochemical oxygen demand, alkalinity, and turbidity also were compiled, as available. The compiled nutrient database contains 13,511 samples from 191 selected sites. The compiled suspended-sediment and total suspended-solids database contains 11,642 samples from 142 selected sites. For the nutrient database, the median (50th percentile) sample period for individual sites is 6 years, and the 75th percentile is 14 years. The median number of samples per site is 52 and the 75th percentile is 110 samples. For the suspended-sediment and total suspended-solids database, the median sample period for individual sites is 9 years, and the 75th percentile is 14 years. The median number of samples per site is 76 and the 75th percentile is 120 samples. The compiled historical data are being used in the basinwide sampling strategy to characterize the broad-scale geographic and seasonal water-quality conditions in relation to major contaminant sources and background conditions. Data for this report are stored on a compact disc.
Identification of suitable fundus images using automated quality assessment methods.
Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet
2014-04-01
Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.
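Since per-class performance above is reported as the F1 score, the harmonic mean of precision and recall, a short worked example may help; the confusion-matrix counts below are invented purely to show the arithmetic.

```python
# F1 score as the harmonic mean of precision and recall.
# The confusion-matrix counts below are invented for illustration only.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: 99 good-quality images correctly flagged, 1 false positive, 0 missed.
print(round(f1_score(tp=99, fp=1, fn=0), 4))  # 0.995
```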
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
Guidelines for establishing and maintaining construction quality databases : tech brief.
DOT National Transportation Integrated Search
2006-12-01
Construction quality databases contain a variety of construction-related data that characterize the quality of materials and workmanship. The primary purpose of construction quality databases is to help State highway agencies (SHAs) assess the qualit...
Jantzen, Rodolphe; Rance, Bastien; Katsahian, Sandrine; Burgun, Anita; Looten, Vincent
2018-01-01
Open data, made widely available to the general public and journalists with minimal constraints, are needed to help rebuild trust between citizens and the health system. By opening data, we can expect to increase democratic accountability and the self-empowerment of citizens. This article aims at assessing the quality and reusability of the Transparency - Health database (Transp-db) with regard to the FAIR principles. More specifically, we examine the quality of the identity records of French medical doctors in the Transp-db. This study shows that the quality of the data in the Transp-db does not allow the recipients of an advantage or remuneration to be identified with certainty, noticeably reducing the impact of the open data effort.
Impact of medical director certification on nursing home quality of care.
Rowland, Frederick N; Cowles, Mick; Dickstein, Craig; Katz, Paul R
2009-07-01
This study tests the research hypothesis that certified medical directors are able to use their training, education, and knowledge to positively influence quality of care in US nursing homes. F-tag numbers were identified within the State Operations Manual that reflect dimensions of quality thought to be impacted by the medical director. A weighting system was developed based on the "scope and severity" level at which the nursing homes were cited for these specific tag numbers. Then homes led by certified medical directors were compared with homes led by medical directors not known to be certified. DATA/PARTICIPANTS: Data were obtained from the Centers for Medicare & Medicaid Services' Online Survey Certification and Reporting database for nursing homes. Homes with a certified medical director (547) were identified from the database of the American Medical Directors Association. The national survey database was used to compute a "standardized quality score" (zero representing best possible score and 1.0 representing average score) for each home, and the homes with certified medical directors compared with the other homes in the database. Regression analysis was then used to attempt to identify the most important contributors to measured quality score differences between the homes. The standardized quality score of facilities with certified medical directors (n=547) was 0.8958 versus 1.0037 for facilities without certified medical directors (n=15,230) (lower number represents higher quality). When nursing facility characteristics were added to the regression equation, the presence of a certified medical director accounted for up to 15% improvement in quality. The presence of certified medical directors is an independent predictor of quality in US nursing homes.
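The weighting scheme above is described only qualitatively, so the sketch below shows one hypothetical way a scope-and-severity weighted deficiency score could be computed and standardized against a national average. The weight table, example citations, and national mean are invented, not those used by the authors.

```python
# Hypothetical sketch of a scope-and-severity weighted deficiency score.
# The weight table, example citations, and national mean are invented for
# illustration; they are not the weights used in the study described above.

# Scope/severity levels roughly follow the CMS A-L grid: later letters are worse.
SEVERITY_WEIGHTS = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6,
                    "G": 8, "H": 10, "I": 12, "J": 16, "K": 20, "L": 25}

def facility_score(citations):
    """Sum of weights for a facility's quality-related F-tag citations."""
    return sum(SEVERITY_WEIGHTS[level] for _tag, level in citations)

def standardized(score, national_mean):
    """Scale so that 1.0 is the national average and 0 is the best possible."""
    return score / national_mean

facility_a = [("F-tag X", "D"), ("F-tag Y", "E")]   # two mid-level citations
facility_b = [("F-tag X", "B")]                      # one minor citation
national_mean = 10.0                                 # assumed average raw score

for name, cites in [("A", facility_a), ("B", facility_b)]:
    print(name, round(standardized(facility_score(cites), national_mean), 3))
```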
Geotherm: the U.S. geological survey geothermal information system
Bliss, J.D.; Rapport, A.
1983-01-01
GEOTHERM is a comprehensive system of public databases and software used to store, locate, and evaluate information on the geology, geochemistry, and hydrology of geothermal systems. Three main databases address the general characteristics of geothermal wells and fields, and the chemical properties of geothermal fluids; the last database is currently the most active. System tasks are divided into four areas: (1) data acquisition and entry, involving data entry via word processors and magnetic tape; (2) quality assurance, including the criteria and standards handbook and front-end data-screening programs; (3) operation, involving database backups and information extraction; and (4) user assistance, preparation of such items as application programs, and a quarterly newsletter. The principal task of GEOTHERM is to provide information and research support for the conduct of national geothermal-resource assessments. The principal users of GEOTHERM are those involved with the Geothermal Research Program of the U.S. Geological Survey. Information in the system is available to the public on request. © 1983.
1983-07-01
Distributed Computing Systems Impact on Software Quality. Thomas Y. ... "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing". This is discussed in the last paragraph ... data reduction, buffering, encryption, and error detection and correction functions. Examples of such data streams include imagery data and video.
[The Brazilian Hospital Information System and the acute myocardial infarction hospital care].
Escosteguy, Claudia Caminha; Portela, Margareth Crisóstomo; Medronho, Roberto de Andrade; de Vasconcellos, Maurício Teixeira Leite
2002-08-01
To analyze the applicability of the Brazilian Unified Health System's national hospital database for evaluating the quality of acute myocardial infarction (AMI) hospital care. A total of 1,936 hospital admission forms with AMI as the primary diagnosis in the municipal district of Rio de Janeiro, Brazil, in 1997 were evaluated. Data were collected from the national hospital database. A stratified random sample of 391 medical records was also evaluated. AMI diagnosis agreement followed literature criteria. Variable accuracy was analyzed using the kappa agreement index. The quality of the AMI diagnosis recorded in the hospital admission forms was satisfactory according to the gold standard of the literature. In general, the accuracy of the demographic (sex, age group), process (medical procedures and interventions), and outcome (hospital death) variables was satisfactory. The accuracy of the demographic and outcome variables was higher than that of the process variables. Underregistration of secondary diagnoses in the forms was high and was the main limiting factor. Given the study findings and the widespread availability of the national hospital database, its use as an instrument for evaluating the quality of AMI medical care is pertinent.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland and border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel, robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmic image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmic image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database and the ATT database are used to test accuracy and efficiency in computer simulation. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation.
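A rough sketch of the basic 8-neighbour local binary pattern (LBP) feature referred to above, implemented with NumPy. The log transform line is only a stand-in assumption for the paper's logarithmic visualization step, and the random array stands in for a face image.

```python
# Minimal sketch: logarithmic pre-processing followed by an 8-neighbour
# local binary pattern (LBP) histogram. The log step is only a stand-in
# for the paper's logarithmic image visualization technique.
import numpy as np

def lbp_histogram(gray):
    """Return a normalized 256-bin LBP histogram for a 2-D grayscale array."""
    img = np.log1p(gray.astype(np.float64))          # assumed log-type transform
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    # Offsets of the 8 neighbours, each contributing one bit to the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(64, 64))       # stand-in for a face image
    features = lbp_histogram(face)
    print(features.shape, round(float(features.sum()), 3))  # (256,) 1.0
```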
Decision Support Systems for Research and Management in Advanced Life Support
NASA Technical Reports Server (NTRS)
Rodriquez, Luis F.
2004-01-01
Decision support systems have been implemented in many applications including strategic planning for battlefield scenarios, corporate decision making for business planning, production planning and control systems, and recommendation generators like those on Amazon.com(Registered TradeMark). Such tools are reviewed for developing a similar tool for NASA's ALS Program. DSS are considered concurrently with the development of the OPIS system, a database designed for chronicling of research and development in ALS. By utilizing the OPIS database, it is anticipated that decision support can be provided to increase the quality of decisions by ALS managers and researchers.
Horsch, Alexander; Hapfelmeier, Alexander; Elter, Matthias
2011-11-01
Breast cancer is globally a major threat to women's health. Screening and adequate follow-up can significantly reduce mortality from breast cancer. Human second reading of screening mammograms can increase breast cancer detection rates, whereas this has not been proven for current computer-aided detection systems used as a "second reader". Critical factors include the detection accuracy of the systems and the screening experience and training of the radiologist with the system. When assessing the performance of systems and system components, the choice of evaluation methods is particularly critical. Core assets herein are reference image databases and statistical methods. We have analyzed characteristics and usage of the currently largest publicly available mammography database, the Digital Database for Screening Mammography (DDSM) from the University of South Florida, in literature indexed in Medline, IEEE Xplore, SpringerLink, and SPIE, with respect to type of computer-aided diagnosis (CAD) (detection, CADe, or diagnostics, CADx), selection of database subsets, choice of evaluation method, and quality of descriptions. 59 publications presenting 106 evaluation studies met our selection criteria. In 54 studies (50.9%), the selection of test items (cases, images, regions of interest) extracted from the DDSM was not reproducible. Only 2 CADx studies, and no CADe studies, used the entire DDSM. The number of test items varies from 100 to 6000. Different statistical evaluation methods are chosen. Most common are train/test (34.9% of the studies), leave-one-out (23.6%), and N-fold cross-validation (18.9%). Database-related terminology tends to be imprecise or ambiguous, especially regarding the term "case". Overall, both the use of the DDSM as a data source for evaluation of mammography CAD systems and the application of statistical evaluation methods were found to be highly diverse. Results reported from different studies are therefore hardly comparable. Drawbacks of the DDSM (e.g. the varying quality of lesion annotations) may contribute to this, but a larger source of bias seems to be authors' own study design decisions. RECOMMENDATIONS/CONCLUSION: For future evaluation studies, we derive a set of 13 recommendations concerning the construction and usage of a test database, as well as the application of statistical evaluation methods.
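For readers unfamiliar with the evaluation protocols counted above, the sketch below contrasts a single train/test split with N-fold cross-validation on a toy dataset using scikit-learn. The data and classifier are placeholders, not a mammography CAD system.

```python
# Toy comparison of a single train/test split versus N-fold cross-validation.
# The data and classifier are placeholders, not a mammography CAD system.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Single train/test split (the protocol used in ~35% of the surveyed studies).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
split_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

# 10-fold cross-validation (the "N-fold" protocol).
cv_scores = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))

print(f"train/test accuracy: {split_acc:.3f}")
print(f"10-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```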
Draft secure medical database standard.
Pangalos, George
2002-01-01
Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of patient privacy. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties in turn require not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail. Current national and international efforts in the area are studied, and an overview of research work in the area is given. The document also presents in detail the most complete set of security guidelines known to us for the development and operation of medical database systems.
Does Public Sector Control Reduce Variance in School Quality?
ERIC Educational Resources Information Center
Pritchett, Lant; Viarengo, Martina
2015-01-01
Does the government control of school systems facilitate equality in school quality? Whether centralized or localized control produces more equality depends not only on what "could" happen in principle, but also on what does happen in practice. We use the Programme for International Student Assessment (PISA) database to examine the…
Li, Qing-na; Huang, Xiu-ling; Gao, Rui; Lu, Fang
2012-08-01
Data management has significant impact on the quality control of clinical studies. Every clinical study should have a data management plan to provide overall work instructions and ensure that all of these tasks are completed according to the Good Clinical Data Management Practice (GCDMP). Meanwhile, the data management plan (DMP) is an auditable document requested by regulatory inspectors and must be written in a manner that is realistic and of high quality. The significance of DMP, the minimum standards and the best practices provided by GCDMP, the main contents of DMP based on electronic data capture (EDC) and some key factors of DMP influencing the quality of clinical study were elaborated in this paper. Specifically, DMP generally consists of 15 parts, namely, the approval page, the protocol summary, role and training, timelines, database design, creation, maintenance and security, data entry, data validation, quality control and quality assurance, the management of external data, serious adverse event data reconciliation, coding, database lock, data management reports, the communication plan and the abbreviated terms. Among them, the following three parts are regarded as the key factors: designing a standardized database of the clinical study, entering data in time and cleansing data efficiently. In the last part of this article, the authors also analyzed the problems in clinical research of traditional Chinese medicine using the EDC system and put forward some suggestions for improvement.
The IEO Data Center Management System: Tools for quality control, analysis and access marine data
NASA Astrophysics Data System (ADS)
Casas, Antonia; Garcia, Maria Jesus; Nikouline, Andrei
2010-05-01
Since 1994 the Data Centre of the Spanish Oceanographic Institute has developed systems for archiving and quality control of oceanographic data. The work started in the frame of the European Marine Science & Technology Programme (MAST), when a consortium of several Mediterranean data centres began to work on the MEDATLAS project. Over the years, old software modules for MS DOS were rewritten, improved and migrated to the Windows environment. Oceanographic data quality control now covers not only vertical profiles (mainly CTD and bottle observations) but also time series of currents and sea level observations. New powerful routines for analysis and graphic visualization were added. Data originally presented in ASCII format were recently organized in an open source MySQL database. Nowadays the IEO, as part of the SeaDataNet infrastructure, has designed and developed a new information system, consistent with the ISO 19115 and SeaDataNet standards, in order to manage the large and diverse marine data and information originated in Spain by different sources, and to interoperate with SeaDataNet. The system works with data stored in ASCII files (MEDATLAS, ODV) as well as data stored within the relational database. The components of the system are:
1. MEDATLAS format and quality control.
- QCDAMAR: Quality Control of Marine Data, the main set of tools for working with data presented as text files. It includes extended quality control (searching for duplicated cruises and profiles; checking date, position, ship velocity, constant profiles, spikes, density inversion, sounding, acceptable data, impossible regional values, ...) and input/output filters.
- QCMareas: a set of procedures for the quality control of tide gauge data according to the standard international Sea Level Observing System. These procedures include checking for unexpected anomalies in the time series, interpolation, filtering, and computation of basic statistics and residuals.
2. DAMAR: a relational database (MySQL) designed to manage the wide variety of marine information, such as common vocabularies, catalogues (CSR & EDIOS), data and metadata.
3. Other tools for analysis and data management.
- Import_DB: a script to import data and metadata from the MEDATLAS ASCII files into the database.
- SelDamar/Selavi: an interface with the database for local and web access. It allows selective retrievals applying the criteria introduced by the user, such as geographical bounds, data responsible, cruises, platform, time periods, etc. It also includes calculation of statistical reference values and plotting of original and mean profiles together with vertical interpolation.
- ExtractDAMAR: a script to extract data archived in ASCII files that meet the criteria of a user request made through the SelDamar interface, export them in ODV format, and also perform unit conversion.
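As an illustration of the automated checks listed above, the sketch below flags consecutive station positions that imply an impossible ship speed. The record layout and the 15-knot threshold are assumptions for illustration, not the QCDAMAR implementation.

```python
# Illustrative quality-control check: flag station pairs implying an
# impossible ship speed. Record layout and the 15-knot threshold are
# assumptions for illustration, not the QCDAMAR implementation.
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def flag_impossible_speeds(stations, max_knots=15.0):
    """stations: list of (hours_since_start, lat, lon); returns flagged pairs."""
    flagged = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(stations, stations[1:]):
        hours = t2 - t1
        if hours <= 0:
            flagged.append((t1, t2, "non-increasing time"))
            continue
        speed = haversine_nm(la1, lo1, la2, lo2) / hours
        if speed > max_knots:
            flagged.append((t1, t2, f"{speed:.1f} kn"))
    return flagged

stations = [(0.0, 36.00, -6.00), (2.0, 36.10, -6.20), (3.0, 38.00, -9.00)]
print(flag_impossible_speeds(stations))   # last leg is far too fast and is flagged
```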
DeAngelo, Jacob
1983-01-01
GEOTHERM is a comprehensive system of public databases and software used to store, locate, and evaluate information on the geology, geochemistry, and hydrology of geothermal systems. Three main databases address the general characteristics of geothermal wells and fields, and the chemical properties of geothermal fluids; the last database is currently the most active. System tasks are divided into four areas: (1) data acquisition and entry, involving data entry via word processors and magnetic tape; (2) quality assurance, including the criteria and standards handbook and front-end data-screening programs; (3) operation, involving database backups and information extraction; and (4) user assistance, preparation of such items as application programs, and a quarterly newsletter. The principal task of GEOTHERM is to provide information and research support for the conduct of national geothermal-resource assessments. The principal users of GEOTHERM are those involved with the Geothermal Research Program of the U.S. Geological Survey.
The LSST Data Mining Research Agenda
NASA Astrophysics Data System (ADS)
Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.
2008-12-01
We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; design of a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute, multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.
Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice
2015-01-01
The aim of this paper is to identify areas of potential improvement of the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectoral basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and its appropriateness in terms of completeness, precision and methodology. Results show that the ELCD electricity datasets have very good quality in general terms; nevertheless, some findings and recommendations for improving the quality of Life Cycle Inventories have been derived. Moreover, these results confirm the quality of the electricity-related datasets for any LCA practitioner and provide insights into the limitations and assumptions underlying the dataset modelling. Given this information, the LCA practitioner will be able to decide whether the use of the ELCD electricity datasets is appropriate based on the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers, in order to improve the overall Data Quality Requirements of databases.
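The sketch below shows a simplified aggregation of ILCD-style data quality indicator scores into a single rating. The 1-to-5 scores, the plain average, and the labels are assumptions for illustration and do not reproduce the exact ILCD data quality rating formula.

```python
# Simplified aggregation of ILCD-style data quality indicator scores
# (1 = very good, 5 = very poor). The plain average below is an
# illustrative assumption, not the exact ILCD rating formula.

indicators = {
    "technological_representativeness": 2,
    "geographical_representativeness": 1,
    "time_representativeness": 3,
    "completeness": 2,
    "precision": 2,
    "methodological_appropriateness": 1,
}

def overall_rating(scores):
    """Unweighted mean of the indicator scores."""
    return sum(scores.values()) / len(scores)

rating = overall_rating(indicators)
label = ("very good" if rating <= 1.5 else
         "good" if rating <= 2.5 else
         "fair" if rating <= 3.5 else "poor")
print(f"overall rating: {rating:.2f} ({label})")
```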
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
...'' system, and EPA will not know your identity or contact information unless you provide it in the body of... System) is an EPA database of ambient air quality. **The 24-hour PM10 standard is met when the 3-year... note that winds blow out of the north and northwest far less frequently, making transport of PM 10 from...
Health-Terrain: Visualizing Large Scale Health Data
2014-12-01
systems can only be realized if the quality of emerging large medical databases can be characterized and the meaning of the data understood. For this ... Designed and tested an evaluation procedure for a health data visualization system. This visualization framework offers a real-time and web-based solution ... rule is shown in the table, with the quality measures of each rule including the support, confidence, Laplace, Gain, p-s, lift and Conviction.
Practice databases and their uses in clinical research.
Tierney, W M; McDonald, C J
1991-04-01
A few large clinical information databases have been established within larger medical information systems. Although they are smaller than claims databases, these clinical databases offer several advantages: accurate and timely data, rich clinical detail, and continuous parameters (for example, vital signs and laboratory results). However, the nature of the data varies considerably, which affects the kinds of secondary analyses that can be performed. These databases have been used to investigate clinical epidemiology, risk assessment, post-marketing surveillance of drugs, practice variation, resource use, quality assurance, and decision analysis. In addition, practice databases can be used to identify subjects for prospective studies. Further methodologic developments are necessary to deal with the prevalent problems of missing data and various forms of bias if such databases are to grow and contribute valuable clinical information.
Wawrzyniak, Zbigniew M; Paczesny, Daniel; Mańczuk, Marta; Zatoński, Witold A
2011-01-01
Large-scale epidemiologic studies can assess health indicators that differentiate social groups and important health outcomes, such as the incidence and mortality of cancer, cardiovascular disease and others, to establish a solid knowledge base for managing the prevention of causes of premature morbidity and mortality. This study presents new advanced methods of data collection and data management, with ongoing data quality control and security, to ensure high-quality assessment of health indicators in the large epidemiologic PONS study (The Polish-Norwegian Study). The material for the experiment is the data management design of the large-scale population study in Poland (PONS), and the managed processes are applied to establishing a high-quality and solid knowledge base. The functional requirements of PONS study data collection, supported by advanced web-based IT methods, are fulfilled by the IT system, resulting in medical data of high quality and data security, with data quality assessment, control processes and evolution monitoring shared by the system. Data from disparate, distributed sources of information are integrated into databases via software interfaces and archived by a multi-task secure server. The practical, implemented solution of modern advanced database technologies and a remote software/hardware structure successfully supports the research of the large PONS study project. Development and implementation of follow-up control of the consistency and quality of data analysis and of the processes of the PONS sub-databases show excellent measurement properties, with data consistency of more than 99%. The project itself, through a tailored hardware/software application, shows the positive impact of Quality Assurance (QA) on the quality of outcome analysis results and on effective data management within a shorter time. This efficiency ensures the quality of the epidemiological data and health indicators through the elimination of common errors in research questionnaires and medical measurements.
2004-01-01
The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying ground-water and sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.
2005-01-01
The Ground-Water Site-Inventory (GWSI) System is a ground-water data storage and retrieval system that is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey (USGS). The NWIS is a distributed water database in which data can be processed over a network of workstations and file servers at USGS offices throughout the United States. This system comprises the GWSI, the Automated Data Processing System (ADAPS), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). The GWSI System provides for entering new sites and updating existing sites within the local database. In addition, the GWSI provides for retrieving and displaying groundwater and Sitefile data stored in the local database. Finally, the GWSI provides for routine maintenance of the local and national data records. This manual contains instructions for users of the GWSI and discusses the general operating procedures for the programs found within the GWSI Main Menu.
Word-level recognition of multifont Arabic text using a feature vector matching approach
NASA Astrophysics Data System (ADS)
Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III
1996-03-01
Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
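A minimal sketch of the feature-vector matching step described above: each lexicon word holds one or more stored vectors (e.g., one per font or noise model), and the query vector is scored against all of them. The feature values and the cosine similarity measure are placeholders for the image-morphological features and match score used by the system.

```python
# Minimal sketch of word-level recognition by feature-vector matching.
# Feature values and the cosine similarity measure are placeholders for
# the image-morphological features and match score described above.
import numpy as np

# Lexicon database: each word may store several vectors (e.g., one per font).
lexicon = {
    "word_A": [np.array([0.9, 0.1, 0.4, 0.2]), np.array([0.8, 0.2, 0.5, 0.1])],
    "word_B": [np.array([0.2, 0.7, 0.3, 0.9])],
    "word_C": [np.array([0.5, 0.5, 0.8, 0.4])],
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_hypotheses(query, database, top_n=2):
    """Rank lexicon words by their best-matching stored vector."""
    scored = [(max(cosine(query, v) for v in vectors), word)
              for word, vectors in database.items()]
    return sorted(scored, reverse=True)[:top_n]

query_vector = np.array([0.85, 0.15, 0.45, 0.15])   # features of the unknown image
for score, word in best_hypotheses(query_vector, lexicon):
    print(f"{word}: {score:.3f}")
```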
A semantic data dictionary method for database schema integration in CIESIN
NASA Astrophysics Data System (ADS)
Hinds, N.; Huang, Y.; Ravishankar, C.
1993-08-01
CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A central part of this mission is providing a link between the various global change data sets, in particular between the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences, but this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a ``join'' on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system will be overcoming heterogeneity, which falls into two broad categories. ``Database system'' heterogeneity involves differences in data models and packages. ``Data semantic'' heterogeneity involves differences in terminology between disciplines, which translates into data semantic issues and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
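As an illustration of the kind of cross-disciplinary "join" the example envisions, the sketch below links hypothetical emphysema case counts to air-quality readings by region and year, using a tiny semantic mapping between field names. All field names, the mapping, and the data are invented for illustration.

```python
# Illustrative cross-domain join: emphysema cases matched to air-quality
# readings by region and year. All field names and values are invented.

emphysema_cases = [
    {"region": "county_A", "year": 1990, "cases_per_100k": 42},
    {"region": "county_B", "year": 1990, "cases_per_100k": 18},
]

air_quality = [
    {"area": "county_A", "yr": 1990, "pm10_ug_m3": 61.0},
    {"area": "county_B", "yr": 1990, "pm10_ug_m3": 24.5},
]

# A (tiny) semantic mapping: which fields in each dataset mean the same thing.
mapping = {"region": "area", "year": "yr"}

def semantic_join(left, right, mapping):
    """Join two record lists on the semantically equivalent fields."""
    joined = []
    for l in left:
        for r in right:
            if all(l[lk] == r[rk] for lk, rk in mapping.items()):
                joined.append({**l, **r})
    return joined

for row in semantic_join(emphysema_cases, air_quality, mapping):
    print(row["region"], row["year"], row["cases_per_100k"], row["pm10_ug_m3"])
```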
[Design and establishment of modern literature database about acupuncture Deqi].
Guo, Zheng-rong; Qian, Gui-feng; Pan, Qiu-yin; Wang, Yang; Xin, Si-yuan; Li, Jing; Hao, Jie; Hu, Ni-juan; Zhu, Jiang; Ma, Liang-xiao
2015-02-01
A search on acupuncture Deqi was conducted in four Chinese-language biomedical databases (CNKI, Wan-Fang, VIP and CBM) and the PubMed database, using keywords such as "Deqi", "needle sensation", "needling feeling", "needle feel" and "obtaining qi". A "Modern Literature Database for Acupuncture Deqi" was then established using Microsoft SQL Server 2005 Express Edition, and the contents, data types, information structure and logic constraints of the system table fields are introduced. From this database, detailed inquiries can be made about general information from clinical trials, acupuncturists' experience, ancient medical works, comprehensive literature, etc. The present databank lays a foundation for subsequent evaluation of the quality of the Deqi literature and for data mining of as yet undetected Deqi knowledge.
NASA Technical Reports Server (NTRS)
Saile, Lynn; Lopez, Vilma; Bickham, Grandin; FreiredeCarvalho, Mary; Kerstman, Eric; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei
2011-01-01
This slide presentation reviews the Integrated Medical Model (IMM) database, an organized evidence base for assessing in-flight crew health risk. The database is a relational database accessible to many people. It quantifies the model inputs with a Level of Evidence (LOE) ranking, based on the highest value of the data, and a Quality of Evidence (QOE) score that provides an assessment of the evidence base for each medical condition. The IMM evidence base has already been able to provide invaluable information for designers and for other uses.
Proposal for a unified selection to medical residency programs.
Toffoli, Sônia Ferreira Lopes; Ferreira Filho, Olavo Franco; Andrade, Dalton Francisco de
2013-01-01
This paper proposes the unification of entrance exams to medical residency programs (MRP) in Brazil. Problems related to MRPs and their interface with public health problems in Brazil are highlighted, along with how this proposal can help solve them. The proposal is to create a database of items to be used in unified MRP exams. Some advantages of using Item Response Theory (IRT) in this database are highlighted. MRP entrance exams are currently developed and applied in a decentralized way, with each school responsible for its own examination. The quality of these exams is questionable, and reviews of item quality and of the validity and reliability of the instruments are not commonly disclosed. Evaluation is important in every educational system, bringing about required changes and control of teaching and learning. The proposed unification of MRP entrance exams, besides offering high-quality exams to participating institutions, could serve as an additional source for rating medical schools and driving improvements, provide studies with a database, and allow regional mobility. Copyright © 2013 Elsevier Editora Ltda. All rights reserved.
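To indicate why an IRT-calibrated item bank supports comparable scores across decentralized exams, the sketch below evaluates a standard three-parameter logistic item characteristic curve. The item parameters are invented for illustration and do not come from the proposal above.

```python
# Three-parameter logistic (3PL) item characteristic curve from Item
# Response Theory: probability of a correct answer given ability theta.
# The item parameters below are invented for illustration.
import math

def p_correct(theta, a, b, c):
    """a: discrimination, b: difficulty, c: pseudo-guessing parameter."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

item = {"a": 1.2, "b": 0.5, "c": 0.2}   # a fairly discriminating, mid-difficulty item
for theta in (-2.0, 0.0, 0.5, 2.0):
    print(f"theta={theta:+.1f}  P(correct)={p_correct(theta, **item):.3f}")
```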
Pesticides in Drinking Water – The Brazilian Monitoring Program
Barbosa, Auria M. C.; Solano, Marize de L. M.; Umbuzeiro, Gisela de A.
2015-01-01
Brazil is the world's largest pesticide consumer; therefore, it is important to monitor the levels of these chemicals in the water used by the population. The Ministry of Health coordinates the National Drinking Water Quality Surveillance Program (Vigiagua) with the objective of monitoring water quality. Water quality data are entered into the program by state and municipal health secretariats using a database called Sisagua (Information System of Water Quality Monitoring). The Brazilian drinking water norm (Ordinance 2914/2011 from the Ministry of Health) includes 27 pesticide active ingredients that need to be monitored every 6 months. This number represents <10% of the active ingredients currently approved for use in the country. In this work, we analyzed data compiled in the Sisagua database in a qualitative and quantitative way. From 2007 to 2010, approximately 169,000 pesticide analytical results were reported and evaluated, although approximately 980,000 would be expected if all municipalities registered their analyses. This shows that only 9–17% of municipalities registered their data in Sisagua. In this dataset, we observed non-compliance with the minimum sampling number required by the norm, lack of information about detection and quantification limits, insufficient standardization in the expression of results, and several inconsistencies, leading to low credibility of the pesticide data provided by the system. Therefore, it is not possible to evaluate the exposure of the Brazilian population as a whole to pesticides via drinking water using the current national database system, Sisagua. Lessons learned from this study could provide insights into the monitoring and reporting of pesticide residues in drinking water worldwide. PMID:26581345
Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.
2006-01-01
Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database that will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.
NASA Astrophysics Data System (ADS)
Verdoodt, Ann; Baert, Geert; Van Ranst, Eric
2014-05-01
Central African soil resources are characterised by a large variability, ranging from stony, shallow or sandy soils with poor life-sustaining capabilities to highly weathered soils that recycle and support large amounts of biomass. Socio-economic drivers within this largely rural region foster inappropriate land use and management, threaten soil quality and finally culminate in declining soil productivity and increasing food insecurity. For the development of sustainable land use strategies targeting development planning and natural hazard mitigation, decision makers often rely on legacy soil maps and soil profile databases. Recent development-cooperation financed projects led to the design of soil information systems for Rwanda, D.R. Congo, and (ongoing) Burundi. A major challenge is to exploit these existing soil databases and convert them into soil inference systems through an optimal combination of digital soil mapping techniques, land evaluation tools, and biogeochemical models. This presentation aims at (1) highlighting some key characteristics of typical Central African soils, (2) assessing the positional, geographic and semantic quality of the soil information systems, and (3) revealing the potential impacts of that quality on the use of these datasets for thematic mapping of soil ecosystem services (e.g. organic carbon storage, pH buffering capacity). Soil map quality is assessed considering positional and semantic quality, as well as geographic completeness. Descriptive statistics, decision tree classification and linear regression techniques are used to mine the soil profile databases. Geo-matching as well as class-matching approaches are considered when developing thematic maps. Variability in inherent as well as dynamic soil properties within the soil taxonomic units is highlighted. It is hypothesized that within-unit variation in soil properties strongly affects the use and interpretation of thematic maps for ecosystem services mapping. Results will mainly be based on analyses done in Rwanda, but can be complemented with ongoing research results or prospects for Burundi.
Planas, M; Rodríguez, T; Lecha, M
2004-01-01
Decisions have to be made about what data on patient characteristics, processes and outcomes need to be collected, and standard definitions of these data items need to be developed, in order to identify data quality concerns as promptly as possible and to establish ways to improve data quality. The usefulness of any clinical database depends strongly on the quality of the collected data. If the data quality is poor, the results of studies using the database might be biased and unreliable. Furthermore, if the quality of the database has not been verified, the results might be given little credence, especially if they are unwelcome or unexpected. To assure the quality of a clinical database, a clear definition of the uses to which the database is going to be put is essential; the database should be developed to be comprehensive in terms of its usefulness but limited in its size.
Dasari, Surendra; Chambers, Matthew C.; Martinez, Misti A.; Carpenter, Kristin L.; Ham, Amy-Joan L.; Vega-Montoto, Lorenzo J.; Tabb, David L.
2012-01-01
Spectral libraries have emerged as a viable alternative to protein sequence databases for peptide identification. These libraries contain previously detected peptide sequences and their corresponding tandem mass spectra (MS/MS). Search engines can then identify peptides by comparing experimental MS/MS scans to those in the library. Many of these algorithms employ the dot product score for measuring the quality of a spectrum-spectrum match (SSM). This scoring system does not offer a clear statistical interpretation and ignores fragment ion m/z discrepancies in the scoring. We developed a new spectral library search engine, Pepitome, which employs statistical systems for scoring SSMs. Pepitome outperformed the leading library search tool, SpectraST, when analyzing data sets acquired on three different mass spectrometry platforms. We characterized the reliability of spectral library searches by confirming shotgun proteomics identifications through RNA-Seq data. Applying spectral library and database searches on the same sample revealed their complementary nature. Pepitome identifications enabled the automation of quality analysis and quality control (QA/QC) for shotgun proteomics data acquisition pipelines. PMID:22217208
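As an illustration of the dot-product scoring that Pepitome moves beyond, the following sketch (an assumption for illustration, not the Pepitome or SpectraST implementation) computes a normalized dot product between a query spectrum and a library spectrum after binning fragment intensities; note how binning hides small fragment ion m/z discrepancies and yields a score with no obvious statistical interpretation, which is exactly the limitation described above.

```python
import numpy as np

def binned_vector(peaks, bin_width=1.0, max_mz=2000.0):
    """Convert a list of (m/z, intensity) peaks into a fixed-length intensity vector."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, intensity in peaks:
        idx = int(mz / bin_width)
        if idx < vec.size:
            vec[idx] += intensity
    return vec

def dot_product_score(query_peaks, library_peaks):
    """Normalized dot product (cosine similarity) between two binned spectra."""
    q = binned_vector(query_peaks)
    lib = binned_vector(library_peaks)
    denom = np.linalg.norm(q) * np.linalg.norm(lib)
    return float(q @ lib / denom) if denom > 0 else 0.0

# Toy spectra sharing two of three fragment peaks
query = [(175.1, 40.0), (262.1, 100.0), (375.2, 55.0)]
library = [(175.1, 35.0), (262.1, 90.0), (377.2, 60.0)]
print(round(dot_product_score(query, library), 3))
```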
Amelogenin test: From forensics to quality control in clinical and biochemical genomics.
Francès, F; Portolés, O; González, J I; Coltell, O; Verdú, F; Castelló, A; Corella, D
2007-01-01
The increasing number of samples in biomedical genetic studies, and the growing number of centers participating in them, increases the risk of mistakes in the different sample-handling stages. We have evaluated the usefulness of the amelogenin test for quality control in sample identification. The amelogenin test (frequently used in forensics) was undertaken on 1224 individuals participating in a biomedical study. Concordance between the sex recorded in the database and the amelogenin test result was estimated. Additional sex-error genetic detection systems were developed. The overall concordance rate was 99.84% (1222/1224). Two samples showed a female amelogenin test outcome despite being coded as male in the database. The first, after checking sex-specific biochemical and clinical profile data, was found to be due to a coding error in the database. In the second, after checking the database, no apparent error was discovered because a correct male profile was found. False negatives in amelogenin male sex determination were ruled out by additional tests, and female sex was confirmed. A sample labeling error was revealed after a new DNA extraction. The amelogenin test is a useful quality control tool for detecting sex-identification errors in large genomic studies, and can contribute to increasing their validity.
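A minimal sketch of the concordance check described above, assuming hypothetical record fields (db_sex from the study database, amelogenin_sex from the test); discordant samples are flagged for the kind of follow-up the authors describe.

```python
# Hypothetical records: sex code in the study database vs. amelogenin test result
samples = [
    {"id": "S0001", "db_sex": "M", "amelogenin_sex": "M"},
    {"id": "S0002", "db_sex": "M", "amelogenin_sex": "F"},  # discordant: review profile data, re-extract DNA
    {"id": "S0003", "db_sex": "F", "amelogenin_sex": "F"},
]

discordant = [s["id"] for s in samples if s["db_sex"] != s["amelogenin_sex"]]
concordance = 100.0 * (len(samples) - len(discordant)) / len(samples)
print(f"Concordance: {concordance:.2f}%, discordant samples: {discordant}")
```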
The Muon Conditions Data Management: Database Architecture and Software Infrastructure
NASA Astrophysics Data System (ADS)
Verducci, Monica
2010-04-01
The management of the Muon Conditions Database will be one of the most challenging applications for the Muon System, not only in terms of data volumes and rates, but also in terms of the variety of data stored and their analysis. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags storage needed for debugging of the detector operations and for performing the reconstruction and the analysis. In particular for the early data, knowledge of the detector performance and the corrections in terms of efficiency and calibration will be extremely important for the correct reconstruction of the events. In this work, an overview of the entire Muon conditions database architecture is given, in particular the different sources of the data and the storage model used, including the associated database technology. Particular emphasis is given to the Data Quality chain: the flow of the data, the analysis and the final results are described. In addition, the software interfaces used to access the conditions data are described, in particular within the ATLAS Offline Reconstruction framework ATHENA.
Rolston, John D; Han, Seunggu J; Chang, Edward F
2017-03-01
The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) provides a rich database of North American surgical procedures and their complications. Yet no external source has validated the accuracy of the information within this database. Using records from the 2006 to 2013 NSQIP database, we used two methods to identify errors: (1) mismatches between the Current Procedural Terminology (CPT) code that was used to identify the surgical procedure, and the International Classification of Diseases (ICD-9) post-operative diagnosis: i.e., a diagnosis that is incompatible with a certain procedure. (2) Primary anesthetic and CPT code mismatching: i.e., anesthesia not indicated for a particular procedure. Analyzing data for movement disorders, epilepsy, and tumor resection, we found evidence of CPT code and postoperative diagnosis mismatches in 0.4-100% of cases, depending on the CPT code examined. When analyzing anesthetic data from brain tumor, epilepsy, trauma, and spine surgery, we found evidence of miscoded anesthesia in 0.1-0.8% of cases. National databases like NSQIP are an important tool for quality improvement. Yet all databases are subject to errors, and measures of internal consistency show that errors affect up to 100% of case records for certain procedures in NSQIP. Steps should be taken to improve data collection on the frontend of NSQIP, and also to ensure that future studies with NSQIP take steps to exclude erroneous cases from analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
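A sketch of the first internal-consistency check, with a hypothetical whitelist of compatible code pairings rather than the actual CPT/ICD-9 tables used in the study:

```python
# Hypothetical whitelist: postoperative ICD-9 diagnoses considered compatible with each CPT code
COMPATIBLE_DIAGNOSES = {
    "61510": {"191.9", "225.0"},   # e.g. craniectomy for tumor -> brain neoplasm diagnoses
    "61863": {"332.0", "333.1"},   # e.g. stereotactic electrode implantation -> movement disorders
}

def flag_mismatch(record):
    """Return True when the postoperative diagnosis is incompatible with the CPT procedure code."""
    allowed = COMPATIBLE_DIAGNOSES.get(record["cpt"])
    return allowed is not None and record["icd9"] not in allowed

case = {"case_id": 42, "cpt": "61863", "icd9": "191.9"}
print(flag_mismatch(case))  # True -> candidate coding error to exclude or correct
```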
Uses and limitations of registry and academic databases.
Williams, William G
2010-01-01
A database is simply a structured collection of information. A clinical database may be a Registry (a limited amount of data for every patient undergoing heart surgery) or Academic (an organized and extensive dataset of an inception cohort of carefully selected subset of patients). A registry and an academic database have different purposes and costs. The data to be collected for a database are defined by its purpose and the output reports required for achieving that purpose. A Registry's purpose is to ensure quality care; an Academic Database's, to discover new knowledge through research. A database is only as good as the data it contains. Database personnel must be exceptionally committed and supported by clinical faculty. A system to routinely validate and verify data integrity is essential to ensure database utility. Frequent use of the database improves its accuracy. For congenital heart surgeons, routine use of a Registry Database is an essential component of clinical practice. Copyright (c) 2010 Elsevier Inc. All rights reserved.
Importance of Data Management in a Long-term Biological Monitoring Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Sigurd W; Brandt, Craig C; McCracken, Kitty
2011-01-01
The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program. An existing relational database was adapted and extended to handle biological data. Data modeling enabled the program's database to process, store, and retrieve its data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishing of standards for sampling site names, taxonomic identification, flagging, and other components. There are limitations. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
Importance of Data Management in a Long-Term Biological Monitoring Program
NASA Astrophysics Data System (ADS)
Christensen, Sigurd W.; Brandt, Craig C.; McCracken, Mary K.
2011-06-01
The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to meeting this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program when an existing relational database was adapted and extended to handle biological data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishing of standards for sampling site names, taxonomic identification, flagging, and other components. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. We also discuss some limitations to our implementation. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.
Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong
2016-04-01
Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system--multi-channel sensation, just noticeable blur, and the contrast sensitivity function to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human vision system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.
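A minimal sketch of the classification and ROC step, assuming the three partial quality measures have already been reduced to numeric features and using synthetic data (this is not the authors' feature extraction):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical features per image: [illumination/color score, blur score, contrast score]
X = rng.normal(size=(536, 3))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=536) > 0).astype(int)  # 1 = acceptable quality

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```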
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries.
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E; Madhavan, Subha
2012-06-01
Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy.
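A sketch of the two rule types described above (allowable-value checks and logical crosschecks between related fields), with hypothetical field names and rules:

```python
def check_allowable(record, field, allowed):
    """Standard check: the field value must come from an allowed set."""
    if record.get(field) not in allowed:
        return f"{field}: value {record.get(field)!r} not in allowed set"

def check_crosscheck(record):
    """Crosscheck: age at diagnosis cannot exceed age at last follow-up."""
    if record["age_at_diagnosis"] > record["age_at_followup"]:
        return "age_at_diagnosis exceeds age_at_followup"

def validate(record):
    rules = (lambda r: check_allowable(r, "sex", {"M", "F"}), check_crosscheck)
    return [msg for msg in (rule(record) for rule in rules) if msg]

print(validate({"sex": "X", "age_at_diagnosis": 72, "age_at_followup": 65}))
```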
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E
2012-01-01
Quality control and harmonization of data is a vital and challenging undertaking for any successful data coordination center and a responsibility shared between the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and a crosscheck of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy. PMID:22323393
The impact of database quality on keystroke dynamics authentication
NASA Astrophysics Data System (ADS)
Panasiuk, Piotr; Rybnik, Mariusz; Saeed, Khalid; Rogowski, Marcin
2016-06-01
This paper concerns keystroke dynamics, also partially in the context of touchscreen devices. The authors concentrate on the impact of database quality and propose their algorithm to test database quality issues. The algorithm is used on their own
Epstein, Richard H; Dexter, Franklin
2018-07-01
For this special article, we reviewed the computer code used to extract the data, and the text of all 47 studies published between January 2006 and August 2017 using anesthesia information management system (AIMS) data from Thomas Jefferson University Hospital (TJUH). Data from this institution were used in the largest number (P = .0007) of papers describing the use of AIMS published in this time frame. The AIMS was replaced in April 2017, making this a finite sample. The objective of the current article was to identify factors that made TJUH successful in publishing anesthesia informatics studies. We examined the structured query language used for each study to examine the extent to which databases outside of the AIMS were used. We examined data quality from the perspectives of completeness, correctness, concordance, plausibility, and currency. Our results were that most studies could not have been completed without external database sources (36/47, 76.6%; P = .0003 compared with 50%). The operating room management system was linked to the AIMS and was used significantly more frequently (26/36, 72%) than other external sources. Access to these external data sources was provided, allowing exploration of data quality. The TJUH AIMS used high-resolution timestamps (to the nearest 3 milliseconds) and created audit tables to track changes to clinical documentation. Automatic data were recorded at 1-minute intervals and were not editable; data cleaning occurred during analysis. Few paired events with an expected order were out of sequence. Although most data elements were of high quality, there were notable exceptions, such as frequent missing values for estimated blood loss, height, and weight. Some values were duplicated with different units, and others were stored in varying locations. Our conclusions are that linking the TJUH AIMS to the operating room management system was a critical step in enabling publication of multiple studies using AIMS data. Access to this and other external databases by analysts with a high degree of anesthesia domain knowledge was necessary to be able to assess the quality of the AIMS data and ensure that the data pulled for studies were appropriate. For anesthesia departments seeking to increase their academic productivity using their AIMS as a data source, our experiences may provide helpful guidance.
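A sketch of the paired-event consistency check mentioned above, using hypothetical event names and timestamps; real AIMS audit data would supply many more pairs:

```python
from datetime import datetime

# Hypothetical event pairs with an expected temporal order
EXPECTED_ORDER = [("anesthesia_start", "anesthesia_end"), ("incision", "closure")]

def out_of_sequence(case_events):
    """Return the event pairs whose timestamps violate the expected order."""
    bad = []
    for first, second in EXPECTED_ORDER:
        if first in case_events and second in case_events and case_events[first] >= case_events[second]:
            bad.append((first, second))
    return bad

case = {
    "anesthesia_start": datetime(2017, 3, 1, 7, 32, 0),
    "anesthesia_end": datetime(2017, 3, 1, 7, 30, 0),   # earlier than the start: flagged
    "incision": datetime(2017, 3, 1, 8, 1, 0),
    "closure": datetime(2017, 3, 1, 10, 45, 0),
}
print(out_of_sequence(case))
```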
Quality Measures in Orthopaedic Sports Medicine: A Systematic Review.
Abrams, Geoffrey D; Greenberg, Daniel R; Dragoo, Jason L; Safran, Marc R; Kamal, Robin N
2017-10-01
To report the current quality measures that are applicable to orthopaedic sports medicine physicians. Six databases were searched with a customized search term to identify quality measures relevant to orthopaedic sports medicine surgeons: MEDLINE/PubMed, EMBASE, the National Quality Forum (NQF) Quality Positioning System (QPS), the Agency for Healthcare Research and Quality (AHRQ) National Quality Measures Clearinghouse (NQMC), the Physician Quality Reporting System (PQRS) database, and the American Academy of Orthopaedic Surgeons (AAOS) website. Results were screened by 2 Board-certified orthopaedic surgeons with fellowship training in sports medicine and dichotomized based on sports medicine-specific or general orthopaedic (nonarthroplasty) categories. Hip and knee arthroplasty measures were excluded. Included quality measures were further categorized based on Donabedian's domains and the Center for Medicare and Medicaid (CMS) National Quality Strategy priorities. A total of 1,292 quality measures were screened and 66 unique quality measures were included. A total of 47 were sports medicine-specific and 19 related to the general practice of orthopaedics for a fellowship-trained sports medicine specialist. Nineteen (29%) quality measures were collected within PQRS, with 5 of them relating to sports medicine and 14 relating to general orthopaedics. AAOS Clinical Practice Guidelines (CPGs) comprised 40 (60%) of the included measures and were all within sports medicine. Five (8%) additional measures were collected within AHRQ and 2 (3%) within NQF. Most quality measures consist of process rather than outcome or structural measures. No measures addressing concussions were identified. There are many existing quality measures relating to the practice of orthopaedic sports medicine. Most quality measures are process measures described within PQRS or AAOS CPGs. Knowledge of quality measures is important as they may be used to improve care, are increasingly being used to determine physician reimbursement, and can inform future quality measure development efforts. Published by Elsevier Inc.
Development of a Multidisciplinary and Telemedicine Focused System Database.
Paštěka, Richard; Forjan, Mathias; Sauermann, Stefan
2017-01-01
Tele-rehabilitation at home is one of the promising approaches to increasing rehabilitative success while simultaneously decreasing the financial burden on the healthcare system. Novel and mostly mobile devices are already in use, but shall in the future be used to a greater extent to allow at-home rehabilitation processes at a high quality level. The combination of exercises, assessments and available equipment is the basic objective of the presented database. The database has been structured in order to allow easy-to-use and fast access for the three main user groups: therapists, looking for exercise and equipment combinations; patients, rechecking their tasks for home exercises; and manufacturers, entering their equipment for specific use cases. The database has been evaluated by a proof of concept study and shows a high degree of applicability for the field of rehabilitative medicine. Currently it contains 110 exercises/assessments and 111 equipment/systems. The foundations of the presented database are already established in the rehabilitative field of application, but its functionality can and will be enhanced to make it usable for a greater variety of medical fields and specifications.
Indicators for the automated analysis of drug prescribing quality.
Coste, J; Séné, B; Milstein, C; Bouée, S; Venot, A
1998-01-01
Irrational and inconsistent drug prescription has considerable impact on morbidity, mortality, health service utilization, and community burden. However, few studies have addressed the methodology of processing the information contained in these drug orders used to study the quality of drug prescriptions and prescriber behavior. We present a comprehensive set of quantitative indicators for the quality of drug prescriptions which can be derived from a drug order. These indicators were constructed using explicit a priori criteria which were previously validated on the basis of scientific data. Automatic computation is straightforward, using a relational database system, such that large sets of prescriptions can be processed with minimal human effort. We illustrate the feasibility and value of this approach by using a large set of 23,000 prescriptions for several diseases, selected from a nationally representative prescriptions database. Our study may result in direct and wide applications in the epidemiology of medical practice and in quality control procedures.
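A minimal sketch of automatic indicator computation over a relational store, using an in-memory SQLite table and a hypothetical indicator (proportion of orders containing more than one drug of the same therapeutic class); the indicator set and schema in the study are of course richer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prescription_line (order_id INT, drug TEXT, therapeutic_class TEXT)")
conn.executemany(
    "INSERT INTO prescription_line VALUES (?, ?, ?)",
    [(1, "ibuprofen", "NSAID"), (1, "ketoprofen", "NSAID"), (2, "amoxicillin", "penicillin")],
)

# Indicator: % of orders with at least one duplicated therapeutic class
query = """
SELECT 100.0 * COUNT(DISTINCT order_id) /
       (SELECT COUNT(DISTINCT order_id) FROM prescription_line)
FROM (SELECT order_id FROM prescription_line
      GROUP BY order_id, therapeutic_class HAVING COUNT(*) > 1)
"""
print(conn.execute(query).fetchone()[0], "% of orders with a within-class duplication")
```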
Lack, N
2001-08-01
The introduction of the modified data set for quality assurance in obstetrics (formerly the perinatal survey) in Lower Saxony and Bavaria as early as 1999 created an urgent requirement for a corresponding new statistical analysis of the revised data. The general outline of a new data reporting concept was originally presented by the Bavarian Commission for Perinatology and Neonatology at the Munich Perinatal Conference in November 1997. These ideas are germinal to the content and layout of the new quality report for obstetrics, currently in its nationwide harmonisation phase coordinated by the federal office for quality assurance in hospital care. A flexible and modular database-oriented analysis tool developed in Bavaria is now in its second year of successful operation. The functionalities of this system are described in detail.
SITE COMPREHENSIVE LISTING (CERCLIS) (Superfund)
The Comprehensive Environmental Response, Compensation and Liability Information System (CERCLIS) (Superfund) Public Access Database contains a selected set of non-enforcement confidential information and is updated by the regions every 90 days. The data describes what has happened at Superfund sites prior to this quarter (updated quarterly). This database includes lists of involved parties (other Federal Agencies, states, and tribes), Human Exposure and Ground Water Migration, and Site Wide Ready for Reuse, Construction Completion, and Final Assessment Decision (GPRA-like measures) for fund lead sites. Other information that is included has been included only as a service to allow public evaluations utilizing this data. EPA does not have specific Data Quality Objectives for use of the data. Independent Quality Assessments may be made of this data by reviewing the Quality Assurance Action Plan (QAPP).
NASA Astrophysics Data System (ADS)
Karam, Lina J.; Zhu, Tong
2015-03-01
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
ERIC Educational Resources Information Center
Krugly, Andrew; Stein, Amanda; Centeno, Maribel G.
2014-01-01
Data-based decision making should be the driving force in any early care and education setting. Data usage compels early childhood practitioners and leaders to make decisions on the basis of more than just professional instinct. This article explores why early childhood schools should be using data for continuous quality improvement at various…
DOT National Transportation Integrated Search
2010-02-01
By utilizing ArcGIS to quickly visualize the location of any impaired waterbody in relation to its projects/activities, MoDOT will : be able to allocate resources optimally. Additionally, the Water Quality Impact Database (WQID) will allow easy trans...
The impact of mHealth interventions on health systems: a systematic review protocol.
Fortuin, Jill; Salie, Faatiema; Abdullahi, Leila H; Douglas, Tania S
2016-11-25
Mobile health (mHealth) has been described as a health enabling tool that impacts positively on the health system in terms of improved access, quality and cost of health care. The proposed systematic review will examine the impact of mHealth on health systems by assessing access, quality and cost of health care as indicators. The systematic review will include literature from various sources including published and unpublished/grey literature. The databases to be searched include: PubMed, Cochrane Library, Google Scholar, NHS Health Technology Assessment Database and Web of Science. The reference lists of studies will be screened and conference proceedings searched for additional eligible reports. Literature to be included will have mHealth as the primary intervention. Two authors will independently screen the search output, select studies and extract data; discrepancies will be resolved by consensus and discussion with the assistance of the third author. The systematic review will inform policy makers, investors, health professionals, technologists and engineers about the impact of mHealth in strengthening the health system. In particular, it will focus on three metrics to determine whether mHealth strengthens the health system, namely quality of, access to and cost of health care services. Systematic review registration: PROSPERO CRD42015026070.
Information technology model for evaluating emergency medicine teaching
NASA Astrophysics Data System (ADS)
Vorbach, James; Ryan, James
1996-02-01
This paper describes work in progress to develop an Information Technology (IT) model and supporting information system for the evaluation of clinical teaching in the Emergency Medicine (EM) Department of North Shore University Hospital. In the academic hospital setting student physicians, i.e. residents, and faculty function daily in their dual roles as teachers and students respectively, and as health care providers. Databases exist that are used to evaluate both groups in either academic or clinical performance, but rarely has this information been integrated to analyze the relationship between academic performance and the ability to care for patients. The goal of the IT model is to improve the quality of teaching of EM physicians by enabling the development of integrable metrics for faculty and resident evaluation. The IT model will include (1) methods for tracking residents in order to develop experimental databases; (2) methods to integrate lecture evaluation, clinical performance, resident evaluation, and quality assurance databases; and (3) a patient flow system to monitor patient rooms and the waiting area in the Emergency Medicine Department, to record and display status of medical orders, and to collect data for analyses.
NASA Astrophysics Data System (ADS)
Bliefernicht, Jan; Waongo, Moussa; Annor, Thompson; Laux, Patrick; Lorenz, Manuel; Salack, Seyni; Kunstmann, Harald
2017-04-01
West Africa is a data sparse region. High quality and long-term precipitation data are often not readily available for applications in hydrology, agriculture, meteorology and other needs. To close this gap, we use multiple data sources to develop a precipitation database with long-term daily and monthly time series. This database was compiled from 16 archives including global databases e.g. from the Global Historical Climatology Network (GHCN), databases from research projects (e.g. the AMMA database) and databases of the national meteorological services of some West African countries. The collection consists of more than 2000 precipitation gauges with measurements dating from 1850 to 2015. Due to erroneous measurements (e.g. temporal offsets, unit conversion errors), missing values and inconsistent meta-data, the merging of this precipitation dataset is not straightforward and requires thorough quality control and harmonization. To this end, we developed geostatistical algorithms for quality control of individual databases and harmonization into a joint database. The algorithms are based on a pairwise comparison of the correspondence of precipitation time series as a function of the distance between stations. They were tested for precipitation time series from gauges located in a rectangular domain covering Burkina Faso, Ghana, Benin and Togo. This harmonized and quality-controlled precipitation database was recently used for several applications such as the validation of a high resolution regional climate model and the bias correction of precipitation projections provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX). In this presentation, we will give an overview of the novel daily and monthly precipitation database and the algorithms used for quality control and harmonization. We will also highlight the quality of global and regional archives (e.g. GHCN, GSOD, AMMA database) in comparison to the precipitation databases provided by the national meteorological services.
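A sketch of the pairwise-correspondence idea behind the quality-control algorithm, using plain Pearson correlation of overlapping daily series and a hypothetical distance threshold; the actual geostatistical criteria are more elaborate:

```python
import numpy as np

def pairwise_consistency(series_a, series_b, min_overlap=30):
    """Correlation of two daily precipitation series over their common non-missing days."""
    mask = ~np.isnan(series_a) & ~np.isnan(series_b)
    if mask.sum() < min_overlap:
        return np.nan
    return np.corrcoef(series_a[mask], series_b[mask])[0, 1]

def flag_suspect(station, neighbours, distances_km, min_corr=0.4, max_dist_km=50):
    """Flag a station whose series disagrees with every nearby neighbour."""
    corrs = [pairwise_consistency(station, nb)
             for nb, d in zip(neighbours, distances_km) if d <= max_dist_km]
    corrs = [c for c in corrs if not np.isnan(c)]
    return bool(corrs) and max(corrs) < min_corr

rng = np.random.default_rng(1)
base = rng.gamma(0.3, 8.0, size=365)         # synthetic daily rainfall
near = base + rng.normal(0, 1.0, size=365)   # consistent neighbouring gauge
shifted = np.roll(base, 30)                  # simulated temporal offset error
print(flag_suspect(shifted, [base, near], [12, 20]))
```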
Parra, Lorena; García, Laura
2018-01-01
The monitoring of farming processes can optimize the use of resources and improve its sustainability and profitability. In fish farms, the water quality, tank environment, and fish behavior must be monitored. Wireless sensor networks (WSNs) are a promising option to perform this monitoring. Nevertheless, their high cost is slowing the expansion of their use. In this paper, we propose a set of sensors for monitoring the water quality and fish behavior in aquaculture tanks during the feeding process. The WSN is based on physical sensors, composed of simple electronic components. The system proposed can monitor water quality parameters, tank status, the feed falling and fish swimming depth and velocity. In addition, the system includes a smart algorithm to reduce the energy waste when sending the information from the node to the database. The system is composed of three nodes in each tank that send the information through the local area network to a database on the Internet and a smart algorithm that detects abnormal values and sends alarms when they happen. All the sensors are designed, calibrated, and deployed to ensure their suitability. The greatest efforts have been accomplished with the fish presence sensor. The total cost of the sensors and nodes for the proposed system is less than 90 €. PMID:29494560
Parra, Lorena; Sendra, Sandra; García, Laura; Lloret, Jaime
2018-03-01
The monitoring of farming processes can optimize the use of resources and improve its sustainability and profitability. In fish farms, the water quality, tank environment, and fish behavior must be monitored. Wireless sensor networks (WSNs) are a promising option to perform this monitoring. Nevertheless, their high cost is slowing the expansion of their use. In this paper, we propose a set of sensors for monitoring the water quality and fish behavior in aquaculture tanks during the feeding process. The WSN is based on physical sensors, composed of simple electronic components. The system proposed can monitor water quality parameters, tank status, the feed falling and fish swimming depth and velocity. In addition, the system includes a smart algorithm to reduce the energy waste when sending the information from the node to the database. The system is composed of three nodes in each tank that send the information through the local area network to a database on the Internet and a smart algorithm that detects abnormal values and sends alarms when they happen. All the sensors are designed, calibrated, and deployed to ensure their suitability. The greatest efforts have been accomplished with the fish presence sensor. The total cost of the sensors and nodes for the proposed system is less than 90 €.
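A sketch of the abnormal-value detection and alarm step, with hypothetical water quality thresholds; the energy-saving transmission policy described in the paper is omitted here:

```python
# Hypothetical acceptable ranges for tank water quality readings
LIMITS = {"temperature_c": (18.0, 28.0), "turbidity_ntu": (0.0, 40.0), "ph": (6.5, 8.5)}

def abnormal_parameters(reading):
    """Return the parameters that fall outside their acceptable range."""
    return [name for name, value in reading.items()
            if name in LIMITS and not (LIMITS[name][0] <= value <= LIMITS[name][1])]

def node_cycle(reading, send, alarm):
    bad = abnormal_parameters(reading)
    if bad:
        alarm(bad)       # immediate alert for abnormal values
    send(reading)        # forward the reading to the Internet database

node_cycle({"temperature_c": 31.2, "turbidity_ntu": 12.0, "ph": 7.1},
           send=lambda r: print("stored:", r),
           alarm=lambda b: print("ALARM:", b))
```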
The Facility Registry System (FRS) is a centrally managed database that identifies facilities, sites or places subject to environmental regulations or of environmental interest. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous...
Markers of data quality in computer audit: the Manchester Orthopaedic Database.
Ricketts, D; Newey, M; Patterson, M; Hitchin, D; Fowler, S
1993-11-01
This study investigates the efficiency of the Manchester Orthopaedic Database (MOD), a computer software package for record collection and audit. Data is entered into the system in the form of diagnostic, operative and complication keywords. We have calculated the completeness, accuracy and quality (completeness x accuracy) of keyword data in the MOD in two departments of orthopaedics (Departments A and B). In each department, 100 sets of inpatient notes were reviewed. Department B obtained results which were significantly better than those in A at the 5% level. We attribute this to the presence of a systems coordinator to motivate and organise the team for audit. Senior and junior staff did not differ significantly with respect to completeness, accuracy and quality measures, but locum junior staff recorded data with a quality of 0%. Statistically, the biggest difference between the departments was the quality of operation keywords. Sample sizes were too small to permit effective statistical comparisons between the quality of complication keywords. In both departments, however, the poorest quality data was seen in complication keywords. The low complication keyword completeness contributed to this; on average, the true complication rate (39%) was twice the recorded complication rate (17%). In the recent Royal College of Surgeons of England Confidential Comparative Audit, the recorded complication rate was 4.7%. In the light of the above findings, we suggest that the true complication rate of the RCS CCA should approach 9%.
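A worked example of the quality measure used in the study (quality = completeness × accuracy), with made-up counts rather than the paper's keyword tallies:

```python
def keyword_quality(n_recorded, n_true, n_correct):
    """Quality = completeness x accuracy, expressed as a percentage."""
    completeness = n_recorded / n_true                        # share of true events that were recorded
    accuracy = n_correct / n_recorded if n_recorded else 0.0  # share of recorded events coded correctly
    return 100.0 * completeness * accuracy

# Made-up example: 17 of 39 true complications recorded, 15 of those 17 coded correctly
print(round(keyword_quality(17, 39, 15), 1), "%")  # about 38.5 %
```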
Informatics application provides instant research to practice benefits.
Bowles, K. H.; Peng, T.; Qian, R.; Naylor, M. D.
2001-01-01
A web-based research information system was designed to enable our research team to efficiently measure health related quality of life among frail older adults in a variety of health care settings (home care, nursing homes, assisted living, PACE). The structure, process, and outcome data are collected using laptop computers and downloaded to a SQL database. Unique features of this project are the ability to transfer research to practice by instantly sharing individual and aggregate results with the clinicians caring for these elders and directly impacting the quality of their care. Clinicians can also dial in to the database to access standard queries or receive customized reports about the patients in their facilities. This paper will describe the development and implementation of the information system. The conference presentation will include a demonstration and examples of research to practice benefits. PMID:11825156
User assumptions about information retrieval systems: Ethical concerns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Froehlich, T.J.
Information professionals, whether designers, intermediaries, database producers or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in forms of author names or journal titles or indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter, etc. The truth is that few of these assumptions are valid in commercial or corporate or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should intervene where possible to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals or systems have an obligation to provide some framework or context for the information that users are accessing.
Information management systems for pharmacogenomics.
Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko
2002-09-01
The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and not typically directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and the accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data as well as tools to make those data accessible to the laboratory scientist or to the clinician. In this review these challenges and current information technology solutions associated with the management, storage and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system which integrates public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.
An optical scan/statistical package for clinical data management in C-L psychiatry.
Hammer, J S; Strain, J J; Lyerly, M
1993-03-01
This paper explores aspects of the need for clinical database management systems that permit ongoing service management, measurement of the quality and appropriateness of care, databased administration of consultation-liaison (C-L) services, teaching/educational observations, and research. It describes an OPTICAL SCAN databased management system that permits flexible form generation, desktop publishing, and linking of observations in multiple files. This enhanced MICRO-CARES software system--Medical Application Platform (MAP)--permits direct transfer of the data to ASCII and SAS format for mainframe manipulation of the clinical information. The director of a C-L service may now develop his or her own forms, incorporate structured instruments, or develop "branch chains" of essential data to add to the core data set without the effort and expense of reprinting forms or consulting with commercial vendors.
The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.
Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A
2010-08-01
The iris is regarded as one of the most useful traits for biometric recognition and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and therefore are exclusively suitable to evaluate methods thought to operate in these types of environments. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available for researchers concerned about visible wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.
Harris, Eric S J; Erickson, Sean D; Tolopko, Andrew N; Cao, Shugeng; Craycroft, Jane A; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E; Eisenberg, David M
2011-05-17
Ethnobotanically driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically driven natural product collection and drug-discovery programs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Harris, Eric S. J.; Erickson, Sean D.; Tolopko, Andrew N.; Cao, Shugeng; Craycroft, Jane A.; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E.; Eisenberg, David M.
2011-01-01
Aim of the study. Ethnobotanically-driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine-Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. Materials and Methods. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. Results. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. Conclusions. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically-driven natural product collection and drug-discovery programs. PMID:21420479
CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.
Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka
2016-07-01
In this paper, we present a new video database: CVD2014-Camera Video Database. In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains the observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).
A design for the geoinformatics system
NASA Astrophysics Data System (ADS)
Allison, M. L.
2002-12-01
Informatics integrates and applies information technologies with scientific and technical disciplines. A geoinformatics system targets the spatially based sciences. The system is not a master database, but will collect pertinent information from disparate databases distributed around the world. Seamless interoperability of databases promises quantum leaps in productivity not only for scientific researchers but also for many areas of society including business and government. The system will incorporate: acquisition of analog and digital legacy data; efficient information and data retrieval mechanisms (via data mining and web services); accessibility to and application of visualization, analysis, and modeling capabilities; online workspace, software, and tutorials; GIS; integration with online scientific journal aggregates and digital libraries; access to real time data collection and dissemination; user-defined automatic notification and quality control filtering for selection of new resources; and application to field techniques such as mapping. In practical terms, such a system will provide the ability to gather data over the Web from a variety of distributed sources, regardless of computer operating systems, database formats, and servers. Search engines will gather data about any geographic location, above, on, or below ground, covering any geologic time, and at any scale or detail. A distributed network of digital geolibraries can archive permanent copies of databases at risk of being discontinued and those that continue to be maintained by the data authors. The geoinformatics system will generate results from widely distributed sources to function as a dynamic data network. Instead of posting a variety of pre-made tables, charts, or maps based on static databases, the interactive dynamic system creates these products on the fly, each time an inquiry is made, using the latest information in the appropriate databases. Thus, in the dynamic system, a map generated today may differ from one created yesterday and one to be created tomorrow, because the databases used to make it are constantly (and sometimes automatically) being updated.
Time trends in prostate cancer surgery: data from an Internet-based multicentre database.
Schostak, Martin; Baumunk, Daniel; Jagota, Anita; Klopf, Christian; Winter, Alexander; Schäfers, Sebastian; Kössler, Robert; Brennecke, Volker; Fischer, Tom; Hagel, Susanne; Höchel, Steffen; Jäkel, Dierk; Lehsnau, Mike; Krege, Susanne; Rüffert, Bernd; Pretzer, Jana; Becht, Eduard; Zegenhagen, Thomas; Miller, Kurt; Weikert, Steffen
2012-02-01
To report our experience with an Internet-based multicentre database that enables tumour documentation, as well as the collection of quality-related parameters and follow-up data, in surgically treated patients with prostate cancer. The system was used to assess the quality of prostate cancer surgery and to analyze possible time-dependent trends in the quality of care. An Internet-based database system enabled a standardized collection of treatment data and clinical findings from the participating urological centres for the years 2005-2009. An analysis was performed aiming to evaluate relevant patient characteristics (age, pathological tumour stage, preoperative International Index of Erectile Function-5 score), intra-operative parameters (operating time, percentage of nerve-sparing operations, complication rate, transfusion rate, number of resected lymph nodes) and postoperative parameters (hospitalization time, re-operation rate, catheter indwelling time). Mean values were calculated and compared for each annual cohort from 2005 to 2008. The overall survival rate was also calculated for a subgroup of the Berlin patients. A total of 914, 1120, 1434 and 1750 patients submitted to radical prostatectomy in 2005, 2006, 2007 and 2008 were documented in the database. The mean age at the time of surgery remained constant (66 years) during the study period. More than half the patients already had erectile dysfunction before surgery (median International Index of Erectile Function-5 score of 19-20). During the observation period, there was a decrease in the percentage of pT2 tumours (1% in 2005; 64% in 2008) and a slight increase in the percentage of patients with lymph node metastases (8% in 2005; 10% in 2008). No time trend was found for the operating time (142-155 min) or the percentage of nerve-sparing operations (72-78% in patients without erectile dysfunction). A decreasing frequency was observed for the parameters: blood transfusions (1.9% in 2005; 0.5% in 2008), postoperative bleeding (2.6%; 1.2%) and re-operations (4.5%; 2.8%). The mean hospitalization time decreased accordingly (10 days in 2005; 8 days in 2008). The examined subcohort had an overall mortality of 1.5% (median follow-up of 3 years). An Internet-based database system for tumour documentation in patients with prostate cancer enables the collection and assessment of important parameters for the quality of care and outcomes. The participating centres show an improvement in the quality of surgical management, including a reduction of the complication rate. © 2011 THE AUTHORS. BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.
ERIC Educational Resources Information Center
Utah State Univ., Logan. Center for Persons with Disabilities.
This project studied the effects of implementing a computerized management information system developed for special education administrators. The Intelligent Administration Support Program (IASP), an expert system and database program, assisted in information acquisition and analysis pertaining to the district's quality of decisions and procedures…
Beach Advisory and Closing Online Notification (BEACON) system
Beach Advisory and Closing Online Notification system (BEACON) is a colletion of state and local data reported to EPA about beach closings and advisories. BEACON is the public-facing query of the Program tracking, Beach Advisories, Water quality standards, and Nutrients database (PRAWN) which tracks beach closing and advisory information.
USDA Agricultural Research Service creates Nutrient Uptake and Outcome Network (NUOnet)
USDA-ARS?s Scientific Manuscript database
One of the national goals of USDA-ARS is to conduct research that develops new practices and methods to increase agricultural production and quality with sustainable systems that have a lower environmental impact. When completed, the new NUOnet database system will be able to help in the establishme...
NASA Astrophysics Data System (ADS)
Kuzma, H. A.; Boyle, K.; Pullman, S.; Reagan, M. T.; Moridis, G. J.; Blasingame, T. A.; Rector, J. W.; Nikolaou, M.
2010-12-01
A Self Teaching Expert System (SeTES) is being developed for the analysis, design and prediction of gas production from shales. An Expert System is a computer program designed to answer questions or clarify uncertainties that its designers did not necessarily envision which would otherwise have to be addressed by consultation with one or more human experts. Modern developments in computer learning, data mining, database management, web integration and cheap computing power are bringing the promise of expert systems to fruition. SeTES is a partial successor to Prospector, a system to aid in the identification and evaluation of mineral deposits developed by Stanford University and the USGS in the late 1970s, and one of the most famous early expert systems. Instead of the text dialogue used in early systems, the web user interface of SeTES helps a non-expert user to articulate, clarify and reason about a problem by navigating through a series of interactive wizards. The wizards identify potential solutions to queries by retrieving and combining together relevant records from a database. Inferences, decisions and predictions are made from incomplete and noisy inputs using a series of probabilistic models (Bayesian Networks) which incorporate records from the database, physical laws and empirical knowledge in the form of prior probability distributions. The database is mainly populated with empirical measurements, however an automatic algorithm supplements sparse data with synthetic data obtained through physical modeling. This constitutes the mechanism for how SeTES self-teaches. SeTES’ predictive power is expected to grow as users contribute more data into the system. Samples are appropriately weighted to favor high quality empirical data over low quality or synthetic data. Finally, a set of data visualization tools digests the output measurements into graphical outputs.
CEBS: a comprehensive annotated database of toxicological data
Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer
2017-01-01
The Chemical Effects in Biological Systems database (CEBS) is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11 000 test articles (exposure agents) and over 8000 studies including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subject to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design that allows the flexibility to capture any type of electronic data (to date). The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660
77 FR 31268 - Determination of Attainment for the Paul Spur/Douglas PM10
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-25
... requirements under 40 CFR part 58.\\7\\ Furthermore, we concluded in our Technical System Audit Report concerning... www.regulations.gov or email. www.regulations.gov is an ``anonymous access'' system, and EPA will not...) in the nonattainment area, and entered into the EPA Air Quality System (AQS) database. Data from air...
A comprehensive database of quality-rated fossil ages for Sahul's Quaternary vertebrates.
Rodríguez-Rey, Marta; Herrando-Pérez, Salvador; Brook, Barry W; Saltré, Frédérik; Alroy, John; Beeton, Nicholas; Bird, Michael I; Cooper, Alan; Gillespie, Richard; Jacobs, Zenobia; Johnson, Christopher N; Miller, Gifford H; Prideaux, Gavin J; Roberts, Richard G; Turney, Chris S M; Bradshaw, Corey J A
2016-07-19
The study of palaeo-chronologies using fossil data provides evidence for past ecological and evolutionary processes, and is therefore useful for predicting patterns and impacts of future environmental change. However, the robustness of inferences made from fossil ages relies heavily on both the quantity and quality of available data. We compiled Quaternary non-human vertebrate fossil ages from Sahul published up to 2013. This, the FosSahul database, includes 9,302 fossil records from 363 deposits, for a total of 478 species within 215 genera, of which 27 are from extinct and extant megafaunal species (2,559 records). We also provide a rating of reliability of individual absolute age based on the dating protocols and association between the dated materials and the fossil remains. Our proposed rating system identified 2,422 records with high-quality ages (i.e., a reduction of 74%). There are many applications of the database, including disentangling the confounding influences of hypothetical extinction drivers, better spatial distribution estimates of species relative to palaeo-climates, and potentially identifying new areas for fossil discovery.
Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.
Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel
2012-01-01
Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
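A minimal sketch of how a predefined CouchDB "view" over DICOM metadata might look, driven from Python over plain HTTP; the database name, document field names and the study UID are assumptions for illustration, and this does not reproduce the authors' design documents. It also assumes a local CouchDB instance reachable without authentication.

```python
import json
import requests

COUCH = "http://localhost:5984"   # assumed local CouchDB instance
DB = "dicom_metadata"             # hypothetical database name

# Design document with a JavaScript map function that indexes documents by
# StudyInstanceUID (field name assumed to match how metadata was stored).
design = {
    "views": {
        "by_study": {
            "map": "function(doc) {"
                   "  if (doc.StudyInstanceUID) {"
                   "    emit(doc.StudyInstanceUID, doc.SeriesInstanceUID);"
                   "  }"
                   "}"
        }
    }
}

requests.put(f"{COUCH}/{DB}")                          # create the database
requests.put(f"{COUCH}/{DB}/_design/dicom", json=design)

# Query the predefined view for one study, mimicking a DICOM C-FIND-style lookup.
study_uid = "1.2.840.113619.2.55.3.1234"               # placeholder UID
resp = requests.get(
    f"{COUCH}/{DB}/_design/dicom/_view/by_study",
    params={"key": json.dumps(study_uid)},
)
print(resp.json().get("rows", []))
```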
NASA Astrophysics Data System (ADS)
Gentry, Jeffery D.
2000-05-01
A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
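To make the lot-number indexing concrete, here is a minimal sketch of such a schema in SQLite, with a join that relates a process variable to a quality-assurance measurement; the table names, column names and values are illustrative assumptions, not taken from the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the production database
cur = conn.cursor()

# Illustrative schema: process variables and QA results keyed by lot number.
cur.executescript("""
CREATE TABLE process_lots (
    lot_number    TEXT PRIMARY KEY,
    part_type     TEXT,
    cure_temp_c   REAL,
    cure_time_min REAL
);
CREATE TABLE qa_measurements (
    lot_number           TEXT REFERENCES process_lots(lot_number),
    tensile_strength_mpa REAL,
    void_content_pct     REAL
);
""")

cur.executemany("INSERT INTO process_lots VALUES (?, ?, ?, ?)",
                [("L001", "spar", 177.0, 120), ("L002", "spar", 182.0, 120)])
cur.executemany("INSERT INTO qa_measurements VALUES (?, ?, ?)",
                [("L001", 910.0, 1.2), ("L002", 945.0, 0.8)])

# Relate process variables to quality outcomes across lots of a given part type.
cur.execute("""
SELECT p.lot_number, p.cure_temp_c, q.tensile_strength_mpa, q.void_content_pct
FROM process_lots p JOIN qa_measurements q USING (lot_number)
WHERE p.part_type = 'spar'
ORDER BY p.cure_temp_c
""")
print(cur.fetchall())
```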
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-11
... available, but not yet certified, in the Air Quality System (AQS) database for 2011 show that this Area.... Moreover, there is no support for the Commenter's contention, based on the flawed premise that allowance... strong legal basis. To the extent that the current status of CAIR and the Transport Rule affect any of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-23
... recorded in EPA's Air Quality System (AQS) database. To account for missing data, the procedures found in... [table fragment: exceedance days and expected exceedances above 0.124 ppm, adjusted for missing data, for monitoring site 090050006 (Cornwall), 2006-2008]...
Interhospital network system using the worldwide web and the common gateway interface.
Oka, A; Harima, Y; Nakano, Y; Tanaka, Y; Watanabe, A; Kihara, H; Sawada, S
1999-05-01
We constructed an interhospital network system using the worldwide web (WWW) and the Common Gateway Interface (CGI). Original clinical images are digitized and stored as a database for educational and research purposes. Personal computers (PCs) are available for data treatment and browsing. Our system is simple, as digitized images are stored on a Unix server machine. Images of important and interesting clinical cases are selected and registered into the image database using CGI. The main image format is 8- or 12-bit Joint Photographic Experts Group (JPEG). Original clinical images are finally stored on CD-ROM using a CD recorder. The image viewer can browse all of the images for one case at once as thumbnail pictures; image quality can be selected depending on the user's purpose. Using the network system, clinical images of interesting cases can be rapidly transmitted and discussed with other related hospitals. Data transmission from related hospitals takes 1 to 2 minutes per 500 Kbyte of data; transmission from more distant hospitals (e.g., Rakusai Hospital, Kyoto) takes about 1 minute longer. The mean number of accesses to our image database in a recent 3-month period was 470. There are about 200 cases in total in our image database, acquired over the past 2 years. Our system is useful for communication and image treatment between hospitals, and we describe the elements of our system and image database.
Lara-Smalling, Agueda; Cakiner-Egilmez, Tulay; Miller, Dawn; Redshirt, Ella; Williams, Dale
2011-01-01
Currently, ophthalmic surgical cases are not included in the Veterans Administration Surgical Quality Improvement Project data collection. Furthermore, there is no comprehensive protocol in the health system for prospectively measuring outcomes for eye surgery in terms of safety and quality. There are 400,000 operative cases in the system per year. Of those, 48,000 (12%) are ophthalmic surgical cases, with 85% (41,000) of those being cataract cases. The Ophthalmic Surgical Outcome Database Pilot Project was developed to incorporate ophthalmology into VASQIP, thus evaluating risk factors and improving cataract surgical outcomes. Nurse reviewers facilitate the monitoring and measuring of these outcomes. Since its inception in 1778, the Veterans Administration (VA) Health System has provided comprehensive healthcare to millions of deserving veterans throughout the U.S. and its territories. Historically, the quality of healthcare provided by the VA has been the main focus of discussion because it did not meet a standard of care comparable to that of the private sector. Information regarding quality of healthcare services and outcomes data had been unavailable until 1986, when Congress mandated the VA to compare its surgical outcomes to those of the private sector (PL-99-166). 1 Risk adjustment of VA surgical outcomes began in 1987 with the Continuous Improvement in Cardiac Surgery Program (CICSP) in which cardiac surgical outcomes were reported and evaluated. 2 Between 1991 and 1993, the National VA Surgical Risk Study (NVASRS) initiated a validated risk-adjustment model for predicting surgical outcomes and comparative assessment of the quality of surgical care in 44 VA medical centers. 3 The success of NVASRS encouraged the VA to establish an ongoing program for monitoring and improving the quality of surgical care, thus developing the National Surgical Quality Improvement Program (NSQIP) in 1994. 4 According to a prospective study conducted between 1991-1997 in 123 VA medical centers by Khuri et al., the 30-day mortality and morbidity rates for major surgeries had decreased by 9% and 30%, respectively. 5 Recently renamed the VA Surgical Quality Improvement Program (VASQIP) in 2010, the quality of surgical outcomes has continued to improve among all documented surgical specialties. Ophthalmic surgery is presumed to have a very low mortality rate and therefore has not been included in the VASQIP database.
An Informatics Blueprint for Healthcare Quality Information Systems
Niland, Joyce C.; Rouse, Layla; Stahl, Douglas C.
2006-01-01
There is a critical gap in our nation's ability to accurately measure and manage the quality of medical care. A robust healthcare quality information system (HQIS) has the potential to address this deficiency through the capture, codification, and analysis of information about patient treatments and related outcomes. Because non-technical issues often present the greatest challenges, this paper provides an overview of these socio-technical issues in building a successful HQIS, including the human, organizational, and knowledge management (KM) perspectives. Through an extensive literature review and direct experience in building a practical HQIS (the National Comprehensive Cancer Network Outcomes Research Database system), we have formulated an “informatics blueprint” to guide the development of such systems. While the blueprint was developed to facilitate healthcare quality information collection, management, analysis, and reporting, the concepts and advice provided may be extensible to the development of other types of clinical research information systems. PMID:16622161
SFINX-a drug-drug interaction database designed for clinical decision support systems.
Böttiger, Ylva; Laine, Kari; Andersson, Marine L; Korhonen, Tuomas; Molin, Björn; Ovesjö, Marie-Louise; Tirkkonen, Tuire; Rane, Anders; Gustafsson, Lars L; Eiermann, Birgit
2009-06-01
The aim was to develop a drug-drug interaction database (SFINX) to be integrated into decision support systems or to be used in website solutions for clinical evaluation of interactions. Key elements such as substance properties and names, drug formulations, text structures and references were defined before development of the database. Standard operating procedures for literature searches, text writing rules and a classification system for clinical relevance and documentation level were determined. ATC codes, CAS numbers and country-specific codes for substances were identified and quality assured to ensure safe integration of SFINX into other data systems. Much effort was put into giving short and practical advice regarding clinically relevant drug-drug interactions. SFINX includes over 8,000 interaction pairs and is integrated into Swedish and Finnish computerised decision support systems. Over 31,000 physicians and pharmacists are receiving interaction alerts through SFINX. User feedback is collected for continuous improvement of the content. SFINX is a potentially valuable tool delivering instant information on drug interactions during prescribing and dispensing.
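As a rough illustration of how an interaction pair with a relevance and documentation classification might be represented and looked up, the following sketch keys the table on an unordered substance pair; the field values, classification labels and advice text are illustrative assumptions, not SFINX content.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InteractionEntry:
    substance_a: str      # e.g. an ATC-coded substance name
    substance_b: str
    relevance: str        # clinical-relevance class (label here is illustrative)
    documentation: str    # documentation-level class (illustrative)
    advice: str           # short, practical recommendation text

# Tiny in-memory stand-in for an interaction table; real entries would be curated.
_table = {
    frozenset({"warfarin", "fluconazole"}): InteractionEntry(
        "warfarin", "fluconazole", "D", "3",
        "Consider an alternative antifungal or intensified INR monitoring."),
}

def lookup(drug1: str, drug2: str):
    """Return the interaction entry for an unordered substance pair, if any."""
    return _table.get(frozenset({drug1.lower(), drug2.lower()}))

print(lookup("Fluconazole", "Warfarin"))
```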
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
NASA Astrophysics Data System (ADS)
Gilliom, R.; Hogue, T. S.; McCray, J. E.
2017-12-01
There is a need for improved parameterization of stormwater best management practice (BMP) performance estimates to improve modeling of urban hydrology, planning and design of green infrastructure projects, and water quality crediting for stormwater management. Percent removal is commonly used to estimate BMP pollutant removal efficiency, but there is general agreement that this approach has significant uncertainties and is easily affected by site-specific factors. Additionally, some fraction of monitored BMPs have negative percent removal, so it is important to understand the probability that a BMP will provide the desired water quality function versus exacerbating water quality problems. The widely used k-C* equation has been shown to provide a more adaptable and accurate method to model BMP contaminant attenuation, and previous work has begun to evaluate the strengths and weaknesses of the k-C* method. However, no systematic method exists for obtaining the first-order removal rate constants needed to use the k-C* equation for stormwater BMPs; thus there is minimal application of the method. The current research analyzes existing water quality data in the International Stormwater BMP Database to provide screening-level parameterization of the k-C* equation for selected BMP types and analysis of factors that skew the distribution of efficiency estimates from the database. Results illustrate that while certain BMPs are more likely to provide desired contaminant removal than others, site- and design-specific factors strongly influence performance. For example, bioretention systems show both the highest and lowest removal rates of dissolved copper, total phosphorus, and total nitrogen. Exploration and discussion of this and other findings will inform the application of the probabilistic pollutant removal rate constants. Though data limitations exist, this research will facilitate improved accuracy of BMP modeling and ultimately aid decision-making for stormwater quality management in urban systems.
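For readers unfamiliar with the k-C* formulation, a minimal sketch of the plug-flow form and of back-calculating a removal rate constant from a paired inflow/outflow observation is given below; the concentrations, background level and hydraulic loading rate are illustrative, not values from the BMP Database analysis.

```python
import math

def kcstar_outflow(c_in, k, q, c_star=0.0):
    """Plug-flow k-C* model: predicted outflow concentration.

    c_in, c_star : concentrations (e.g. mg/L)
    k            : first-order areal removal rate constant (m/yr)
    q            : hydraulic loading rate (m/yr)
    """
    return c_star + (c_in - c_star) * math.exp(-k / q)

def fit_k(c_in, c_out, q, c_star=0.0):
    """Back-calculate k from one paired inflow/outflow observation."""
    ratio = (c_out - c_star) / (c_in - c_star)
    if ratio <= 0:
        raise ValueError("Outflow at or below background; k is undefined for this pair.")
    return -q * math.log(ratio)

# Illustrative numbers only (total phosphorus in mg/L, q in m/yr):
k = fit_k(c_in=0.25, c_out=0.10, q=30.0, c_star=0.02)
print(round(k, 1), "m/yr")
print(round(kcstar_outflow(0.25, k, 30.0, 0.02), 3))
```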
EPA Facility Registry Service (FRS): Facility Interests Dataset - Intranet
This web feature service consists of location and facility identification information from EPA's Facility Registry Service (FRS) for all sites that are available in the FRS individual feature layers. The layers comprise the FRS major program databases, including: Assessment Cleanup and Redevelopment Exchange System (ACRES): brownfields sites; Air Facility System (AFS): stationary sources of air pollution; Air Quality System (AQS): ambient air pollution data from monitoring stations; Bureau of Indian Affairs (BIA): schools data on Indian land; Base Realignment and Closure (BRAC) facilities; Clean Air Markets Division Business System (CAMDBS): market-based air pollution control programs; Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS): hazardous waste sites; Integrated Compliance Information System (ICIS): integrated enforcement and compliance information; National Compliance Database (NCDB): Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Toxic Substances Control Act (TSCA); National Pollutant Discharge Elimination System (NPDES) module of ICIS: NPDES surface water permits; Radiation Information Database (RADINFO): radiation and radioactivity facilities; RACT/BACT/LAER Clearinghouse (RBLC): best available air pollution technology requirements; Resource Conservation and Recovery Act Information System (RCRAInfo): tracks generators, transporters, treaters, storers, and disposers of haz
EPA Facility Registry Service (FRS): Facility Interests Dataset - Intranet Download
This downloadable data package consists of location and facility identification information from EPA's Facility Registry Service (FRS) for all sites that are available in the FRS individual feature layers. The layers comprise the FRS major program databases, including: Assessment Cleanup and Redevelopment Exchange System (ACRES): brownfields sites; Air Facility System (AFS): stationary sources of air pollution; Air Quality System (AQS): ambient air pollution data from monitoring stations; Bureau of Indian Affairs (BIA): schools data on Indian land; Base Realignment and Closure (BRAC) facilities; Clean Air Markets Division Business System (CAMDBS): market-based air pollution control programs; Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS): hazardous waste sites; Integrated Compliance Information System (ICIS): integrated enforcement and compliance information; National Compliance Database (NCDB): Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Toxic Substances Control Act (TSCA); National Pollutant Discharge Elimination System (NPDES) module of ICIS: NPDES surface water permits; Radiation Information Database (RADINFO): radiation and radioactivity facilities; RACT/BACT/LAER Clearinghouse (RBLC): best available air pollution technology requirements; Resource Conservation and Recovery Act Information System (RCRAInfo): tracks generators, transporters, treaters, storers, and disposers
EPA Facility Registry Service (FRS): Facility Interests Dataset Download
This downloadable data package consists of location and facility identification information from EPA's Facility Registry Service (FRS) for all sites that are available in the FRS individual feature layers. The layers comprise the FRS major program databases, including: Assessment Cleanup and Redevelopment Exchange System (ACRES): brownfields sites; Air Facility System (AFS): stationary sources of air pollution; Air Quality System (AQS): ambient air pollution data from monitoring stations; Bureau of Indian Affairs (BIA): schools data on Indian land; Base Realignment and Closure (BRAC) facilities; Clean Air Markets Division Business System (CAMDBS): market-based air pollution control programs; Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS): hazardous waste sites; Integrated Compliance Information System (ICIS): integrated enforcement and compliance information; National Compliance Database (NCDB): Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Toxic Substances Control Act (TSCA); National Pollutant Discharge Elimination System (NPDES) module of ICIS: NPDES surface water permits; Radiation Information Database (RADINFO): radiation and radioactivity facilities; RACT/BACT/LAER Clearinghouse (RBLC): best available air pollution technology requirements; Resource Conservation and Recovery Act Information System (RCRAInfo): tracks generators, transporters, treaters, storers, and disposers
EPA Facility Registry Service (FRS): Facility Interests Dataset
This web feature service consists of location and facility identification information from EPA's Facility Registry Service (FRS) for all sites that are available in the FRS individual feature layers. The layers comprise the FRS major program databases, including: Assessment Cleanup and Redevelopment Exchange System (ACRES): brownfields sites; Air Facility System (AFS): stationary sources of air pollution; Air Quality System (AQS): ambient air pollution data from monitoring stations; Bureau of Indian Affairs (BIA): schools data on Indian land; Base Realignment and Closure (BRAC) facilities; Clean Air Markets Division Business System (CAMDBS): market-based air pollution control programs; Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS): hazardous waste sites; Integrated Compliance Information System (ICIS): integrated enforcement and compliance information; National Compliance Database (NCDB): Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and the Toxic Substances Control Act (TSCA); National Pollutant Discharge Elimination System (NPDES) module of ICIS: NPDES surface water permits; Radiation Information Database (RADINFO): radiation and radioactivity facilities; RACT/BACT/LAER Clearinghouse (RBLC): best available air pollution technology requirements; Resource Conservation and Recovery Act Information System (RCRAInfo): tracks generators, transporters, treaters, storers, and disposers of haz
Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington
Uhrich, M.A.; McGrath, T.S.
1997-01-01
Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers, from sources other than the Puget Sound Environmental Atlas, include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.
Quality assurance for the query and distribution systems of the RCSB Protein Data Bank
Bluhm, Wolfgang F.; Beran, Bojan; Bi, Chunxiao; Dimitropoulos, Dimitris; Prlić, Andreas; Quinn, Gregory B.; Rose, Peter W.; Shah, Chaitali; Young, Jasmine; Yukich, Benjamin; Berman, Helen M.; Bourne, Philip E.
2011-01-01
The RCSB Protein Data Bank (RCSB PDB, www.pdb.org) is a key online resource for structural biology and related scientific disciplines. The website is used on average by 165 000 unique visitors per month, and more than 2000 other websites link to it. The amount and complexity of PDB data as well as the expectations on its usage are growing rapidly. Therefore, ensuring the reliability and robustness of the RCSB PDB query and distribution systems are crucially important and increasingly challenging. This article describes quality assurance for the RCSB PDB website at several distinct levels, including: (i) hardware redundancy and failover, (ii) testing protocols for weekly database updates, (iii) testing and release procedures for major software updates and (iv) miscellaneous monitoring and troubleshooting tools and practices. As such it provides suggestions for how other websites might be operated. Database URL: www.pdb.org PMID:21382834
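A hedged sketch of the kind of post-update "smoke test" implied by item (ii), checking that key query endpoints still respond after a weekly database update; the URLs and pass criteria are placeholders, not the RCSB PDB's actual test suite or API.

```python
import requests

# Placeholder endpoints standing in for the site's query services; the real
# update-testing protocols are not described in detail in the abstract.
CHECKS = [
    ("homepage", "https://www.example.org/", lambda r: r.ok),
    ("search", "https://www.example.org/search?q=hemoglobin",
     lambda r: r.ok and len(r.text) > 0),
]

def run_smoke_tests(checks, timeout=10):
    """Run simple availability checks after a weekly data/software update."""
    failures = []
    for name, url, passed in checks:
        try:
            resp = requests.get(url, timeout=timeout)
            if not passed(resp):
                failures.append((name, resp.status_code))
        except requests.RequestException as exc:
            failures.append((name, str(exc)))
    return failures

if __name__ == "__main__":
    print(run_smoke_tests(CHECKS) or "all checks passed")
```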
Kringos, Dionne S; Sunol, Rosa; Wagner, Cordula; Mannion, Russell; Michel, Philippe; Klazinga, Niek S; Groene, Oliver
2015-07-22
It is now widely accepted that the mixed effect and success rates of strategies to improve quality and safety in health care are in part due to the different contexts in which the interventions are planned and implemented. The objectives of this study were to (i) describe the reporting of contextual factors in the literature on the effectiveness of quality improvement strategies, (ii) assess the relationship between effectiveness and contextual factors, and (iii) analyse the importance of contextual factors. We conducted an umbrella review of systematic reviews searching the following databases: PubMed, Cochrane Database of Systematic Reviews, Embase and CINAHL. The search focused on quality improvement strategies included in the Cochrane Effective Practice and Organisation of Care Group taxonomy. We extracted data on quality improvement effectiveness and context factors. The latter were categorized according to the Model for Understanding Success in Quality tool. We included 56 systematic reviews in this study of which only 35 described contextual factors related with the effectiveness of quality improvement interventions. The most frequently reported contextual factors were: quality improvement team (n = 12), quality improvement support and capacity (n = 11), organization (n = 9), micro-system (n = 8), and external environment (n = 4). Overall, context factors were poorly reported. Where they were reported, they seem to explain differences in quality improvement effectiveness; however, publication bias may contribute to the observed differences. Contextual factors may influence the effectiveness of quality improvement interventions, in particular at the level of the clinical micro-system. Future research on the implementation and effectiveness of quality improvement interventions should emphasize formative evaluation to elicit information on context factors and report on them in a more systematic way in order to better appreciate their relative importance.
Impact of Accurate 30-Day Status on Operative Mortality: Wanted Dead or Alive, Not Unknown.
Ring, W Steves; Edgerton, James R; Herbert, Morley; Prince, Syma; Knoff, Cathy; Jenkins, Kristin M; Jessen, Michael E; Hamman, Baron L
2017-12-01
Risk-adjusted operative mortality is the most important quality metric in cardiac surgery for determining The Society of Thoracic Surgeons (STS) Composite Score for star ratings. Accurate 30-day status is required to determine STS operative mortality. The goal of this study was to determine the effect of unknown or missing 30-day status on risk-adjusted operative mortality in a regional STS Adult Cardiac Surgery Database cooperative and demonstrate the ability to correct these deficiencies by matching with an administrative database. STS Adult Cardiac Surgery Database data were submitted by 27 hospitals from five hospital systems to the Texas Quality Initiative (TQI), a regional quality collaborative. TQI data were matched with a regional hospital claims database to resolve unknown 30-day status. The risk-adjusted operative mortality observed-to-expected (O/E) ratio was determined before and after matching to determine the effect of unknown status on the operative mortality O/E. TQI found an excessive (22%) unknown 30-day status for STS isolated coronary artery bypass grafting cases. Matching the TQI data to the administrative claims database reduced the unknowns to 7%. The STS process of imputing unknown 30-day status as alive underestimates the true operative mortality O/E (1.27 before vs 1.30 after match), while excluding unknowns overestimates the operative mortality O/E (1.57 before vs 1.37 after match) for isolated coronary artery bypass grafting. The current STS algorithm of imputing unknown 30-day status as alive and a strategy of excluding cases with unknown 30-day status both result in erroneous calculation of operative mortality and operative mortality O/E. However, external validation by matching with an administrative database can improve the accuracy of clinical databases such as the STS Adult Cardiac Surgery Database. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
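To see why the handling of unknown 30-day status shifts the observed-to-expected (O/E) ratio, a toy calculation is sketched below; the case counts and the flat expected-mortality rate are illustrative stand-ins for the much more detailed STS risk models.

```python
def oe_ratio(observed_deaths, expected_deaths):
    """Observed-to-expected operative mortality ratio."""
    return observed_deaths / expected_deaths

def oe_under_assumptions(known_dead, known_alive, unknown, expected_per_case):
    """Illustrate how the treatment of unknown 30-day status shifts the O/E ratio.

    expected_per_case is a crude average expected mortality per case; the
    numbers here are only illustrative.
    """
    n_all = known_dead + known_alive + unknown
    n_known = known_dead + known_alive
    return {
        # Impute unknowns as alive: the denominator keeps all cases.
        "impute_alive": oe_ratio(known_dead, expected_per_case * n_all),
        # Exclude unknowns: the denominator shrinks, inflating the O/E ratio.
        "exclude_unknown": oe_ratio(known_dead, expected_per_case * n_known),
    }

print(oe_under_assumptions(known_dead=30, known_alive=1500, unknown=400,
                           expected_per_case=0.02))
```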
... compound (VOC) emissions, and more. U.S. Department of Agriculture (USDA) Water Quality Information Center Databases : online databases that may be related to water and agriculture. National Park Service (NPS) Water Quality Program : NPS ...
Oostdik, Kathryn; Lenz, Kristy; Nye, Jeffrey; Schelling, Kristin; Yet, Donald; Bruski, Scott; Strong, Joshua; Buchanan, Clint; Sutton, Joel; Linner, Jessica; Frazier, Nicole; Young, Hays; Matthies, Learden; Sage, Amber; Hahn, Jeff; Wells, Regina; Williams, Natasha; Price, Monica; Koehler, Jody; Staples, Melisa; Swango, Katie L; Hill, Carolyn; Oyerly, Karen; Duke, Wendy; Katzilierakis, Lesley; Ensenberger, Martin G; Bourdeau, Jeanne M; Sprecher, Cynthia J; Krenke, Benjamin; Storts, Douglas R
2014-09-01
The original CODIS database based on 13 core STR loci has been overwhelmingly successful for matching suspects with evidence. Yet there remain situations that argue for inclusion of more loci and increased discrimination. The PowerPlex® Fusion System allows simultaneous amplification of the following loci: Amelogenin, D3S1358, D1S1656, D2S441, D10S1248, D13S317, Penta E, D16S539, D18S51, D2S1338, CSF1PO, Penta D, TH01, vWA, D21S11, D7S820, D5S818, TPOX, DYS391, D8S1179, D12S391, D19S433, FGA, and D22S1045. The comprehensive list of loci amplified by the system generates a profile compatible with databases based on either the expanded CODIS or European Standard Set (ESS) requirements. Developmental validation testing followed SWGDAM guidelines and demonstrated the quality and robustness of the PowerPlex® Fusion System across a number of variables. Consistent and high-quality results were compiled using data from 12 separate forensic and research laboratories. The results verify that the PowerPlex® Fusion System is a robust and reliable STR-typing multiplex suitable for human identification. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
2003-01-01
The Automated Data Processing System (ADAPS) was developed for the processing, storage, and retrieval of water data, and is part of the National Water Information System (NWIS) developed by the U.S. Geological Survey. NWIS is a distributed water database in which data can be processed over a network of computers at U.S. Geological Survey offices throughout the United States. NWIS comprises four subsystems: ADAPS, the Ground-Water Site Inventory System (GWSI), the Water-Quality System (QWDATA), and the Site-Specific Water-Use Data System (SWUDS). This section of the NWIS User's Manual describes the automated data processing of continuously recorded water data, which primarily are surface-water data; however, the system also allows for the processing of water-quality and ground-water data. This manual describes various components and features of the ADAPS, and provides an overview of the data processing system and a description of the system framework. The components and features included are: (1) data collection and processing, (2) ADAPS menus and programs, (3) command line functions, (4) steps for processing station records, (5) postprocessor programs control files, (6) the standard format for transferring and entering unit and daily values, and (7) relational database (RDB) formats.
A DICOM based radiotherapy plan database for research collaboration and reporting
NASA Astrophysics Data System (ADS)
Westberg, J.; Krogh, S.; Brink, C.; Vogelius, I. R.
2014-03-01
Purpose: To create a central radiotherapy (RT) plan database for dose analysis and reporting, capable of calculating and presenting statistics on user defined patient groups. The goal is to facilitate multi-center research studies with easy and secure access to RT plans and statistics on protocol compliance. Methods: RT institutions are able to send data to the central database using DICOM communications on a secure computer network. The central system is composed of a number of DICOM servers, an SQL database and in-house developed software services to process the incoming data. A web site within the secure network allows the user to manage their submitted data. Results: The RT plan database has been developed in Microsoft .NET and users are able to send DICOM data between RT centers in Denmark. Dose-volume histogram (DVH) calculations performed by the system are comparable to those of conventional RT software. A permission system was implemented to ensure access control and easy, yet secure, data sharing across centers. The reports contain DVH statistics for structures in user defined patient groups. The system currently contains over 2200 patients in 14 collaborations. Conclusions: A central RT plan repository for use in multi-center trials and quality assurance was created. The system provides an attractive alternative to dummy runs by enabling continuous monitoring of protocol conformity and plan metrics in a trial.
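A minimal sketch of a cumulative DVH calculation of the kind the system performs, assuming the dose grid and structure mask have already been extracted from the DICOM RT objects and resampled to a common grid; the array sizes and dose values below are synthetic.

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width=0.1):
    """Cumulative DVH for one structure.

    dose : 3-D array of dose values (Gy), e.g. resampled from an RTDOSE grid
    mask : boolean array of the same shape marking voxels inside the structure
    Returns (dose_bins, volume_percent), where volume_percent[i] is the percent
    of the structure receiving at least dose_bins[i].
    """
    voxels = dose[mask]
    bins = np.arange(0.0, voxels.max() + bin_width, bin_width)
    volume_percent = np.array([(voxels >= d).mean() * 100.0 for d in bins])
    return bins, volume_percent

# Toy example with a synthetic dose cube and a spherical structure mask.
rng = np.random.default_rng(0)
dose = rng.normal(50.0, 5.0, size=(40, 40, 40)).clip(min=0)
zz, yy, xx = np.mgrid[:40, :40, :40]
mask = (xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2 <= 10 ** 2
bins, vol = cumulative_dvh(dose, mask)
print(f"V50Gy = {vol[np.searchsorted(bins, 50.0)]:.1f}% of structure volume")
```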
NASA Astrophysics Data System (ADS)
Zhang, Min; Pavlicek, William; Panda, Anshuman; Langer, Steve G.; Morin, Richard; Fetterly, Kenneth A.; Paden, Robert; Hanson, James; Wu, Lin-Wei; Wu, Teresa
2015-03-01
DICOM Index Tracker (DIT) is an integrated platform that harvests the rich information available from Digital Imaging and Communications in Medicine (DICOM) to improve quality assurance in radiology practices. It is designed to capture and maintain longitudinal patient-specific exam indices of interest for all diagnostic and procedural uses of imaging modalities, and thus effectively serves as a quality assurance and patient safety monitoring tool. The foundation of DIT is an intelligent database system that stores the information accepted and parsed via a DICOM receiver and parser; this database system enables basic dosimetry analysis. The success of the DIT implementation at Mayo Clinic Arizona calls for DIT deployment at the enterprise level, which requires significant improvements. First, for a geographically distributed multi-site implementation, one bottleneck is the communication (network) delay and another is the scalability of the DICOM parser to handle the large volume of exams from different sites; to address this issue, the DICOM receiver and parser are separated and decentralized by site. Second, a notable challenge for enterprise-wide quality assurance (QA) is the great diversity of manufacturers, modalities and software versions; as a solution, DIT Enterprise provides standardization tools for device naming, protocol naming, and physician naming across sites. Third, advanced analytic engines are implemented online to support proactive QA in DIT Enterprise.
Strong, Vivian E.; Selby, Luke V.; Sovel, Mindy; Disa, Joseph J.; Hoskins, William; DeMatteo, Ronald; Scardino, Peter; Jaques, David P.
2015-01-01
Background Studying surgical secondary events is an evolving effort with no current established system for database design, standard reporting, or definitions. Using the Clavien-Dindo classification as a guide, in 2001 we developed a Surgical Secondary Events database based on grade of event and required intervention to begin prospectively recording and analyzing all surgical secondary events (SSE). Study Design Events are prospectively entered into the database by attending surgeons, house staff, and research staff. In 2008 we performed a blinded external audit of 1,498 operations that were randomly selected to examine the quality and reliability of the data. Results 1,498 of 4,284 operations during the 3rd quarter of 2008 were audited. 79% (N=1,180) of the operations did not have a secondary event while 21% (N=318) of operations had an identified event. 91% (1,365) of operations were correctly entered into the SSE database. 97% (129/133) of missed secondary events were Grades I and II. Three Grade III (2%) and one Grade IV (1%) secondary event were missed. There were no missed Grade 5 secondary events. Conclusion Grade III – IV events are more accurately collected than Grade I – II events. Robust and accurate secondary events data can be collected by clinicians and research staff and these data can safely be used for quality improvement projects and research. PMID:25319579
Strong, Vivian E; Selby, Luke V; Sovel, Mindy; Disa, Joseph J; Hoskins, William; Dematteo, Ronald; Scardino, Peter; Jaques, David P
2015-04-01
Studying surgical secondary events is an evolving effort with no current established system for database design, standard reporting, or definitions. Using the Clavien-Dindo classification as a guide, in 2001 we developed a Surgical Secondary Events database based on grade of event and required intervention to begin prospectively recording and analyzing all surgical secondary events (SSE). Events are prospectively entered into the database by attending surgeons, house staff, and research staff. In 2008 we performed a blinded external audit of 1,498 operations that were randomly selected to examine the quality and reliability of the data. Of 4,284 operations, 1,498 were audited during the third quarter of 2008. Of these operations, 79 % (N = 1,180) did not have a secondary event while 21 % (N = 318) had an identified event; 91 % of operations (1,365) were correctly entered into the SSE database. Also 97 % (129 of 133) of missed secondary events were grades I and II. There were 3 grade III (2 %) and 1 grade IV (1 %) secondary event that were missed. There were no missed grade 5 secondary events. Grade III-IV events are more accurately collected than grade I-II events. Robust and accurate secondary events data can be collected by clinicians and research staff, and these data can safely be used for quality improvement projects and research.
77 FR 71177 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-29
... automated Tri-Service, Web- based database containing credentialing, privileging, risk management, and... credentialing, privileging, risk- management and adverse actions capabilities which support medical quality... submitting comments. Mail: Federal Docket Management System Office, 4800 Mark Center Drive, East Tower, 2nd...
Mehryary, Farrokh; Kaewphan, Suwisa; Hakala, Kai; Ginter, Filip
2016-01-01
Biomedical event extraction is one of the key tasks in biomedical text mining, supporting various applications such as database curation and hypothesis generation. Several systems, some of which have been applied at a large scale, have been introduced to solve this task. Past studies have shown that the identification of the phrases describing biological processes, also known as trigger detection, is a crucial part of event extraction, and notable overall performance gains can be obtained by solely focusing on this sub-task. In this paper we propose a novel approach for filtering falsely identified triggers from large-scale event databases, thus improving the quality of knowledge extraction. Our method relies on state-of-the-art word embeddings, event statistics gathered from the whole biomedical literature, and both supervised and unsupervised machine learning techniques. We focus on EVEX, an event database covering the whole PubMed and PubMed Central Open Access literature and containing more than 40 million extracted events. The most frequent EVEX trigger words are hierarchically clustered, and the resulting cluster tree is pruned to identify words that can never act as triggers regardless of their context. For rarely occurring trigger words we introduce a supervised approach trained on the combination of trigger word classification produced by the unsupervised clustering method and manual annotation. The method is evaluated on the official test set of the BioNLP Shared Task on Event Extraction. The evaluation shows that the method can be used to improve the performance of state-of-the-art event extraction systems. This successful effort also translates into removing 1,338,075 potentially incorrect events from EVEX, thus greatly improving the quality of the data. The method is not bound solely to the EVEX resource and can thus be used to improve the quality of any event extraction system or database. The data and source code for this work are available at: http://bionlp-www.utu.fi/trigger-clustering/.
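A toy sketch of the cluster-then-prune idea: candidate trigger words are clustered on their embeddings, and any cluster containing no confirmed trigger word is marked as never-trigger vocabulary. The vectors and word lists are fabricated for illustration and are far smaller than the EVEX-scale data.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy stand-ins for word embeddings of candidate trigger words; in the real
# pipeline these would come from large-scale biomedical word-embedding models.
words = ["expression", "phosphorylation", "binding", "patient", "hospital", "figure"]
vectors = np.array([
    [0.90, 0.10], [0.80, 0.20], [0.85, 0.15],   # process-like words
    [0.10, 0.90], [0.15, 0.95], [0.20, 0.80],   # clearly non-trigger words
])
known_true_triggers = {"expression", "phosphorylation", "binding"}

clustering = AgglomerativeClustering(n_clusters=2).fit(vectors)

# Prune whole clusters that contain no confirmed trigger word: every word in
# such a cluster is treated as a word that can never act as an event trigger.
never_triggers = set()
for label in set(clustering.labels_):
    members = {w for w, l in zip(words, clustering.labels_) if l == label}
    if not members & known_true_triggers:
        never_triggers |= members

print(sorted(never_triggers))   # -> ['figure', 'hospital', 'patient']
```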
NASA Technical Reports Server (NTRS)
Orcutt, John M.; Brenton, James C.
2016-01-01
The methodology and the results of the quality control (QC) process applied to the meteorological data from the Lightning Protection System (LPS) towers located at Kennedy Space Center (KSC) launch complex 39B (LC-39B) are documented in this paper. Meteorological data are used to design a launch vehicle, determine operational constraints, and apply defined constraints on day-of-launch (DOL). To properly accomplish these tasks, a representative climatological database of meteorological records is needed, one that reflects the climate the vehicle will encounter. Numerous meteorological measurement towers exist at KSC; however, the engineering tasks need measurements at specific heights, some of which can only be provided by a few towers. Other than the LPS towers, Tower 313 is the only tower that provides observations up to 150 m, and it is located approximately 3.5 km from LC-39B. In addition, the data need to be QC'ed to remove erroneous reports that could pollute the results of an engineering analysis, mislead the development of operational constraints, or provide a false image of the atmosphere at the tower's location.
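As an illustration of the kind of automated checks such a QC process typically includes (not the specific procedures used for the LPS towers), the sketch below applies range, spike and missing-value tests to a toy wind-speed record.

```python
import numpy as np
import pandas as pd

# Toy tower record: timestamped wind speed at one level (units assumed m/s).
df = pd.DataFrame({
    "time": pd.date_range("2015-07-01", periods=8, freq="5min"),
    "wind_speed": [4.1, 4.3, 4.0, 45.0, 4.2, np.nan, 4.4, -1.0],
})

# Illustrative QC thresholds, not the values used in the KSC assessment.
RANGE_MIN, RANGE_MAX = 0.0, 40.0   # physically plausible bounds
MAX_STEP = 10.0                    # largest allowed 5-minute change

df["fail_range"] = (df["wind_speed"] < RANGE_MIN) | (df["wind_speed"] > RANGE_MAX)
# A value is flagged as a spike if it jumps too far from the previous report.
df["fail_spike"] = df["wind_speed"].diff().abs() > MAX_STEP
df["fail_missing"] = df["wind_speed"].isna()
df["qc_pass"] = ~(df["fail_range"] | df["fail_spike"] | df["fail_missing"])

print(df[["time", "wind_speed", "qc_pass"]])
```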
NASA Astrophysics Data System (ADS)
Deshpande, Ruchi; DeMarco, John; Liu, Brent J.
2015-03-01
We have developed a comprehensive DICOM RT specific database of retrospective treatment planning data for radiation therapy of head and neck cancer. Further, we have designed and built an imaging informatics module that utilizes this database to perform data mining. The end-goal of this data mining system is to provide radiation therapy decision support for incoming head and neck cancer patients, by identifying best practices from previous patients who had the most similar tumor geometries. Since the performance of such systems often depends on the size and quality of the retrospective database, we have also placed an emphasis on developing infrastructure and strategies to encourage data sharing and participation from multiple institutions. The infrastructure and decision support algorithm have both been tested and evaluated with 51 sets of retrospective treatment planning data of head and neck cancer patients. We will present the overall design and architecture of our system, an overview of our decision support mechanism as well as the results of our evaluation.
Multimodal person authentication on a smartphone under realistic conditions
NASA Astrophysics Data System (ADS)
Morris, Andrew C.; Jassim, Sabah; Sellahewa, Harin; Allano, Lorene; Ehlers, Johan; Wu, Dalei; Koreman, Jacques; Garcia-Salicetti, Sonia; Ly-Van, Bao; Dorizzi, Bernadette
2006-05-01
Verification of a person's identity by the combination of more than one biometric trait strongly increases the robustness of person authentication in real applications. This is particularly the case in applications involving signals of degraded quality, as for person authentication on mobile platforms. The context of mobility generates degradations of input signals due to the variety of environments encountered (ambient noise, lighting variations, etc.), while the sensors' lower quality further contributes to a decrease in system performance. Our aim in this work is to combine traits from the three biometric modalities of speech, face and handwritten signature in a concrete application, performing non-intrusive biometric verification on a personal mobile device (smartphone/PDA). Most available biometric databases have been acquired in more or less controlled environments, which makes it difficult to predict performance in a real application. Our experiments are performed on a database acquired on a PDA as part of the SecurePhone project (IST-2002-506883 project "Secure Contracts Signed by Mobile Phone"). This database contains 60 virtual subjects balanced in gender and age. Virtual subjects are obtained by coupling audio-visual signals from real English-speaking subjects with signatures from other subjects captured on the touch screen of the PDA. Video data for the PDA database were recorded in 2 recording sessions separated by at least one week. Each session comprises 4 acquisition conditions: 2 indoor and 2 outdoor recordings (with, in each case, a good and a degraded quality recording). Handwritten signatures were captured in one session in realistic conditions. Different scenarios of matching between training and test conditions are tested to measure the resistance of various fusion systems to different types of variability and different amounts of enrolment data.
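A minimal sketch of score-level fusion across the three modalities, assuming each matcher already produces a similarity score scaled to [0, 1]; the weights and decision threshold are illustrative, and the SecurePhone work evaluates several fusion schemes rather than this single rule.

```python
def fuse_scores(scores, weights=None, threshold=0.5):
    """Weighted-sum score-level fusion of per-modality match scores.

    scores : dict of modality -> similarity score in [0, 1]
    The modality names, weights and threshold are illustrative assumptions.
    """
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total_w = sum(weights[m] for m in scores)
    fused = sum(weights[m] * s for m, s in scores.items()) / total_w
    return fused, fused >= threshold

# One verification attempt with degraded outdoor audio but a good signature.
fused, accepted = fuse_scores(
    {"speech": 0.42, "face": 0.61, "signature": 0.88},
    weights={"speech": 0.8, "face": 1.0, "signature": 1.2},
)
print(round(fused, 3), accepted)
```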
A Machine Reading System for Assembling Synthetic Paleontological Databases
Peters, Shanan E.; Zhang, Ce; Livny, Miron; Ré, Christopher
2014-01-01
Many aspects of macroevolutionary theory and our understanding of biotic responses to global environmental change derive from literature-based compilations of paleontological data. Existing manually assembled databases are, however, incomplete and difficult to assess and enhance with new data types. Here, we develop and validate the quality of a machine reading system, PaleoDeepDive, that automatically locates and extracts data from heterogeneous text, tables, and figures in publications. PaleoDeepDive performs comparably to humans in several complex data extraction and inference tasks and generates congruent synthetic results that describe the geological history of taxonomic diversity and genus-level rates of origination and extinction. Unlike traditional databases, PaleoDeepDive produces a probabilistic database that systematically improves as information is added. We show that the system can readily accommodate sophisticated data types, such as morphological data in biological illustrations and associated textual descriptions. Our machine reading approach to scientific data integration and synthesis brings within reach many questions that are currently underdetermined and does so in ways that may stimulate entirely new modes of inquiry. PMID:25436610
Study on Full Supply Chain Quality and Safety Traceability Systems for Cereal and Oil Products
NASA Astrophysics Data System (ADS)
Liu, Shihong; Zheng, Huoguo; Meng, Hong; Hu, Haiyan; Wu, Jiangshou; Li, Chunhua
The global food industry and governments in many countries are putting increasing emphasis on the establishment of food traceability systems, and food traceability has become an effective tool in food safety management. Addressing the major quality problems of cereal and oil products that arise in production, processing, warehousing, distribution and other links in the supply chain, this paper first proposes a new traceability framework that combines the information flow with critical control points and quality indicators. It then introduces the traceability database design and data access mode used to realize the framework. Because code design for tracing goods is a challenging task in practice, the paper puts forward a coding system based on the UCC/EAN-128 standard. Middleware and electronic terminal design are also briefly introduced to complete the traceability system for cereal and oil products.
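As a rough illustration of a UCC/EAN-128 (GS1-128) style traceability code, the sketch below assembles a human-readable element string from a few common application identifiers and computes the standard modulo-10 check digit; the chosen identifiers, GTIN body and field values are assumptions for illustration, not the coding scheme defined in the paper.

```python
def gs1_check_digit(digits_without_check: str) -> str:
    """GS1-style modulo-10 check digit (used for GTIN-8/12/13/14)."""
    total = 0
    for i, ch in enumerate(reversed(digits_without_check)):
        weight = 3 if i % 2 == 0 else 1
        total += int(ch) * weight
    return str((10 - total % 10) % 10)

def traceability_code(item_ref_12: str, lot: str, pack_date_yymmdd: str) -> str:
    """Human-readable GS1-128-style element string for a cereal/oil trade unit.

    Application identifiers used: (01) GTIN-14, (13) packaging date, (10) batch/lot.
    The 12-digit company prefix/item reference and lot format are hypothetical.
    """
    gtin14_body = "1" + item_ref_12     # indicator digit + 12-digit item reference
    gtin14 = gtin14_body + gs1_check_digit(gtin14_body)
    # Variable-length field (10) is placed last, as is conventional.
    return f"(01){gtin14}(13){pack_date_yymmdd}(10){lot}"

print(traceability_code("690123456789", lot="GRAIN-2014-0317", pack_date_yymmdd="140901"))
```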
[Quality management and participation in clinical databases].
Okubo, Suguru; Miyata, Hiroaki; Tomotaki, Ai; Motomura, Noboru; Murakami, Arata; Ono, Minoru; Iwanaka, Tadashi
2013-07-01
Quality management is necessary for establishing a useful clinical database in cooperation with healthcare professionals and facilities. The main management activities are 1) progress management of data entry, 2) liaison with database participants (healthcare professionals), and 3) modification of the data collection form. In addition, healthcare facilities are expected to consider ethical issues and information security when joining clinical databases. Database participants should check with ethical review boards and patient consultation services.
dBBQs: dataBase of Bacterial Quality scores.
Wanchai, Visanu; Patumcharoenpol, Preecha; Nookaew, Intawat; Ussery, David
2017-12-28
It is well known that genome sequencing technologies are becoming significantly cheaper and faster. As a result, the exponential growth of sequencing data in public databases allows us to explore ever-growing collections of genome sequences. However, it is less well known that the majority of available genome sequences in public databases are not complete, but rather drafts of varying quality. We have calculated quality scores for around 100,000 bacterial genomes from all major genome repositories and put them in a fast and easy-to-use database. Prokaryotic genomic data from all sources were collected and combined to make a non-redundant set of bacterial genomes. The genome quality score for each was calculated from four different measurements: assembly quality, number of rRNA genes, number of tRNA genes, and the occurrence of conserved functional domains. The dataBase of Bacterial Quality scores (dBBQs) was designed to store and retrieve quality scores. It offers fast searching and download features, and the results can be used for further analysis. In addition, the search results are shown in an interactive JavaScript chart framework using DC.js. The analysis of quality scores across major public genome databases finds that around 68% of the genomes are of acceptable quality for many uses. dBBQs (available at http://arc-gem.uams.edu/dbbqs ) provides genome quality scores for all available prokaryotic genome sequences with a user-friendly web interface. These scores can be used as cut-offs to obtain a high-quality set of genomes for testing bioinformatics tools or improving analyses. Moreover, the data for all four measurements that were combined to make the quality score for each genome are available and can potentially be used for further analysis. dBBQs will be updated regularly and is freely available for non-commercial use.
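A hedged sketch of combining four normalized per-genome measurements into a single quality score; the equal weighting and the example values are assumptions and do not reproduce the dBBQs scoring formula.

```python
def genome_quality_score(assembly, rrna, trna, domains,
                         weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine four per-genome measurements into one quality score.

    Each argument is assumed to be pre-normalized to [0, 1] (e.g. assembly
    contiguity, completeness of rRNA and tRNA gene sets, fraction of expected
    conserved functional domains found). Equal weighting is an assumption.
    """
    parts = (assembly, rrna, trna, domains)
    return sum(w * p for w, p in zip(weights, parts))

# A draft genome with a fragmented assembly but complete marker-gene sets.
score = genome_quality_score(assembly=0.55, rrna=1.0, trna=0.95, domains=0.9)
print(f"quality score: {score:.2f}")   # -> 0.85
```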
EQUIP: A European Survey of Quality Criteria for the Evaluation of Databases.
ERIC Educational Resources Information Center
Wilson, T. D.
1998-01-01
Reports on two stages of an investigation into the perceived quality of online databases. Presents data from 989 questionnaires from 600 database users in 12 European and Scandinavian countries and results of a test of the SERVQUAL methodology for identifying user expectations about database services. Lists statements used in the SERVQUAL survey.…
Griffith, B C; White, H D; Drott, M C; Saye, J D
1986-07-01
This article reports on five separate studies designed for the National Library of Medicine (NLM) to develop and test methodologies for evaluating the products of large databases. The methodologies were tested on literatures of the medical behavioral sciences (MBS). One of these studies examined how well NLM covered MBS monographic literature using CATLINE and OCLC. Another examined MBS journal and serial literature coverage in MEDLINE and other MBS-related databases available through DIALOG. These two studies used 1010 items derived from the reference lists of sixty-one journals, and tested for gaps and overlaps in coverage in the various databases. A third study examined the quality of the indexing NLM provides to MBS literatures and developed a measure of indexing as a system component. The final two studies explored how well MEDLINE retrieved documents on topics submitted by MBS professionals and how online searchers viewed MEDLINE (and other systems and databases) in handling MBS topics. The five studies yielded both broad research outcomes and specific recommendations to NLM.
Face detection on distorted images using perceptual quality-aware features
NASA Astrophysics Data System (ADS)
Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.
2014-02-01
We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white gaussian noise, gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.
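A rough sketch of the QualHOG idea, concatenating HOG features with a crude stand-in for spatial NSS features and training a linear classifier; the NSS feature set, patch sizes and random training data are placeholders, not the paper's actual features or database.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.feature import hog
from sklearn.svm import LinearSVC

def nss_features(patch):
    """Very rough stand-in for spatial NSS features (the paper uses a richer set)."""
    mu, sigma = patch.mean(), patch.std() + 1e-8
    mscn = (patch - mu) / sigma   # mean-subtracted, contrast-normalized values
    return np.array([mscn.var(), skew(mscn.ravel()), kurtosis(mscn.ravel())])

def qualhog_features(patch):
    """Concatenate face-indicative HOG features with quality-aware NSS features."""
    h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([h, nss_features(patch)])

# Toy training run on random 64x64 "patches"; real training would use labelled
# face / non-face patches drawn from a distorted-face dataset.
rng = np.random.default_rng(0)
patches = rng.random((20, 64, 64))
labels = rng.integers(0, 2, size=20)
X = np.stack([qualhog_features(p) for p in patches])
clf = LinearSVC(max_iter=5000).fit(X, labels)
print(clf.predict(X[:3]))
```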
O’Suilleabhain, Padraig E.; Sanghera, Manjit; Patel, Neepa; Khemani, Pravin; Lacritz, Laura H.; Chitnis, Shilpa; Whitworth, Louis A.; Dewey, Richard B.
2016-01-01
Objective To develop a process to improve patient outcomes from deep brain stimulation (DBS) surgery for Parkinson disease (PD), essential tremor (ET), and dystonia. Methods We employed standard quality improvement methodology using the Plan-Do-Study-Act process to improve patient selection, surgical DBS lead implantation, postoperative programming, and ongoing assessment of patient outcomes. Results The result of this quality improvement process was the development of a neuromodulation network. The key aspect of this program is rigorous patient assessment of both motor and non-motor outcomes tracked longitudinally using a REDCap database. We describe how this information is used to identify problems and to initiate Plan-Do-Study-Act cycles to address them. Preliminary outcomes data is presented for the cohort of PD and ET patients who have received surgery since the creation of the neuromodulation network. Conclusions Careful outcomes tracking is essential to ensure quality in a complex therapeutic endeavor like DBS surgery for movement disorders. The REDCap database system is well suited to store outcomes data for the purpose of ongoing quality assurance monitoring. PMID:27711133
Dewey, Richard B; O'Suilleabhain, Padraig E; Sanghera, Manjit; Patel, Neepa; Khemani, Pravin; Lacritz, Laura H; Chitnis, Shilpa; Whitworth, Louis A; Dewey, Richard B
2016-01-01
To develop a process to improve patient outcomes from deep brain stimulation (DBS) surgery for Parkinson disease (PD), essential tremor (ET), and dystonia. We employed standard quality improvement methodology using the Plan-Do-Study-Act process to improve patient selection, surgical DBS lead implantation, postoperative programming, and ongoing assessment of patient outcomes. The result of this quality improvement process was the development of a neuromodulation network. The key aspect of this program is rigorous patient assessment of both motor and non-motor outcomes tracked longitudinally using a REDCap database. We describe how this information is used to identify problems and to initiate Plan-Do-Study-Act cycles to address them. Preliminary outcomes data is presented for the cohort of PD and ET patients who have received surgery since the creation of the neuromodulation network. Careful outcomes tracking is essential to ensure quality in a complex therapeutic endeavor like DBS surgery for movement disorders. The REDCap database system is well suited to store outcomes data for the purpose of ongoing quality assurance monitoring.
HepSEQ: International Public Health Repository for Hepatitis B
Gnaneshan, Saravanamuttu; Ijaz, Samreen; Moran, Joanne; Ramsay, Mary; Green, Jonathan
2007-01-01
HepSEQ is a repository for an extensive library of public health and molecular data relating to hepatitis B virus (HBV) infection collected from international sources. It is hosted by the Centre for Infections, Health Protection Agency (HPA), England, United Kingdom. This repository has been developed as a web-enabled, quality-controlled database to act as a tool for surveillance, HBV case management and for research. The web front-end for the database system can be accessed from . The format of the database system allows for comprehensive molecular, clinical and epidemiological data to be deposited into a functional database, to search and manipulate the stored data and to extract and visualize the information on epidemiological, virological, clinical, nucleotide sequence and mutational aspects of HBV infection through web front-end. Specific tools, built into the database, can be utilized to analyse deposited data and provide information on HBV genotype, identify mutations with known clinical significance (e.g. vaccine escape, precore and antiviral-resistant mutations) and carry out sequence homology searches against other deposited strains. Further mechanisms are also in place to allow specific tailored searches of the database to be undertaken. PMID:17130143
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the image database in the training and testing steps of the algorithm. This directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easier way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database with a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is to design the database with real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.
High-integrity databases for helicopter operations
NASA Astrophysics Data System (ADS)
Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg
2009-05-01
Helicopter Emergency Medical Service missions (HEMS) impose a high workload on pilots due to short preparation time, operations in low level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, the Universities of Darmstadt and Munich, and funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.
Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H
2017-05-01
A large population around the world has voice complications. Various approaches for subjective and objective evaluation have been suggested in the literature. The subjective approach strongly depends on the experience and area of expertise of a clinician, and human error cannot be neglected. On the other hand, the objective or automatic approach is noninvasive. Automatically developed systems can provide complementary information that may be helpful for a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas where a general practitioner can use them and may refer the patient to a specialist to avoid complications that may be life threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features such as the linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCC was developed, and three different voice disorder databases were used in this study. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database. The detection rate for the intra-database experiments ranges from 72% to 95%, and that for the inter-database experiments from 47% to 82%. The results conclude that conventional speech features are not correlated with voice quality, and hence are not reliable in pathology detection. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
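A minimal sketch of an MFCC-based detector of the kind evaluated in this study is given below; librosa and scikit-learn are assumed, and the frame-averaging and SVM choices are illustrative rather than the authors' configuration.

```python
# Illustrative MFCC-based voice pathology detector: per-recording MFCC summary
# statistics fed to an SVM. Feature and classifier choices are assumptions.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_summary(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=None)                  # keep native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# X = np.stack([mfcc_summary(path) for path in recordings])
# clf = SVC(kernel="rbf").fit(X, labels)   # labels: 0 = normal, 1 = pathological
```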
Five Librarians Talk about Quality Control and the OCLC Database.
ERIC Educational Resources Information Center
Helge, Brian; And Others
1987-01-01
Five librarians considered authorities on quality cataloging in the OCLC Online Union Catalog were interviewed to obtain their views on the current level of quality control in the OCLC database, the responsibilities of OCLC and individual libraries in improving the quality of records, and the consequences of quality control problems. (CLB)
Centralized Data Management in a Multicountry, Multisite Population-based Study.
Rahman, Qazi Sadeq-ur; Islam, Mohammad Shahidul; Hossain, Belal; Hossain, Tanvir; Connor, Nicholas E; Jaman, Md Jahiduj; Rahman, Md Mahmudur; Ahmed, A S M Nawshad Uddin; Ahmed, Imran; Ali, Murtaza; Moin, Syed Mamun Ibne; Mullany, Luke; Saha, Samir K; El Arifeen, Shams
2016-05-01
A centralized data management system was developed for data collection and processing for the Aetiology of Neonatal Infection in South Asia (ANISA) study. ANISA is a longitudinal cohort study involving neonatal infection surveillance and etiology detection in multiple sites in South Asia. The primary goal of designing such a system was to collect and store data from different sites in a standardized way to pool the data for analysis. We designed the data management system centrally and implemented it to enable data entry at individual sites. This system uses validation rules and audits that reduce errors. The study sites employ a dual data entry method to minimize keystroke errors. They upload collected data weekly to a central server via the internet to create a pooled central database. Any inconsistent data identified in the central database are flagged and corrected after discussion with the relevant site. The ANISA Data Coordination Centre in Dhaka provides technical support for operating, maintaining and updating the data management system centrally. Password-protected login identifications and audit trails are maintained for the management system to ensure the integrity and safety of stored data. Centralized management of the ANISA database makes it possible to use common data capture forms (DCFs), adapted to site-specific contextual requirements. DCFs and data entry interfaces allow on-site data entry. This reduces the workload, as DCFs do not need to be shipped to a single location for entry. It also improves data quality, as all collected data from ANISA go through the same quality check and cleaning process.
Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-01-01
Objectives To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Materials and methods Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Results Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. Discussion The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria applied through the protocol, and identified differences in patient characteristics and coding practices across databases. Conclusion Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. PMID:25670757
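The mapping rates quoted above can be checked with a simple query against a CDM instance; the sketch below counts the fraction of condition records whose condition_concept_id is non-zero, following the common OMOP convention that a concept id of 0 denotes an unmapped record. The SQLite wrapper and query are illustrative, not part of the OMOP tooling.

```python
# Hedged sketch: percentage of condition records mapped to a standard concept
# in an OMOP CDM database (concept_id 0 is conventionally "unmapped").
import sqlite3

MAPPED_FRACTION_SQL = """
SELECT 100.0 * SUM(CASE WHEN condition_concept_id <> 0 THEN 1 ELSE 0 END) / COUNT(*)
FROM condition_occurrence;
"""

def mapped_condition_fraction(db_path):
    with sqlite3.connect(db_path) as conn:
        (pct,) = conn.execute(MAPPED_FRACTION_SQL).fetchone()
    return pct   # the study reports 96% to 99% across its six databases
```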
National Urban Database and Access Portal Tool
Based on the need for advanced treatments of high resolution urban morphological features (e.g., buildings, trees) in meteorological, dispersion, air quality and human exposure modeling systems for future urban applications, a new project was launched called the National Urban Da...
Rationale and operational plan to upgrade the U.S. gravity database
Hildenbrand, Thomas G.; Briesacher, Allen; Flanagan, Guy; Hinze, William J.; Hittelman, A.M.; Keller, Gordon R.; Kucks, R.P.; Plouff, Donald; Roest, Walter; Seeley, John; Stith, David A.; Webring, Mike
2002-01-01
A concerted effort is underway to prepare a substantially upgraded digital gravity anomaly database for the United States and to make this data set and associated usage tools available on the internet. This joint effort, spearheaded by the geophysics groups at the National Imagery and Mapping Agency (NIMA), University of Texas at El Paso (UTEP), U.S. Geological Survey (USGS), and National Oceanic and Atmospheric Administration (NOAA), is an outgrowth of the new geoscientific community initiative called Geoinformatics (www.geoinformaticsnetwork.org). This dominantly geospatial initiative reflects the realization by Earth scientists that existing information systems and techniques are inadequate to address the many complex scientific and societal issues. Currently, inadequate standardization and chaotic distribution of geoscience data, inadequate accompanying documentation, and the lack of easy-to-use access tools and computer codes for analysis are major obstacles for scientists, government agencies, and educators. An example of the type of activities envisioned, within the context of Geoinformatics, is the construction, maintenance, and growth of a public domain gravity database and development of the software tools needed to access, implement, and expand it. This product is far more than a high quality database; it is a complete data system for a specific type of geophysical measurement that includes, for example, tools to manipulate the data and tutorials to understand and properly utilize the data. On August 9, 2002, twenty-one scientists from the federal, private and academic sectors met at a workshop to discuss the rationale for upgrading both the United States and North American gravity databases (including offshore regions) and, more importantly, to begin developing an operational plan to effectively create a new gravity data system. We encourage anyone interested in contributing data or participating in this effort to contact G.R. Keller or T.G. Hildenbrand. This workshop was the first step in building a web-based data system for sharing quality gravity data and methodology, and it builds on existing collaborative efforts. This compilation effort will result in significant additions to and major refinement of the U.S. database that is currently released publicly by NOAA’s National Geophysical Data Center and will also include an additional objective to substantially upgrade the North American database, released over 15 years ago (Committee for the Gravity Anomaly Map of North America, 1987).
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment to be performed. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; this means that participants do not need to install any application and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test, using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.
Sollie, Annet; Sijmons, Rolf H; Helsper, Charles; Numans, Mattijs E
2017-03-01
To assess the quality and reusability of coded cancer diagnoses in routine primary care data, and to identify factors that influence data quality and areas for improvement. A dynamic cohort study in a Dutch network database containing 250,000 anonymized electronic medical records (EMRs) from 52 general practices was performed. Coded data from 2000 to 2011 for the three most common cancer types (breast, colon and prostate cancer) were compared to the Netherlands Cancer Registry. Data quality is expressed in Standard Incidence Ratios (SIRs): the ratio between the number of coded cases observed in the primary care network database and the expected number of cases based on the Netherlands Cancer Registry. Ratios were multiplied by 100% for readability. The overall SIR was 91.5% (95%CI 88.5-94.5) and showed improvement over the years. SIRs differ between cancer types: from 71.5% for colon cancer in males to 103.9% for breast cancer. There are differences in data quality (SIRs 76.2%-99.7%) depending on the EMR system used, with SIRs up to 232.9% for breast cancer. Frequently observed errors in routine healthcare data can be classified as: lack of integrity checks, inaccurate use and/or lack of codes, and lack of EMR system functionality. Re-users of coded routine primary care EMR data should be aware that 30% of cancer cases can be missed. Up to 130% of cancer cases found in the EMR data can be false positives. The type of EMR system and the type of cancer influence the quality of the coded diagnosis registry. While data quality can be improved (e.g. through better system design and by training EMR system users), re-use should only be undertaken by appropriately trained experts. Copyright © 2016. Published by Elsevier B.V.
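The SIR used here is simple arithmetic: the observed number of coded cases divided by the number expected from the Netherlands Cancer Registry, multiplied by 100%. A worked example with made-up counts:

```python
# Standard incidence ratio as defined above; the case counts are illustrative.
def sir_percent(observed_cases, expected_cases):
    return 100.0 * observed_cases / expected_cases

print(sir_percent(915, 1000))   # 91.5%  -> roughly 8.5% of cases missing from coded EMR data
print(sir_percent(1039, 1000))  # 103.9% -> more coded cases than expected (possible false positives)
```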
EuCliD (European Clinical Database): a database comparing different realities.
Marcelli, D; Kirchgessner, J; Amato, C; Steil, H; Mitteregger, A; Moscardò, V; Carioni, C; Orlandini, G; Gatti, E
2001-01-01
Quality and variability of dialysis practice are gaining more and more importance. Fresenius Medical Care (FMC), as a provider of dialysis, has the duty to continuously monitor and guarantee the quality of care delivered to patients treated in its European dialysis units. Accordingly, a new clinical database called EuCliD has been developed. It is a multilingual and fully codified database, using international standard coding tables as far as possible. EuCliD collects and handles sensitive medical patient data, fully assuring confidentiality. The infrastructure: a Domino server is installed in each country connected to EuCliD. All the centres belonging to a country are connected via modem to the country server. All the Domino servers are connected via a Wide Area Network to the headquarters server in Bad Homburg (Germany). Inside each country server, only anonymous data related to that particular country are available. The only place where all the anonymous data are available is the headquarters server. The data collection is strongly supported in each country by "key persons" with solid relationships with their respective national dialysis units. The quality of the data in EuCliD is ensured at different levels. At the end of January 2001, more than 11,000 patients treated in 135 centres located in 7 countries were already included in the system. FMC has put patient care at the centre of its activities for many years and is now able to provide transparency to the community (authorities, nephrologists, patients...), thus demonstrating the quality of the service.
Kappel, William M.; Sinclair, Gaylen J.; Reddy, James E.; Eckhardt, David A.; deVries, M. Peter; Phillips, Margaret E.
2012-01-01
U.S. Geological Survey (USGS) Data Rescue Program funds were used to recover data from paper records for 139 streamgages across central and western New York State; 6,133 different streamflow measurement forms, collected between 1970 and 1980, contained field water-quality measurements. The water-quality data were entered, reviewed, and uploaded into the USGS National Water Information System. In total, 4,285 unique site visits were added to the database. The new values represent baseline water quality from which to measure change and will allow comparison of water-quality change over the last 40 years and into the future. Specific conductance was one of the measured properties and represents a simple way to determine whether ambient inorganic water quality has been altered by anthropogenic (road salt runoff, wastewater discharges, or natural gas development) or natural sources. The objective of this report is to describe ambient specific conductance characteristics of surface water across the central and western part of New York. This report presents median specific conductance of stream discharge for the period 1970-80 and a description of the relation between specific conductance and concentrations of total dissolved solids (TDS) retrieved from the USGS National Water Information System (NWIS) database from 1955 to the present. The data descriptions provide a baseline of surface-water specific conductance data that can be used for comparison with current and future measurements in New York streams.
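One simple way to describe the relation between specific conductance and TDS mentioned above is a linear fit; the sketch below uses fabricated placeholder values purely to show the computation, not NWIS data, and the commonly cited slope range of roughly 0.55-0.75 is a general rule of thumb rather than a result of this report.

```python
# Illustrative linear fit of TDS (mg/L) against specific conductance (uS/cm);
# the sample values are placeholders, not measurements from the NWIS database.
import numpy as np

sc = np.array([120.0, 250.0, 410.0, 600.0, 880.0])   # field specific conductance
tds = np.array([75.0, 160.0, 255.0, 390.0, 560.0])   # lab total dissolved solids

slope, intercept = np.polyfit(sc, tds, 1)
print(f"TDS ~= {slope:.2f} * SC + {intercept:.1f}")   # slope commonly falls near 0.55-0.75
```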
The on-site quality-assurance system for Hyper Suprime-Cam: OSQAH
NASA Astrophysics Data System (ADS)
Furusawa, Hisanori; Koike, Michitaro; Takata, Tadafumi; Okura, Yuki; Miyatake, Hironao; Lupton, Robert H.; Bickerton, Steven; Price, Paul A.; Bosch, James; Yasuda, Naoki; Mineo, Sogo; Yamada, Yoshihiko; Miyazaki, Satoshi; Nakata, Fumiaki; Koshida, Shintaro; Komiyama, Yutaka; Utsumi, Yousuke; Kawanomoto, Satoshi; Jeschke, Eric; Noumaru, Junichi; Schubert, Kiaina; Iwata, Ikuru; Finet, Francois; Fujiyoshi, Takuya; Tajitsu, Akito; Terai, Tsuyoshi; Lee, Chien-Hsiu
2018-01-01
We have developed an automated quick data analysis system for data quality assurance (QA) for Hyper Suprime-Cam (HSC). The system was commissioned in 2012-2014, and has been offered for general observations, including the HSC Subaru Strategic Program, since 2014 March. The system provides observers with data quality information, such as seeing, sky background level, and sky transparency, based on quick analysis as data are acquired. Quick-look images and validation of image focus are also provided through an interactive web application. The system is responsible for the automatic extraction of QA information from acquired raw data into a database, to assist with observation planning, assess progress of all observing programs, and monitor long-term efficiency variations of the instrument and telescope. Enhancements of the system are being planned to facilitate final data analysis, to improve the HSC archive, and to provide legacy products for astronomical communities.
Mathis, Alexander; Depaquit, Jérôme; Dvořák, Vit; Tuten, Holly; Bañuls, Anne-Laure; Halada, Petr; Zapata, Sonia; Lehrter, Véronique; Hlavačková, Kristýna; Prudhomme, Jorian; Volf, Petr; Sereno, Denis; Kaufmann, Christian; Pflüger, Valentin; Schaffner, Francis
2015-05-10
Rapid, accurate and high-throughput identification of vector arthropods is of paramount importance in surveillance programmes that are becoming more common due to the changing geographic occurrence and extent of many arthropod-borne diseases. Protein profiling by MALDI-TOF mass spectrometry fulfils these requirements for identification, and reference databases have recently been established for several vector taxa, mostly with specimens from laboratory colonies. We established and validated a reference database containing 20 phlebotomine sand fly (Diptera: Psychodidae, Phlebotominae) species by using specimens from colonies or field collections that had been stored for various periods of time. Identical biomarker mass patterns ('superspectra') were obtained with colony- or field-derived specimens of the same species. In the validation study, high quality spectra (i.e. more than 30 evaluable masses) were obtained with all fresh insects from colonies, and with 55/59 insects deep-frozen (liquid nitrogen/-80 °C) for up to 25 years. In contrast, only 36/52 specimens stored in ethanol could be identified. This resulted in an overall sensitivity of 87% (140/161); specificity was 100%. Duration of storage impaired data counts in the high mass range, and thus cluster analyses of closely related specimens might reflect their storage conditions rather than phenotypic distinctness. A major drawback of MALDI-TOF MS is the restricted availability of in-house databases and the fact that mass spectrometers from 2 companies (Bruker, Shimadzu) are in wide use. We have analysed fingerprints of phlebotomine sand flies obtained by an automatic routine procedure on a Bruker instrument by using our database and the software established on a Shimadzu system. The sensitivity with 312 specimens from 8 sand fly species from laboratory colonies when evaluating only high quality spectra was 98.3%; the specificity was 100%. The corresponding diagnostic values with 55 field-collected specimens from 4 species were 94.7% and 97.4%, respectively. A centralized high-quality database (created by expert taxonomists and experienced users of mass spectrometers) that is easily amenable to customer-oriented identification services is a highly desirable resource. As shown in the present work, spectra obtained from different specimens with different instruments can be analysed using a centralized database, which should be available in the near future via an online platform in a cost-efficient manner.
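The quoted diagnostic values follow directly from the counts given above; for example, the overall sensitivity of 87% corresponds to 140 correctly identified specimens out of 161:

```python
# Sensitivity as quoted in the abstract (140 of 161 specimens identified).
def sensitivity_percent(true_positives, total_positives):
    return 100.0 * true_positives / total_positives

print(round(sensitivity_percent(140, 161)))  # 87
```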
A series of PDB related databases for everyday needs.
Joosten, Robbie P; te Beek, Tim A H; Krieger, Elmar; Hekkelman, Maarten L; Hooft, Rob W W; Schneider, Reinhard; Sander, Chris; Vriend, Gert
2011-01-01
The Protein Data Bank (PDB) is the world-wide repository of macromolecular structure information. We present a series of databases that run parallel to the PDB. Each database holds one entry, if possible, for each PDB entry. DSSP holds the secondary structure of the proteins. PDBREPORT holds reports on the structure quality and lists errors. HSSP holds a multiple sequence alignment for all proteins. The PDBFINDER holds easy to parse summaries of the PDB file content, augmented with essentials from the other systems. PDB_REDO holds re-refined, and often improved, copies of all structures solved by X-ray. WHY_NOT summarizes why certain files could not be produced. All these systems are updated weekly. The data sets can be used for the analysis of properties of protein structures in areas ranging from structural genomics, to cancer biology and protein design.
Calculation of Cost Effectiveness of Emission Control Systems
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
A digital library for medical imaging activities
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Furuie, Sérgio S.
2007-03-01
This work presents the development of an electronic infrastructure to make available a free, online, multipurpose and multimodality medical image database. The proposed infrastructure implements a distributed architecture for the medical image database, authoring tools, and a repository for multimedia documents. It also includes a peer-review model that assures the quality of the dataset. This public repository provides a single point of access for medical images and related information to facilitate retrieval tasks. The proposed approach has also been used as an electronic teaching system in Radiology.
NASA Technical Reports Server (NTRS)
Orcutt, John M.; Brenton, James C.
2016-01-01
An accurate database of meteorological data is essential for designing any aerospace vehicle and for preparing launch commit criteria. Meteorological instrumentation was recently placed on the three Lightning Protection System (LPS) towers at Kennedy Space Center (KSC) launch complex 39B (LC-39B), providing a unique meteorological dataset at the launch complex over an extensive altitude range. Data records of temperature, dew point, relative humidity, wind speed, and wind direction are produced at 40, 78, 116, and 139 m on each tower. The Marshall Space Flight Center Natural Environments Branch (EV44) received an archive that consists of one-minute averaged measurements for the period of record of January 2011 - April 2015. However, before the received database could be used, EV44 needed to remove any erroneous data from within the database through a comprehensive quality control (QC) process. The QC process applied to the LPS towers' meteorological data is similar to other QC processes developed by EV44, which were used in the creation of meteorological databases for other towers at KSC. The QC process utilized in this study has been modified specifically for use with the LPS tower database. The QC process first includes a check of each individual sensor. This check includes removing any unrealistic data and checking the temporal consistency of each variable. Next, data from all three sensors at each height are checked against each other, checked against climatology, and checked for sensors that erroneously report a constant value. Then, a vertical consistency check of each variable at each tower is completed. Last, the upwind sensor at each level is selected to minimize the influence of the towers and other structures at LC-39B on the measurements. The selection process for the upwind sensor implemented a study of tower-induced turbulence. This paper describes in detail the QC process, QC results, and the attributes of the LPS towers meteorological database.
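A hedged sketch of the first two QC steps described above (a gross range check and a minute-to-minute temporal-consistency check) applied to a temperature series is shown below; the thresholds are illustrative assumptions, not the EV44 limits.

```python
# Illustrative QC pass over a one-minute temperature series: flag values outside
# a plausible range and flag implausible minute-to-minute jumps. Thresholds are
# assumptions for demonstration only.
import pandas as pd

def qc_temperature(series, lo=-20.0, hi=45.0, max_step=5.0):
    """Return a flag per observation: ok, out_of_range, or temporal_inconsistency."""
    flags = pd.Series("ok", index=series.index)
    flags[(series < lo) | (series > hi)] = "out_of_range"
    jump = series.diff().abs()
    flags[jump > max_step] = "temporal_inconsistency"
    return flags

# temps = pd.Series(values, index=minute_timestamps)
# print(qc_temperature(temps).value_counts())
```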
The European general thoracic surgery database project.
Falcoz, Pierre Emmanuel; Brunelli, Alessandro
2014-05-01
The European Society of Thoracic Surgeons (ESTS) Database is a free registry created by ESTS in 2001. The current online version was launched in 2007. It runs currently on a Dendrite platform with extensive data security and frequent backups. The main features are a specialty-specific, procedure-specific, prospectively maintained, periodically audited and web-based electronic database, designed for quality control and performance monitoring, which allows for the collection of all general thoracic procedures. Data collection is the "backbone" of the ESTS database. It includes many risk factors, processes of care and outcomes, which are specially designed for quality control and performance audit. The user can download and export their own data and use them for internal analyses and quality control audits. The ESTS database represents the gold standard of clinical data collection for European General Thoracic Surgery. Over the past years, the ESTS database has achieved many accomplishments. In particular, the database hit two major milestones: it now includes more than 235 participating centers and 70,000 surgical procedures. The ESTS database is a snapshot of surgical practice that aims at improving patient care. In other words, data capture should become integral to routine patient care, with the final objective of improving quality of care within Europe.
"TPSX: Thermal Protection System Expert and Material Property Database"
NASA Technical Reports Server (NTRS)
Squire, Thomas H.; Milos, Frank S.; Rasky, Daniel J. (Technical Monitor)
1997-01-01
The Thermal Protection Branch at NASA Ames Research Center has developed a computer program for storing, organizing, and accessing information about thermal protection materials. The program, called Thermal Protection Systems Expert and Material Property Database, or TPSX, is available for the Microsoft Windows operating system. An "on-line" version is also accessible on the World Wide Web. TPSX is designed to be a high-quality source for TPS material properties presented in a convenient, easily accessible form for use by engineers and researchers in the field of high-speed vehicle design. Data can be displayed and printed in several formats. An information window displays a brief description of the material with properties at standard pressure and temperature. A spreadsheet window displays complete, detailed property information. Properties which are a function of temperature and/or pressure can be displayed as graphs. In any display the data can be converted from English to SI units with the click of a button. Two material databases included with TPSX are: 1) materials used and/or developed by the Thermal Protection Branch at NASA Ames Research Center, and 2) a database compiled by NASA Johnson Space Center (JSC). The Ames database contains over 60 advanced TPS materials including flexible blankets, rigid ceramic tiles, and ultra-high temperature ceramics. The JSC database contains over 130 insulative and structural materials. The Ames database is periodically updated and expanded as required to include newly developed materials and material property refinements.
ERIC Educational Resources Information Center
Yuki, Takako; Kameyama, Yuriko
2013-01-01
This paper looks at the issue of the quality of education in Yemen. It uses micro-data from TIMSS and from surveys conducted in underserved rural areas, as well as macro-level policy information from the System Assessment for Better Education Results (SABER) database. The analysis indicates that the availability of teachers and resources at…
National Water Quality Standards Database (NWQSD)
The National Water Quality Standards Database (WQSDB) provides access to EPA and state water quality standards (WQS) information in text, tables, and maps. This data source was last updated in December 2007 and will no longer be updated.
A privacy-preserved analytical method for ehealth database with minimized information loss.
Chen, Ya-Ling; Cheng, Bo-Chao; Chen, Hsueh-Lin; Lin, Chia-I; Liao, Guo-Tan; Hou, Bo-Yu; Hsu, Shih-Chun
2012-01-01
Digitizing medical information is an emerging trend that employs information and communication technology (ICT) to manage health records, diagnostic reports, and other medical data more effectively, in order to improve the overall quality of medical services. However, medical information is highly confidential and involves private information; even legitimate access to data raises privacy concerns. Medical records provide health information on an as-needed basis for diagnosis and treatment, and the information is also important for medical research and other health management applications. Traditional privacy risk management systems have focused on reducing re-identification risk, and they do not consider information loss. In addition, such systems cannot identify and isolate data that carries a high risk of privacy violations. This paper proposes the Hiatus Tailor (HT) system, which ensures low re-identification risk for medical records, while providing more authenticated information to database users and identifying high-risk data in the database for better system management. The experimental results demonstrate that the HT system achieves much lower information loss than traditional risk management methods, with the same risk of re-identification.
Pan, Shiyang; Mu, Yuan; Wang, Hong; Wang, Tong; Huang, Peijun; Ma, Jianfeng; Jiang, Li; Zhang, Jie; Gu, Bing; Yi, Lujiang
2010-04-01
To meet the need to manage medical case information and biospecimens simultaneously, we developed a novel medical case information system integrated with biospecimen management. The database, established with MS SQL Server 2000, covered basic information, clinical diagnosis, imaging diagnosis, pathological diagnosis and clinical treatment of the patient; physicochemical properties, inventory management and laboratory analysis of biospecimens; and user logs and data maintenance. The client application, developed with Visual C++ 6.0, was used to implement medical case and biospecimen management and was based on a Client/Server model. This system can perform input, browsing, querying and summarizing of cases and related biospecimen information, and can automatically synthesize case records based on the database. The system supports management not only of long-term follow-up of individuals, but also of grouped cases organized according to the aims of research. This system can improve the efficiency and quality of clinical research when biospecimens are used in a coordinated way. It realizes synthesized and dynamic management of medical cases and biospecimens, which may be considered a new management platform.
The Structural Ceramics Database: Technical Foundations
Munro, R. G.; Hwang, F. Y.; Hubbard, C. R.
1989-01-01
The development of a computerized database on advanced structural ceramics can play a critical role in fostering the widespread use of ceramics in industry and in advanced technologies. A computerized database may be the most effective means of accelerating technology development by enabling new materials to be incorporated into designs far more rapidly than would have been possible with traditional information transfer processes. Faster, more efficient access to critical data is the basis for creating this technological advantage. Further, a computerized database provides the means for a more consistent treatment of data, greater quality control and product reliability, and improved continuity of research and development programs. A preliminary system has been completed as phase one of an ongoing program to establish the Structural Ceramics Database system. The system is designed to be used on personal computers. Developed in a modular design, the preliminary system is focused on the thermal properties of monolithic ceramics. The initial modules consist of materials specification, thermal expansion, thermal conductivity, thermal diffusivity, specific heat, thermal shock resistance, and a bibliography of data references. Query and output programs also have been developed for use with these modules. The latter program elements, along with the database modules, will be subjected to several stages of testing and refinement in the second phase of this effort. The goal of the refinement process will be the establishment of this system as a user-friendly prototype. Three primary considerations provide the guidelines to the system’s development: (1) The user’s needs; (2) The nature of materials properties; and (3) The requirements of the programming language. The present report discusses the manner and rationale by which each of these considerations leads to specific features in the design of the system. PMID:28053397
What's in Your Techno-Future? Vendors Share Their Views.
ERIC Educational Resources Information Center
Gerber, Carole
1995-01-01
Examines vendors' views on the future of CD-ROM technology. Topics include the library role, single point access, costs, tape backup, user-friendly library automation systems and databases, improved quality, the growth of Internet access, and perspectives on technology in schools. (AEF)
SPECIATE--EPA'S DATABASE OF SPECIATED EMISSION PROFILES
SPECIATE is EPA's repository of Total Organic Compound and Particulate Matter speciated profiles for a wide variety of sources. The profiles in this system are provided for air quality dispersion modeling and as a library for source-receptor and source apportionment type models. ...
Bohl, Daniel D; Russo, Glenn S; Basques, Bryce A; Golinvaux, Nicholas S; Fu, Michael C; Long, William D; Grauer, Jonathan N
2014-12-03
There has been an increasing use of national databases to conduct orthopaedic research. Questions regarding the validity and consistency of these studies have not been fully addressed. The purpose of this study was to test for similarity in reported measures between two national databases commonly used for orthopaedic research. A retrospective cohort study of patients undergoing lumbar spinal fusion procedures during 2009 to 2011 was performed in two national databases: the Nationwide Inpatient Sample and the National Surgical Quality Improvement Program. Demographic characteristics, comorbidities, and inpatient adverse events were directly compared between databases. The total numbers of patients included were 144,098 from the Nationwide Inpatient Sample and 8434 from the National Surgical Quality Improvement Program. There were only small differences in demographic characteristics between the two databases. There were large differences between databases in the rates at which specific comorbidities were documented. Non-morbid obesity was documented at rates of 9.33% in the Nationwide Inpatient Sample and 36.93% in the National Surgical Quality Improvement Program (relative risk, 0.25; p < 0.05). Peripheral vascular disease was documented at rates of 2.35% in the Nationwide Inpatient Sample and 0.60% in the National Surgical Quality Improvement Program (relative risk, 3.89; p < 0.05). Similarly, there were large differences between databases in the rates at which specific inpatient adverse events were documented. Sepsis was documented at rates of 0.38% in the Nationwide Inpatient Sample and 0.81% in the National Surgical Quality Improvement Program (relative risk, 0.47; p < 0.05). Acute kidney injury was documented at rates of 1.79% in the Nationwide Inpatient Sample and 0.21% in the National Surgical Quality Improvement Program (relative risk, 8.54; p < 0.05). As database studies become more prevalent in orthopaedic surgery, authors, reviewers, and readers should view these studies with caution. This study shows that two commonly used databases can identify demographically similar patients undergoing a common orthopaedic procedure; however, the databases document markedly different rates of comorbidities and inpatient adverse events. The differences are likely the result of the very different mechanisms through which the databases collect their comorbidity and adverse event data. Findings highlight concerns regarding the validity of orthopaedic database research. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
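The relative risks quoted above are simply the ratio of the documented rates in the two databases; for example, using the non-morbid obesity and acute kidney injury rates:

```python
# Between-database relative risk from the documented rates (percent) above.
def relative_risk(rate_db1, rate_db2):
    return rate_db1 / rate_db2

print(round(relative_risk(9.33, 36.93), 2))  # 0.25, matching the reported value for obesity
print(round(relative_risk(1.79, 0.21), 2))   # 8.52, close to the reported 8.54 for acute kidney injury
```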
Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient.
Brossier, David; El Taani, Redha; Sauthier, Michael; Roumeliotis, Nadia; Emeriaud, Guillaume; Jouvet, Philippe
2018-04-01
Our objective was to construct a prospective high-quality and high-frequency database combining patient therapeutics and clinical variables in real time, automatically fed by the information system and network architecture available through fully electronic charting in our PICU. The purpose of this article is to describe the data acquisition process from bedside to the research electronic database. Descriptive report and analysis of a prospective database. A 24-bed PICU, medical ICU, surgical ICU, and cardiac ICU in a tertiary care free-standing maternal child health center in Canada. All patients less than 18 years old were included at admission to the PICU. None. Between May 21, 2015, and December 31, 2016, 1,386 consecutive PICU stays from 1,194 patients were recorded in the database. Data were prospectively collected from admission to discharge, every 5 seconds from monitors and every 30 seconds from mechanical ventilators and infusion pumps. These data were linked to the patient's electronic medical record. The database total volume was 241 GB. The patients' median age was 2.0 years (interquartile range, 0.0-9.0). Data were available for all mechanically ventilated patients (n = 511; recorded duration, 77,678 hr), and respiratory failure was the most frequent reason for admission (n = 360). The complete pharmacologic profile was synched to database for all PICU stays. Following this implementation, a validation phase is in process and several research projects are ongoing using this high-fidelity database. Using the existing bedside information system and network architecture of our PICU, we implemented an ongoing high-fidelity prospectively collected electronic database, preventing the continuous loss of scientific information. This offers the opportunity to develop research on clinical decision support systems and computational models of cardiorespiratory physiology for example.
AstroCloud, a Cyber-Infrastructure for Astronomy Research: Data Archiving and Quality Control
NASA Astrophysics Data System (ADS)
He, B.; Cui, C.; Fan, D.; Li, C.; Xiao, J.; Yu, C.; Wang, C.; Cao, Z.; Chen, J.; Yi, W.; Li, S.; Mi, L.; Yang, S.
2015-09-01
AstroCloud is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO) under funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences) (Cui et al. 2014). To archive astronomical data in China, we present the implementation of the astronomical data archiving system (ADAS). Data archiving and quality control are the infrastructure for AstroCloud. Throughout the entire data life cycle, the data archiving system standardizes data, transfers data, logs observational data, archives ambient data, and stores these data and metadata in a database. Quality control covers the whole process and all aspects of data archiving.
NASA Astrophysics Data System (ADS)
Jacquinet-Husson, N.; Lmd Team
The GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database system, in its former 1997 and 2001 versions, was updated in 2003 (GEISA-03). It has been developed by the ARA (Atmospheric Radiation Analysis) group at LMD (Laboratoire de Météorologie Dynamique, France) since 1974. This early effort implemented the so-called "line-by-line and layer-by-layer" approach for forward radiative transfer modelling. The GEISA 2003 system comprises three databases with their associated management software: a database of spectroscopic parameters required to adequately describe the individual spectral lines belonging to 42 molecules (96 isotopic species) and located in a spectral range from the microwave to the limit of the visible. The featured molecules are of interest in studies of the terrestrial as well as the other planetary atmospheres, especially those of the Giant Planets. A database of absorption cross-sections of molecules such as chlorofluorocarbons which exhibit unresolvable spectra. A database of refractive indices of basic atmospheric aerosol components. Illustrations will be given of the GEISA-03 data archiving method, contents, management software and Web access facilities at: http://ara.lmd.polytechnique.fr The performance of instruments like AIRS (Atmospheric Infrared Sounder; http://www-airs.jpl.nasa.gov) in the USA, and IASI (Infrared Atmospheric Sounding Interferometer; http://smsc.cnes.fr/IASI/index.htm) in Europe, which have a better vertical resolution and accuracy compared to the presently existing satellite infrared vertical sounders, is directly related to the quality of the spectroscopic parameters of the optically active gases, since these are essential input to the forward models used to simulate recorded radiance spectra. For these upcoming atmospheric sounders, the so-called GEISA/IASI sub-database system has been elaborated from GEISA. Its content will be described as well. This work is ongoing, with the purpose of assessing the IASI measurement capabilities and the spectroscopic information quality, within the ISSWG (IASI Sounding Science Working Group), in the frame of the CNES (Centre National d'Etudes Spatiales, France)/EUMETSAT (EUropean organization for the exploitation of METeorological SATellites) Polar System (EPS) project, by simulating high resolution radiances and/or using experimental data. EUMETSAT will implement GEISA/IASI into the EPS ground segment. The IASI sounding spectroscopic data archive requirements will be discussed in the context of comparisons between recorded and calculated experimental spectra, using the ARA/4A forward line-by-line radiative transfer modelling code in its latest version.
Towards a Global Service Registry for the World-Wide LHC Computing Grid
NASA Astrophysics Data System (ADS)
Field, Laurence; Alandes Pradillo, Maria; Di Girolamo, Alessandro
2014-06-01
The World-Wide LHC Computing Grid encompasses a set of heterogeneous information systems; from central portals such as the Open Science Grid's Information Management System and the Grid Operations Centre Database, to the WLCG information system, where the information sources are the Grid services themselves. Providing a consistent view of the information, which involves synchronising all these information systems, is a challenging activity that has led the LHC virtual organisations to create their own configuration databases. This experience, whereby each virtual organisation's configuration database interfaces with multiple information systems, has resulted in the duplication of effort, especially relating to the use of manual checks for the handling of inconsistencies. The Global Service Registry aims to address this issue by providing a centralised service that aggregates information from multiple information systems. It shows both information on registered resources (i.e. what should be there) and available resources (i.e. what is there). The main purpose is to simplify the synchronisation of the virtual organisations' own configuration databases, which are used for job submission and data management, through the provision of a single interface for obtaining all the information. By centralising the information, automated consistency and validation checks can be performed to improve the overall quality of information provided. Although internally the GLUE 2.0 information model is used for the purpose of integration, the Global Service Registry is not dependent on any particular information model for ingestion or dissemination. The intention is to allow the virtual organisations' configuration databases to be decoupled from the underlying information systems in a transparent way and hence simplify any possible future migration due to the evolution of those systems. This paper presents the Global Service Registry architecture, its advantages compared to the current situation and how it can support the evolution of information systems.
Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M
2014-04-01
Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
U.S. Geological Survey coal quality (COALQUAL) database; version 2.0
Bragg, L.J.; Oman, J.K.; Tewalt, S.J.; Oman, C.L.; Rega, N.H.; Washington, P.M.; Finkelman, R.B.
1997-01-01
The USGS Coal Quality database is an interactive, computerized component of the NCRDS. It contains comprehensive analyses of more than 13,000 samples of coal and associated rocks from every major coal-bearing basin and coal bed in the U.S. The data in the coal quality database represent analyses of the coal as it exists in the ground. The data commonly are presented on an as-received whole-coal basis.
Brimhall, Bradley B; Hall, Timothy E; Walczak, Steven
2006-01-01
A hospital laboratory relational database, developed over eight years, has demonstrated significant cost savings and a substantial financial return on investment (ROI). In addition, the database has been used to measurably improve laboratory operations and the quality of patient care.
SU-E-T-186: Cloud-Based Quality Assurance Application for Linear Accelerator Commissioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, J
2015-06-15
Purpose: To identify anomalies and safety issues during data collection and modeling for treatment planning systems. Methods: A cloud-based quality assurance system (AQUIRE - Automated QUalIty REassurance) has been developed to allow the uploading and analysis of beam data acquired during the treatment planning system commissioning process. In addition to comparing and aggregating measured data, tools have also been developed to extract dose from the treatment planning system for end-to-end testing. A gamma index is performed on the data to give a dose difference and distance-to-agreement for validation that a beam model is generating plans consistent with the collected beam data. Results: Over 20 linear accelerators have been commissioned using this platform, and a variety of errors and potential safety issues have been caught through the validation process. For example, a gamma index of 2% dose, 2 mm DTA is quite sufficient to detect curves not corrected for effective point of measurement. Also, data imported into the database are analyzed against an aggregate of similar linear accelerators to show data points that are outliers. The resulting curves in the database exhibit a very small standard deviation, implying that a preconfigured beam model based on aggregated linear accelerators will be sufficient in most cases. Conclusion: With the use of this new platform for beam data commissioning, errors in beam data collection and treatment planning system modeling are greatly reduced. With the reduction in errors during acquisition, the resulting beam models are quite similar, suggesting that a common beam model may be possible in the future. Development is ongoing to create routine quality assurance tools for comparison back to the beam data acquired during commissioning. I am a medical physicist for Alzyen Medical Physics, and perform commissioning services.
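For illustration, a simplified one-dimensional gamma evaluation at the 2% / 2 mm criteria mentioned above could look like the sketch below; this is not the AQUIRE implementation, just the standard gamma formulation applied to a depth-dose curve.

```python
# Simplified 1-D gamma index (2% dose difference, 2 mm distance-to-agreement)
# comparing a measured curve against a TPS-calculated one. Illustrative only.
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_crit=0.02, dta_mm=2.0):
    """Return the gamma value at each reference point (a point passes if gamma <= 1)."""
    ref_dose = np.asarray(ref_dose, float)
    eval_dose = np.asarray(eval_dose, float)
    eval_pos = np.asarray(eval_pos, float)
    dmax = ref_dose.max()
    gammas = []
    for p, d in zip(ref_pos, ref_dose):
        dd = (eval_dose - d) / (dose_crit * dmax)   # normalised dose difference
        dr = (eval_pos - p) / dta_mm                # normalised spatial distance
        gammas.append(np.sqrt(dd ** 2 + dr ** 2).min())
    return np.array(gammas)

# pass_rate = (gamma_1d(depths, measured, depths, calculated) <= 1.0).mean()
```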
Polarization transformation as an algorithm for automatic generalization and quality assessment
NASA Astrophysics Data System (ADS)
Qian, Haizhong; Meng, Liqiu
2007-06-01
For decades it has been a dream of cartographers to computationally mimic the generalization processes in human brains for the derivation of various small-scale target maps or databases from a large-scale source map or database. This paper addresses in a systematic way the polarization transformation (PT) - a new algorithm that serves both the purpose of automatic generalization of discrete features and that of quality assurance. By means of PT, two-dimensional point clusters or line networks in the Cartesian system can be transformed into a polar coordinate system, which then can be unfolded as a single spectrum line r = f(α), where r and α stand for the polar radius and the polar angle respectively. After the transformation, the original features will correspond to nodes on the spectrum line delimited between 0° and 360° along the horizontal axis, and between the minimum and maximum polar radius along the vertical axis. Since PT is a lossless transformation, it allows a straightforward analysis and comparison of the original and generalized distributions; thus automatic generalization and quality assurance can be done in this way. Examples illustrate that the PT algorithm meets the requirements of generalization of discrete spatial features and is more scientific.
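The core of the transformation is easy to sketch: shift the points to a chosen pole, convert to polar coordinates, and order by polar angle to obtain the spectrum line r = f(α). The sketch below assumes the cluster centroid as the pole and uses made-up points; it illustrates the idea rather than the authors' implementation.

```python
import numpy as np

def polarization_transform(points, pole=None):
    """Map a 2-D point cluster to its spectrum line r = f(alpha).
    Returns polar angles (degrees, 0-360) and radii, sorted by angle."""
    pts = np.asarray(points, dtype=float)
    if pole is None:
        pole = pts.mean(axis=0)      # centroid as the pole (an assumption)
    dx, dy = (pts - pole).T
    r = np.hypot(dx, dy)
    alpha = np.degrees(np.arctan2(dy, dx)) % 360.0
    order = np.argsort(alpha)
    return alpha[order], r[order]

# Illustrative point cluster.
cluster = [(2, 1), (4, 3), (1, 5), (6, 2), (3, 6)]
alpha, r = polarization_transform(cluster)
for a, radius in zip(alpha, r):
    print(f"alpha = {a:6.1f} deg, r = {radius:5.2f}")
```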
Key findings from the intelligent transportation systems (ITS) program, what have we learned?
DOT National Transportation Integrated Search
2000-06-01
A study has been conducted to evaluate the quality and variability of the International Roughness Index (IRI) data in the Long Term Pavement Performance (LTPP) database. All LTPP profiles collected between June 1989 and October 1997 were visually rev...
Indyk, Leonard; Indyk, Debbie
2006-01-01
For the past 14 years, a team of applied social scientists and system analysts has worked with a wide variety of Community-Based Organizations (CBOs), other grassroots agencies and networks, and Medical Center departments to support resource, program, staff and data development and evaluation for hospital- and community-based programs and agencies serving HIV at-risk and affected populations. A by-product of this work has been the development, elaboration and refinement of an approach to Continuous Quality Improvement (CQI) which is appropriate for diverse community-based providers and agencies. A key component of our CQI system involves the installation of a sophisticated relational database management and reporting system (DBMS) which is used to collect, analyze, and report data in an iterative process to provide feedback among the evaluators, agency administration and staff. The database system is designed for two purposes: (1) to support the agency's administrative internal and external reporting requirements; (2) to support the development of practice-driven health services and early intervention research. The body of work has fostered a unique opportunity for the development of exploratory service-driven research which serves both administrative and research needs.
Soranno, Patricia A; Bacon, Linda C; Beauchene, Michael; Bednar, Karen E; Bissell, Edward G; Boudreau, Claire K; Boyer, Marvin G; Bremigan, Mary T; Carpenter, Stephen R; Carr, Jamie W; Cheruvelil, Kendra S; Christel, Samuel T; Claucherty, Matt; Collins, Sarah M; Conroy, Joseph D; Downing, John A; Dukett, Jed; Fergus, C Emi; Filstrup, Christopher T; Funk, Clara; Gonzalez, Maria J; Green, Linda T; Gries, Corinna; Halfman, John D; Hamilton, Stephen K; Hanson, Paul C; Henry, Emily N; Herron, Elizabeth M; Hockings, Celeste; Jackson, James R; Jacobson-Hedin, Kari; Janus, Lorraine L; Jones, William W; Jones, John R; Keson, Caroline M; King, Katelyn B S; Kishbaugh, Scott A; Lapierre, Jean-Francois; Lathrop, Barbara; Latimore, Jo A; Lee, Yuehlin; Lottig, Noah R; Lynch, Jason A; Matthews, Leslie J; McDowell, William H; Moore, Karen E B; Neff, Brian P; Nelson, Sarah J; Oliver, Samantha K; Pace, Michael L; Pierson, Donald C; Poisson, Autumn C; Pollard, Amina I; Post, David M; Reyes, Paul O; Rosenberry, Donald O; Roy, Karen M; Rudstam, Lars G; Sarnelle, Orlando; Schuldt, Nancy J; Scott, Caren E; Skaff, Nicholas K; Smith, Nicole J; Spinelli, Nick R; Stachelek, Joseph J; Stanley, Emily H; Stoddard, John L; Stopyak, Scott B; Stow, Craig A; Tallant, Jason M; Tan, Pang-Ning; Thorpe, Anthony P; Vanni, Michael J; Wagner, Tyler; Watkins, Gretchen; Weathers, Kathleen C; Webster, Katherine E; White, Jeffrey D; Wilmes, Marcy K; Yuan, Shuai
2017-12-01
Understanding the factors that affect water quality and the ecological services provided by freshwater ecosystems is an urgent global environmental issue. Predicting how water quality will respond to global changes not only requires water quality data, but also information about the ecological context of individual water bodies across broad spatial extents. Because lake water quality is usually sampled in limited geographic regions, often for limited time periods, assessing the environmental controls of water quality requires compilation of many data sets across broad regions and across time into an integrated database. LAGOS-NE accomplishes this goal for lakes in the northeastern-most 17 US states. LAGOS-NE contains data for 51 101 lakes and reservoirs larger than 4 ha in 17 lake-rich US states. The database includes 3 data modules for: lake location and physical characteristics for all lakes; ecological context (i.e., the land use, geologic, climatic, and hydrologic setting of lakes) for all lakes; and in situ measurements of lake water quality for a subset of the lakes from the past 3 decades for approximately 2600-12 000 lakes depending on the variable. The database contains approximately 150 000 measures of total phosphorus, 200 000 measures of chlorophyll, and 900 000 measures of Secchi depth. The water quality data were compiled from 87 lake water quality data sets from federal, state, tribal, and non-profit agencies, university researchers, and citizen scientists. This database is one of the largest and most comprehensive databases of its type because it includes both in situ measurements and ecological context data. Because ecological context can be used to study a variety of other questions about lakes, streams, and wetlands, this database can also be used as the foundation for other studies of freshwaters at broad spatial and ecological scales. © The Author 2017. Published by Oxford University Press.
de Jonge, Linda; Garne, Ester; Gini, Rosa; Jordan, Susan E; Klungsoyr, Kari; Loane, Maria; Neville, Amanda J; Pierini, Anna; Puccini, Aurora; Thayer, Daniel S; Tucker, David; Vinkel Hansen, Anne; Bakker, Marian K
2015-11-01
Research on associations between medication use during pregnancy and congenital anomalies is important for assessing the safe use of a medicine in pregnancy. Congenital anomaly (CA) registries do not have optimal information on medicine exposure, in contrast to prescription databases. Linkage of prescription databases to CA registries is a potentially effective method of obtaining accurate information on medicine use in pregnancy and the risk of congenital anomalies. We linked data from primary care and prescription databases to five European Surveillance of Congenital Anomalies (EUROCAT) CA registries. The linkage was evaluated by looking at the linkage rate, characteristics of linked and non-linked cases, first-trimester exposure rates for six groups of medicines according to the prescription data and the information on medication use registered in the CA databases, and the agreement of exposure. Of the 52,619 cases registered in the CA databases, 26,552 could be linked. The linkage rate varied between registries over time and by type of birth. The first-trimester exposure rates and the agreement between the databases varied for the different medicine groups. Information on the use of anti-epileptic drugs and of insulins and analogues recorded by CA registries was of good quality. For selective serotonin reuptake inhibitors, anti-asthmatics, antibacterials for systemic use, and gonadotropins and other ovulation stimulants, the recorded information was less complete. Linkage of primary care or prescription databases to CA registries improved the quality of information on maternal use of medicines in pregnancy, especially for medicine groups that are less fully registered in CA registries.
Geologic Map Database of Texas
Stoeser, Douglas B.; Shock, Nancy; Green, Gregory N.; Dumonceaux, Gayle M.; Heran, William D.
2005-01-01
The purpose of this report is to release a digital geologic map database for the State of Texas. This database was compiled for the U.S. Geological Survey (USGS) Minerals Program, National Surveys and Analysis Project, whose goal is a nationwide assemblage of geologic, geochemical, geophysical, and other data. This release makes the geologic data from the Geologic Map of Texas available in digital format. Original clear film positives provided by the Texas Bureau of Economic Geology were photographically enlarged onto Mylar film. These films were scanned, georeferenced, digitized, and attributed by Geologic Data Systems (GDS), Inc., Denver, Colorado. Project oversight and quality control were the responsibility of the U.S. Geological Survey. ESRI ArcInfo coverages, AMLs, and shapefiles are provided.
Plantier, Morgane; Havet, Nathalie; Durand, Thierry; Caquot, Nicolas; Amaz, Camille; Biron, Pierre; Philip, Irène; Perrier, Lionel
2017-06-01
Electronic health records (EHR) are increasingly being adopted by healthcare systems worldwide. In France, the "Hôpital numérique 2012-2017" program was implemented as part of a strategic plan to modernize health information technology (HIT), including the promotion of widespread EHR use. With significant upfront investment costs as well as ongoing operational expenses, it is important to assess this system in terms of its ability to result in improvements in hospital performance. The aim of this study was to evaluate the impact of EHR use on the quality of care management in acute care hospitals throughout France. This retrospective study was based on data derived from three national databases for the year 2011: IPAQSS (indicators of improvement in the quality and the management of healthcare), Hospi-Diag (French hospital performance indicators), and the national accreditation database. Several multivariate models were used to examine the association between the use of EHRs and specific EHR features with four quality indicators: the quality of the patient record, the delay in sending information at hospital discharge, the pain status evaluation, and the nutritional status evaluation, while also adjusting for hospital characteristics. The models revealed a significant positive impact of EHR use on the four quality indicators. Additionally, they showed a differential impact according to the functionality of the element of the health record that was computerized. All four quality indicators were also impacted by the type of hospital, the geographical region, and the severity of the pathology. These results suggest that, to improve the quality of care management in hospitals, EHR adoption represents an important lever. They complement previous work dealing with EHR and the organizational performance of hospital surgical units. Copyright © 2017 Elsevier B.V. All rights reserved.
Data-base development for water-quality modeling of the Patuxent River basin, Maryland
Fisher, G.T.; Summers, R.M.
1987-01-01
Procedures and rationale used to develop a data base and data management system for the Patuxent Watershed Nonpoint Source Water Quality Monitoring and Modeling Program of the Maryland Department of the Environment and the U.S. Geological Survey are described. A detailed data base and data management system has been developed to facilitate modeling of the watershed for water quality planning purposes; statistical analysis; plotting of meteorologic, hydrologic, and water quality data; and geographic data analysis. The system is Maryland's prototype for development of a basinwide water quality management program. A key step in the program is to build a calibrated and verified water quality model of the basin using the Hydrological Simulation Program--FORTRAN (HSPF) hydrologic model, which has been used extensively in large-scale basin modeling. The compilation of the substantial existing data base for preliminary calibration of the basin model, including meteorologic, hydrologic, and water quality data from federal and state data bases and a geographic information system containing digital land use and soils data, is described. The data base development is significant in its application of an integrated, uniform approach to data base management and modeling.
Jacobs, Jeffrey P
2002-01-01
The field of congenital heart surgery has the opportunity to create the first comprehensive international database for a medical subspecialty. An understanding of the demographics of congenital heart disease and the rapid growth of computer technology leads to the realization that creating a comprehensive international database for pediatric cardiac surgery represents an important and achievable goal. The evolution of computer-based data analysis creates an opportunity to develop software to manage an international congenital heart surgery database and eventually become an electronic medical record. The same database data set for congenital heart surgery is now being used in Europe and North America. Additional work is under way to involve Africa, Asia, Australia, and South America. The almost simultaneous publication of the European Association for Cardio-thoracic Surgery/Society of Thoracic Surgeons coding system and the Association for European Paediatric Cardiology coding system resulted in the potential for multiple coding. Representatives of the Association for European Paediatric Cardiology, Society of Thoracic Surgeons, European Association for Cardio-thoracic Surgery, and European Congenital Heart Surgeons Foundation agree that these hierarchical systems are complementary and not competitive. An international committee will map the two systems. The ideal coding system will permit a diagnosis or procedure to be coded only one time with mapping allowing this code to be used for patient care, billing, practice management, teaching, research, and reporting to governmental agencies. The benefits of international data gathering and sharing are global, with the long-term goal of the continued upgrade in the quality of congenital heart surgery worldwide. Copyright 2002 by W.B. Saunders Company
Ben Ayed, Rayda; Ben Hassen, Hanen; Ennouri, Karim; Ben Marzoug, Riadh; Rebai, Ahmed
2016-01-01
Olive (Olea europaea), whose importance is mainly due to nutritional and health features, is one of the most economically significant oil-producing trees in the Mediterranean region. Unfortunately, the increasing market demand towards virgin olive oil could often result in its adulteration with less expensive oils, which is a serious problem for the public and quality control evaluators of virgin olive oil. Therefore, to avoid frauds, olive cultivar identification and virgin olive oil authentication have become a major issue for the producers and consumers of quality control in the olive chain. Presently, genetic traceability using SSR is the cost-effective and powerful marker technique that can be employed to resolve such problems. However, to identify an unknown monovarietal virgin olive oil cultivar, a reference system has become necessary. Thus, an Olive Genetic Diversity Database (OGDD) (http://www.bioinfo-cbs.org/ogdd/) is presented in this work. It is a genetic, morphologic and chemical database of worldwide olive trees and oils having a double function. In fact, besides being a reference system generated for the identification of unknown olive or virgin olive oil cultivars based on their microsatellite allele size(s), it provides users with additional morphological and chemical information for each identified cultivar. Currently, OGDD is designed to enable users to easily retrieve and visualize biologically important information (SSR markers, and olive tree and oil characteristics of about 200 cultivars worldwide) using a set of efficient query interfaces and analysis tools. It can be accessed through a web service from any modern programming language using a simple hypertext transfer protocol call. The web site is implemented in Java, JavaScript, PHP, HTML and Apache with all major browsers supported. Database URL: http://www.bioinfo-cbs.org/ogdd/
Development of a database for chemical mechanism assignments for volatile organic emissions.
Carter, William P L
2015-10-01
The development of a database for making model species assignments when preparing total organic gas (TOG) emissions input for atmospheric models is described. This database currently has assignments of model species for 12 different gas-phase chemical mechanisms for over 1700 chemical compounds and covers over 3000 chemical categories used in five different anthropogenic TOG profile databases or output by two different biogenic emissions models. This involved developing a unified chemical classification system, assigning compounds to mixtures, assigning model species for the mechanisms to the compounds, and making assignments for unknown, unassigned, and nonvolatile mass. The comprehensiveness of the assignments, the contributions of various types of speciation categories to current profile and total emissions data, inconsistencies with existing undocumented model species assignments, and remaining speciation issues and areas of needed work are also discussed. The use of the system to prepare input for SMOKE, the Speciation Tool, and for biogenic models is described in the supplementary materials. The database, associated programs and files, and a user's manual are available online at http://www.cert.ucr.edu/~carter/emitdb. Assigning air quality model species to the hundreds of emitted chemicals is a necessary link between emissions data and modeling the effects of emissions on air quality. This is not easy, and it makes it difficult to implement new and more chemically detailed mechanisms in models. If done incorrectly, the effect is similar to that of errors in emissions speciation or in the chemical mechanism used. Nevertheless, making such assignments is often an afterthought in chemical mechanism development and emissions processing, and existing assignments are usually undocumented and have errors and inconsistencies. This work is designed to address some of these problems.
Can use of an administrative database improve accuracy of hospital-reported readmission rates?
Edgerton, James R; Herbert, Morley A; Hamman, Baron L; Ring, W Steves
2018-05-01
Readmission rates after cardiac surgery are being used as a quality indicator; they are also being collected by Medicare and are tied to reimbursement. Accurate knowledge of readmission rates may be difficult to achieve because patients may be readmitted to different hospitals. In our area, 81 hospitals share administrative claims data; 28 of these hospitals (from 5 different hospital systems) do cardiac surgery and share Society of Thoracic Surgeons (STS) clinical data. We used these 2 sources to compare the readmissions data for accuracy. A total of 45,539 STS records from January 2008 to December 2016 were matched with the hospital billing data records. Using the index visit as the start date, the billing records were queried for any subsequent in-patient visits for that patient. The billing records included date of readmission and hospital of readmission data and were compared with the data captured in the STS record. We found 1153 (2.5%) patients who had STS records that were marked "No" or "missing," but there were billing records that showed a readmission. The reported STS readmission rate of 4796 (10.5%) underreported the readmission rate by 2.5 actual percentage points. The true rate should have been 13.0%. Actual readmission rate was 23.8% higher than reported by the clinical database. Approximately 36% of readmissions were to a hospital that was a part of a different hospital system. It is important to know accurate readmission rates for quality improvement processes and institutional financial planning. Matching patient records to an administrative database showed that the clinical database may fail to capture many readmissions. Combining data with an administrative database can enhance accuracy of reporting. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
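The record-matching step described above is essentially a join of each index operation to any later inpatient claim for the same patient within a follow-up window. A minimal pandas sketch under assumed column names and a 30-day window follows; the sample rows are illustrative only.

```python
import pandas as pd

# Hypothetical extracts: STS index operations and administrative inpatient claims.
sts = pd.DataFrame({
    "patient_id": [1, 2],
    "index_date": pd.to_datetime(["2016-03-01", "2016-04-10"]),
    "sts_readmit": ["No", "Yes"],
})
claims = pd.DataFrame({
    "patient_id": [1, 2, 2],
    "admit_date": pd.to_datetime(["2016-03-20", "2016-04-25", "2016-06-01"]),
    "hospital":   ["B", "A", "C"],
})

# Flag any inpatient claim within 30 days after the index visit.
merged = sts.merge(claims, on="patient_id", how="left")
window = (merged["admit_date"] > merged["index_date"]) & \
         (merged["admit_date"] <= merged["index_date"] + pd.Timedelta(days=30))
readmitted = merged[window].groupby("patient_id").size().rename("billing_readmits")

result = sts.set_index("patient_id").join(readmitted).fillna({"billing_readmits": 0})
print(result)   # patient 1 is a readmission missed by the clinical record
```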
Voss, Erica A; Makadia, Rupa; Matcho, Amy; Ma, Qianli; Knoll, Chris; Schuemie, Martijn; DeFalco, Frank J; Londhe, Ajit; Zhu, Vivienne; Ryan, Patrick B
2015-05-01
To evaluate the utility of applying the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) across multiple observational databases within an organization and to apply standardized analytics tools for conducting observational research. Six deidentified patient-level datasets were transformed to the OMOP CDM. We evaluated the extent of information loss that occurred through the standardization process. We developed a standardized analytic tool to replicate the cohort construction process from a published epidemiology protocol and applied the analysis to all 6 databases to assess time-to-execution and comparability of results. Transformation to the CDM resulted in minimal information loss across all 6 databases. Patients and observations excluded were due to identified data quality issues in the source system; 96% to 99% of condition records and 90% to 99% of drug records were successfully mapped into the CDM using the standard vocabulary. The full cohort replication and descriptive baseline summary was executed for 2 cohorts in 6 databases in less than 1 hour. The standardization process improved data quality, increased efficiency, and facilitated cross-database comparisons to support a more systematic approach to observational research. Comparisons across data sources showed consistency in the impact of inclusion criteria when using the protocol, and identified differences in patient characteristics and coding practices across databases. Standardizing data structure (through a CDM), content (through a standard vocabulary with source code mappings), and analytics can enable an institution to apply a network-based approach to observational research across multiple, disparate observational health databases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
A Solution on Identification and Rearing Files in Smallhold Pig Farming
NASA Astrophysics Data System (ADS)
Xiong, Benhai; Fu, Runting; Lin, Zhaohui; Luo, Qingyao; Yang, Liang
In order to meet government supervision of pork production safety as well as the consumer's right to know what they buy, this study adopts animal identification, mobile PDA readers, GPRS, and other information technologies; puts forward a data collection method for setting up rearing files of pigs in smallhold pig farming; designs the related metadata structures and a mobile database; and develops an embedded mobile PDA system to collect individual pig information and upload it into a remote central database, finally realizing mobile links to a specific website. The embedded PDA can read both the special pig bar-code ear tag appointed by the Ministry of Agriculture and a general Data Matrix bar-code ear tag designed in this study, and can record all kinds of input data, including bacterins, feed additives, animal drugs, and even some forbidden medicines, and submit them to the central database through GPRS. At the same time, the remote central database can be maintained by mobile PDA and GPRS, finally achieving pork tracking from origin to consumption as well as tracing in the reverse direction. This study suggests a feasible technical solution for setting up networked electronic rearing files for pigs in farmer-based smallhold pig farming, and the solution is proved practical through its application in the construction of the Tianjin pork quality traceability system. Although some individual techniques, such as the current GPRS transmission speed, have some adverse effects on system operation, these will be resolved with the development of communication technology. The full implementation of the solution around China will supply technical support in guaranteeing the quality and safety of pork production supervision and meeting consumer demand.
Risk model of valve surgery in Japan using the Japan Adult Cardiovascular Surgery Database.
Motomura, Noboru; Miyata, Hiroaki; Tsukihara, Hiroyuki; Takamoto, Shinichi
2010-11-01
Risk models of cardiac valve surgery using a large database are useful for improving surgical quality. In order to obtain accurate, high-quality assessments of surgical outcome, each geographic area should maintain its own database. The study aim was to collect Japanese data and to prepare a risk stratification of cardiac valve procedures, using the Japan Adult Cardiovascular Surgery Database (JACVSD). A total of 6562 valve procedure records from 97 participating sites throughout Japan was analyzed, using a data entry form with 255 variables that was sent to the JACVSD office from a web-based data collection system. The statistical model was constructed using multiple logistic regression. Model discrimination was tested using the area under the receiver operating characteristic curve (C-index). The model calibration was tested using the Hosmer-Lemeshow (H-L) test. Among 6562 operated cases, 15% had diabetes mellitus, 5% were urgent, and 12% involved preoperative renal failure. The observed 30-day and operative mortality rates were 2.9% and 4.0%, respectively. Significant variables with high odds ratios included emergent or salvage status (3.83), reoperation (3.43), and left ventricular dysfunction (3.01). The H-L test and C-index values for 30-day mortality were satisfactory (0.44 and 0.80, respectively). The results obtained in Japan were at least as good as those reported elsewhere. The performance of this risk model also matched that of the STS National Adult Cardiac Database and the European Society Database.
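The model-evaluation steps named above (discrimination by C-index, calibration by the Hosmer-Lemeshow test) can be sketched on simulated data. The sketch below is illustrative only: the predictor variables, coefficients, and decile grouping are assumptions, not the JACVSD model.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated preoperative risk factors (e.g., emergent status, reoperation,
# LV dysfunction) and 30-day mortality; values are illustrative only.
n = 2000
X = rng.integers(0, 2, size=(n, 3)).astype(float)
true_logit = -4.0 + 1.34 * X[:, 0] + 1.23 * X[:, 1] + 1.10 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

# Discrimination: C-index (area under the ROC curve).
print("C-index:", round(roc_auc_score(y, p), 3))

# Calibration: Hosmer-Lemeshow statistic over 10 equal-size risk groups.
order = np.argsort(p)
groups = np.empty(n, dtype=int)
groups[order] = np.arange(n) * 10 // n
hl = 0.0
for g in range(10):
    mask = groups == g
    observed = y[mask].sum()
    expected = p[mask].sum()
    n_g = mask.sum()
    hl += (observed - expected) ** 2 / (expected * (1.0 - expected / n_g) + 1e-12)
print("Hosmer-Lemeshow p-value:", round(float(1.0 - chi2.cdf(hl, df=8)), 3))
```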
Bordeianou, Liliana; Cauley, Christy E; Antonelli, Donna; Bird, Sarah; Rattner, David; Hutter, Matthew; Mahmood, Sadiqa; Schnipper, Deborah; Rubin, Marc; Bleday, Ronald; Kenney, Pardon; Berger, David
2017-01-01
Two systems measure surgical site infection rates following colorectal surgeries: the American College of Surgeons National Surgical Quality Improvement Program and the Centers for Disease Control and Prevention National Healthcare Safety Network. The Centers for Medicare & Medicaid Services pay-for-performance initiatives use National Healthcare Safety Network data for hospital comparisons. This study aimed to compare database concordance. This is a multi-institution cohort study of a systemwide Colorectal Surgery Collaborative. The National Surgical Quality Improvement Program requires rigorous, standardized data capture techniques; the National Healthcare Safety Network allows 5 data capture techniques. Standardized surgical site infection rates were compared between databases. The Cohen κ-coefficient was calculated. This study was conducted at Boston-area hospitals. National Healthcare Safety Network or National Surgical Quality Improvement Program patients undergoing colorectal surgery were included. Standardized surgical site infection rates were the primary outcomes of interest. Thirty-day surgical site infection rates were compared for 3547 (National Surgical Quality Improvement Program) vs 5179 (National Healthcare Safety Network) colorectal procedures (2012-2014). Discrepancies appeared: the National Surgical Quality Improvement Program database of hospital 1 (N = 1480 patients) routinely found surgical site infection rates of approximately 10%, with the rate routinely deemed "exemplary" or "as expected" (100%). National Healthcare Safety Network data from the same hospital and time period (N = 1881) revealed a similar overall surgical site infection rate (10%), but standardized rates were deemed "worse than national average" 80% of the time. Overall, hospitals using less rigorous capture methods had improved surgical site infection rates for the National Healthcare Safety Network compared with standardized National Surgical Quality Improvement Program reports. The correlation coefficient between standardized infection rates was 0.03 (p = 0.88). During 25 site-time period observations, National Surgical Quality Improvement Program and National Healthcare Safety Network data matched for 52% of observations (13/25). κ = 0.10 (95% CI, -0.1366 to 0.3402; p = 0.403), indicating poor agreement. This study investigated hospitals located in the Northeastern United States only. Variation in Centers for Medicare & Medicaid Services-mandated National Healthcare Safety Network infection surveillance methodology leads to unreliable results, which is apparent when these results are compared with standardized data. High-quality data would improve care quality and allow comparison of outcomes among institutions.
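The agreement statistic reported above is Cohen's kappa on paired site-period classifications. A minimal sketch using scikit-learn follows; the ratings are made-up values chosen to show how raw agreement and kappa can diverge, not the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical site-period classifications ("expected" vs "worse than expected")
# from the two surveillance systems; values are illustrative, not the study data.
nsqip = ["expected", "expected", "worse", "expected", "expected",
         "worse", "expected", "expected", "expected", "worse"]
nhsn  = ["worse", "expected", "expected", "worse", "expected",
         "worse", "worse", "expected", "worse", "expected"]

agreement = sum(a == b for a, b in zip(nsqip, nhsn)) / len(nsqip)
kappa = cohen_kappa_score(nsqip, nhsn)
print(f"raw agreement = {agreement:.0%}, Cohen kappa = {kappa:.2f}")
```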
ECG signal quality during arrhythmia and its application to false alarm reduction.
Behar, Joachim; Oster, Julien; Li, Qiao; Clifford, Gari D
2013-06-01
An automated algorithm to assess electrocardiogram (ECG) quality for both normal and abnormal rhythms is presented for false arrhythmia alarm suppression of intensive care unit (ICU) monitors. A particular focus is given to the quality assessment of a wide variety of arrhythmias. Data from three databases were used: the Physionet Challenge 2011 dataset, the MIT-BIH arrhythmia database, and the MIMIC II database. The quality of more than 33 000 single-lead 10 s ECG segments was manually assessed, and another 12 000 bad-quality single-lead ECG segments were generated using the Physionet noise stress test database. Signal quality indices (SQIs) were derived from the ECG segments and used as the inputs to a support vector machine classifier with a Gaussian kernel. This classifier was trained to estimate the quality of an ECG segment. Classification accuracies of up to 99% on the training and test set were obtained for normal sinus rhythm and up to 95% for arrhythmias, although performance varied greatly depending on the type of rhythm. Additionally, the association between 4050 ICU alarms from the MIMIC II database and the signal quality, as evaluated by the classifier, was studied. Results suggest that the SQIs should be rhythm specific and that the classifier should be trained for each rhythm call independently. This would require a substantially increased set of labeled data in order to train an accurate algorithm.
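The classifier design described above, signal quality indices fed to a support vector machine with a Gaussian (RBF) kernel, can be sketched as follows. The SQI features here are synthetic stand-ins, not the indices used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-ins for signal quality indices (e.g., kurtosis, spectral ratio,
# baseline power) on good- and bad-quality 10 s ECG segments; illustrative only.
n = 1000
good = rng.normal(loc=[5.0, 0.8, 0.1], scale=0.3, size=(n, 3))
bad  = rng.normal(loc=[3.0, 0.4, 0.6], scale=0.5, size=(n, 3))
X = np.vstack([good, bad])
y = np.r_[np.ones(n), np.zeros(n)]          # 1 = acceptable quality

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Support vector machine with a Gaussian (RBF) kernel, as in the study design.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```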
Methods for assessing the quality of data in public health information systems: a critical review.
Chen, Hong; Yu, Ping; Hailey, David; Wang, Ning
2014-01-01
The quality of data in public health information systems can be ensured by effective data quality assessment. In order to conduct effective data quality assessment, measurable data attributes have to be precisely defined. Then reliable and valid measurement methods for data attributes have to be used to measure each attribute. We conducted a systematic review of data quality assessment methods for public health using major databases and well-known institutional websites. 35 studies were eligible for inclusion in the study. A total of 49 attributes of data quality were identified from the literature. Completeness, accuracy and timeliness were the three most frequently assessed attributes of data quality. Most studies directly examined data values. This is complemented by exploring either data users' perception or documentation quality. However, there are limitations of current data quality assessment methods: a lack of consensus on attributes measured; inconsistent definition of the data quality attributes; a lack of mixed methods for assessing data quality; and inadequate attention to reliability and validity. Removal of these limitations is an opportunity for further improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, SP; Moore, JA; Hui, X
Purpose: Database dose predictions and a commercial autoplanning engine both improve treatment plan quality in different but complementary ways. The combination of these planning techniques is hypothesized to further improve plan quality. Methods: Four treatment plans were generated for each of 10 head and neck (HN) and 10 prostate cancer patients, including Plan-A: traditional IMRT optimization using clinically relevant default objectives; Plan-B: traditional IMRT optimization using database dose predictions; Plan-C: autoplanning using default objectives; and Plan-D: autoplanning using database dose predictions. One optimization was used for each planning method. Dose distributions were normalized to 95% of the planning target volume (prostate: 8000 cGy; HN: 7000 cGy). Objectives used in plan optimization and analysis were the larynx (25%, 50%, 90%), left and right parotid glands (50%, 85%), spinal cord (0%, 50%), rectum and bladder (0%, 20%, 50%, 80%), and left and right femoral heads (0%, 70%). Results: All objectives except larynx 25% and 50% resulted in statistically significant differences between plans (Friedman's χ² ≥ 11.2; p ≤ 0.011). Maximum dose to the rectum (Plans A-D: 8328, 8395, 8489, 8537 cGy) and bladder (Plans A-D: 8403, 8448, 8527, 8569 cGy) were significantly increased. All other significant differences reflected a decrease in dose. Plans B-D were significantly different from Plan-A for 3, 17, and 19 objectives, respectively. Plans C-D were also significantly different from Plan-B for 8 and 13 objectives, respectively. In one case (cord 50%), Plan-D provided significantly lower dose than Plan-C (p = 0.003). Conclusion: Combining database dose predictions with a commercial autoplanning engine resulted in significant plan quality differences for the greatest number of objectives. This translated to plan quality improvements in most cases, although special care may be needed for maximum dose constraints. Further evaluation is warranted in a larger cohort across HN, prostate, and other treatment sites. This work is supported by Philips Radiation Oncology Systems.
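The plan comparison above relies on Friedman's chi-square test across the four planning strategies. A minimal sketch with scipy follows; the per-patient dose values are simulated for illustration and do not reproduce the study data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(2)

# Hypothetical dose metric (e.g., a parotid dose-volume objective, in cGy) for
# 10 patients under the four planning strategies; the values are illustrative only.
plan_a = rng.normal(3000, 150, size=10)
plan_b = plan_a - rng.normal(100, 40, size=10)   # database predictions
plan_c = plan_a - rng.normal(250, 60, size=10)   # autoplanning
plan_d = plan_a - rng.normal(300, 60, size=10)   # both combined

stat, p = friedmanchisquare(plan_a, plan_b, plan_c, plan_d)
print(f"Friedman chi-square = {stat:.1f}, p = {p:.4f}")
```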
NASA Technical Reports Server (NTRS)
Mallasch, Paul G.
1993-01-01
This volume contains the complete software system documentation for the Federal Communications Commission (FCC) Transponder Loading Data Conversion Software (FIX-FCC). This software was written to facilitate the formatting and conversion of FCC Transponder Occupancy (Loading) Data before it is loaded into the NASA Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS). The information that FCC supplies NASA is in report form and must be converted into a form readable by the database management software used in the GSOSTATS application. Both the User's Guide and Software Maintenance Manual are contained in this document. This volume of documentation passed an independent quality assurance review and certification by the Product Assurance and Security Office of the Planning Research Corporation (PRC). The manuals were reviewed for format, content, and readability. The Software Management and Assurance Program (SMAP) life cycle and documentation standards were used in the development of this document. Accordingly, these standards were used in the review. Refer to the System/Software Test/Product Assurance Report for the Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS) for additional information.
Rosenbaum, Benjamin P; Silkin, Nikolay; Miller, Randolph A
2014-01-01
Real-time alerting systems typically warn providers about abnormal laboratory results or medication interactions. For more complex tasks, institutions create site-wide 'data warehouses' to support quality audits and longitudinal research. Sophisticated systems like i2b2 or Stanford's STRIDE utilize data warehouses to identify cohorts for research and quality monitoring. However, substantial resources are required to install and maintain such systems. For more modest goals, an organization desiring merely to identify patients with 'isolation' orders, or to determine patients' eligibility for clinical trials, may adopt a simpler, limited approach based on processing the output of one clinical system, and not a data warehouse. We describe a limited, order-entry-based, real-time 'pick off' tool, utilizing public domain software (PHP, MySQL). Through a web interface the tool assists users in constructing complex order-related queries and auto-generates corresponding database queries that can be executed at recurring intervals. We describe successful application of the tool for research and quality monitoring.
Powell, Kimberly R; Peterson, Shenita R
Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in journal coverage and quality. A comparative analysis of the reported impact of nursing publications was conducted. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated by each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus; Web of Science offered 82%. No significant difference was found in the quality of reported journals. Author h-index was found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
Value-based purchasing and hospital acquired conditions: are we seeing improvement?
Spaulding, Aaron; Zhao, Mei; Haley, D Rob
2014-12-01
To determine if the Value-Based Purchasing Performance Scoring system correlates with hospital acquired condition quality indicators. This study utilizes the following secondary data sources: the American Hospital Association (AHA) annual survey and the Centers for Medicare and Medicaid Services (CMS) Value-Based Purchasing and Hospital Acquired Conditions databases. Zero-inflated negative binomial regression was used to examine the effect of the CMS total performance score on counts of hospital acquired conditions. Hospital structure variables including size, ownership, teaching status, payer mix, case mix, and location were utilized as control variables. The secondary data sources were merged into a single database using Stata 10. Total performance scores, which are used to determine if hospitals should receive incentive money, do not correlate well with quality outcomes in the form of hospital acquired conditions. Value-based purchasing does not appear to correlate with improved quality and patient safety as indicated by Hospital Acquired Condition (HAC) scores. This leads us to believe that either the total performance score does not measure what it should, or the quality outcome measurements do not reflect the quality that the total performance score is meant to capture. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
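The zero-inflated negative binomial regression named above is available in statsmodels. The sketch below fits such a model to simulated hospital-level data; the covariates, coefficients, and zero-inflation structure are assumptions for illustration, not the study's specification.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(3)

# Simulated hospital-level data: total performance score and bed count as
# covariates, with excess-zero hospital-acquired-condition counts.
n = 500
score = rng.normal(40.0, 10.0, size=n)
beds = rng.normal(250.0, 80.0, size=n)
mean_rate = np.exp(0.5 - 0.005 * score + 0.002 * beds)
hac_counts = rng.poisson(mean_rate) * (rng.random(n) > 0.4)

exog = sm.add_constant(np.column_stack([score, beds]))
model = ZeroInflatedNegativeBinomialP(hac_counts, exog, exog_infl=np.ones((n, 1)))
result = model.fit(method="bfgs", maxiter=500, disp=False)
print(result.summary())
```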
Neurosurgery value and quality in the context of the Affordable Care Act: a policy perspective.
Menger, Richard P; Guthikonda, Bharat; Storey, Christopher M; Nanda, Anil; McGirt, Matthew; Asher, Anthony
2015-12-01
Neurosurgeons provide direct individualized care to patients. However, the majority of regulations affecting the relative value of patient-related care are drafted by policy experts whose focus is typically system- and population-based. A central, prospectively gathered, national outcomes-related database serves as neurosurgery's best opportunity to bring patient-centered outcomes to the policy arena. In this study the authors analyze the impact of the Affordable Care Act (ACA) on the determination of quality and value in neurosurgery care through the scope, language, and terminology of policy experts. The methods by which the ACA came into law and the subsequent quality implications this legislation has for neurosurgery will be discussed. The necessity of neurosurgical patient-oriented clinical registries will be discussed in the context of imminent and dramatic reforms related to medical cost containment. In the policy debate moving forward, the strength of neurosurgery's argument will rest on data, unity, and proactiveness. The National Neurosurgery Quality and Outcomes Database (N(2)QOD) allows neurosurgeons to generate objective data on specialty-specific value and quality determinations; it allows neurosurgeons to bring the patient-physician interaction to the policy debate.
The data quality analyzer: A quality control program for seismic data
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hagerty, M. T.; Holland, J.; Gonzales, A.; Gee, L. S.; Edwards, J. D.; Wilson, D.; Baker, A. M.
2015-03-01
The U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL) has several initiatives underway to enhance and track the quality of data produced from ASL seismic stations and to improve communication about data problems to the user community. The Data Quality Analyzer (DQA) is one such development and is designed to characterize seismic station data quality in a quantitative and automated manner. The DQA consists of a metric calculator, a PostgreSQL database, and a Web interface: The metric calculator, SEEDscan, is a Java application that reads and processes miniSEED data and generates metrics based on a configuration file. SEEDscan compares hashes of metadata and data to detect changes in either and performs subsequent recalculations as needed. This ensures that the metric values are up to date and accurate. SEEDscan can be run as a scheduled task or on demand. The PostgreSQL database acts as a central hub where metric values and limited station descriptions are stored at the channel level with one-day granularity. The Web interface dynamically loads station data from the database and allows the user to make requests for time periods of interest, review specific networks and stations, plot metrics as a function of time, and adjust the contribution of various metrics to the overall quality grade of the station. The quantification of data quality is based on the evaluation of various metrics (e.g., timing quality, daily noise levels relative to long-term noise models, and comparisons between broadband data and event synthetics). Users may select which metrics contribute to the assessment and those metrics are aggregated into a "grade" for each station. The DQA is being actively used for station diagnostics and evaluation based on the completed metrics (availability, gap count, timing quality, deviation from a global noise model, deviation from a station noise model, coherence between co-located sensors, and comparison between broadband data and synthetics for earthquakes) on stations in the Global Seismographic Network and Advanced National Seismic System.
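The grading step described above, aggregating per-channel metrics into a station grade with user-adjustable weights, can be sketched in a few lines. The metric names, scores, and weights below are illustrative assumptions, not the DQA defaults.

```python
def station_grade(metrics, weights):
    """Weighted average of metric scores (0-100), ignoring missing metrics."""
    total = sum(weights[name] for name in metrics if name in weights)
    if total == 0:
        return None
    return sum(metrics[name] * weights[name] for name in metrics if name in weights) / total

# Hypothetical per-station metric scores and user-chosen weights.
metrics = {
    "availability": 99.2,            # percent of expected data present
    "gap_count": 95.0,               # scored from number of gaps
    "timing_quality": 88.5,
    "noise_model_deviation": 91.0,
}
weights = {
    "availability": 2.0,
    "gap_count": 1.0,
    "timing_quality": 1.5,
    "noise_model_deviation": 1.0,
}
print("station grade:", round(station_grade(metrics, weights), 1))
```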
Quality and Safety in Health Care, Part XXVI: The Adult Cardiac Surgery Database.
Harolds, Jay A
2017-09-01
The Adult Cardiac Surgery Database of the Society of Thoracic Surgeons has provided highly useful information on quality and safety in adult cardiac surgery, including ratings of the surgeons and institutions participating in this type of surgery. The Adult Cardiac Surgery Database information is very helpful for writing guidelines, determining optimal protocols, and supporting many research projects. This article discusses the history and current status of this database.
Disbiome database: linking the microbiome to disease.
Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart
2018-06-04
Recent research has provided fascinating indications and evidence that the host health is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomy. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies within the combination of the presence of references to other databases, which enables both specific and diverse search strategies within the Disbiome database, and the human annotation which ensures a simple and structured presentation of the available data.
Contingency Plan for FGD Systems During Downtime as a Function of PSD
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
NASA Astrophysics Data System (ADS)
Boyer, T.; Sun, L.; Locarnini, R. A.; Mishonov, A. V.; Hall, N.; Ouellet, M.
2016-02-01
The World Ocean Database (WOD) contains systematically quality-controlled historical and recent ocean profile data (temperature, salinity, oxygen, nutrients, carbon cycle variables, biological variables) ranging from Captain Cook's second voyage (1773) to this year's Argo floats. The US National Centers for Environmental Information (NCEI) also hosts the Global Temperature and Salinity Profile Program (GTSPP) Continuously Managed Database (CMD), which provides quality-controlled near-real-time ocean profile data and higher-level quality-controlled temperature and salinity profiles from 1990 to present. Both databases are used extensively for ocean and climate studies. Synchronization of these two databases will allow easier access and use of comprehensive regional and global ocean profile data sets for ocean and climate studies. Synchronization consists of two distinct phases: 1) a retrospective comparison of data in WOD and GTSPP to ensure that the most comprehensive and highest quality data set is available to researchers without the need to individually combine and contrast the two datasets, and 2) web services to allow the constantly accruing near-real-time data in the GTSPP CMD and the continuous addition and quality control of historical data in WOD to be made available to researchers together, seamlessly.
Flexible solution for interoperable cloud healthcare systems.
Vida, Mihaela Marcella; Lupşe, Oana Sorina; Stoicu-Tivadar, Lăcrămioara; Bernad, Elena
2012-01-01
It is extremely important for the healthcare domain to have standardized communication because it will improve the quality of information, and in the end the resulting benefits will improve the quality of patients' lives. The standards proposed to be used are HL7 CDA and CCD. For better access to the medical data, a solution based on cloud computing (CC) is investigated. CC is a technology that supports flexibility, seamless care, and reduced costs of medical care. To ensure interoperability between healthcare information systems, a solution creating a Web Custom Control is presented. The control shows the database tables and fields used to configure the two standards. This control will facilitate the work of the medical staff and hospital administrators, because they can configure the local system easily and prepare it for communication with other systems. The resulting information will have a higher quality and will provide knowledge that will support better patient management and diagnosis.
Steil, H; Amato, C; Carioni, C; Kirchgessner, J; Marcelli, D; Mitteregger, A; Moscardo, V; Orlandini, G; Gatti, E
2004-01-01
The European Clinical Database EuCliD has been developed as a tool for supervising selected quality indicators of about 200 European dialysis centers. Major efforts had to be made to comply with local and European laws regarding data security. EuCliD is a Lotus Notes based flat-file database currently containing medical data of more than 14,000 dialysis patients from 10 European countries. Another 15,000 patients from 150 centers in 4 South-American countries will be added soon. Data are entered either manually or by means of interfaces to existing local data managing systems. This information is transferred to a central Lotus Notes Server. Data evaluation was performed with statistical tools like SPSS. EuCliD is used as a part of the CQI (Continuous Quality Improvement) management system of Fresenius Medical Care (FMC) dialysis units. Each participating dialysis center receives benchmarking reports at regular intervals (currently every half year). The benchmark for all quality parameters is the weighted mean of the corresponding data of all centers. An obvious impact of data sampling and data evaluation on the quality of the treatments could be observed within the first one and a half years of working with EuCliD. This concerns important outcome predictors like Kt/V and hemoglobin concentration as well as the outcome itself, expressed in hospitalization days and survival rates. With the help of EuCliD the user is able to sample clinical data, identify problems, and search for solutions with the aim of improving dialysis treatment quality and guaranteeing high-quality treatment for all patients.
Prasad, Anjali; Helder, Meghana R; Brown, Dwight A; Schaff, Hartzell V
2016-10-01
The University HealthSystem Consortium (UHC) administrative database has been used increasingly as a quality indicator for hospitals and even individual surgeons. We aimed to determine the accuracy of cardiac surgical data in the administrative UHC database vs data in the clinical Society of Thoracic Surgeons database. We reviewed demographic and outcomes information of patients with aortic valve replacement (AVR), mitral valve replacement (MVR), and coronary artery bypass grafting (CABG) surgery between January 1, 2012, and December 31, 2013. Data collected in aggregate and compared across the databases included case volume, physician specialty coding, patient age and sex, comorbidities, mortality rate, and postoperative complications. In these 2 years, the UHC database recorded 1,270 AVRs, 355 MVRs, and 1,473 CABGs. The Society of Thoracic Surgeons database case volumes were less by 2% to 12% (1,219 AVRs; 316 MVRs; and 1,442 CABGs). Errors in physician specialty coding occurred in UHC data (AVR, 0.6%; MVR, 0.8%; and CABG, 0.7%). In matched patients from each database, demographic age and sex information was identical. Although definitions differed in the databases, percentages of patients with at least one comorbidity were similar. Hospital mortality rates were similar as well, but postoperative recorded complications differed greatly. In comparing the 2 databases, we found similarity in patient demographic information and percentage of patients with comorbidities. The small difference in volumes of each operation type and the larger disparity in postoperative complications between the databases were related to differences in data definition, data collection, and coding errors. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Bera, Maitreyee
2017-10-16
The U.S. Geological Survey (USGS), in cooperation with the DuPage County Stormwater Management Department, maintains a database of hourly meteorological and hydrologic data for use in a near real-time streamflow simulation system. This system is used in the management and operation of reservoirs and other flood-control structures in the West Branch DuPage River watershed in DuPage County, Illinois. The majority of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorological data (air temperature, dewpoint temperature, wind speed, and solar radiation) are collected at Argonne National Laboratory in Argonne, Ill. Potential evapotranspiration is computed from the meteorological data using the computer program LXPET (Lamoreux Potential Evapotranspiration). The hydrologic data (water-surface elevation [stage] and discharge) are collected at U.S. Geological Survey streamflow-gaging stations in and around DuPage County. These data are stored in a Watershed Data Management (WDM) database. This report describes a version of the WDM database that is quality-assured and quality-controlled annually to ensure datasets are complete and accurate. This database is named WBDR13.WDM. It contains data from January 1, 2007, through September 30, 2013. Each precipitation dataset may have time periods of inaccurate data. This report describes the methods used to estimate the data for the periods of missing, erroneous, or snowfall-affected data and thereby improve the accuracy of these data. The other meteorological datasets are described in detail in Over and others (2010), and the hydrologic datasets in the database are fully described in the online USGS annual water data reports for Illinois (U.S. Geological Survey, 2016) and, therefore, are described in less detail than the precipitation datasets in this report.
Locating faces in color photographs using neural networks
NASA Astrophysics Data System (ADS)
Brown, Joe R.; Talley, Jim
1994-03-01
This paper summarizes a research effort in finding the locations and sizes of faces in color images (photographs, video stills, etc.) if, in fact, faces are present. Scenarios for using such a system include serving as the means of localizing skin for automatic color balancing during photo processing, or serving as a front-end, in a customs port-of-entry context, for a system that identifies persona non grata given a database of known faces. The approach presented here is a hybrid system including a neural pre-processor, some conventional image processing steps, and a neural classifier as the final face/non-face discriminator. Neither the training (containing 17,655 faces) nor the test (containing 1,829 faces) imagery databases were constrained in their content or quality. The results for the pilot system are reported along with a discussion of how the current system could be improved.
Ultrasonic Fluid Quality Sensor System
Gomm, Tyler J.; Kraft, Nancy C.; Phelps, Larry D.; Taylor, Steven C.
2003-10-21
A system for determining the composition of a multiple-component fluid and for determining linear flow comprising at least one sing-around circuit that determines the velocity of a signal in the multiple-component fluid and that is correlatable to a database for the multiple-component fluid. A system for determining flow uses two of the inventive circuits, one of which is set at an angle that is not perpendicular to the direction of flow.
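A minimal sketch of the underlying physics, under the assumption that the sing-around repetition period tracks the acoustic transit time and that the angled path adds the flow component along the path, is given below; this is illustrative only and not the patented implementation.

```python
from math import cos, radians

def sound_speed_from_sing_around(path_length_m: float, sing_around_freq_hz: float,
                                 electronic_delay_s: float = 0.0) -> float:
    """Estimate sound speed from a sing-around circuit.

    The circuit retriggers a new pulse each time the previous one is received,
    so the repetition period approximates the transit time plus a fixed
    electronic delay: c = L / (1/f - delay)."""
    transit_time = 1.0 / sing_around_freq_hz - electronic_delay_s
    return path_length_m / transit_time

def flow_velocity(c_perpendicular: float, c_angled: float, angle_deg: float) -> float:
    """Infer linear flow from two circuits, one perpendicular to the flow and one angled.

    The angled path sees the sound speed shifted by the flow component along that
    path, so (as a sketch) v = (c_angled - c_perpendicular) / cos(angle)."""
    return (c_angled - c_perpendicular) / cos(radians(angle_deg))

# The measured sound speed would then be correlated to a composition database
# (e.g. an interpolation table of speed versus mixture fraction) for the fluid of interest.
```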
Ultrasonic fluid quality sensor system
Gomm, Tyler J.; Kraft, Nancy C.; Phelps, Larry D.; Taylor, Steven C.
2002-10-08
A system for determining the composition of a multiple-component fluid and for determining linear flow comprising at least one sing-around circuit that determines the velocity of a signal in the multiple-component fluid and that is correlatable to a database for the multiple-component fluid. A system for determining flow uses two of the inventive circuits, one of which is set at an angle that is not perpendicular to the direction of flow.
Ground truth and benchmarks for performance evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.
2003-09-01
Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.
Compilation of historical water-quality data for selected springs in Texas, by ecoregion
Heitmuller, Franklin T.; Williams, Iona P.
2006-01-01
Springs are important hydrologic features in Texas. A database of about 2,000 historically documented springs and available spring-flow measurements previously has been compiled and published, but water-quality data remain scattered in published sources. This report by the U.S. Geological Survey, in cooperation with the Texas Parks and Wildlife Department, documents the compilation of data for 232 springs in Texas on the basis of a set of criteria and the development of a water-quality database for the selected springs. The selection of springs for compilation of historical water-quality data in Texas was made using existing digital and hard-copy data, responses to mailed surveys, selection criteria established by various stakeholders, geographic information systems, and digital database queries. Most springs were selected by computing the highest mean spring flows for each Texas level III ecoregion. A brief assessment of the water-quality data for springs in Texas shows that few data are available in the Arizona/New Mexico Mountains, High Plains, East Central Texas Plains, Western Gulf Coastal Plain, and South Central Plains ecoregions. Water-quality data are more abundant for the Chihuahuan Deserts, Edwards Plateau, and Texas Blackland Prairies ecoregions. Selected constituent concentrations in Texas springs, including silica, calcium, magnesium, sodium, potassium, strontium, sulfate, chloride, fluoride, nitrate (nitrogen), dissolved solids, and hardness (as calcium carbonate) are comparatively high in the Chihuahuan Deserts, Southwestern Tablelands, Central Great Plains, and Cross Timbers ecoregions, mostly as a result of subsurface geology. Comparatively low concentrations of selected constituents in Texas springs are associated with the Arizona/New Mexico Mountains, Southern Texas Plains, East Central Texas Plains, and South Central Plains ecoregions.
Pearson, Daniel K.; Bumgarner, Johnathan R.; Houston, Natalie A.; Stanton, Gregory P.; Teeple, Andrew; Thomas, Jonathan V.
2012-01-01
The U.S. Geological Survey, in cooperation with Middle Pecos Groundwater Conservation District, Pecos County, City of Fort Stockton, Brewster County, and Pecos County Water Control and Improvement District No. 1, compiled groundwater, surface-water, water-quality, geophysical, and geologic data for site locations in the Pecos County region, Texas, and developed a geodatabase to facilitate use of this information. Data were compiled for an approximately 4,700-square-mile area of the Pecos County region, Texas. The geodatabase contains data from 8,242 sampling locations; it was designed to organize and store field-collected geochemical and geophysical data, as well as digital database resources from the U.S. Geological Survey, Middle Pecos Groundwater Conservation District, Texas Water Development Board, Texas Commission on Environmental Quality, and numerous other State and local databases. The geodatabase combines these disparate database resources into a simple data model. Site locations are geospatially enabled and stored in a geodatabase feature class for cartographic visualization and spatial analysis within a Geographic Information System. The sampling locations are related to hydrogeologic information through the use of geodatabase relationship classes. The geodatabase relationship classes provide the ability to perform complex spatial and data-driven queries to explore data stored in the geodatabase.
Lundgren, Robert F.; Vining, Kevin C.
2013-01-01
The Turtle Mountain Indian Reservation relies on groundwater supplies to meet the demands of community and economic needs. The U.S. Geological Survey, in cooperation with the Turtle Mountain Band of Chippewa Indians, examined historical groundwater-level and groundwater-quality data for the Fox Hills, Hell Creek, Rolla, and Shell Valley aquifers. The two main sources of water-quality data for groundwater were the U.S. Geological Survey National Water Information System database and the North Dakota State Water Commission database. Data included major ions, trace elements, nutrients, field properties, and physical properties. The Fox Hills and Hell Creek aquifers had few groundwater-quality data; this lack of data limits the detailed assessments that can be made of these aquifers. Data for the Rolla aquifer exist from 1978 through 1980 only. The concentrations of some water-quality constituents exceeded the U.S. Environmental Protection Agency secondary maximum contaminant levels. No samples were analyzed for pesticides and hydrocarbons. Numerous water-quality samples have been obtained from the Shell Valley aquifer. About one-half of the water samples from the Shell Valley aquifer had concentrations of iron, manganese, sulfate, and dissolved solids that exceeded the U.S. Environmental Protection Agency secondary maximum contaminant levels. Overall, the data did not indicate obvious patterns in concentrations.
Scale effects of STATSGO and SSURGO databases on flow and water quality predictions
USDA-ARS?s Scientific Manuscript database
Soil information is one of the crucial inputs needed to assess the impacts of existing and alternative agricultural management practices on water quality. Therefore, it is important to understand the effects of spatial scale at which soil databases are developed on water quality evaluations. In the ...
The Pan European Phenological Database PEP725: Data Content and Data Quality Control Procedures
NASA Astrophysics Data System (ADS)
Jurkovic, Anita; Hübner, Thomas; Koch, Elisabeth; Lipa, Wolfgang; Scheifinger, Helfried; Ungersböck, Markus; Zach-Hermann, Susanne
2014-05-01
Phenology - the study of the timing of recurring biological events in the animal and plant world - has become an important approach for climate change impact studies in recent years. It is therefore a "conditio sine qua non" to collect, archive, digitize, control and update phenological datasets. With regard to cross-border cooperation and activities, it was therefore necessary to establish, operate and promote a pan-European phenological database (PEP725). Such a database - designed and tested under COST Action 725 in 2004 and further developed and maintained in the framework of the EUMETNET program PEP725 - collects data from different European governmental and nongovernmental institutions and thus offers a unique compilation of plant phenological observations. The data follow the same classification scheme - the so-called BBCH coding system - which makes datasets comparable. Europe has a long tradition in the observation of phenological events: the history of collecting phenological data and using them in climatology began in 1751. The first datasets in PEP725 date back to 1868; however, only a few observations are available before 1950. From 1951 onwards, the phenological networks all over Europe developed rapidly: currently, PEP725 provides about 9 million records from 23 European countries (covering approximately 50% of Europe). To supply the data in a good and uniform quality, it is essential to establish and develop data quality control procedures. Consequently, one of the main tasks within PEP725 is the design of a multi-stage quality control. The tests are currently applied stepwise: completeness, plausibility, time-consistency, climatological, and statistical checks. In a nutshell: the poster exemplifies the status quo of the data content of the PEP725 database and the incipient stages of the quality controls in use and planned, respectively. For more details, we refer to the PEP725 website (http://www.pep725.eu) and invite additional institutions and regional services to join our program.
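A toy version of such a stepwise quality control is sketched below; the record fields, flag names, and thresholds (for example the four-standard-deviation climatological limit) are assumptions for illustration and not the PEP725 rules.

```python
from datetime import date

def qc_flags(record, station_climatology):
    """Apply stepwise checks to a single phenological record (illustrative thresholds).

    record: dict with hypothetical keys 'station', 'species', 'bbch', 'year', 'doy'
    station_climatology: dict mapping (station, species, bbch) -> (mean_doy, std_doy)
    Returns a list of flags; an empty list means the record passed all checks."""
    flags = []

    # 1. Completeness: all mandatory fields present
    if any(record.get(k) is None for k in ("station", "species", "bbch", "year", "doy")):
        flags.append("incomplete")
        return flags

    # 2. Plausibility: day of year within the calendar range
    if not 1 <= record["doy"] <= 366:
        flags.append("implausible_doy")

    # 3. Time consistency: observation year within the operating period of the network
    if not 1868 <= record["year"] <= date.today().year:
        flags.append("year_out_of_range")

    # 4. Climatological / statistical check: flag strong outliers against the
    #    long-term mean onset date for this station, species, and phase
    key = (record["station"], record["species"], record["bbch"])
    if key in station_climatology:
        mean_doy, std_doy = station_climatology[key]
        if std_doy > 0 and abs(record["doy"] - mean_doy) > 4 * std_doy:
            flags.append("climatological_outlier")

    return flags
```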
The ATLAS conditions database architecture for the Muon spectrometer
NASA Astrophysics Data System (ADS)
Verducci, Monica; ATLAS Muon Collaboration
2010-04-01
The Muon System, facing the challenging requirements of conditions data storage, has made extensive use of the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non-event' data and detector quality flags needed for debugging the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e., objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
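The interval-of-validity idea can be illustrated with a toy folder that stores payloads with a start and end time and returns the payload valid at a requested time; this is a conceptual sketch only and not the COOL API.

```python
import bisect

class ConditionsFolder:
    """Toy interval-of-validity store: each payload is valid from 'since' (inclusive)
    to 'until' (exclusive). COOL itself offers a far richer API (tags, channels,
    folder sets); this only illustrates the lookup concept."""

    def __init__(self):
        self._since = []    # sorted start times
        self._objects = []  # (since, until, payload), kept in the same order

    def store(self, since, until, payload):
        i = bisect.bisect(self._since, since)
        self._since.insert(i, since)
        self._objects.insert(i, (since, until, payload))

    def retrieve(self, time):
        """Return the payload whose validity interval contains 'time', or None."""
        i = bisect.bisect(self._since, time) - 1
        if i >= 0:
            since, until, payload = self._objects[i]
            if since <= time < until:
                return payload
        return None

# Usage: alignment constants valid for a run period, looked up at reconstruction time.
folder = ConditionsFolder()
folder.store(1000, 2000, {"chamber_A_shift_mm": 0.12})
print(folder.retrieve(1500))
```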
MARKAL SCENARIO ANALYSES OF TECHNOLOGY OPTIONS FOR THE ELECTRIC SECTOR: THE IMPACT ON AIR QUALITY
This report provides a general overview of EPA’s national MARKAL database and energy systems model and compares various scenarios to a business as usual baseline scenario. Under baseline assumptions, total electricity use increases 1.3% annually until 2030. Annual growth in ele...
Evaluating Assessment Using N-Dimensional Filtering.
ERIC Educational Resources Information Center
Dron, Jon; Boyne, Chris; Mitchell, Richard
This paper describes the use of the CoFIND (Collaborative Filter in N Dimensions) system to evaluate two assessment styles. CoFIND is a resource database that organizes itself around its users' needs. Learners enter resources, categorize, then rate them using "qualities," aspects of resources which learners find worthwhile, the n…
Using Clustering Strategies for Creating Authority Files.
ERIC Educational Resources Information Center
French, James C.; Powell, Allison L.; Schulman, Eric
2000-01-01
Discussion of quality control of data in online bibliographic databases focuses on authority files. Describes approximate string matching, introduces the concept of approximate word matching and clustering, and presents a case study using the Astrophysics Data System (ADS) that shows how to reduce human effort involved in authority work. (LRW)
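A minimal sketch of approximate matching and greedy clustering of name variants is shown below, using a generic normalized string similarity; the threshold and the clustering strategy are illustrative assumptions and not the ADS procedure described in the article.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Simple normalized string similarity (0..1) after basic normalization."""
    a, b = a.lower().strip(), b.lower().strip()
    return SequenceMatcher(None, a, b).ratio()

def cluster_variants(names, threshold=0.85):
    """Greedy single-pass clustering: each name joins the first cluster whose
    representative it resembles closely enough, otherwise it starts a new cluster.
    A production authority-control workflow would add better matching and human review."""
    clusters = []  # list of (representative, members)
    for name in names:
        for rep, members in clusters:
            if similarity(name, rep) >= threshold:
                members.append(name)
                break
        else:
            clusters.append((name, [name]))
    return clusters

variants = ["Univ. of Virginia", "University of Virginia", "Univeristy of Virginia", "MIT"]
for rep, members in cluster_variants(variants):
    print(rep, "->", members)
```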
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bevins, N; Vanderhoek, M; Lang, S
2014-06-15
Purpose: Medical display monitor calibration and quality control present challenges to medical physicists. The purpose of this work is to demonstrate and share experiences with an open source package that allows for both initial monitor setup and routine performance evaluation. Methods: A software package, pacsDisplay, has been developed over the last decade to aid in the calibration of all monitors within the radiology group in our health system. The software is used to calibrate monitors to follow the DICOM Grayscale Standard Display Function (GSDF) via lookup tables installed on the workstation. Additional functionality facilitates periodic evaluations of both primary and secondary medical monitors to ensure satisfactory performance. This software is installed on all radiology workstations, and can also be run as a stand-alone tool from a USB disk. Recently, a database has been developed to store and centralize the monitor performance data and to provide long-term trends for compliance with internal standards and various accrediting organizations. Results: Implementation and utilization of pacsDisplay has resulted in improved monitor performance across the health system. Monitor testing is now performed at regular intervals and the software is being used across multiple imaging modalities. Monitor performance characteristics such as maximum and minimum luminance, ambient luminance and illuminance, color tracking, and GSDF conformity are loaded into a centralized database for system performance comparisons. Compliance reports for organizations such as MQSA, ACR, and TJC are generated automatically and stored in the same database. Conclusion: An open source software solution has simplified and improved the standardization of displays within our health system. This work serves as an example method for calibrating and testing monitors within an enterprise health system.
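The lookup-table idea can be sketched as follows: given the luminance a display actually produces at each driving level and a desired perceptually uniform target curve, pick for each input gray level the driving level whose measured luminance is closest to the target. The function and array names are assumptions for illustration, the GSDF target values are treated as an input rather than computed here, and this is not pacsDisplay code.

```python
import numpy as np

def build_calibration_lut(measured_lum, target_lum):
    """Build a lookup table mapping each of 256 input gray levels to the panel
    driving level whose measured luminance is closest to the desired target.

    measured_lum: luminance produced by the display at each driving level
                  (measured with a photometer), one value per level
    target_lum:   desired luminance for each of the 256 input levels, taken from
                  a perceptually uniform (e.g. GSDF-derived) target curve
    """
    measured_lum = np.asarray(measured_lum, dtype=float)
    target_lum = np.asarray(target_lum, dtype=float)
    # For every target value pick the driving level with the nearest measured luminance.
    lut = np.abs(measured_lum[None, :] - target_lum[:, None]).argmin(axis=1)
    return lut.astype(np.uint8)

# Illustrative use with synthetic data: a display with a roughly gamma-2.2 response
# mapped onto an arbitrary monotone target curve standing in for a GSDF target.
levels = np.arange(256)
measured = 0.5 + 400.0 * (levels / 255.0) ** 2.2   # cd/m^2, synthetic measurement
target = np.geomspace(0.5, 400.0, 256)             # stand-in target curve
lut = build_calibration_lut(measured, target)
```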
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.E.; Tschanz, J.; Monarch, M.
1996-05-01
The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.
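A material-balance emission estimate of the kind mentioned above can be written as usage times volatile content times the fraction actually released; the function below is a generic, hedged sketch with illustrative numbers, not AQUIS's calculation procedure.

```python
def material_balance_voc_emissions(material_used_kg: float,
                                   voc_fraction: float,
                                   retained_or_reacted_fraction: float = 0.0) -> float:
    """Simple material-balance estimate of VOC emissions (illustrative only).

    Emissions = usage * VOC content * (1 - fraction retained in the product or
    consumed by reaction). For materials such as oils, greases, or chemically
    curing polyurethane paints, the last term is what requires special treatment,
    since not all of the volatile content is actually released."""
    return material_used_kg * voc_fraction * (1.0 - retained_or_reacted_fraction)

# Example: 500 kg of coating, 40% VOC by mass, 25% assumed to react during cure.
print(material_balance_voc_emissions(500.0, 0.40, 0.25))  # -> 150.0 kg emitted
```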
Performance evaluation of no-reference image quality metrics for face biometric images
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick
2018-03-01
The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account image-based quality attributes similar to those introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them failed to assess face sample quality. Retraining an original IQM using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
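A heavily reduced sketch of the general approach, computing mean-subtracted contrast-normalized (MSCN) coefficients, summarizing them with a few moments, and fitting a regressor to opinion scores, is shown below; the feature set is a tiny stand-in for the published bag of feature maps, and the data are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVR

def mscn(image, sigma=7 / 6):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a grayscale image."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)
    sigma_map = np.sqrt(np.abs(gaussian_filter(image * image, sigma) - mu * mu))
    return (image - mu) / (sigma_map + 1.0)

def feature_vector(image):
    """A tiny stand-in for a 'bag of feature maps': a few moments of the MSCN map.
    The published model uses many more maps, color spaces, and transform domains."""
    m = mscn(image)
    return np.array([m.mean(), m.var(),
                     ((m - m.mean()) ** 3).mean(), ((m - m.mean()) ** 4).mean()])

# Train a regressor mapping features to mean opinion scores (MOS) on synthetic data.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]   # placeholders for real images
mos = rng.uniform(0, 100, size=20)                   # placeholders for human scores
X = np.vstack([feature_vector(im) for im in images])
model = SVR().fit(X, mos)
predicted_quality = model.predict(feature_vector(images[0]).reshape(1, -1))
```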
Lenz, Bernard N.
1997-01-01
An important part of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) Program is the analysis of existing data in each of the NAWQA study areas. The Wisconsin Department of Natural Resources (WDNR) has an extensive database of aquatic benthic macroinvertebrate communities in streams (benthic invertebrates), maintained by the University of Wisconsin-Stevens Point. This database contains data dating back to 1984, including data from streams within the Western Lake Michigan Drainages (WMIC) study area (fig. 1). This report looks at the feasibility of USGS scientists supplementing the data they collect with data from the WDNR database when assessing water quality in the study area.
Pedersen, Sidsel Arnspang; Schmidt, Sigrun Alba Johannesdottir; Klausen, Siri; Pottegård, Anton; Friis, Søren; Hölmich, Lisbet Rosenkrantz; Gaist, David
2018-05-01
The nationwide Danish Cancer Registry and the Danish Melanoma Database both record data on melanoma for purposes of monitoring, quality assurance, and research. However, the data quality of the Cancer Registry and the Melanoma Database has not been formally evaluated. We estimated the positive predictive value (PPV) of melanoma diagnosis for random samples of 200 patients from the Cancer Registry (n = 200) and the Melanoma Database (n = 200) during 2004-2014, using the Danish Pathology Registry as "gold standard" reference. We further validated tumor characteristics in the Cancer Registry and the Melanoma Database. Additionally, we estimated the PPV of in situ melanoma diagnoses in the Melanoma Database, and the sensitivity of melanoma diagnoses in 2004-2014. The PPVs of melanoma in the Cancer Registry and the Melanoma Database were 97% (95% CI = 94, 99) and 100%. The sensitivity was 90% in the Cancer Registry and 77% in the Melanoma Database. The PPV of in situ melanomas in the Melanoma Database was 97% and the sensitivity was 56%. In the Melanoma Database, we observed PPVs of ulceration of 75% and Breslow thickness of 96%. The PPV of histologic subtypes varied between 87% and 100% in the Cancer Registry and 93% and 100% in the Melanoma Database. The PPVs for anatomical localization were 83%-95% in the Cancer Registry and 93%-100% in the Melanoma Database. The data quality in both the Cancer Registry and the Melanoma Database is high, supporting their use in epidemiologic studies.
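For reference, the two validity measures used above reduce to simple ratios; the counts in the example are illustrative only and are not the study's figures.

```python
def ppv(true_positives: int, false_positives: int) -> float:
    """Positive predictive value: confirmed cases / all registered cases."""
    return true_positives / (true_positives + false_positives)

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (completeness): registered cases / all cases in the gold standard."""
    return true_positives / (true_positives + false_negatives)

# Illustrative numbers only: 194 of 200 sampled registrations confirmed by the
# pathology reference, and 180 of 200 gold-standard cases found in the register.
print(round(ppv(194, 6), 2), round(sensitivity(180, 20), 2))
```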
Australia's continental-scale acoustic tracking database and its automated quality control process
NASA Astrophysics Data System (ADS)
Hoenner, Xavier; Huveneers, Charlie; Steckenreuter, Andre; Simpfendorfer, Colin; Tattersall, Katherine; Jaine, Fabrice; Atkins, Natalia; Babcock, Russ; Brodie, Stephanie; Burgess, Jonathan; Campbell, Hamish; Heupel, Michelle; Pasquer, Benedicte; Proctor, Roger; Taylor, Matthew D.; Udyawer, Vinay; Harcourt, Robert
2018-01-01
Our ability to predict species responses to environmental changes relies on accurate records of animal movement patterns. Continental-scale acoustic telemetry networks are increasingly being established worldwide, producing large volumes of information-rich geospatial data. During the last decade, the Integrated Marine Observing System's Animal Tracking Facility (IMOS ATF) established a permanent array of acoustic receivers around Australia. Simultaneously, IMOS developed a centralised national database to foster collaborative research across the user community and quantify individual behaviour across a broad range of taxa. Here we present the database and quality control procedures developed to collate 49.6 million valid detections from 1891 receiving stations. This dataset consists of detections for 3,777 tags deployed on 117 marine species, with distances travelled ranging from a few to thousands of kilometres. Connectivity between regions was only made possible by the joint contribution of IMOS infrastructure and researcher-funded receivers. This dataset constitutes a valuable resource facilitating meta-analysis of animal movement, distributions, and habitat use, and is important for relating species distribution shifts with environmental covariates.
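One typical automated check in this setting is to flag detections implying implausible travel speeds between receivers; the sketch below illustrates the idea with an assumed speed threshold and is not the actual IMOS ATF quality-control algorithm.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two receiver positions in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def flag_implausible_speeds(detections, max_speed_ms=10.0):
    """Flag consecutive detections of one tag that would require the animal to travel
    faster than a plausible sustained speed (the threshold here is illustrative).

    detections: list of (timestamp, lat, lon) tuples for a single tag, sorted by time,
                where timestamps are datetime objects."""
    flags = [False] * len(detections)
    for i in range(1, len(detections)):
        (t0, lat0, lon0), (t1, lat1, lon1) = detections[i - 1], detections[i]
        dt = (t1 - t0).total_seconds()
        if dt > 0:
            speed_ms = haversine_km(lat0, lon0, lat1, lon1) * 1000.0 / dt
            if speed_ms > max_speed_ms:
                flags[i] = True
    return flags
```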
Ang, Darwin N; Behrns, Kevin E
2013-07-01
The emphasis on high-quality care has spawned the development of quality programs, most of which focus on broad outcome measures across a diverse group of providers. Our aim was to investigate the clinical outcomes for a department of surgery with multiple service lines of patient care using a relational database. Mortality, length of stay (LOS), patient safety indicators (PSIs), and hospital-acquired conditions were examined for each service line. Expected values for mortality and LOS were derived from University HealthSystem Consortium regression models, whereas expected values for PSIs were derived from Agency for Healthcare Research and Quality regression models. Overall, 5200 patients were evaluated from the months of January through May of both 2011 (n = 2550) and 2012 (n = 2650). The overall observed-to-expected (O/E) ratio of mortality improved from 1.03 to 0.92. The overall O/E ratio for LOS improved from 0.92 to 0.89. PSIs that predicted mortality included postoperative sepsis (O/E:1.89), postoperative respiratory failure (O/E:1.83), postoperative metabolic derangement (O/E:1.81), and postoperative deep vein thrombosis or pulmonary embolus (O/E:1.8). Mortality and LOS can be improved by using a relational database with outcomes reported to specific service lines. Service line quality can be influenced by distribution of frequent reports, group meetings, and service line-directed interventions.
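The service-line reporting described above rests on observed-to-expected ratios, which can be computed by summing observed events and model-predicted probabilities per service line; the record fields below are assumptions for illustration.

```python
from collections import defaultdict

def observed_to_expected(records):
    """Compute observed-to-expected (O/E) mortality ratios per service line.

    records: iterable of dicts with hypothetical keys 'service_line', 'died' (0/1),
             and 'expected_mortality' (the patient-level probability produced by a
             UHC/AHRQ-style risk model)."""
    observed = defaultdict(float)
    expected = defaultdict(float)
    for r in records:
        observed[r["service_line"]] += r["died"]
        expected[r["service_line"]] += r["expected_mortality"]
    return {line: observed[line] / expected[line]
            for line in observed if expected[line] > 0}

# An O/E ratio below 1.0 indicates fewer deaths than the risk model predicts.
```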
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-06-01
The bibliography contains citations concerning standards and standard tests for water quality in drinking water sources, reservoirs, and distribution systems. Standards from domestic and international sources are presented. Glossaries and vocabularies that concern water quality analysis, testing, and evaluation are included. Standard test methods for individual elements, selected chemicals, sensory properties, radioactivity, and other chemical and physical properties are described. Discussions for proposed standards on new pollutant materials are briefly considered. (Contains a minimum of 203 citations and includes a subject term index and title list.)
Arkansas Groundwater-Quality Network
Pugh, Aaron L.; Jackson, Barry T.; Miller, Roger
2014-01-01
Arkansas is the fourth largest user of groundwater in the United States, where groundwater accounts for two-thirds of the total water use. Groundwater use in the State increased by 510 percent between 1965 and 2005 (Holland, 2007). The Arkansas Groundwater-Quality Network is a Web map interface (http://ar.water.usgs.gov/wqx) that provides rapid access to the U.S. Geological Survey’s (USGS) National Water Information System (NWIS) and the U.S. Environmental Protection Agency’s (USEPA) STOrage and RETrieval (STORET) databases of ambient water information. The interface enables users to perform simple graphical analysis and download selected water-quality data.
1992-04-01
problem. No calibration -- no fire. (90) 6.A Are you able to take advantage of the training time with the Battery? Only about 25% of the time is quality training, which is like six hours out of 24. The communications and
Patient safety in dentistry - state of play as revealed by a national database of errors.
Thusu, S; Panesar, S; Bedi, R
2012-08-01
Modern dentistry has become increasingly invasive and sophisticated; consequently, the risk to the patient has increased. The aim of this study is to investigate the types of patient safety incidents (PSIs) that occur in dentistry and the accuracy of the National Patient Safety Agency (NPSA) database in identifying those attributed to dentistry. The database was analysed for all incidents of iatrogenic harm in the speciality of dentistry. A snapshot view using the timeframe January to December 2009 was used. The free-text elements from the database were analysed thematically and reclassified according to the nature of the PSI. Descriptive statistics were provided. Two thousand and twelve incident reports were analysed and organised into ten categories, the commonest being clerical errors (36%). Five areas of PSI were further analysed: injury (10%), medical emergency (6%), inhalation/ingestion (4%), adverse reaction (4%) and wrong-site extraction (2%). There is generally low reporting of PSIs within the dental specialities. This may be attributed to the voluntary nature of reporting and the reluctance of dental practitioners to disclose incidents for fear of loss of earnings. A significant amount of iatrogenic harm occurs not during treatment but in relation to controllable pre- and post-procedural checks. Incidents of iatrogenic harm to dental patients do occur, but reporting of them is not widespread. The use of a dental-specific reporting system would help minimise iatrogenic harm and would support the Care Quality Commission (CQC) compliance monitoring system on essential standards of quality and safety in dental practices.
NASA Astrophysics Data System (ADS)
Choi, Sang-Hwa; Kim, Sung Dae; Park, Hyuk Min; Lee, SeungHa
2016-04-01
We established and operate an integrated data system for managing, archiving and sharing marine geology and geophysical data around Korea produced from various research projects and programs at the Korea Institute of Ocean Science & Technology (KIOST). First of all, to keep the data system consistent under continuous data updates, we set up standard operating procedures (SOPs) for data archiving, data processing and converting, data quality control, data uploading, database maintenance, etc. The system comprises two databases, ARCHIVE DB and GIS DB. ARCHIVE DB stores archived data in the original forms and formats received from data providers, and GIS DB manages all other compilation, processed and reproduction data and information for data services and GIS application services. Oracle 11g was adopted as the relational database management system, and open-source GIS techniques were applied for GIS services, such as OpenLayers for the user interface, GeoServer for the application server, and PostGIS on PostgreSQL for the GIS database. For convenient use of geophysical data in SEG Y format, a viewer program was developed and embedded in this system. Users can search data through the GIS user interface and save the results as a report.
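A hedged example of the kind of spatial query such a GIS DB supports is shown below, using PostGIS through Python; the connection settings and the survey_sites table are invented for illustration and are not the actual KIOST schema.

```python
import psycopg2

# Minimal sketch of a bounding-box search against a PostGIS-backed GIS database.
# The database name, user, and the survey_sites(name, geom) table are assumptions.
conn = psycopg2.connect(dbname="gisdb", user="reader", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT name, ST_X(geom), ST_Y(geom)
        FROM survey_sites
        WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326)
        """,
        (124.0, 33.0, 132.0, 39.0),  # lon/lat box roughly around the Korean peninsula
    )
    for name, lon, lat in cur.fetchall():
        print(name, lon, lat)
```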
NASA Astrophysics Data System (ADS)
Guion, A., Jr.; Hodgkins, H.
2015-12-01
The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program gathering water quality data for local waterways. The data had been compiled manually with pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, the development of an electronic method of entering and manipulating the water quality data was pursued during this project. This project focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere Internet access is available. The project then researched applying these data to the Google Maps site to provide labeling and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database Team members included the authors plus Derek Morris Jr., Kathryne Burton and Mr. Jeff Wood as mentor.
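A sketch of the kind of table and parameterized insert behind such an entry form is shown below; it uses SQLite purely so the example is self-contained, whereas the project itself used MySQL behind a PHP form, and the column names are assumptions.

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect("water_quality.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS readings (
        id INTEGER PRIMARY KEY,
        site TEXT, sampled_at TEXT,
        latitude REAL, longitude REAL,
        temperature_c REAL, ph REAL, dissolved_oxygen_mgl REAL, turbidity_ntu REAL
    )""")
conn.execute(
    "INSERT INTO readings (site, sampled_at, latitude, longitude,"
    " temperature_c, ph, dissolved_oxygen_mgl, turbidity_ntu)"
    " VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("Example Creek", datetime(2015, 7, 14, 9, 30).isoformat(),
     36.30, -76.22, 27.5, 7.1, 6.8, 12.0),  # illustrative reading, not real data
)
conn.commit()
# The stored latitude/longitude can then be exported as markers for a Google Maps overlay.
```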
Cardiac registers: the adult cardiac surgery register.
Bridgewater, Ben
2010-09-01
AIMS OF THE SCTS ADULT CARDIAC SURGERY DATABASE: To measure the quality of care of adult cardiac surgery in GB and Ireland and provide information for quality improvement and research. Outputs include feedback of structured data to hospitals, publication of named hospital and surgeon mortality data, publication of benchmarked activity and risk-adjusted clinical outcomes through intermittent comprehensive database reports, and annual screening of all hospital and individual surgeon risk-adjusted mortality rates by the professional society. Coverage: all NHS hospitals in England, Scotland and Wales, with input from some private providers and hospitals in Ireland. 1994-ongoing. Consecutive patients, unconsented. Current number of records: 400,000. Adult cardiac surgery operations excluding cardiac transplantation and ventricular assist devices. 129 fields covering demographic factors, pre-operative risk factors, operative details and post-operative in-hospital outcomes. Entry onto local software systems is by direct keyboard entry or subsequent transcription from paper records, with subsequent electronic upload to the central cardiac audit database. Non-financial incentives at hospital level. Local validation processes exist in the hospitals; there is currently no external data validation process. All-cause mortality is obtained through linkage with the Office for National Statistics. No other linkages exist at present. Available for research and audit by application to the SCTS database committee at http://www.scts.org.
Design and implementation of a portal for the medical equipment market: MEDICOM.
Palamas, S; Kalivas, D; Panou-Diamandi, O; Zeelenberg, C; van Nimwegen, C
2001-01-01
The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support.
Determinants of Post-fire Water Quality in the Western United States
NASA Astrophysics Data System (ADS)
Rust, A.; Saxe, S.; Dolan, F.; Hogue, T. S.; McCray, J. E.
2015-12-01
Large wildfires are becoming increasingly common in the Western United States. Wildfires that consume greater than twenty percent of the watershed impact river water quality. The surface waters of the arid West are limited and in demand by the aquatic ecosystems, irrigated agriculture, and the region's growing human population. A range of studies, typically focused on individual fires, have observed mobilization of contaminants, nutrients (including nitrates), and sediments into receiving streams. Post-fire metal concentrations have also been observed to increase when fires were located in streams close to urban centers. The objective of this work was to assemble an extensive historical water quality database through data mining from federal, state and local agencies into a fire-database. Data from previous studies on individual fires by the co-authors was also included. The fire-database includes observations of water quality, discharge, geospatial and land characteristics from over 200 fire-impacted watersheds in the western U.S. since 1985. Water quality data from burn impacted watersheds was examined for trends in water quality response using statistical analysis. Watersheds where there was no change in water quality after fire were also examined to determine characteristics of the watershed that make it more resilient to fire. The ultimate goal is to evaluate trends in post-fire water quality response and identify key drivers of resiliency and post-fire response. The fire-database will eventually be publicly available.
Toward an applied technology for quality measurement in health care.
Berwick, D M
1988-01-01
Cost containment, financial incentives to conserve resources, the growth of for-profit hospitals, an aggressive malpractice environment, and demands from purchasers are among the forces today increasing the need for improved methods that measure quality in health care. At the same time, increasingly sophisticated databases and the existence of managed care systems yield new opportunities to observe and correct quality problems. Research on targets of measurement (structure, process, and outcome) and methods of measurement (implicit, explicit, and sentinel methods) has not yet produced managerially useful applied technology for quality measurement in real-world settings. Such an applied technology would have to be cheaper, faster, more flexible, better reported, and more multidimensional than the majority of current research on quality assurance. In developing a new applied technology for the measurement of health care quality, quantitative disciplines have much to offer, such as decision support systems, criteria based on rigorous decision analyses, utility theory, tools for functional status measurement, and advances in operations research.
Improving quality: bridging the health sector divide.
Pringle, Mike
2003-12-01
All too often, quality assurance looks at just one small part of the complex system that is health care. However, evidently each individual patient has one set of experiences and outcomes, often involving a range of health professionals in a number of settings across multiple sectors. In order to solve the problems of this complexity, we need to establish high-quality electronic recording in each of the settings. In the UK, primary care has been leading the way in adopting information technology and can now use databases for individual clinical care, for quality assurance using significant event and conventional auditing, and for research. Before we can understand and quality-assure the whole health care system, we need electronic patient records in all settings and good communication to build a summary electronic health record for each patient. Such an electronic health record will be under the control of the patient concerned, will be shared with the explicit consent of the patient, and will form the vehicle for quality assurance across all sectors of the health service.
Development of an Integrated Biospecimen Database among the Regional Biobanks in Korea.
Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun
2016-04-01
This study developed an integrated database for 15 regional biobanks that provides large quantities of high-quality bio-data to researchers, to be used for the prevention of disease, the development of personalized medicines, and genetics studies. We collected the raw data managed independently by the 15 regional biobanks for database modeling, and analyzed and defined the metadata of the items. We also built a three-level (high, middle, and low) classification system for classifying the item concepts based on the metadata. To give the items clear meanings, clinical items were defined using the Systematized Nomenclature of Medicine Clinical Terms, and specimen items were defined using the Logical Observation Identifiers Names and Codes. To optimize database performance, we set up a multi-column index based on the classification system and the international standard codes. As a result of subdividing the 7,197,252 raw data items collected, we refined the metadata into 1,796 clinical items and 1,792 specimen items. The classification system consists of 15 high, 163 middle, and 3,588 low class items. International standard codes were linked to 69.9% of the clinical items and 71.7% of the specimen items. The database consists of 18 tables built on MySQL Server 5.6. In the performance evaluation, the multi-column index reduced query time by as much as a factor of nine. The database developed was based on an international standard terminology system, providing an infrastructure that can integrate the 7,197,252 raw data items managed by the 15 regional biobanks. In particular, it resolved the inevitable interoperability issues in the exchange of information among the biobanks and provided a solution to the synonym problem, which arises when the same concept is expressed in a variety of ways.
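The multi-column index idea can be illustrated with a small example; it uses SQLite so that it runs anywhere, whereas the production system runs on MySQL Server 5.6, and the table, column names, and code value are simplified assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE clinical_item (
        id INTEGER PRIMARY KEY,
        class_high TEXT, class_middle TEXT, class_low TEXT,
        snomed_ct_code TEXT, value TEXT
    )""")
# Composite index on the classification hierarchy plus the standard code, so queries
# that filter from the high level downwards can be resolved largely from the index.
conn.execute("""
    CREATE INDEX idx_class_code
    ON clinical_item (class_high, class_middle, class_low, snomed_ct_code)
""")
rows = conn.execute(
    "SELECT id, value FROM clinical_item "
    "WHERE class_high = ? AND class_middle = ? AND snomed_ct_code = ?",
    ("laboratory", "hematology", "00000000"),  # placeholder class names and code
).fetchall()
```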
Bitsch, A; Jacobi, S; Melber, C; Wahnschaffe, U; Simetska, N; Mangelsdorf, I
2006-12-01
A database for repeated-dose toxicity data has been developed. Studies were selected on the basis of data quality; review documents or risk assessments were used to obtain a pre-screened selection of the available valid data. The structures of the chemicals were kept rather simple to allow well-defined chemical categories. The database consists of three core data sets for each chemical: (1) structural features and physico-chemical data, (2) data on study design, and (3) study results. To allow consistent queries, a high degree of standardization was applied, and categories and glossaries were developed for the relevant parameters. At present, the database contains 364 chemicals investigated in 1,018 studies, which resulted in a total of 6,002 specific effects. Standard queries have been developed that allow the influence of structural features or physico-chemical data on LOELs, target organs and effects to be analyzed. Furthermore, the database can be used as an expert system. First queries have shown that the database is a very valuable tool.
Discrepancy Reporting Management System
NASA Technical Reports Server (NTRS)
Cooper, Tonja M.; Lin, James C.; Chatillon, Mark L.
2004-01-01
Discrepancy Reporting Management System (DRMS) is a computer program designed for use in the stations of NASA's Deep Space Network (DSN) to help establish the operational history of equipment items; acquire data on the quality of service provided to DSN customers; enable measurement of service performance; provide early insight into the need to improve processes, procedures, and interfaces; and enable the tracing of a data outage to a change in software or hardware. DRMS is a Web-based software system designed to include a distributed database and replication feature to achieve location-specific autonomy while maintaining a consistent high quality of data. DRMS incorporates commercial Web and database software. DRMS collects, processes, replicates, communicates, and manages information on spacecraft data discrepancies, equipment resets, and physical equipment status, and maintains an internal station log. All discrepancy reports (DRs), Master discrepancy reports (MDRs), and Reset data are replicated to a master server at NASA's Jet Propulsion Laboratory; Master DR data are replicated to all the DSN sites; and Station Logs are internal to each of the DSN sites and are not replicated. Data are validated according to several logical mathematical criteria. Queries can be performed on any combination of data.
Next Generation Models for Storage and Representation of Microbial Biological Annotation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quest, Daniel J; Land, Miriam L; Brettin, Thomas S
2010-01-01
Background Traditional genome annotation systems were developed in a very different computing era, one where the World Wide Web was just emerging. Consequently, these systems are built as centralized black boxes focused on generating high quality annotation submissions to GenBank/EMBL supported by expert manual curation. The exponential growth of sequence data drives a growing need for increasingly higher quality and automatically generated annotation. Typical annotation pipelines utilize traditional database technologies, clustered computing resources, Perl, C, and UNIX file systems to process raw sequence data, identify genes, and predict and categorize gene function. These technologies tightly couple the annotation software system to hardware and third party software (e.g. relational database systems and schemas). This makes annotation systems hard to reproduce, inflexible to modification over time, difficult to assess, difficult to partition across multiple geographic sites, and difficult to understand for those who are not domain experts. These systems are not readily open to scrutiny and therefore not scientifically tractable. The advent of Semantic Web standards such as Resource Description Framework (RDF) and OWL Web Ontology Language (OWL) enables us to construct systems that address these challenges in a new comprehensive way. Results Here, we develop a framework for linking traditional data to OWL-based ontologies in genome annotation. We show how data standards can decouple hardware and third party software tools from annotation pipelines, thereby making annotation pipelines easier to reproduce and assess. An illustrative example shows how TURTLE (Terse RDF Triple Language) can be used as a human readable, but also semantically-aware, equivalent to GenBank/EMBL files. Conclusions The power of this approach lies in its ability to assemble annotation data from multiple databases across multiple locations into a representation that is understandable to researchers. In this way, all researchers, experimental and computational, will more easily understand the informatics processes constructing genome annotation and ultimately be able to help improve the systems that produce them.
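As a small illustration of the TURTLE idea, the sketch below builds a handful of RDF triples for one gene feature with rdflib and serializes them as Turtle; the namespace, property names, and values are invented for the example and do not correspond to a published annotation ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical annotation namespace used only for this example.
EX = Namespace("http://example.org/annotation/")

g = Graph()
gene = URIRef(EX["gene/EXAMPLE_0001"])
g.add((gene, RDF.type, EX.Gene))
g.add((gene, EX.locusTag, Literal("EXAMPLE_0001")))
g.add((gene, EX.product, Literal("hypothetical protein")))  # illustrative values
g.add((gene, EX.start, Literal(3419)))
g.add((gene, EX.end, Literal(5830)))

# Serialize as TURTLE: human-readable triples that remain machine-interpretable.
print(g.serialize(format="turtle"))
```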
Therapy Decision Support Based on Recommender System Methods
Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen
2017-01-01
We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely a Collaborative Recommender and a Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy that is assumed to provide the best outcome for a specific patient at a specific point in time, that is, at a given consultation. The proposed methods are evaluated using a clinical database of patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender produces both better outcome predictions and better recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system. PMID:29065657
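The abstract does not give the algorithmic details of the Collaborative Recommender; the sketch below shows one common neighbourhood-based approach, in which the outcome for a target consultation is predicted as a similarity-weighted average over the most similar past consultations. The feature vectors, outcome scores, and similarity measure are assumptions for illustration.

```python
import numpy as np

def predict_outcome(target_features, neighbor_features, neighbor_outcomes, k=5):
    """Predict a therapy outcome for the target consultation as a similarity-weighted
    average of outcomes from the k most similar past consultations."""
    # Cosine similarity between the target and every past consultation.
    norms = np.linalg.norm(neighbor_features, axis=1) * np.linalg.norm(target_features)
    sims = neighbor_features @ target_features / np.where(norms == 0, 1e-12, norms)
    top = np.argsort(sims)[-k:]                    # indices of the k nearest neighbours
    weights = np.clip(sims[top], 0, None)
    if weights.sum() == 0:
        return float(np.mean(neighbor_outcomes))   # fall back to the global mean
    return float(np.average(neighbor_outcomes[top], weights=weights))

# Toy data: six past consultations described by three features, each with an outcome score.
past_x = np.array([[1, 0, 2], [0, 1, 1], [2, 0, 3], [1, 1, 0], [0, 2, 1], [2, 1, 2]], dtype=float)
past_y = np.array([0.8, 0.4, 0.9, 0.5, 0.3, 0.7])
print(predict_outcome(np.array([1.0, 0.0, 2.5]), past_x, past_y, k=3))
```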
Grover, Frederick L.; Shroyer, A. Laurie W.; Hammermeister, Karl; Edwards, Fred H.; Ferguson, T. Bruce; Dziuban, Stanley W.; Cleveland, Joseph C.; Clark, Richard E.; McDonald, Gerald
2001-01-01
Objective: To review the Department of Veteran Affairs (VA) and the Society of Thoracic Surgeons (STS) national databases over the past 10 years to evaluate their relative similarities and differences, to appraise their use as quality improvement tools, and to assess their potential to facilitate improvements in quality of cardiac surgical care. Summary Background Data: The VA developed a mandatory risk-adjusted database in 1987 to monitor outcomes of cardiac surgery at all VA medical centers. In 1989 the STS developed a voluntary risk-adjusted database to help members assess quality and outcomes in their individual programs and to facilitate improvements in quality of care. Methods: A short data form on every veteran operated on at each VA medical center is completed and transmitted electronically for analysis of unadjusted and risk-adjusted death and complications, as well as length of stay. Masked, confidential semiannual reports are then distributed to each program’s clinical team and the associated administrator. These reports are also reviewed by a national quality oversight committee. Thus, VA data are used both locally for quality improvement and at the national level with quality surveillance. The STS dataset (217 core fields and 255 extended fields) is transmitted for each patient semiannually to the Duke Clinical Research Institute (DCRI) for warehousing, analysis, and distribution. Site-specific reports are produced with regional and national aggregate comparisons for unadjusted and adjusted surgical deaths and complications, as well as length of stay for coronary artery bypass grafting (CABG), valvular procedures, and valvular/CABG procedures. Both databases use the logistic regression modeling approach. Data for key processes of care are also captured in both databases. Research projects are frequently carried out using each database. Results: More than 74,000 and 1.6 million cardiac surgical patients have been entered into the VA and STS databases, respectively. Risk factors that predict surgical death for CABG are very similar in the two databases, as are the odds ratios for most of the risk factors. One major difference is that the VA is 99% male, the STS 71% male. Both databases have shown a significant reduction in the risk-adjusted surgical death rate during the past decade despite the fact that patients have presented with an increased risk factor profile. The ratio of observed to expected deaths decreased from 1.05 to 0.9 for the VA and from 1.5 to 0.9 for the STS. Conclusion: It appears that the routine feedback of risk-adjusted data on local performance provided by these programs heightens awareness and leads to self-examination and self-assessment, which in turn improves quality and outcomes. This general quality improvement template should be considered for application in other settings beyond cardiac surgery. PMID:11573040
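For readers unfamiliar with the observed-to-expected (O/E) ratio quoted above, the following sketch shows how such a risk-adjusted ratio is typically computed from the death probabilities predicted by a logistic regression model; the numbers are illustrative, not taken from either database.

```python
def observed_to_expected(observed_deaths, predicted_risks):
    """Ratio of observed deaths to the sum of model-predicted death probabilities
    (the 'expected' deaths from a logistic risk-adjustment model)."""
    expected = sum(predicted_risks)
    return observed_deaths / expected

# Toy example: 3 observed deaths among 100 patients whose model-predicted risk
# averages 3.3% gives O/E < 1, i.e. better-than-expected performance.
risks = [0.033] * 100
print(round(observed_to_expected(3, risks), 2))   # 0.91
```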
Evaluating Land-Atmosphere Interactions with the North American Soil Moisture Database
NASA Astrophysics Data System (ADS)
Giles, S. M.; Quiring, S. M.; Ford, T.; Chavez, N.; Galvan, J.
2015-12-01
The North American Soil Moisture Database (NASMD) is a high-quality observational soil moisture database that was developed to study land-atmosphere interactions. It includes over 1,800 monitoring stations in the United States, Canada and Mexico. Soil moisture data are collected from multiple sources, quality controlled and integrated into an online database (soilmoisture.tamu.edu). The period of record varies substantially and only a few of these stations have an observation record extending back into the 1990s. Daily soil moisture observations have been quality controlled using the North American Soil Moisture Database QAQC algorithm. The database is designed to facilitate observationally-driven investigations of land-atmosphere interactions, validation of the accuracy of soil moisture simulations in global land surface models, satellite calibration/validation for SMOS and SMAP, and an improved understanding of how soil moisture influences climate on seasonal to interannual timescales. This paper provides some examples of how the NASMD has been utilized to enhance understanding of land-atmosphere interactions in the U.S. Great Plains.
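The abstract does not describe the NASMD QAQC algorithm itself; the sketch below illustrates the kinds of range and spike checks such an algorithm typically applies to daily soil moisture observations. The variable names and thresholds are assumptions.

```python
import numpy as np

def qc_flags(vwc, lower=0.0, upper=0.6, max_jump=0.1):
    """Flag daily volumetric water content (VWC) values that fall outside a plausible
    physical range or jump implausibly between consecutive days."""
    vwc = np.asarray(vwc, dtype=float)
    flags = np.full(vwc.shape, "good", dtype=object)
    flags[(vwc < lower) | (vwc > upper) | np.isnan(vwc)] = "range_fail"
    jumps = np.abs(np.diff(vwc, prepend=vwc[0]))        # day-to-day change
    flags[(jumps > max_jump) & (flags == "good")] = "spike"
    return flags

print(qc_flags([0.21, 0.22, 0.55, 0.23, -0.1, 0.24]))
```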
Albreht, T; Paulin, M
1999-01-01
The article describes the possibilities of planning of the health care providers' network enabled by the use of information technology. The cornerstone of such planning is the development and establishment of a quality database on health care providers, health care professionals and their employment statuses. Based on the analysis of information needs, a new database was developed for various users in health care delivery as well as for those in health insurance. The method of information engineering was used in the standard four steps of the information system construction, while the whole project was run in accordance with the principles of two internationally approved project management methods. Special attention was dedicated to a careful analysis of the users' requirements, and we believe the latter to be fulfilled to a very large degree. The new NHCPD is a relational database which is set up in two important state institutions, the National Institute of Public Health and the Health Insurance Institute of Slovenia. The former is responsible for updating the database, while the latter is responsible for the technological side as well as for the implementation of data security and protection. NHCPD will be interlinked with several other existing applications in the area of health care, public health and health insurance. Several important state institutions and professional chambers are users of the database in question, thus integrating various aspects of the health care system in Slovenia. The setting up of a completely revised health care providers' database in Slovenia is an important step in the development of a uniform and integrated information system that would support top decision-making processes at the national level.
Layani, Géraldine; Fleet, Richard; Dallaire, Renée; Tounkara, Fatoumata K; Poitras, Julien; Archambault, Patrick; Chauny, Jean-Marc; Ouimet, Mathieu; Gauthier, Josée; Dupuis, Gilles; Tanguay, Alain; Lévesque, Jean-Frédéric; Simard-Racine, Geneviève; Haggerty, Jeannie; Légaré, France
2016-01-01
Evidence-based indicators of quality of care have been developed to improve care and performance in Canadian emergency departments. The feasibility of measuring these indicators has been assessed mainly in urban and academic emergency departments. We sought to assess the feasibility of measuring quality-of-care indicators in rural emergency departments in Quebec. We previously identified rural emergency departments in Quebec that offered medical coverage with hospital beds 24 hours a day, 7 days a week and were located in rural areas or small towns as defined by Statistics Canada. A standardized protocol was sent to each emergency department to collect data on 27 validated quality-of-care indicators in 8 categories: duration of stay, patient safety, pain management, pediatrics, cardiology, respiratory care, stroke and sepsis/infection. Data were collected by local professional medical archivists between June and December 2013. Fifteen (58%) of the 26 emergency departments invited to participate completed data collection. The ability to measure the 27 quality-of-care indicators with the use of databases varied across departments. Centres 2, 5, 6 and 13 used databases for at least 21 of the indicators (78%-92%), whereas centres 3, 8, 9, 11, 12 and 15 used databases for 5 (18%) or fewer of the indicators. On average, the centres were able to measure only 41% of the indicators using heterogeneous databases and manual extraction. The 15 centres collected data from 15 different databases or combinations of databases. The average data collection time for each quality-of-care indicator varied from 5 to 88.5 minutes. The median data collection time was 15 minutes or less for most indicators. Quality-of-care indicators were not easily captured with the use of existing databases in rural emergency departments in Quebec. Further work is warranted to improve standardized measurement of these indicators in rural emergency departments in the province and to generalize the information gathered in this study to other health care environments.
Sulis, Andrea; Buscarinu, Paola; Soru, Oriana; Sechi, Giovanni M
2014-04-22
The definition of a synthetic index for classifying the quality of water bodies is a key aspect in integrated planning and management of water resource systems. In previous works [1,2], a water system optimization modeling approach that requires a single quality index for stored water in reservoirs has been applied to a complex multi-reservoir system. Considering the same modeling field, this paper presents an improved quality index estimated both on the basis of the overall trophic state of the water body and on the basis of the density values of the most potentially toxic Cyanobacteria. The implementation of the index into the optimization model makes it possible to reproduce the conditions limiting water use due to excessive nutrient enrichment in the water body and to the health hazard linked to toxic blooms. The analysis of an extended limnological database (1996-2012) in four reservoirs of the Flumendosa-Campidano system (Sardinia, Italy) provides useful insights into the strengths and limitations of the proposed synthetic index.
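The paper's actual index formula is not given in the abstract; the sketch below only illustrates the general idea of combining a trophic-state term with a Cyanobacteria-density term into a single synthetic index. The scaling, threshold, and aggregation rule are assumptions.

```python
def quality_index(trophic_state, cyano_density, density_threshold=20000.0):
    """Combine a trophic state score (0 = oligotrophic ... 1 = hypertrophic) with the
    density of potentially toxic Cyanobacteria (cells/mL) into a single 0-1 index,
    where higher values mean stronger limitations on water use."""
    # Density term saturates at the threshold associated with a health hazard.
    density_term = min(cyano_density / density_threshold, 1.0)
    # The worse of the two pressures drives the index (a conservative choice).
    return max(trophic_state, density_term)

print(quality_index(trophic_state=0.4, cyano_density=5000))    # 0.4 -> nutrient-driven limitation
print(quality_index(trophic_state=0.4, cyano_density=30000))   # 1.0 -> bloom-driven limitation
```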
Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.
Chen, Qingyu; Zobel, Justin; Zhang, Xiuzhen; Verspoor, Karin
2016-01-01
First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All Data are available as described in the supplementary material.
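The abstract names the feature sources but not the specific classifier; the sketch below shows one plausible setup, a random forest trained on per-pair features and evaluated with cross-validation. The synthetic data stand in for the expert-curated duplicate pairs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in for curated pairs: each row holds features computed for a pair of
# sequence records (e.g. metadata similarity, sequence identity, alignment quality),
# and the label marks whether the pair is a duplicate.
n_pairs, n_features = 1000, 22
X = rng.random((n_pairs, n_features))
y = (0.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, n_pairs) > 0.6).astype(int)

# Binary duplicate / non-duplicate model evaluated with 5-fold cross-validation.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f}")
```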
Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen
2016-04-01
To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used in the authors' clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.
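As a rough illustration of the data-transfer check described above, the sketch below retrieves per-beam monitor units from an R&V database via ODBC/SQL (using pyodbc) and compares them with TPS values. The connection string, table, and column names are assumptions; the actual R&V schema is not described in the abstract.

```python
import pyodbc

# Illustrative DSN and schema; replace with the R&V system's real connection details.
RV_CONN_STR = "DSN=RecordAndVerify;UID=qa_reader;PWD=secret"

def fetch_rv_beam_mu(plan_id):
    """Retrieve per-beam monitor units (MU) from the R&V database via ODBC/SQL."""
    conn = pyodbc.connect(RV_CONN_STR)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT beam_name, monitor_units FROM treatment_beams WHERE plan_id = ?",
            plan_id,
        )
        return {name: float(mu) for name, mu in cur.fetchall()}
    finally:
        conn.close()

def compare_mu(tps_mu, rv_mu, tolerance=0.02):
    """Flag beams whose TPS and R&V monitor units differ by more than the relative tolerance."""
    issues = []
    for beam, mu in tps_mu.items():
        rv = rv_mu.get(beam)
        if rv is None or abs(rv - mu) / mu > tolerance:
            issues.append(beam)
    return issues

# Example usage: compare TPS values (read elsewhere from the plan files) with R&V values.
# print(compare_mu({"Beam1": 125.4, "Beam2": 98.7}, fetch_rv_beam_mu("PLAN001")))
```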
78 FR 28848 - Information Collection Activities; Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
Notice of proposed information collection: the Agency for Healthcare Research and Quality's (AHRQ) Hospital Survey on Patient Safety Culture (SOPS) Comparative Database; OMB No. 0935-…
Hirabayashi, Satoshi; Nowak, David J
2016-08-01
Trees remove air pollutants through dry deposition processes depending upon forest structure, meteorology, and air quality that vary across space and time. Employing nationally available forest, weather, air pollution and human population data for 2010, computer simulations were performed for deciduous and evergreen trees with varying leaf area index for rural and urban areas in every county in the conterminous United States. The results populated a national database of annual air pollutant removal, concentration changes, and reductions in adverse health incidences and costs for NO2, O3, PM2.5 and SO2. The developed database enabled a first order approximation of air quality and associated human health benefits provided by trees with any forest configurations anywhere in the conterminous United States over time. A comprehensive national database of tree effects on air quality and human health in the United States was developed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Roudier, B; Davit, B; Schütz, H; Cardot, J-M
2015-01-01
The in vitro-in vivo correlation (IVIVC) (Food and Drug Administration 1997) aims to predict the in vivo performance of a pharmaceutical formulation based on its in vitro characteristics. It is a complex process that (i) incorporates a large amount of information in a gradual and incremental way and (ii) requires information from different areas (formulation, analytical, clinical) and the associated dedicated treatments (statistics, modeling, simulation). This results in many studies that are initiated and integrated into the specifications (quality target product profile, QTPP). The latter defines the appropriate experimental designs (quality by design, QbD) (Food and Drug Administration 2011, 2012), whose main objectives are the determination (i) of the key factors of development and manufacturing (critical process parameters, CPPs) and (ii) of critical points of a physicochemical nature relating to the active ingredients (APIs), i.e. critical quality attributes (CQAs), which may have implications in terms of efficacy, safety, and harmlessness for the patient if they are not taken into account. These processes generate a very large amount of data that needs to be structured. In this context, the storage of information in a database (DB) and the management of this database (database management system, DBMS) become an important issue for the management of IVIVC projects and, more generally, for the development of new pharmaceutical forms. This article describes the implementation of a prototype object-oriented database (OODB), considered as a decision-support tool, responding in a structured and consistent way to the issues of project management of IVIVC (including bioequivalence and bioavailability) (Food and Drug Administration 2003) necessary for the implementation of the QTPP.
Bull, Janet; Zafar, S Yousuf; Wheeler, Jane L; Harker, Matthew; Gblokpor, Agbessi; Hanson, Laura; Hulihan, Deirdre; Nugent, Rikki; Morris, John; Abernethy, Amy P
2010-08-01
Outpatient palliative care, an evolving delivery model, seeks to improve continuity of care across settings and to increase access to services in hospice and palliative medicine (HPM). It can provide a critical bridge between inpatient palliative care and hospice, filling the gap in community-based supportive care for patients with advanced life-limiting illness. Low capacities for data collection and quantitative research in HPM have impeded assessment of the impact of outpatient palliative care. In North Carolina, a regional database for community-based palliative care has been created through a unique partnership between a HPM organization and academic medical center. This database flexibly uses information technology to collect patient data, entered at the point of care (e.g., home, inpatient hospice, assisted living facility, nursing home). HPM physicians and nurse practitioners collect data; data are transferred to an academic site that assists with analyses and data management. Reports to community-based sites, based on data they provide, create a better understanding of local care quality. The data system was developed and implemented over a 2-year period, starting with one community-based HPM site and expanding to four. Data collection methods were collaboratively created and refined. The database continues to grow. Analyses presented herein examine data from one site and encompass 2572 visits from 970 new patients, characterizing the population, symptom profiles, and change in symptoms after intervention. A collaborative regional approach to HPM data can support evaluation and improvement of palliative care quality at the local, aggregated, and statewide levels.
Atsuta, Yoshiko
2016-01-01
Collection and analysis of information on diseases and post-transplant courses of allogeneic hematopoietic stem cell transplant recipients have played important roles in improving therapeutic outcomes in hematopoietic stem cell transplantation. Efficient, high-quality data collection systems are essential. The introduction of the Second-Generation Transplant Registry Unified Management Program (TRUMP2) is intended to improve data quality and make data management more efficient. The TRUMP2 system will also expand the possible uses of the data, as it is capable of building a more complex relational database. Constructing an accessible system for adequate data utilization by researchers would promote greater research activity. Study approval and management processes and authorship guidelines also need to be organized within this context. Quality control of processes for data manipulation and analysis will also affect study outcomes. Shared scripts have been introduced to define variables according to standard definitions, improving quality control and the efficiency of registry studies using TRUMP data.
Bartholomay, Roy C.; Maimer, Neil V.; Wehnke, Amy J.
2014-01-01
Water-quality activities and water-level measurements by the personnel of the U.S. Geological Survey (USGS) Idaho National Laboratory (INL) Project Office coincide with the USGS mission of appraising the quantity and quality of the Nation’s water resources. The activities are carried out in cooperation with the U.S. Department of Energy (DOE) Idaho Operations Office. Results of the water-quality and hydraulic head investigations are presented in various USGS publications or in refereed scientific journals and the data are stored in the National Water Information System (NWIS) database. The results of the studies are used by researchers, regulatory and managerial agencies, and interested civic groups. In the broadest sense, quality assurance refers to doing the job right the first time. It includes the functions of planning for products, review and acceptance of the products, and an audit designed to evaluate the system that produces the products. Quality control and quality assurance differ in that quality control ensures that things are done correctly given the “state-of-the-art” technology, and quality assurance ensures that quality control is maintained within specified limits.
Toward an integrated knowledge environment to support modern oncology.
Blake, Patrick M; Decker, David A; Glennon, Timothy M; Liang, Yong Michael; Losko, Sascha; Navin, Nicholas; Suh, K Stephen
2011-01-01
Around the world, teams of researchers continue to develop a wide range of systems to capture, store, and analyze data including treatment, patient outcomes, tumor registries, next-generation sequencing, single-nucleotide polymorphism, copy number, gene expression, drug chemistry, drug safety, and toxicity. Scientists mine, curate, and manually annotate growing mountains of data to produce high-quality databases, while clinical information is aggregated in distant systems. Databases are currently scattered, and relationships between variables coded in disparate datasets are frequently invisible. The challenge is to evolve oncology informatics from a "systems" orientation of standalone platforms and silos into an "integrated knowledge environment" that will connect "knowable" research data with patient clinical information. The aim of this article is to review progress toward an integrated knowledge environment to support modern oncology with a focus on supporting scientific discovery and improving cancer care.
NASA Astrophysics Data System (ADS)
Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.
2018-04-01
In the project of China's First National Geographic Conditions Census, millions of sample data have been collected all over the country for interpreting land cover based on remote sensing images; the number of data files exceeds 12,000,000 and has continued to grow in the follow-on project of National Geographic Conditions Monitoring. At present, using a database such as Oracle to store the big data is the most effective method; however, an applicable method for managing and applying the sample data is even more important. This paper studies a database construction method based on a relational database combined with a distributed file system, in which the vector data and the file data are saved in different physical locations. The key issues and their solutions are discussed. On this basis, the paper studies how the sample data can be applied and analyzes several use cases, which could lay the foundation for the application of the sample data. In particular, sample data located in Shaanxi province are selected to verify the method. At the same time, taking as an example the 10 first-level classes defined in the land cover classification system, it analyzes the spatial distribution and density characteristics of all kinds of sample data. The results verify that the database construction method based on a relational database with a distributed file system is useful and applicable for searching, analyzing, and promoting the application of the sample data. Furthermore, sample data collected in the project of China's First National Geographic Conditions Census could be useful in Earth observation and land cover quality assessment.
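A minimal sketch of the storage pattern described above: sample attributes are kept in a relational table while the bulky photo or document files stay on the distributed file system, referenced by path. The schema, class names, and path layout are assumptions.

```python
import sqlite3

# The relational database stores the vector attributes of each sample (class, province,
# coordinates) plus a path pointing to the photo/document file on the distributed file system.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample (
        sample_id   INTEGER PRIMARY KEY,
        cover_class TEXT,      -- first-level land cover class
        province    TEXT,
        lon         REAL,
        lat         REAL,
        dfs_path    TEXT       -- file location on the distributed file system
    )""")
conn.execute(
    "INSERT INTO sample VALUES (?, ?, ?, ?, ?, ?)",
    (1, "cultivated land", "Shaanxi", 108.95, 34.27, "/dfs/samples/shaanxi/000001.jpg"),
)

# Query the lightweight metadata in the relational store, then read the heavy files from the DFS.
for row in conn.execute(
    "SELECT sample_id, dfs_path FROM sample WHERE province = ? AND cover_class = ?",
    ("Shaanxi", "cultivated land"),
):
    print(row)
```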
Quality assessment and improvement of nationwide cancer registration system in Taiwan: a review.
Chiang, Chun-Ju; You, San-Lin; Chen, Chien-Jen; Yang, Ya-Wen; Lo, Wei-Cheng; Lai, Mei-Shu
2015-03-01
Cancer registration provides core information for cancer surveillance and control. The population-based Taiwan Cancer Registry was implemented in 1979. After the Cancer Control Act was promulgated in 2003, the completeness (97%) and data quality of the cancer registry database have reached an excellent level. Hospitals with 50 or more beds, which provide outpatient and hospitalized cancer care, are recruited to report 20 items of information on all newly diagnosed cancers to the central registry office (called the short-form database). The Taiwan Cancer Registry is organized and funded by the Ministry of Health and Welfare. The National Taiwan University has been contracted to operate the registry and has organized an advisory board to standardize definitions of terminology, coding and procedures of the registry's reporting system since 1996. To monitor cancer care patterns and evaluate cancer treatment outcomes, the central cancer registry has been reformed since 2002 to include detailed items on the stage at diagnosis and the first course of treatment (called the long-form database). There are 80 hospitals, which account for >90% of total cancer cases, involved in the long-form registration. The Taiwan Cancer Registry has run smoothly for >30 years and provides an essential foundation for academic research and cancer control policy in Taiwan. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
National Databases for Neurosurgical Outcomes Research: Options, Strengths, and Limitations.
Karhade, Aditya V; Larsen, Alexandra M G; Cote, David J; Dubois, Heloise M; Smith, Timothy R
2017-08-05
Quality improvement, value-based care delivery, and personalized patient care depend on robust clinical, financial, and demographic data streams of neurosurgical outcomes. The neurosurgical literature lacks a comprehensive review of large national databases. To assess the strengths and limitations of various resources for outcomes research in neurosurgery. A review of the literature was conducted to identify surgical outcomes studies using national data sets. The databases were assessed for the availability of patient demographics and clinical variables, longitudinal follow-up of patients, strengths, and limitations. The number of unique patients contained within each data set ranged from thousands (Quality Outcomes Database [QOD]) to hundreds of millions (MarketScan). Databases with both clinical and financial data included PearlDiver, Premier Healthcare Database, Vizient Clinical Data Base and Resource Manager, and the National Inpatient Sample. Outcomes collected by databases included patient-reported outcomes (QOD); 30-day morbidity, readmissions, and reoperations (National Surgical Quality Improvement Program); and disease incidence and disease-specific survival (Surveillance, Epidemiology, and End Results-Medicare). The strengths of large databases included large numbers of rare pathologies and multi-institutional nationally representative sampling; the limitations of these databases included variable data veracity, variable data completeness, and missing disease-specific variables. The improvement of existing large national databases and the establishment of new registries will be crucial to the future of neurosurgical outcomes research. Copyright © 2017 by the Congress of Neurological Surgeons
Bodner, Martin; Bastisch, Ingo; Butler, John M; Fimmers, Rolf; Gill, Peter; Gusmão, Leonor; Morling, Niels; Phillips, Christopher; Prinz, Mechthild; Schneider, Peter M; Parson, Walther
2016-09-01
The statistical evaluation of autosomal Short Tandem Repeat (STR) genotypes is based on allele frequencies. These are empirically determined from sets of randomly selected human samples, compiled into STR databases that have been established in the course of population genetic studies. There is currently no agreed procedure of performing quality control of STR allele frequency databases, and the reliability and accuracy of the data are largely based on the responsibility of the individual contributing research groups. It has been demonstrated with databases of haploid markers (EMPOP for mitochondrial mtDNA, and YHRD for Y-chromosomal loci) that centralized quality control and data curation is essential to minimize error. The concepts employed for quality control involve software-aided likelihood-of-genotype, phylogenetic, and population genetic checks that allow the researchers to compare novel data to established datasets and, thus, maintain the high quality required in forensic genetics. Here, we present STRidER (http://strider.online), a publicly available, centrally curated online allele frequency database and quality control platform for autosomal STRs. STRidER expands on the previously established ENFSI DNA WG STRbASE and applies standard concepts established for haploid and autosomal markers as well as novel tools to reduce error and increase the quality of autosomal STR data. The platform constitutes a significant improvement and innovation for the scientific community, offering autosomal STR data quality control and reliable STR genotype estimates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Quantifying Data Quality for Clinical Trials Using Electronic Data Capture
Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.
2008-01-01
Background: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958
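The error-rate unit used above is straightforward to reproduce; the snippet below converts an audit's raw discrepancy count into errors per 10,000 fields, with illustrative counts chosen to land near the quoted 14.3.

```python
def errors_per_10000(field_errors, fields_inspected):
    """Express an audit error count as errors per 10,000 data fields, the unit
    used for the source-to-database rate quoted above."""
    return field_errors / fields_inspected * 10_000

# Illustrative audit: 42 discrepancies found while inspecting 29,400 fields.
print(round(errors_per_10000(42, 29_400), 1))   # ~14.3 errors per 10,000 fields
```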
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When multiple complicated models, multisource structured and unstructured data, and complex requirements analyses are involved, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process in hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for jointly regulating the water quantity and water quality of a group of lakes in Wuhan, China, is established.
No-reference quality assessment based on visual perception
NASA Astrophysics Data System (ADS)
Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao
2014-11-01
The visual quality assessment of images/videos is an ongoing hot research topic, which has become more and more important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. For FR models, IQA algorithms interpret image quality as fidelity or similarity with a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, two main features of the HVS: a typical HVS captures scenes by sparse coding and uses experienced knowledge to perceive objects. In this paper, we propose a novel IQA approach based on visual perception. Firstly, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with this model; then, the mapping between sparse codes and subjective quality scores is trained with the regression technique of the least squares support vector machine (LS-SVM), yielding a regressor that can predict image quality; finally, the visual quality metric of an image is predicted with the trained regressor. We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database; the distortion types present in the database are: 227 JPEG2000 images, 233 JPEG images, 174 White Noise images, 174 Gaussian Blur images, and 174 Fast Fading images. The database includes a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach not only can assess the quality of many kinds of distorted images, but also exhibits superior accuracy and monotonicity.
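LS-SVM is not part of common Python libraries, so the sketch below uses an epsilon-SVR with an RBF kernel as a stand-in for the regression step that maps sparse-code features to DMOS, and reports Spearman correlation as a monotonicity measure. The feature vectors and scores are synthetic placeholders, not LIVE data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic stand-ins: each image is represented by a sparse-code feature vector and
# has a subjective DMOS score (real features would come from a sparse-coding front end).
X = rng.random((400, 64))
dmos = 20 + 60 * X[:, :8].mean(axis=1) + rng.normal(0, 2, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, dmos, test_size=0.25, random_state=1)

# Regression from sparse codes to quality scores (SVR used here instead of LS-SVM).
reg = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
pred = reg.predict(X_te)

# Monotonicity of predicted scores against subjective DMOS.
rho, _ = spearmanr(pred, y_te)
print(f"Spearman correlation: {rho:.3f}")
```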
Inventory of Exposure-Related Data Systems Sponsored By Federal Agencies
1992-05-01
Index excerpt: Health and Nutrition Examination Survey (NHANES); National Herbicide Use Database; National Human Adipose Tissue …; National Hydrologic Benchmark Network (see National Water Quality Networks Programs). Parameters covered include inorganic compounds (arsenic, iron, lead, mercury, zinc, cadmium, chromium, copper) and pesticides (1982 and 1987 data available for 35 pesticides).
The Evaluation of Forms of Assessment Using N-Dimensional Filtering
ERIC Educational Resources Information Center
Dron, Jon; Boyne, Chris; Mitchell, Richard
2004-01-01
This paper describes the use of the CoFIND (Collaborative Filter in N Dimensions) system to evaluate two assessment styles. CoFIND is a resource database which organizes itself around its users' needs. Learners enter resources, categorize, then rate them using "qualities," aspects of resources which learners find worthwhile, the n dimensions of…
Background: Electronic health records (EHRs) are now a ubiquitous component of the US healthcare system and are attractive for secondary data analysis as they contain detailed and longitudinal clinical records on potentially millions of individuals. However, due to their relative...
NASA Astrophysics Data System (ADS)
Mallari, Lawrence Anthony Castro
This project proposes a manual specifically for remedying an ineffective Corrective Action Request System for Company ABC by providing dispositions within the company's quality procedure. A Corrective Action Request System is a corrective action tool that provides a means for employees to engage in the process improvement, problem elimination cycle. At Company ABC, Corrective Action Recommendations (CARs) are not provided with timely dispositions; CARs are being ignored due to a lack of training and awareness of Company ABC's personnel and quality procedures. In this project, Company ABC's quality management software database is scrutinized to identify the number of delinquent, non-dispositioned CARs in 2014. These CARs are correlated with the number of nonconformances generated for the same issue while the CAR is still open. Using secondary data, the primary investigator finds that nonconformances are being remediated at the operational level. However, at the administrative level, CARs are being ignored and forgotten.
Redox Conditions in Selected Principal Aquifers of the United States
McMahon, P.B.; Cowdery, T.K.; Chapelle, F.H.; Jurgens, B.C.
2009-01-01
Reduction/oxidation (redox) processes affect the quality of groundwater in all aquifer systems. Redox processes can alternately mobilize or immobilize potentially toxic metals associated with naturally occurring aquifer materials, contribute to the degradation or preservation of anthropogenic contaminants, and generate undesirable byproducts, such as dissolved manganese (Mn2+), ferrous iron (Fe2+), hydrogen sulfide (H2S), and methane (CH4). Determining the kinds of redox processes that occur in an aquifer system, documenting their spatial distribution, and understanding how they affect concentrations of natural or anthropogenic contaminants are central to assessing and predicting the chemical quality of groundwater. This Fact Sheet extends the analysis of U.S. Geological Survey authors to additional principal aquifer systems by applying a framework developed by the USGS to a larger set of water-quality data from the USGS national water databases. For a detailed explanation, see the 'Introduction' in the Fact Sheet.
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Databases applicable to quantitative hazard/risk assessment-Towards a predictive systems toxicology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waters, Michael; Jackson, Marcus
2008-11-15
The Workshop on The Power of Aggregated Toxicity Data addressed the requirement for distributed databases to support quantitative hazard and risk assessment. The authors have conceived and constructed with federal support several databases that have been used in hazard identification and risk assessment. The first of these databases, the EPA Gene-Tox Database, was developed for the EPA Office of Toxic Substances by the Oak Ridge National Laboratory, and is currently hosted by the National Library of Medicine. This public resource is based on the collaborative evaluation, by government, academia, and industry, of short-term tests for the detection of mutagens and presumptive carcinogens. The two-phased evaluation process resulted in more than 50 peer-reviewed publications on test system performance and a qualitative database on thousands of chemicals. Subsequently, the graphic and quantitative EPA/IARC Genetic Activity Profile (GAP) Database was developed in collaboration with the International Agency for Research on Cancer (IARC). A chemical database driven by consideration of the lowest effective dose, GAP has served IARC for many years in support of hazard classification of potential human carcinogens. The Toxicological Activity Profile (TAP) prototype database was patterned after GAP and utilized acute, subchronic, and chronic data from the Office of Air Quality Planning and Standards. TAP demonstrated the flexibility of the GAP format for air toxics, water pollutants and other environmental agents. The GAP format was also applied to developmental toxicants and was modified to represent quantitative results from the rodent carcinogen bioassay. More recently, the authors have constructed: 1) the NIEHS Genetic Alterations in Cancer (GAC) Database which quantifies specific mutations found in cancers induced by environmental agents, and 2) the NIEHS Chemical Effects in Biological Systems (CEBS) Knowledgebase that integrates genomic and other biological data including dose-response studies in toxicology and pathology. Each of the public databases has been discussed in prior publications. They will be briefly described in the present report from the perspective of aggregating datasets to augment the data and information contained within them.
[Review of meta-analysis research on exercise in South Korea].
Song, Youngshin; Gang, Moonhee; Kim, Sun Ae; Shin, In Soo
2014-10-01
The purpose of this study was to evaluate the quality of meta-analysis regarding exercise using Assessment of Multiple Systematic Reviews (AMSTAR) as well as to compare effect size according to outcomes. Electronic databases including the Korean Studies Information Service System (KISS), the National Assembly Library and the DBpia, HAKJISA and RISS4U for the dates 1990 to January 2014 were searched for 'meta-analysis' and 'exercise' in the fields of medical, nursing, physical therapy and physical exercise in Korea. AMSTAR was scored for quality assessment of the 33 articles included in the study. Data were analyzed using descriptive statistics, t-test, ANOVA and χ²-test. The mean score for AMSTAR evaluations was 4.18 (SD=1.78) and about 67% were classified at the low-quality level and 30% at the moderate-quality level. The scores of quality were statistically different by field of research, number of participants, number of databases, financial support and approval by IRB. The effect size that presented in individual studies were different by type of exercise in the applied intervention. This critical appraisal of meta-analysis published in various field that focused on exercise indicates that a guideline such as the PRISMA checklist should be strongly recommended for optimum reporting of meta-analysis across research fields.
Martin, J N; Brooks, J C; Thompson, L D; Savell, J W; Harris, K B; May, L L; Haneklaus, A N; Schutz, J L; Belk, K E; Engle, T; Woerner, D R; Legako, J F; Luna, A M; Douglass, L W; Douglass, S E; Howe, J; Duvall, M; Patterson, K Y; Leheska, J L
2013-11-01
Beef nutrition is important to the worldwide beef industry. The objective of this study was to analyze proximate composition of eight beef rib and plate cuts to update the USDA National Nutrient Database for Standard Reference (SR). Furthermore, this study aimed to determine the influence of USDA Quality Grade on the separable components and proximate composition of the examined retail cuts. Carcasses (n=72) representing a composite of Yield Grade, Quality Grade, gender and genetic type were identified from six regions across the U.S. Beef plates and ribs (IMPS #109 and 121C and D) were collected from the selected carcasses and shipped to three university meat laboratories for storage, retail fabrication, cooking, and dissection and analysis of proximate composition. These data provide updated information regarding the nutrient content of beef and emphasize the influence of common classification systems (Yield Grade and Quality Grade) on the separable components, cooking yield, and proximate composition of retail beef cuts. Copyright © 2013 Elsevier Ltd. All rights reserved.
Wakefield, Daniel V; Manole, Bogdan A; Jethanandani, Amit; May, Michael E; Marcrom, Samuel R; Farmer, Michael R; Ballo, Matthew T; VanderWalde, Noam A
2016-01-01
Radiation oncology (RO) residency applicants commonly use Internet resources for information on residency programs. The purpose of this study is to assess the accessibility, availability, and quality of online information for RO graduate medical education. Accessibility of online information was determined by surveying databases for RO residency programs within the Fellowship Residency Electronic Interactive Data Access System (FREIDA) of the American Medical Association, the Accreditation Council for Graduate Medical Education (ACGME), and Google search. As of June 30, 2015, websites were assessed for presence, accessibility, and overall content availability using a 55-item list of desired features derived from 13 program features important to previously surveyed applicants. Quality scoring of available content was performed based on previously published Likert scale variables deemed desirable to RO applicants. Quality score labels were given based on the percentage of desired information presented. The FREIDA and ACGME databases listed 89% and 98% of program websites, respectively, but only 56% and 52% of links routed to an RO department-specific website, respectively. Google search obtained websites for 98% of programs, and 95% of links routed to RO department-specific websites. The majority of websites had program descriptions (98%) and information on staff. However, resident information was more limited (total number [42%], education [47%], previous residents [28%], positions available [35%], contact information [13%]). Based on quality scoring, program websites contained only 47% of desired information on average. Only 13% of programs had superior websites containing 80% or more of desired information. Compared with Google, the FREIDA and ACGME program databases provide limited access to RO residency websites. The overall information availability and quality of information within RO residency websites varies widely. Applicants and programs may benefit from improved content accessibility and quality from US RO program websites in the residency application process. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
The influence of performance-based payment on childhood immunisation coverage.
Merilind, Eero; Salupere, Rauno; Västra, Katrin; Kalda, Ruth
2015-06-01
Pay-for-performance, also called the quality system (QS) in Estonia, was implemented in 2006, and one indicator of achievement is the childhood immunisation coverage rate. According to the WHO, vaccination coverage in Europe for diphtheria, tetanus and pertussis, and measles in children aged around one year should meet or exceed 90 per cent. The study was conducted using a database from the Estonian Health Insurance Fund. The study compared the childhood immunisation coverage rates of all Estonian family physicians in two groups, joined and not joined to the quality system, during the observation period 2006-2012. Immunisation coverage was calculated as the percentage of persons in the target age group who received a vaccine dose by a given age. The target level of immunisations in Estonia is set at 90 per cent and higher. Immunisation coverage rates of family doctors (FD) in Estonia showed significant differences between the two groups of doctors: joined to the quality system and not joined. Doctors joined to the quality system met the 90 per cent vaccination criterion more frequently compared to doctors not joined to the quality system. Doctors not joined to the quality system were below the 90 per cent vaccination criterion for all vaccinations listed in the Estonian State Immunisation Schedule. Pay-for-performance as a financial incentive encourages higher levels of childhood immunisation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Performance indicators used to assess the quality of primary dental care.
González, Grisel Zacca; Klazinga, Niek; ten Asbroek, Guus; Delnoij, Diana M
2006-12-01
An appropriate quality of medical care including dental care should be an objective of every government that aims to improve the oral health of its population. To determine performance indicators that could be used to assess the quality of primary dental care at different levels of a health care system, the sources for data collection and finally, the dimensions of quality measured by these indicators. An explorative study of the international literature was conducted using medical databases, journals and books, and official websites of organisations and associations. This resulted in a set of 57 indicators, which were classified into the following dimensions for each intended user group: For patients: health outcomes and subjective indicators; for professionals: their performance and the rates of success, failure and complications; for health care system managers and policymakers: their resources, finances and health care utilisation. A set of 57 performance indicators were identified to assess the quality of primary dental care at the levels of patients, professionals and the health care system. These indicators could be used by managers and decision-makers at any level of the health care system according to the characteristics of the services.
EMR Database Upgrade from MUMPS to CACHE: Lessons Learned.
Alotaibi, Abduallah; Emshary, Mshary; Househ, Mowafa
2014-01-01
Over the past few years, Saudi hospitals have been implementing and upgrading Electronic Medical Record Systems (EMRs) to ensure secure data transfer and exchange between EMRs. This paper focuses on the process and lessons learned in upgrading the MUMPS database to the newer Caché database to ensure the integrity of electronic data transfer within a local Saudi hospital. This paper examines the steps taken by the departments concerned, their action plans and how the change process was managed. Results show that user satisfaction was achieved after the upgrade was completed. The system was stable and offered better healthcare quality to patients as a result of the data exchange. Hardware infrastructure upgrades improved scalability, and the software upgrade to Caché improved stability. The overall performance was enhanced and new functions (e.g., CPOE) were added during the upgrade. The lessons learned were: 1) Involve higher management; 2) Research multiple solutions available in the market; 3) Plan for a variety of implementation scenarios.
Benchmarking and audit of breast units improves quality of care
van Dam, P.A.; Verkinderen, L.; Hauspy, J.; Vermeulen, P.; Dirix, L.; Huizing, M.; Altintas, S.; Papadimitriou, K.; Peeters, M.; Tjalma, W.
2013-01-01
Quality Indicators (QIs) are measures of health care quality that make use of readily available hospital inpatient administrative data. Assessment of quality of care can be performed at different levels: national, regional, on a hospital basis or on an individual basis. It can be a mandatory or voluntary system. In all cases, the development of an adequate database for data extraction and feedback of the findings is of paramount importance. In the present paper we performed a Medline search on “QIs and breast cancer” and “benchmarking and breast cancer care”, and we have added some data from personal experience. The current data clearly show that the use of QIs for breast cancer care, regular internal and external audit of performance of breast units, and benchmarking are effective to improve quality of care. Adherence to guidelines improves markedly (particularly regarding adjuvant treatment) and there are data emerging showing that this results in a better outcome. As quality assurance benefits patients, it will be a challenge for the medical and hospital community to develop affordable quality control systems that do not lead to excessive workload. PMID:24753926
Performance of Malaysian Medical Journals
Abrizah, Abdullah
2016-01-01
Indexation status matters for scholarly journal prestige and trust. The performance of Malaysian medical journals at the international level is gauged through the global citation databases, and at the national level through MyCite, a national citation indexing system. The performance indicators include journal publication productivity, the citations the journals garner, and their scores on other bibliometric indices such as the journal impact factor (IF) and h-index. There is a growing consciousness amongst journal editorial boards of the need to improve quality and increase the chances of getting indexed in MyCite. Although it is now possible to gauge journal performance within Malaysia through MyCite, the government and public are concerned about journal performance in international databases. Knowing the performance of journals in MyCite will help the editors and publishers to improve the quality and visibility of Malaysian journals and strategise to bring their journals to the international level of indexation. PMID:27547108
Natural language processing and the representation of clinical data.
Sager, N; Lyman, M; Bucknall, C; Nhan, N; Tick, L J
1994-01-01
OBJECTIVE: Develop a representation of clinical observations and actions and a method of processing free-text patient documents to facilitate applications such as quality assurance. DESIGN: The Linguistic String Project (LSP) system of New York University utilizes syntactic analysis, augmented by a sublanguage grammar and an information structure that are specific to the clinical narrative, to map free-text documents into a database for querying. MEASUREMENTS: Information precision (I-P) and information recall (I-R) were measured for queries for the presence of 13 asthma-health-care quality assurance criteria in a database generated from 59 discharge letters. RESULTS: I-P, using counts of major errors only, was 95.7% for the 28-letter training set and 98.6% for the 31-letter test set. I-R, using counts of major omissions only, was 93.9% for the training set and 92.5% for the test set. PMID:7719796
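Assuming the usual definitions of precision and recall over retrieved assertions (the paper's exact operational definitions and raw counts are not given in the abstract), figures of the kind reported above can be reproduced as follows; the counts are illustrative.

```python
def information_precision(correct_retrieved, total_retrieved):
    """Fraction of database assertions retrieved by a query that are correct
    (major errors counted against all retrieved assertions)."""
    return correct_retrieved / total_retrieved

def information_recall(correct_retrieved, total_relevant):
    """Fraction of the assertions actually present in the letters that the query
    retrieved (major omissions counted against all relevant assertions)."""
    return correct_retrieved / total_relevant

# Illustrative tallies only; the paper reports percentages, not raw counts.
print(f"I-P: {information_precision(137, 139):.1%}")   # ~98.6%
print(f"I-R: {information_recall(135, 146):.1%}")      # ~92.5%
```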
Vanhoorne, Bart; Decock, Wim; Vranken, Sofie; Lanssens, Thomas; Dekeyzer, Stefanie; Verfaille, Kevin; Horton, Tammy; Kroh, Andreas; Hernandez, Francisco; Mees, Jan
2018-01-01
The World Register of Marine Species (WoRMS) celebrated its 10th anniversary in 2017. WoRMS is a unique database: there is no comparable global database for marine species, which is driven by a large, global expert community, is supported by a Data Management Team and can rely on a permanent host institute, dedicated to keeping WoRMS online. Over the past ten years, the content of WoRMS has grown steadily, and the system currently contains more than 242,000 accepted marine species. WoRMS has not yet reached completeness: approximately 2,000 newly described species per year are added, and editors also enter the remaining missing older names–both accepted and unaccepted–an effort amounting to approximately 20,000 taxon name additions per year. WoRMS is used extensively, through different channels, indicating that it is recognized as a high-quality database on marine species information. It is updated on a daily basis by its Editorial Board, which currently consists of 490 taxonomic and thematic experts located around the world. Owing to its unique qualities, WoRMS has become a partner in many large-scale initiatives including OBIS, LifeWatch and the Catalogue of Life, where it is recognized as a high-quality and reliable source of information for marine taxonomy. PMID:29624577
Guerette, P.; Robinson, B.; Moran, W. P.; Messick, C.; Wright, M.; Wofford, J.; Velez, R.
1995-01-01
Community-based multi-disciplinary care of chronically ill individuals frequently requires the efforts of several agencies and organizations. The Community Care Coordination Network (CCCN) is an effort to establish a community-based clinical database and electronic communication system to facilitate the exchange of pertinent patient data among primary care, community-based and hospital-based providers. In developing a primary care based electronic record, a method is needed to update records from the field or remote sites and agencies and yet maintain data quality. Scannable data entry with fixed fields, optical character recognition and verification was compared to traditional keyboard data entry to determine the relative efficiency of each method in updating the CCCN database. PMID:8563414
Zhang, Guang Lan; Riemer, Angelika B.; Keskin, Derin B.; Chitkushev, Lou; Reinherz, Ellis L.; Brusic, Vladimir
2014-01-01
High-risk human papillomaviruses (HPVs) are the causes of many cancers, including cervical, anal, vulvar, vaginal, penile and oropharyngeal. To facilitate diagnosis, prognosis and characterization of these cancers, it is necessary to make full use of the immunological data on HPV available through publications, technical reports and databases. These data vary in granularity, quality and complexity. The extraction of knowledge from the vast amount of immunological data using data mining techniques remains a challenging task. To support integration of data and knowledge in virology and vaccinology, we developed a framework called KB-builder to streamline the development and deployment of web-accessible immunological knowledge systems. The framework consists of seven major functional modules, each facilitating a specific aspect of the knowledgebase construction process. Using KB-builder, we constructed the Human Papillomavirus T cell Antigen Database (HPVdb). It contains 2781 curated antigen entries of antigenic proteins derived from 18 genotypes of high-risk HPV and 18 genotypes of low-risk HPV. The HPVdb also catalogs 191 verified T cell epitopes and 45 verified human leukocyte antigen (HLA) ligands. Primary amino acid sequences of HPV antigens were collected and annotated from the UniProtKB. T cell epitopes and HLA ligands were collected from data mining of scientific literature and databases. The data were subject to extensive quality control (redundancy elimination, error detection and vocabulary consolidation). A set of computational tools for an in-depth analysis, such as sequence comparison using BLAST search, multiple alignments of antigens, classification of HPV types based on cancer risk, T cell epitope/HLA ligand visualization, T cell epitope/HLA ligand conservation analysis and sequence variability analysis, has been integrated within the HPVdb. Predicted Class I and Class II HLA binding peptides for 15 common HLA alleles are included in this database as putative targets. HPVdb is a knowledge-based system that integrates curated data and information with tailored analysis tools to facilitate data mining for HPV vaccinology and immunology. To our best knowledge, HPVdb is a unique data source providing a comprehensive list of HPV antigens and peptides. Database URL: http://cvc.dfci.harvard.edu/hpv/ PMID:24705205
Bio-optical data integration based on a 4 D database system approach
NASA Astrophysics Data System (ADS)
Imai, N. N.; Shimabukuro, M. H.; Carmo, A. F. C.; Alcantara, E. H.; Rodrigues, T. W. P.; Watanabe, F. S. Y.
2015-04-01
Bio-optical characterization of water bodies requires spatio-temporal data about Inherent Optical Properties and Apparent Optical Properties, which allow comprehension of the underwater light field and support the development of models for monitoring water quality. Measurements are taken to represent optical properties along a column of water, so the spectral data must be related to depth. However, the spatial positions of measurement may differ because collecting instruments vary, and the records do not necessarily refer to the same wavelengths. An additional difficulty is that distinct instruments store data in different formats. A data integration approach is needed to make these large, multi-source data sets suitable for analysis, so that semi-empirical models can be evaluated, even automatically, preceded by preliminary quality control tasks. In this work we present a solution for this scenario based on a spatial (geographic) database approach, adopting an object-relational Database Management System (DBMS) because of its ability to represent all data collected in the field, together with data obtained by laboratory analysis and remote sensing images taken at the time of field data collection. This data integration approach leads to a 4D representation, since its coordinate system includes 3D spatial coordinates (planimetric position and depth) and the time at which each datum was taken. The PostgreSQL DBMS, extended by the PostGIS module, was adopted to provide the ability to manage spatial/geospatial data. A prototype was developed that offers the main tools an analyst needs to prepare the data sets for analysis.
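The sketch below, in Python with psycopg2, shows one plausible way to hold such a 4D record in a PostGIS-enabled PostgreSQL database: a 3D point (planimetric position plus depth), an acquisition timestamp and the spectral measurement. The table name, columns, connection settings and values are hypothetical; the paper does not publish its schema.

    import psycopg2

    conn = psycopg2.connect("dbname=biooptics user=analyst")  # assumed connection settings
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE IF NOT EXISTS bio_optical_profile (
            id          serial PRIMARY KEY,
            geom        geometry(PointZ, 4326),  -- planimetric position + depth (needs PostGIS)
            acquired_at timestamptz NOT NULL,    -- the fourth dimension
            wavelength  numeric NOT NULL,        -- nm
            value       numeric NOT NULL         -- measured optical property
        );
    """)
    cur.execute(
        "INSERT INTO bio_optical_profile (geom, acquired_at, wavelength, value) "
        "VALUES (ST_SetSRID(ST_MakePoint(%s, %s, %s), 4326), %s, %s, %s);",
        (-51.45, -20.78, -3.5, "2014-05-12 10:30:00-03", 550, 0.042),
    )
    conn.commit()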
An Integrated Korean Biodiversity and Genetic Information Retrieval System
Lim, Jeongheui; Bhak, Jong; Oh, Hee-Mock; Kim, Chang-Bae; Park, Yong-Ha; Paek, Woon Kee
2008-01-01
Background On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems due to the advances of fast gene sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, which allows scientists to spend more time researching and less time collecting and maintaining data. This will lead to an increased rate of knowledge build-up and improve conservation efforts. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information are necessary in order to integrate diverse information resources, including molecular and genomic databases. Results The Korean Natural History Research Information System (NARIS) was built and serviced as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular level diversity. Currently, twelve institutes and museums in Korea are integrated by the DiGIR (Distributed Generic Information Retrieval) protocol, with Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integration of molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. Conclusion A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, including genetic resources. NARIS aims to be integral in maximizing bio-resource utilization for conservation, management, research, education, industrial applications, and integration with other bioinformation data resources. It can be found at . PMID:19091024
Development of an electronic database for Acute Pain Service outcomes
Love, Brandy L; Jensen, Louise A; Schopflocher, Donald; Tsui, Ban CH
2012-01-01
BACKGROUND: Quality assurance is increasingly important in the current health care climate. An electronic database can be used for tracking patient information and as a research tool to provide quality assurance for patient care. OBJECTIVE: An electronic database was developed for the Acute Pain Service, University of Alberta Hospital (Edmonton, Alberta) to record patient characteristics, identify at-risk populations, compare treatment efficacies and guide practice decisions. METHOD: Steps in the database development involved identifying the goals for use, relevant variables to include, and a plan for data collection, entry and analysis. Protocols were also created for data cleaning quality control. The database was evaluated with a pilot test using existing data to assess data collection burden, accuracy and functionality of the database. RESULTS: A literature review resulted in an evidence-based list of demographic, clinical and pain management outcome variables to include. Time to assess patients and collect the data was 20 min to 30 min per patient. Limitations were primarily software related, although initial data collection completion was only 65% and accuracy of data entry was 96%. CONCLUSIONS: The electronic database was found to be relevant and functional for the identified goals of data storage and research. PMID:22518364
An Integrated Information System for Supporting Quality Management Tasks
NASA Astrophysics Data System (ADS)
Beyer, N.; Helmreich, W.
2004-08-01
In a competitive environment, well defined processes become the strategic advantage of a company. Hence, targeted Quality Management ensures efficiency, transparency and, ultimately, customer satisfaction. In the particular context of a Space Test Centre, a number of specific Quality Management standards have to be applied. According to the revision of ISO 9001 during 2000, and due to the adaptation of ECSS-Q20-07, process orientation and data analysis are key tasks for ensuring and evaluating the efficiency of a company's processes. In line with these requirements, an integrated management system for accessing the necessary information to support Quality Management and other processes has been established. Some of its test-related features are presented here. Easy access to the integrated management system from any work place at IABG's Space Test Centre is ensured by means of an intranet portal. It comprises a full set of quality-related process descriptions, information on test facilities, emergency procedures, and other relevant information. The portal's web interface provides direct access to a couple of external applications. Moreover, easy updating of all information and low cost maintenance are features of this integrated information system. The timely and transparent management of non-conformances is covered by a dedicated NCR database which incorporates full documentation capability, electronic signature and e-mail notification of concerned staff. A search interface allows for queries across all documented non-conformances. Furthermore, print versions can be generated at any stage in the process, e.g. for distribution to customers. Feedback on customer satisfaction is sought through a web-based questionnaire. The process is initiated by the responsible test manager through submission of an e-mail that contains a hyperlink to a secure website, asking the customer to complete the brief online form, which is directly fed to a database for subsequent evaluation by the Quality Manager. All such information can be processed and presented in an appropriate manner for internal or external audits, as well as for regular management reviews.
Database for the degradation risk assessment of groundwater resources (Southern Italy)
NASA Astrophysics Data System (ADS)
Polemio, M.; Dragone, V.; Mitolo, D.
2003-04-01
The risk of quality degradation and lowering availability of groundwater resources has been characterised for a wide coastal plain (Basilicata region, Southern Italy), an area covering 40 km along the Ionian Sea and 10 km inland. The quality degradation is due to two phenomena: pollution caused by the discharge of waste water (coming from urban areas) and salt pollution, related to, but not only to, seawater intrusion. The lowering of availability is due to overexploitation but also to drought effects. To this purpose, the historical data of 1,130 wells have been collected. The wells, homogeneously distributed in the area, were the source of geological, stratigraphical, hydrogeological and geochemical data. In order to manage space-related information via a GIS, a database system has been devised to encompass all the surveyed wells and the body of information available per well. Geo-databases were designed to comprise the four types of data collected: a database including geometrical, geological and hydrogeological data on wells (WDB), a database devoted to chemical and physical data on groundwater (CDB), a database including the geotechnical parameters (GDB), and a database concerning piezometric and hydrological (rainfall, air temperature, river discharge) data (HDB). The record pertaining to each well is identified in these databases by the progressive number of the well itself. Each database is designed as follows: a) the WDB contains 1,158 records of 31 fields, mainly describing the geometry of the well and of the stratigraphy; b) the CDB encompasses data about the 157 wells on which the chemical and physical analyses of groundwater have been carried out, with more than one record associated with these wells owing to periodic monitoring and analysis; c) the GDB covers the 61 wells for which geotechnical parameters were obtained from soil samples taken at various depths; d) the HDB is designed to permit the analysis of long time series (from 1918) of piezometric data, monitored in more than 60 wells, together with temperature, rainfall and river discharge data. Based on these geo-databases, the geostatistical processing of the data has permitted characterisation of the degradation risk of the groundwater resources of a wide coastal aquifer.
NASA Astrophysics Data System (ADS)
García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João
2017-08-01
Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to keep a standard quality and uniformed criteria. Desirably, active fault databases should somehow indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2 either for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.
Automating the training development process for mission flight operations
NASA Technical Reports Server (NTRS)
Scott, Carol J.
1994-01-01
Traditional methods of developing training do not effectively support the changing needs of operational users in a multimission environment. The Automated Training Development System (ATDS) provides advantages over conventional methods in quality, quantity, turnaround, database maintenance, and focus on individualized instruction. The Operations System Training Group at the JPL performed a six-month study to assess the potential of ATDS to automate curriculum development and to generate and maintain course materials. To begin the study, the group acquired readily available hardware and participated in a two-week training session to introduce the process. ATDS is a building activity that combines training's traditional information-gathering with a hierarchical method for interleaving the elements. The program can be described fairly simply. A comprehensive list of candidate tasks determines the content of the database; from that database, selected critical tasks dictate which competencies of skill and knowledge to include in course material for the target audience. The training developer adds pertinent planning information about each task to the database, then ATDS generates a tailored set of instructional material, based on the specific set of selection criteria. Course material consistently leads students to a prescribed level of competency.
Kaas, Quentin; Ruiz, Manuel; Lefranc, Marie-Paule
2004-01-01
IMGT/3Dstructure-DB and IMGT/Structural-Query are a novel 3D structure database and a new tool for immunological proteins. They are part of IMGT, the international ImMunoGenetics information system®, a high-quality integrated knowledge resource specializing in immunoglobulins (IG), T cell receptors (TR), major histocompatibility complex (MHC) and related proteins of the immune system (RPI) of human and other vertebrate species, which consists of databases, Web resources and interactive on-line tools. IMGT/3Dstructure-DB data are described according to the IMGT Scientific chart rules based on the IMGT-ONTOLOGY concepts. IMGT/3Dstructure-DB provides IMGT gene and allele identification of IG, TR and MHC proteins with known 3D structures, domain delimitations, amino acid positions according to the IMGT unique numbering and renumbered coordinate flat files. Moreover IMGT/3Dstructure-DB provides 2D graphical representations (or Collier de Perles) and results of contact analysis. The IMGT/StructuralQuery tool allows search of this database based on specific structural characteristics. IMGT/3Dstructure-DB and IMGT/StructuralQuery are freely available at http://imgt.cines.fr. PMID:14681396
Web-based flood database for Colorado, water years 1867 through 2011
Kohn, Michael S.; Jarrett, Robert D.; Krammes, Gary S.; Mommandi, Amanullah
2013-01-01
In order to provide a centralized repository of flood information for the State of Colorado, the U.S. Geological Survey, in cooperation with the Colorado Department of Transportation, created a Web-based geodatabase for flood information from water years 1867 through 2011 and data for paleofloods occurring in the past 5,000 to 10,000 years. The geodatabase was created using the Environmental Systems Research Institute ArcGIS JavaScript Application Programing Interface 3.2. The database can be accessed at http://cwscpublic2.cr.usgs.gov/projects/coflood/COFloodMap.html. Data on 6,767 flood events at 1,597 individual sites throughout Colorado were compiled to generate the flood database. The data sources of flood information are indirect discharge measurements that were stored in U.S. Geological Survey offices (water years 1867–2011), flood data from indirect discharge measurements referenced in U.S. Geological Survey reports (water years 1884–2011), paleoflood studies from six peer-reviewed journal articles (data on events occurring in the past 5,000 to 10,000 years), and the U.S. Geological Survey National Water Information System peak-discharge database (water years 1883–2010). A number of tests were performed on the flood database to ensure the quality of the data. The Web interface was programmed using the Environmental Systems Research Institute ArcGIS JavaScript Application Programing Interface 3.2, which allows for display, query, georeference, and export of the data in the flood database. The data fields in the flood database used to search and filter the database include hydrologic unit code, U.S. Geological Survey station number, site name, county, drainage area, elevation, data source, date of flood, peak discharge, and field method used to determine discharge. Additional data fields can be viewed and exported, but the data fields described above are the only ones that can be used for queries.
AIRSAR Automated Web-based Data Processing and Distribution System
NASA Technical Reports Server (NTRS)
Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen
2005-01-01
In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.
Ben Ayed, Rayda; Ben Hassen, Hanen; Ennouri, Karim; Ben Marzoug, Riadh; Rebai, Ahmed
2016-01-01
Olive (Olea europaea), whose importance is mainly due to its nutritional and health features, is one of the most economically significant oil-producing trees in the Mediterranean region. Unfortunately, the increasing market demand for virgin olive oil can often result in its adulteration with less expensive oils, which is a serious problem for the public and for quality control evaluators of virgin olive oil. Therefore, to avoid fraud, olive cultivar identification and virgin olive oil authentication have become a major issue for producers and consumers concerned with quality control in the olive chain. Presently, genetic traceability using SSRs is a cost-effective and powerful marker technique that can be employed to resolve such problems. However, to identify an unknown monovarietal virgin olive oil cultivar, a reference system is necessary. Thus, an Olive Genetic Diversity Database (OGDD) (http://www.bioinfo-cbs.org/ogdd/) is presented in this work. It is a genetic, morphologic and chemical database of worldwide olive trees and oils with a double function: besides being a reference system generated for the identification of unknown olive or virgin olive oil cultivars based on their microsatellite allele size(s), it provides users with additional morphological and chemical information for each identified cultivar. Currently, OGDD is designed to enable users to easily retrieve and visualize biologically important information (SSR markers, and olive tree and oil characteristics of about 200 cultivars worldwide) using a set of efficient query interfaces and analysis tools. It can be accessed through a web service from any modern programming language using a simple hypertext transfer protocol call. The web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. Database URL: http://www.bioinfo-cbs.org/ogdd/. © The Author(s) 2016. Published by Oxford University Press.
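A minimal sketch of the kind of allele-size lookup such a reference system enables: an unknown monovarietal oil's SSR allele sizes are compared against reference cultivar profiles and the best-matching cultivar is returned. The cultivar names, marker names, allele sizes and tolerance below are invented for illustration and are not OGDD data.

    # Hypothetical reference table: cultivar -> {SSR marker: (allele size 1, allele size 2)}
    REFERENCE = {
        "Chemlali":  {"DCA09": (172, 186), "UDO43": (210, 214)},
        "Picholine": {"DCA09": (168, 182), "UDO43": (204, 214)},
    }

    def identify(sample, tolerance=1):
        # Count markers whose allele sizes match within +/- tolerance base pairs
        scores = {}
        for cultivar, profile in REFERENCE.items():
            hits = 0
            for marker, alleles in sample.items():
                ref = profile.get(marker)
                if ref and all(abs(a - r) <= tolerance
                               for a, r in zip(sorted(alleles), sorted(ref))):
                    hits += 1
            scores[cultivar] = hits
        return max(scores, key=scores.get), scores

    unknown_oil = {"DCA09": (172, 186), "UDO43": (210, 214)}
    print(identify(unknown_oil))   # ('Chemlali', ...)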
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Fatyga, M; Vora, S
Purpose: To determine if differences in patient positioning methods have an impact on the incidence and modeling of grade >=2 acute rectal toxicity in prostate cancer patients who were treated with Intensity Modulated Radiation Therapy (IMRT). Methods: We compared two databases of patients treated with radiation therapy for prostate cancer: a database of 79 patients who were treated with 7 field IMRT and daily image guided positioning based on implanted gold markers (IGRTdb), and a database of 302 patients who were treated with 5 field IMRT and daily positioning using a trans-abdominal ultrasound system (USdb). Complete planning dosimetry was available for IGRTdb patients, while limited planning dosimetry, recorded at the time of planning, was available for USdb patients. We fit a Lyman-Kutcher-Burman (LKB) model to IGRTdb only, and a Univariate Logistic Regression (ULR) NTCP model to both databases. We performed Receiver Operating Characteristic analysis to determine the predictive power of the NTCP models. Results: The incidence of grade >=2 acute rectal toxicity in IGRTdb was 20%, while the incidence in USdb was 54%. Fits of both the LKB and ULR models yielded predictive NTCP models for IGRTdb patients, with Area Under the Curve (AUC) in the 0.63-0.67 range. Extrapolation of the ULR model from IGRTdb to planning dosimetry in USdb predicts that the incidence of acute rectal toxicity in USdb should not exceed 40%. Fits of the ULR model to the USdb do not yield predictive NTCP models, and their AUC is consistent with AUC = 0.5. Conclusion: The accuracy of a patient positioning system affects clinically observed toxicity rates and the quality of NTCP models that can be derived from toxicity data. Poor correlation between planned and clinically delivered dosimetry may lead to erroneous or poorly performing NTCP models, even if the number of patients in a database is large.
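A rough Python sketch of the univariate logistic regression (ULR) NTCP modelling and ROC analysis described above, using scikit-learn on synthetic data: one dose metric per patient and a binary toxicity flag. The dose distribution, dose-response curve and sample size are assumptions made for the example, not the study data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a planning-dosimetry database
    dose = rng.normal(45, 8, size=300)                     # e.g. a rectal dose metric in Gy
    p_true = 1 / (1 + np.exp(-0.08 * (dose - 50)))         # assumed dose-response relationship
    toxicity = rng.binomial(1, p_true)                     # grade >=2 toxicity flag

    # Univariate logistic regression NTCP model and ROC analysis
    model = LogisticRegression().fit(dose.reshape(-1, 1), toxicity)
    ntcp = model.predict_proba(dose.reshape(-1, 1))[:, 1]
    print("AUC:", round(roc_auc_score(toxicity, ntcp), 2))  # ~0.5 would mean no predictive power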
NASA Astrophysics Data System (ADS)
Garcia Menendez, F.; Afrin, S.
2017-12-01
Prescribed fires are used extensively across the Southeastern United States and are a major source of air pollutant emissions in the region. These land management projects can adversely impact local and regional air quality. However, the emissions and air pollution impacts of prescribed fires remain largely uncertain. Satellite data, commonly used to estimate fire emissions, is often unable to detect the low-intensity, short-lived prescribed fires characteristic of the region. Additionally, existing ground-based prescribed burn records are incomplete, inconsistent and scattered. Here we present a new unified database of prescribed fire occurrence and characteristics developed from systemized digital burn permit records collected from public and private land management organizations in the Southeast. This bottom-up fire database is used to analyze the correlation between high PM2.5 concentrations measured by monitoring networks in southern states and prescribed fire occurrence at varying spatial and temporal scales. We show significant associations between ground-based records of prescribed fire activity and the observational air quality record at numerous sites by applying regression analysis and controlling confounding effects of meteorology. Furthermore, we demonstrate that the response of measured PM2.5 concentrations to prescribed fire estimates based on burning permits is significantly stronger than their response to satellite fire observations from MODIS (moderate-resolution imaging spectroradiometer) and geostationary satellites or prescribed fire emissions data in the National Emissions Inventory. These results show the importance of bottom-up smoke emissions estimates and reflect the need for improved ground-based fire data to advance air quality impacts assessments focused on prescribed burning.
Customer and household matching: resolving entity identity in data warehouses
NASA Astrophysics Data System (ADS)
Berndt, Donald J.; Satterfield, Ronald K.
2000-04-01
The data preparation and cleansing tasks necessary to ensure high quality data are among the most difficult challenges faced in data warehousing and data mining projects. The extraction of source data, transformation into new forms, and loading into a data warehouse environment are all time-consuming tasks that can be supported by methodologies and tools. This paper focuses on the problem of record linkage or entity matching, tasks that can be very important in providing high quality data. Merging two or more large databases into a single integrated system is a difficult problem in many industries, especially in the wake of acquisitions. For example, managing customer lists can be challenging when duplicate entries, data entry problems, and changing information conspire to make data quality an elusive target. Common tasks with regard to customer lists include customer matching to reduce duplicate entries and household matching to group customers. These often O(n²) problems can consume significant resources, both in computing infrastructure and human oversight, and the goal of high accuracy in the final integrated database can be difficult to assure. This paper distinguishes between attribute corruption and entity corruption, discussing the various impacts on quality. A metajoin operator is proposed and used to organize past and current entity matching techniques. Finally, a logistic regression approach to implementing the metajoin operator is discussed and illustrated with an example. The metajoin can be used to determine whether two records match, don't match, or require further evaluation by human experts. Properly implemented, the metajoin operator could allow the integration of individual databases with greater accuracy and lower cost.
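A small Python sketch of the logistic-regression metajoin idea described above: each candidate record pair gets a match probability and is routed to match, non-match or human review. The similarity features, training pairs and thresholds are invented for illustration and are not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Features per candidate pair: name similarity, address similarity, same-ZIP flag.
    # Labels (1 = same customer) would come from a manually resolved sample.
    X_train = np.array([[0.95, 0.90, 1], [0.40, 0.20, 0], [0.85, 0.10, 1], [0.30, 0.90, 0]])
    y_train = np.array([1, 0, 1, 0])
    metajoin = LogisticRegression().fit(X_train, y_train)

    def route(pair_features, lo=0.2, hi=0.8):
        # Three-way decision: confident match, confident non-match, or expert review
        p = metajoin.predict_proba(np.asarray(pair_features).reshape(1, -1))[0, 1]
        if p >= hi:
            return "match", round(p, 2)
        if p <= lo:
            return "non-match", round(p, 2)
        return "review", round(p, 2)

    print(route([0.90, 0.85, 1]))
    print(route([0.60, 0.55, 0]))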
Hiremath, Shivayogi V; Hogaboom, Nathan S; Roscher, Melissa R; Worobey, Lynn A; Oyster, Michelle L; Boninger, Michael L
2017-12-01
To examine (1) differences in quality-of-life scores for groups based on transitions in locomotion status at 1, 5, and 10 years postdischarge in a sample of people with spinal cord injury (SCI); and (2) whether demographic factors and transitions in locomotion status can predict quality-of-life measures at these time points. Retrospective case study of the National SCI Database. Model SCI Systems Centers. Individuals with SCI (N=10,190) from 21 SCI Model Systems Centers, identified through the National SCI Model Systems Centers database between the years 1985 and 2012. Subjects had FIM (locomotion mode) data at discharge and at least 1 of the following: 1, 5, or 10 years postdischarge. Not applicable. FIM-locomotion mode; Severity of Depression Scale; Satisfaction With Life Scale; and Craig Handicap Assessment and Reporting Technique. Participants who transitioned from ambulation to wheelchair use reported lower participation and life satisfaction, and higher depression levels (P<.05) than those who maintained their ambulatory status. Participants who transitioned from ambulation to wheelchair use reported higher depression levels (P<.05) and no difference for participation (P>.05) or life satisfaction (P>.05) compared with those who transitioned from wheelchair to ambulation. Demographic factors and locomotion transitions predicted quality-of-life scores at all time points (P<.05). The results of this study indicate that transitioning from ambulation to wheelchair use can negatively impact psychosocial health 10 years after SCI. Clinicians should be aware of this when deciding on ambulation training. Further work to characterize who may be at risk for these transitions is needed. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Intelligent community management system based on the devicenet fieldbus
NASA Astrophysics Data System (ADS)
Wang, Yulan; Wang, Jianxiong; Liu, Jiwen
2013-03-01
With the rapid development of the national economy and the improvement of people's living standards, people are making higher demands on their living environment, and higher requirements are being placed on estate management content, management efficiency and service quality. This paper presents an in-depth analysis of the structure and composition of an intelligent community. According to user requirements and related specifications, it implements a community management system that includes: basic information management (housing management, household information management, administrator-level management, password management, etc.); service management (standard property fees, collection of property charges, history of arrears and other property expenses); security management (household gas, water, electricity and other household security, and security of public places in the community); and systems management (database backup, database restore, log management). The article also analyses the intelligent community system and proposes an architecture based on B/S (browser/server) technology, achieving unified management of network devices with a friendly, easy-to-use human-machine interface.
Crawford, April D; Zucker, Tricia A; Williams, Jeffrey M; Bhavsar, Vibhuti; Landry, Susan H
2013-12-01
Although coaching is a popular approach for enhancing the quality of Tier 1 instruction, limited research has addressed observational measures specifically designed to focus coaching on evidence-based practices. This study explains the development of the prekindergarten (pre-k) Classroom Observation Tool (COT) designed for use in a data-based coaching model. We examined psychometric characteristics of the COT and explored how coaches and teachers used the COT goal-setting system. The study included 193 coaches working with 3,909 pre-k teachers in a statewide professional development program. Classrooms served 3 and 4 year olds (n = 56,390) enrolled mostly in Title I, Head Start, and other need-based pre-k programs. Coaches used the COT during a 2-hr observation at the beginning of the academic year. Teachers collected progress-monitoring data on children's language, literacy, and math outcomes three times during the year. Results indicated a theoretically supported eight-factor structure of the COT across language, literacy, and math instructional domains. Overall interrater reliability among coaches was good (.75). Although correlations with an established teacher observation measure were small, significant positive relations between COT scores and children's literacy outcomes indicate promising predictive validity. Patterns of goal-setting behaviors indicate teachers and coaches set an average of 43.17 goals during the academic year, and coaches reported that 80.62% of goals were met. Both coaches and teachers reported the COT was a helpful measure for enhancing quality of Tier 1 instruction. Limitations of the current study and implications for research and data-based coaching efforts are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Integrated web visualizations for protein-protein interaction databases.
Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas
2015-06-16
Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has produced a great number of databases and tools that support analysts in exploring protein-protein interactions at an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous, huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization and tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality, as well as in the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via the web; the supplementary table can be accessed at http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. The study results underline the necessity for further enhancement of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive features and visualization maturity.
High throughput profile-profile based fold recognition for the entire human proteome.
McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T
2006-06-07
In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on demand high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
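The sketch below illustrates the general idea of spreading many independent fold-recognition jobs across several cluster queues by estimated load; it is a greedy toy dispatcher written for this example, not the JYDE scheduler, and the job names and CPU-hour estimates are invented.

    import heapq

    def distribute(jobs, clusters):
        # Assign the largest jobs first to whichever cluster queue currently has the least work
        load = [(0.0, name) for name in clusters]      # (queued CPU-hours, cluster)
        heapq.heapify(load)
        assignment = {name: [] for name in clusters}
        for job_id, cpu_hours in sorted(jobs, key=lambda j: -j[1]):
            hours, name = heapq.heappop(load)
            assignment[name].append(job_id)
            heapq.heappush(load, (hours + cpu_hours, name))
        return assignment

    jobs = [(f"seq{i:05d}", 0.5 + (i % 7) * 0.3) for i in range(20)]   # (job id, est. CPU-hours)
    print(distribute(jobs, ["cluster_A", "cluster_B", "cluster_C"]))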
Murphy, Elizabeth A.; Ishii, Audrey L.
2006-01-01
The U.S. Geological Survey (USGS), in cooperation with DuPage County Department of Engineering, Stormwater Management Division, maintains a database of hourly meteorologic and hydrologic data for use in a near real-time streamflow simulation system, which assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek watershed in DuPage County, Illinois. The majority of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorologic data (wind speed, solar radiation, air temperature, and dewpoint temperature) are collected at Argonne National Laboratory in Argonne, Illinois. Potential evapotranspiration is computed from the meteorologic data. The hydrologic data (discharge and stage) are collected at USGS streamflow-gaging stations in DuPage County. These data are stored in a Watershed Data Management (WDM) database. This report describes a version of the WDM database that was quality-assured and quality-controlled annually to ensure the datasets were complete and accurate. This version of the WDM database contains data from January 1, 1997, through September 30, 2004, and is named SEP04.WDM. This report provides a record of time periods of poor data for each precipitation dataset and describes methods used to estimate the data for the periods when data were missing, flawed, or snowfall-affected. The precipitation dataset data-filling process was changed in 2001, and both processes are described. The other meteorologic and hydrologic datasets in the database are fully described in the annual U.S. Geological Survey Water Data Report for Illinois and, therefore, are described in less detail than the precipitation datasets in this report.
Zhao, Xiyan; Zhen, Zhong; Guo, Jing; Zhao, Tianyu; Ye, Ru; Guo, Yu; Chen, Hongdong; Lian, Fengmei; Tong, Xiaolin
2016-01-01
Placebo-controlled randomized trials are often used to evaluate the absolute effect of new treatments and are considered the gold standard for clinical trials. No studies, however, have yet been conducted evaluating the reporting quality of placebo-controlled randomized trials. The current study aims to assess the reporting quality of placebo-controlled randomized trials on the treatment of diabetes with Traditional Chinese Medicine (TCM) in Mainland China and to provide recommendations for improvement. The China National Knowledge Infrastructure database, Wanfang database, China Biology Medicine database, and VIP database were searched for placebo-controlled randomized trials on the treatment of diabetes with TCM. Reviews, animal experiments, and randomized controlled trials without a placebo control were excluded. According to the Consolidated Standards of Reporting Trials (CONSORT) 2010 checklist items, each item was marked yes or no depending on whether it was reported. A total of 68 articles were included. The reporting percentage in each article ranged from 24.3% to 73%, and 30.9% of articles reported more than 50% of the items. Seven of the 37 items were reported in more than 90% of the articles, whereas 7 items were not mentioned at all. The average reporting for "title and abstract," "introduction," "methods," "results," "discussion," and "other information" was 43.4%, 78.7%, 40.1%, 49.9%, 71.1%, and 17.2%, respectively. The percentage for each section increased after 2010. In addition, the reporting rates for multiple study centers, funding, placebo species, informed consent forms, and ethical approvals were 14.7%, 50%, 36.85%, 33.8%, and 4.4%, respectively. Although a scoring system was created according to the CONSORT 2010 checklist, it was not designed as an assessment tool. According to CONSORT 2010, the reporting quality of placebo-controlled randomized trials on the treatment of diabetes with TCM improved after 2010. Future improvements, however, are still needed, particularly in the methods sections.
ERIC Educational Resources Information Center
Grooms, David W.
1988-01-01
Discusses the quality controls imposed on text and image data that is currently being converted from paper to digital images by the Patent and Trademark Office. The methods of inspection used on text and on images are described, and the quality of the data delivered thus far is discussed. (CLB)
Shao, Huikai; Li, Mengsi; Chen, Fuchao; Chen, Lianghua; Jiang, Zhengjin; Zhao, Lingguo
2018-04-01
During the last 40 years, Danshen injection has been widely used as an adjunctive therapy for angina pectoris in China, but its efficacy is not yet well defined. The objective of this study was to verify the efficacy of Danshen injection as adjunctive therapy in treating angina pectoris. The major databases including PubMed, Cochrane Library, Sino-Med, Medline, Embase, Google Scholar, China National Knowledge Infrastructure, Wanfang Databases, Chinese Scientific Journal Database, Chinese Biomedical Literature Database and the Chinese Science Citation Database were systematically searched for the published randomised controlled trials (RCTs) on Danshen injection until April 2016. Meta-analysis was conducted on the primary outcomes (i.e., the improvements in symptoms and electrocardiography (ECG)). The quality of the included RCTs was evaluated with the M scoring system (the refined Jadad scale). Based on the quality, year of publication and sample size of RCTs, sensitivity analysis and subgroup analysis were performed in this study. Ten RCTs, including 944 anginal patients, were identified in this meta-analysis. Compared with using antianginal agents (β-blockers, calcium antagonists, nitrates, etc.) alone, Danshen injection combined with antianginal agents had a better therapeutic effect in symptom improvement (odds ratio [OR], 3.66; 95% confidence interval [CI]: 2.50-5.36) and in ECG improvement (OR, 3.25; 95% CI: 1.74-6.08). This study showed that Danshen injection as adjunctive therapy seemed to be more effective than antianginal agents alone in treating angina pectoris. However, more evidence is needed to accurately evaluate the efficacy of Danshen injection because of the low methodological quality of the included RCTs. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
Rural Water Quality Database: Educational Program to Collect Information.
ERIC Educational Resources Information Center
Lemley, Ann; Wagenet, Linda
1993-01-01
A New York State project created a water quality database for private drinking water supplies, using the statewide educational program to collect the data. Another goal was to develop this program so rural residents could increase their knowledge of water supply management. (Author)
Offline Signature Verification Using the Discrete Radon Transform and a Hidden Markov Model
NASA Astrophysics Data System (ADS)
Coetzer, J.; Herbst, B. M.; du Preez, J. A.
2004-12-01
We developed a system that automatically authenticates offline handwritten signatures using the discrete Radon transform (DRT) and a hidden Markov model (HMM). Given the robustness of our algorithm and the fact that only global features are considered, satisfactory results are obtained. Using a database of 924 signatures from 22 writers, our system achieves an equal error rate (EER) of 18% when only high-quality forgeries (skilled forgeries) are considered and an EER of 4.5% in the case of only casual forgeries. These signatures were originally captured offline. Using another database of 4800 signatures from 51 writers, our system achieves an EER of 12.2% when only skilled forgeries are considered. These signatures were originally captured online and then digitally converted into static signature images. These results compare well with the results of other algorithms that consider only global features.
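For readers unfamiliar with the equal error rate (EER) figures quoted above, the Python sketch below estimates an EER by sweeping a decision threshold over genuine and forgery match scores until the false rejection and false acceptance rates meet; the score distributions are synthetic, not the signature databases used in the paper.

    import numpy as np

    def equal_error_rate(genuine_scores, forgery_scores):
        # Sweep thresholds; return the error rate where FRR and FAR are closest
        thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
        best_gap, eer = 1.0, None
        for t in thresholds:
            frr = np.mean(genuine_scores < t)    # genuine signatures rejected
            far = np.mean(forgery_scores >= t)   # forgeries accepted
            if abs(frr - far) < best_gap:
                best_gap, eer = abs(frr - far), (frr + far) / 2
        return eer

    rng = np.random.default_rng(1)
    genuine = rng.normal(0.7, 0.1, 200)   # synthetic matcher scores for genuine signatures
    forgery = rng.normal(0.5, 0.1, 200)   # synthetic matcher scores for forgeries
    print("EER ~", round(equal_error_rate(genuine, forgery), 3))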
Reiner, Bruce
2015-06-01
One of the greatest challenges facing healthcare professionals is the ability to directly and efficiently access relevant data from the patient's healthcare record at the point of care; specific to both the context of the task being performed and the specific needs and preferences of the individual end-user. In radiology practice, the relative inefficiency of imaging data organization and manual workflow requirements serves as an impediment to historical imaging data review. At the same time, clinical data retrieval is even more problematic due to the quality and quantity of data recorded at the time of order entry, along with the relative lack of information system integration. One approach to address these data deficiencies is to create a multi-disciplinary patient referenceable database which consists of high-priority, actionable data within the cumulative patient healthcare record; in which predefined criteria are used to categorize and classify imaging and clinical data in accordance with anatomy, technology, pathology, and time. The population of this referenceable database can be performed through a combination of manual and automated methods, with an additional step of data verification introduced for data quality control. Once created, these referenceable databases can be filtered at the point of care to provide context and user-specific data specific to the task being performed and individual end-user requirements.
Giffen, Sarah E.
2002-01-01
An environmental database was developed to store water-quality data collected during the 1999 U.S. Geological Survey investigation of the occurrence and distribution of dioxins, furans, and PCBs in the riverbed sediment and fish tissue in the Penobscot River in Maine. The database can be used to store a wide range of detailed information and to perform complex queries on the data it contains. The database also could be used to store data from other historical and any future environmental studies conducted on the Penobscot River and surrounding regions.
Layani, Géraldine; Fleet, Richard; Dallaire, Renée; Tounkara, Fatoumata K.; Poitras, Julien; Archambault, Patrick; Chauny, Jean-Marc; Ouimet, Mathieu; Gauthier, Josée; Dupuis, Gilles; Tanguay, Alain; Lévesque, Jean-Frédéric; Simard-Racine, Geneviève; Haggerty, Jeannie; Légaré, France
2016-01-01
Background: Evidence-based indicators of quality of care have been developed to improve care and performance in Canadian emergency departments. The feasibility of measuring these indicators has been assessed mainly in urban and academic emergency departments. We sought to assess the feasibility of measuring quality-of-care indicators in rural emergency departments in Quebec. Methods: We previously identified rural emergency departments in Quebec that offered medical coverage with hospital beds 24 hours a day, 7 days a week and were located in rural areas or small towns as defined by Statistics Canada. A standardized protocol was sent to each emergency department to collect data on 27 validated quality-of-care indicators in 8 categories: duration of stay, patient safety, pain management, pediatrics, cardiology, respiratory care, stroke and sepsis/infection. Data were collected by local professional medical archivists between June and December 2013. Results: Fifteen (58%) of the 26 emergency departments invited to participate completed data collection. The ability to measure the 27 quality-of-care indicators with the use of databases varied across departments. Centres 2, 5, 6 and 13 used databases for at least 21 of the indicators (78%-92%), whereas centres 3, 8, 9, 11, 12 and 15 used databases for 5 (18%) or fewer of the indicators. On average, the centres were able to measure only 41% of the indicators using heterogeneous databases and manual extraction. The 15 centres collected data from 15 different databases or combinations of databases. The average data collection time for each quality-of-care indicator varied from 5 to 88.5 minutes. The median data collection time was 15 minutes or less for most indicators. Interpretation: Quality-of-care indicators were not easily captured with the use of existing databases in rural emergency departments in Quebec. Further work is warranted to improve standardized measurement of these indicators in rural emergency departments in the province and to generalize the information gathered in this study to other health care environments. PMID:27730103
Nørrelund, Helene; Mazin, Wiktor; Pedersen, Lars
2014-01-01
Denmark is facing a reduction in clinical trial activity as the pharmaceutical industry has moved trials to low-cost emerging economies. Competitiveness in industry-sponsored clinical research depends on speed, quality, and cost. Because Denmark is widely recognized as a region that generates high quality data, an enhanced ability to attract future trials could be achieved if speed can be improved by taking advantage of the comprehensive national and regional registries. A "single point-of-entry" system has been established to support collaboration between hospitals and industry. When assisting industry in early-stage feasibility assessments, potential trial participants are identified by use of registries to shorten the clinical trial startup times. The Aarhus University Clinical Trial Candidate Database consists of encrypted data from the Danish National Registry of Patients allowing an immediate estimation of the number of patients with a specific discharge diagnosis in each hospital department or outpatient specialist clinic in the Central Denmark Region. The free access to health care, thorough monitoring of patients who are in contact with the health service, completeness of registration at the hospital level, and ability to link all databases are competitive advantages in an increasingly complex clinical trial environment.
Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses
Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu
2015-01-01
Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of the published data, with the result that submitters (the researchers who generate the data) are insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/. PMID:25905099
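As a rough illustration of the kind of tree-structured, ID-addressed metadata the TogoMD format describes, the sketch below stores four metadata levels (study purpose, sample, analytical method, data analysis) under hierarchical keys and resolves the full lineage of a record. The ID scheme, field names, and values are hypothetical and are not Metabolonote's actual format.

```python
# A minimal sketch (not the Metabolonote implementation) of tree-structured
# metadata keyed by hierarchical IDs, in the spirit of the TogoMD levels
# (study purpose -> sample -> analytical method -> data analysis).
# The ID scheme below ("SE1", "SE1_S01", ...) is illustrative only.

metadata = {
    "SE1":             {"level": "study",    "title": "Tomato fruit metabolite survey"},
    "SE1_S01":         {"level": "sample",   "organism": "Solanum lycopersicum"},
    "SE1_S01_M01":     {"level": "method",   "instrument": "LC-MS"},
    "SE1_S01_M01_D01": {"level": "analysis", "software": "in-house peak picker"},
}

def lineage(node_id, store):
    """Return the metadata records from the root of the tree down to node_id."""
    parts = node_id.split("_")
    ids = ["_".join(parts[:i + 1]) for i in range(len(parts))]
    return [(i, store[i]) for i in ids if i in store]

for level_id, record in lineage("SE1_S01_M01_D01", metadata):
    print(level_id, record["level"], record)
```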
A taxonomy has been developed for outcomes in medical research to help improve knowledge discovery.
Dodd, Susanna; Clarke, Mike; Becker, Lorne; Mavergames, Chris; Fish, Rebecca; Williamson, Paula R
2018-04-01
There is increasing recognition that insufficient attention has been paid to the choice of outcomes measured in clinical trials. The lack of a standardized outcome classification system results in inconsistencies due to ambiguity and variation in how outcomes are described across different studies. Being able to classify by outcome would increase efficiency in searching sources such as clinical trial registries, patient registries, the Cochrane Database of Systematic Reviews, and the Core Outcome Measures in Effectiveness Trials (COMET) database of core outcome sets (COS), thus aiding knowledge discovery. A literature review was carried out to determine existing outcome classification systems, none of which were sufficiently comprehensive or granular for classification of all potential outcomes from clinical trials. A new taxonomy for outcome classification was developed, and as proof of principle, outcomes extracted from all published COS in the COMET database, selected Cochrane reviews, and clinical trial registry entries were classified using this new system. Application of this new taxonomy to COS in the COMET database revealed that 274/299 (92%) COS include at least one physiological outcome, whereas only 177 (59%) include at least one measure of impact (global quality of life or some measure of functioning) and only 105 (35%) made reference to adverse events. This outcome taxonomy will be used to annotate outcomes included in COS within the COMET database and is currently being piloted for use in Cochrane Reviews within the Cochrane Linked Data Project. Wider implementation of this standard taxonomy in trial and systematic review databases and registries will further promote efficient searching, reporting, and classification of trial outcomes. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
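To make the domain-coverage figures above concrete, here is a minimal sketch, not the COMET implementation, of tallying how many core outcome sets contain at least one outcome in each broad domain; the COS records and the outcome-to-domain mapping are invented for illustration.

```python
# A minimal sketch: count how many core outcome sets (COS) include at least
# one outcome from each broad domain. Domain names and records are illustrative.

cos_records = [
    {"id": "COS-001", "outcomes": ["blood pressure", "quality of life"]},
    {"id": "COS-002", "outcomes": ["tumour response", "adverse events"]},
    {"id": "COS-003", "outcomes": ["pain score"]},
]

domain_of = {
    "blood pressure": "physiological",
    "tumour response": "physiological",
    "pain score": "physiological",
    "quality of life": "life impact",
    "adverse events": "adverse events",
}

def domain_coverage(records, mapping):
    counts = {}
    for rec in records:
        domains = {mapping[o] for o in rec["outcomes"] if o in mapping}
        for d in domains:
            counts[d] = counts.get(d, 0) + 1
    total = len(records)
    # Return (count, percentage of COS) per domain.
    return {d: (n, round(100 * n / total)) for d, n in counts.items()}

print(domain_coverage(cos_records, domain_of))
```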
Developing a national framework of quality indicators for public hospitals.
Simou, Effie; Pliatsika, Paraskevi; Koutsogeorgou, Eleni; Roumeliotou, Anastasia
2014-01-01
The current study describes the development of a preliminary set of quality indicators for public Greek National Health System (GNHS) hospitals, which were used in the "Health Monitoring Indicators System: Health Map" (Ygeionomikos Chartis) project, with the purpose that these quality indicators would assess the quality of all aspects relevant to the public hospital healthcare workforce and the services provided. A literature review was conducted in the MEDLINE database to identify articles referring to international and national hospital quality assessment projects, together with an online search for relevant projects. Studies were included if they were published in English, from 1980 to 2010. A consensus panel with 40 experts in the field was then convened, using a tele-voting procedure. Twenty relevant projects and their 1698 indicators were selected through the literature search, and after the consensus panel process, a list of 67 indicators was selected to be implemented for the assessment of the public hospitals, categorized under six distinct dimensions: Quality, Responsiveness, Efficiency, Utilization, Timeliness, and Resources and Capacity. Data gathered and analyzed in this manner provided a novel evaluation and monitoring system for Greece, which can assist decision-makers, healthcare professionals, and patients in Greece to retrieve relevant information, with the long-term goal to improve quality of care in the GNHS hospital sector. Copyright © 2014 John Wiley & Sons, Ltd.
Nursing informatics, outcomes, and quality improvement.
Charters, Kathleen G
2003-08-01
Nursing informatics actively supports nursing by providing standard language systems, databases, decision support, readily accessible research results, and technology assessments. Through normalized datasets spanning an entire enterprise or other large demographic, nursing informatics tools support improvement of healthcare by answering questions about patient outcomes and quality improvement on an enterprise scale, and by providing documentation for business process definition, business process engineering, and strategic planning. Nursing informatics tools provide a way for advanced practice nurses to examine their practice and the effect of their actions on patient outcomes. Analysis of patient outcomes may lead to initiatives for quality improvement. Supported by nursing informatics tools, successful advanced practice nurses leverage their quality improvement initiatives against the enterprise strategic plan to gain leadership support and resources.
Designing a Clinical Data Warehouse Architecture to Support Quality Improvement Initiatives
Chelico, John D.; Wilcox, Adam B.; Vawdrey, David K.; Kuperman, Gilad J.
2016-01-01
Clinical data warehouses, initially directed towards clinical research or financial analyses, are evolving to support quality improvement efforts, and must now address the quality improvement life cycle. In addition, data that are needed for quality improvement often do not reside in a single database, requiring easier methods to query data across multiple disparate sources. We created a virtual data warehouse at NewYork-Presbyterian Hospital that allowed us to bring together data from several source systems throughout the organization. We also created a framework to match the maturity of a data request in the quality improvement life cycle to the proper tools needed for each request. As projects progress through the Define, Measure, Analyze, Improve, Control stages of quality improvement, resources are matched to the data needs at each step. We describe the analysis and design that created a robust model for applying clinical data warehousing to quality improvement. PMID:28269833
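A small sketch of the framework idea described above: matching the DMAIC stage of a quality-improvement data request to an appropriate data resource. The stage-to-resource pairings are assumptions for illustration, not the authors' actual tooling.

```python
# Assumed mapping from DMAIC quality-improvement stage to a data resource;
# illustrative only, not the NewYork-Presbyterian implementation.

stage_to_resource = {
    "Define":  "ad hoc chart review / self-service report",
    "Measure": "virtual data warehouse query across source systems",
    "Analyze": "curated analytic dataset with statistical tooling",
    "Improve": "near-real-time operational dashboard",
    "Control": "automated recurring quality report with control limits",
}

def route_request(stage: str) -> str:
    try:
        return stage_to_resource[stage]
    except KeyError:
        raise ValueError(f"Unknown QI stage: {stage!r}") from None

for stage in ("Define", "Measure", "Analyze", "Improve", "Control"):
    print(f"{stage:8s} -> {route_request(stage)}")
```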
Best kept secrets ... First Coast Systems, Inc. (FCS).
Andrew, W F
1991-04-01
The FCS/APaCS system is a viable option for small- to medium-size hospitals (up to 400 beds). The table-driven system takes full advantage of IBM AS/400 computer architecture. A comprehensive application set, provided in an integrated database environment, is adaptable to multi-facility environments. Price/performance appears to be competitive. Commitment to the IBM AS/400 environment assures cost-effective hardware platforms backed by IBM support and resources. As an IBM Health Industry Business Partner, FCS (and its clients) benefits from IBM's well-known commitment to quality and service. Corporate emphasis on user involvement and satisfaction, along with a commitment to quality and service for the APaCS systems, assures clients of "leading edge" capabilities in this evolutionary healthcare delivery environment. FCS/APaCS will be a strong contender in selected marketing environments.
NASA Astrophysics Data System (ADS)
Schoitsch, Erwin
1988-07-01
Our society depends more and more on the reliability of embedded (real-time) computer systems, even in everyday life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, and interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other and with respect to their contribution to quality and safety.
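As a concrete illustration of semaphore-based synchronization with timeout handling, one of the mechanisms the paper compares, the following sketch uses Python threads; it is illustrative only and is not drawn from any of the languages discussed above.

```python
# A small illustration of semaphore-based synchronization with timeout
# handling, using Python threads.

import threading
import time

slots = threading.Semaphore(2)   # at most two tasks may use the resource at once

def worker(name: str) -> None:
    # Timeout handling: give up if the resource is not acquired within 1 second.
    if not slots.acquire(timeout=1.0):
        print(f"{name}: timed out waiting for resource")
        return
    try:
        print(f"{name}: entered critical section")
        time.sleep(0.2)          # simulated work on the shared resource
    finally:
        slots.release()

threads = [threading.Thread(target=worker, args=(f"task-{i}",)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```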
Amaratunga, Thelina; Dobranowski, Julian
2016-09-01
Preventable yet clinically significant rates of medical error remain systemic, while health care spending is at a historic high. Industry-based quality improvement (QI) methodologies show potential for utility in health care and radiology because they use an empirical approach to reduce variability and improve workflow. The aim of this review was to systematically assess the literature with regard to the use and efficacy of Lean and Six Sigma (the most popular of the industrial QI methodologies) within radiology. MEDLINE, the Allied & Complementary Medicine Database, Embase Classic + Embase, Health and Psychosocial Instruments, and the Ovid HealthStar database, alongside the Cochrane Library databases, were searched in June 2015. Empirical studies in peer-reviewed journals were included if they assessed the use of Lean, Six Sigma, or Lean Six Sigma with regard to their ability to improve a variety of quality metrics in a radiology-centered clinical setting. Of the 278 articles returned, 23 studies were suitable for inclusion. Of these, 10 assessed Six Sigma, 7 assessed Lean, and 6 assessed Lean Six Sigma. The diverse range of measured outcomes can be organized into 7 common aims: cost savings, reducing appointment wait time, reducing in-department wait time, increasing patient volume, reducing cycle time, reducing defects, and increasing staff and patient safety and satisfaction. All of the included studies demonstrated improvements across a variety of outcomes. However, there were high rates of systematic bias and imprecision as per the Grading of Recommendations Assessment, Development and Evaluation guidelines. Lean and Six Sigma QI methodologies have the potential to reduce error and costs and improve quality within radiology. However, there is a pressing need to conduct high-quality studies in order to realize the true potential of these QI methodologies in health care and radiology. Recommendations on how to improve the quality of the literature are proposed. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
The Danish Inguinal Hernia database.
Friis-Andersen, Hans; Bisgaard, Thue
2016-01-01
To monitor and improve nation-wide surgical outcome after groin hernia repair based on scientific evidence-based surgical strategies for the national and international surgical community. Patients ≥18 years operated for groin hernia. Type and size of hernia, primary or recurrent, type of surgical repair procedure, mesh and mesh fixation methods. According to the Danish National Health Act, surgeons are obliged to register all hernia repairs immediately after surgery (3-minute registration time). All institutions have continuous access to their own data stratified on individual surgeons. Registrations are based on a closed, protected Internet system requiring personal codes that also identify the operating institution. A national steering committee consisting of 13 voluntary and dedicated surgeons, 11 of whom are unpaid, handles the medical management of the database. The Danish Inguinal Hernia Database comprises intraoperative data from >130,000 repairs (May 2015). A total of 49 peer-reviewed national and international publications have been published from the database (June 2015). The Danish Inguinal Hernia Database actively monitors surgical quality and contributes to the national and international surgical community's efforts to improve outcome after groin hernia repair.
2008 Niday Perinatal Database quality audit: report of a quality assurance project.
Dunn, S; Bottomley, J; Ali, A; Walker, M
2011-12-01
This quality assurance project was designed to determine the reliability, completeness and comprehensiveness of the data entered into Niday Perinatal Database. Quality of the data was measured by comparing data re-abstracted from the patient record to the original data entered into the Niday Perinatal Database. A representative sample of hospitals in Ontario was selected and a random sample of 100 linked mother and newborn charts were audited for each site. A subset of 33 variables (representing 96 data fields) from the Niday dataset was chosen for re-abstraction. Of the data fields for which Cohen's kappa statistic or intraclass correlation coefficient (ICC) was calculated, 44% showed substantial or almost perfect agreement (beyond chance). However, about 17% showed less than 95% agreement and a kappa or ICC value of less than 60% indicating only slight, fair or moderate agreement (beyond chance). Recommendations to improve the quality of these data fields are presented.
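For readers unfamiliar with the agreement statistics used in the audit, the sketch below computes percent agreement and Cohen's kappa for a single binary data field compared between original entry and re-abstraction; the example values are invented and are not Niday data.

```python
# A minimal sketch of the agreement statistics named in the audit: percent
# agreement and Cohen's kappa for one binary data field, comparing originally
# entered values with re-abstracted values. Example data are illustrative.

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    categories = set(a) | set(b)
    n = len(a)
    p_observed = percent_agreement(a, b)
    # Expected agreement by chance from the marginal frequencies of each rater.
    p_expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

original     = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
reabstracted = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]

print(f"agreement = {percent_agreement(original, reabstracted):.2%}")
print(f"kappa     = {cohens_kappa(original, reabstracted):.2f}")
```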
NASA Astrophysics Data System (ADS)
Giesbrecht, K. E.; Miller, L. A.; Davelaar, M.; Zimmermann, S.; Carmack, E.; Johnson, W. K.; Macdonald, R. W.; McLaughlin, F.; Mucci, A.; Williams, W. J.; Wong, C. S.; Yamamoto-Kawai, M.
2014-03-01
We have assembled and conducted primary quality control on previously publicly unavailable water column measurements of the dissolved inorganic carbon system and associated biogeochemical parameters (oxygen, nutrients, etc.) made on 26 cruises in the subarctic and Arctic regions dating back to 1974. The measurements are primarily from the western side of the Canadian Arctic, but also include data that cover an area ranging from the North Pacific to the Gulf of St. Lawrence. The data were subjected to primary quality control (QC) to identify outliers and obvious errors. This data set incorporates over four thousand individual measurements of total inorganic carbon (TIC), alkalinity, and pH from the Canadian Arctic over a period of more than 30 years and provides an opportunity to increase our understanding of temporal changes in the inorganic carbon system in northern waters and the Arctic Ocean. The data set is available for download on the CDIAC (Carbon Dioxide Information Analysis Center) website: http://cdiac.ornl.gov/ftp/oceans/IOS_Arctic_Database/ (doi:10.3334/CDIAC/OTG.IOS_ARCT_CARBN).
Zander, Britta; Busse, Reinhard
2017-02-22
Adequate performance assessment benefits from the use of disaggregated data to allow a proper evaluation of health systems. Since routinely collected data are usually not disaggregated enough to allow stratified analyses of healthcare needs, utilisation, cost and quality across different sectors, international research projects could fill this gap by exploring means of data collection or even providing individual-level data. The aim of this paper is therefore (1) to study the availability and accessibility of relevant European-funded health projects, and (2) to analyse their contents and methodologies. The European Commission Public Health Projects Database and CORDIS were searched for eligible projects, which were then analysed using information openly available online. Overall, only a few of the 39 identified projects produced data useful for proper performance assessment, due to, for example, a lack of available or accessible data or poor linkage of health status to costs and patient experiences. Other problems were insufficient databases to identify projects and poor communication of project contents and results. A new approach is necessary to improve accessibility to and coverage of data on outcomes, quality and costs of health systems, enabling decision-makers and health professionals to properly assess performance.
Burley, Thomas E.; Asquith, William H.; Brooks, Donald L.
2011-01-01
The U.S. Geological Survey (USGS), in cooperation with Texas Tech University, constructed a dataset of selected reservoir storage (daily and instantaneous values), reservoir elevation (daily and instantaneous values), and water-quality data from 59 reservoirs throughout Texas. The period of record for the data is as large as January 1965-January 2010. Data were acquired from existing databases, spreadsheets, delimited text files, and hard-copy reports. The goal was to obtain as much data as possible; therefore, no data acquisition restrictions specifying a particular time window were used. Primary data sources include the USGS National Water Information System, the Texas Commission on Environmental Quality Surface Water-Quality Management Information System, and the Texas Water Development Board monthly Texas Water Condition Reports. Additional water-quality data for six reservoirs were obtained from USGS Texas Annual Water Data Reports. Data were combined from the multiple sources to create as complete a set of properties and constituents as the disparate databases allowed. By devising a unique per-reservoir short name to represent all sites on a reservoir regardless of their source, all sampling sites at a reservoir were spatially pooled by reservoir and temporally combined by date. Reservoir selection was based on various criteria including the availability of water-quality properties and constituents that might affect the trophic status of the reservoir and could also be important for understanding possible effects of climate change in the future. Other considerations in the selection of reservoirs included the general reservoir-specific period of record, the availability of concurrent reservoir storage or elevation data to match with water-quality data, and the availability of sample depth measurements. Additional separate selection criteria included historic information pertaining to blooms of golden algae. Physical properties and constituents were water temperature, reservoir storage, reservoir elevation, specific conductance, dissolved oxygen, pH, unfiltered salinity, unfiltered total nitrogen, filtered total nitrogen, unfiltered nitrate plus nitrite, unfiltered phosphorus, filtered phosphorus, unfiltered carbon, carbon in suspended sediment, total hardness, unfiltered noncarbonate hardness, filtered noncarbonate hardness, unfiltered calcium, filtered calcium, unfiltered magnesium, filtered magnesium, unfiltered sodium, filtered sodium, unfiltered potassium, filtered potassium, filtered chloride, filtered sulfate, unfiltered fluoride, and filtered fluoride. When possible, USGS and Texas Commission on Environmental Quality water-quality properties and constituents were matched using the database parameter codes for individual physical properties and constituents, descriptions of each physical property or constituent, and their reporting units. This report presents a collection of delimited text files of source-aggregated, spatially pooled, depth-dependent, instantaneous water-quality data as well as instantaneous, daily, and monthly storage and elevation reservoir data.
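A minimal sketch, not the USGS processing code, of the pooling step described above: records from several source databases are keyed by a per-reservoir short name and combined by date. The source labels are shorthand for the databases named in the text, and the field names and values are illustrative.

```python
# Pool records from multiple source databases by a per-reservoir short name
# and combine them by date. Field names and values are illustrative.

from collections import defaultdict

records = [
    {"source": "NWIS",   "short_name": "LAKE_A", "date": "2009-07-01", "water_temp_c": 28.5},
    {"source": "SWQMIS", "short_name": "LAKE_A", "date": "2009-07-01", "ph": 8.1},
    {"source": "TWDB",   "short_name": "LAKE_A", "date": "2009-07-01", "storage_acre_ft": 120000},
    {"source": "NWIS",   "short_name": "LAKE_B", "date": "2009-07-01", "water_temp_c": 30.1},
]

pooled = defaultdict(dict)
for rec in records:
    key = (rec["short_name"], rec["date"])   # pool spatially by reservoir, combine by date
    for field, value in rec.items():
        if field not in ("source", "short_name", "date"):
            pooled[key][field] = value

for (name, date), values in sorted(pooled.items()):
    print(name, date, values)
```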
St Louis, James D; Kurosawa, Hiromi; Jonas, Richard A; Sandoval, Nestor; Cervantes, Jorge; Tchervenkov, Christo I; Jacobs, Jeffery P; Sakamoto, Kisaburo; Stellin, Giovanni; Kirklin, James K
2017-09-01
The World Society for Pediatric and Congenital Heart Surgery was founded with the mission to "promote the highest quality comprehensive cardiac care to all patients with congenital heart disease, from the fetus to the adult, regardless of the patient's economic means, with an emphasis on excellence in teaching, research, and community service." Early on, the Society's members realized that a crucial step in meeting this goal was to establish a global database that would collect vital information, allowing cardiac surgical centers worldwide to benchmark their outcomes and improve the quality of congenital heart disease care. With tireless efforts from all corners of the globe and utilizing the vast experience and invaluable input of multiple international experts, such a platform of global information exchange was created: The World Database for Pediatric and Congenital Heart Disease went live on January 1, 2017. This database has been thoughtfully designed to produce meaningful performance and quality analyses of surgical outcomes extending beyond immediate hospital survival, allowing capture of important morbidities and mortalities for up to 1 year postoperatively. In order to advance the Society's mission, this quality improvement program is available free of charge to WSPCHS members. In establishing the World Database, the Society has taken an essential step to further the process of global improvement in care for children with congenital heart disease.
NASA Astrophysics Data System (ADS)
Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.
2013-09-01
Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying Lateral Boundary Conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2000-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone vertical profiles. The results show performance is largely within uncertainty estimates for the Tropospheric Emission Spectrometer (TES) with some exceptions. The major difference shows a high bias in the upper troposphere along the southern boundary in January. This publication documents the global simulation database, the tool for conversion to LBC, and the fidelity of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.
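To illustrate what configuring global results as regional boundary inputs involves, the sketch below extracts the vertical profiles along the four lateral edges of a hypothetical regional window from a global concentration array. The array shape, indices, and values are assumptions; this is not the published conversion tool.

```python
# A rough sketch of extracting lateral boundary cells of a regional domain
# from a global concentration field, so they can serve as boundary conditions
# for a regional model. All dimensions and indices are hypothetical.

import numpy as np

# Hypothetical global field: (level, latitude, longitude) ozone concentrations.
global_o3 = np.random.default_rng(0).uniform(20, 80, size=(35, 90, 180))

# Hypothetical index window of the regional (e.g., CONUS-like) domain.
lat0, lat1, lon0, lon1 = 40, 70, 20, 80

def boundary_profiles(field, lat0, lat1, lon0, lon1):
    """Return vertical profiles along the south, north, west, east edges."""
    sub = field[:, lat0:lat1, lon0:lon1]
    return {
        "south": sub[:, 0, :],
        "north": sub[:, -1, :],
        "west":  sub[:, :, 0],
        "east":  sub[:, :, -1],
    }

for edge, profile in boundary_profiles(global_o3, lat0, lat1, lon0, lon1).items():
    print(edge, profile.shape)
```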
The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high-quality models needed to estimate population...
Human Connectome Project Informatics: quality control, database services, and data visualization
Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.
2013-01-01
The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591
DNApod: DNA polymorphism annotation database from next-generation sequence read archives
Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu
2017-01-01
With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information. PMID:28234924
Shackleton, David; Pagram, Jenny; Ives, Lesley; Vanhinsbergh, Des
2018-06-02
The RapidHIT™ 200 System is a fully automated sample-to-DNA-profile system designed to produce high-quality DNA profiles within 2 h. The use of the RapidHIT™ 200 System within the United Kingdom Criminal Justice System (UKCJS) has required extensive development and validation of methods, with a focus on the AmpFℓSTR® NGMSElect™ Express PCR kit, to comply with specific regulations for loading to the UK National DNA Database (NDNAD). These studies have been carried out using single-source reference samples to simulate live reference samples taken from arrestees and victims for elimination. The studies have shown that the system is capable of generating high-quality profiles and has achieved the accreditations necessary to load to the NDNAD; a first for the UK. Copyright © 2018 Elsevier B.V. All rights reserved.
An intelligent remote monitoring system for artificial heart.
Choi, Jaesoon; Park, Jun W; Chung, Jinhan; Min, Byoung G
2005-12-01
A web-based database system for intelligent remote monitoring of an artificial heart has been developed. It is important for patients with an artificial heart implant to be discharged from the hospital after an appropriate stabilization period for better recovery and quality of life. Reliable continuous remote monitoring systems for these patients with life support devices are gaining practical meaning. The authors have developed a remote monitoring system for this purpose that consists of a portable/desktop monitoring terminal, a database for continuous recording of patient and device status, a web-based data access system with which clinicians can access real-time patient and device status data and past history data, and an intelligent diagnosis algorithm module that noninvasively estimates blood pump output and makes automatic classification of the device status. The system has been tested with data generation emulators installed on remote sites for simulation study, and in two cases of animal experiments conducted at remote facilities. The system showed acceptable functionality and reliability. The intelligence algorithm also showed acceptable practicality in an application to animal experiment data.
Motor Rehabilitation Using Kinect: A Systematic Review.
Da Gama, Alana; Fallavollita, Pascal; Teichrieb, Veronica; Navab, Nassir
2015-04-01
Interactive systems are being developed with the intention to help in the engagement of patients on various therapies. Amid the recent technological advances, Kinect™ from Microsoft (Redmond, WA) has helped pave the way on how user interaction technology facilitates and complements many clinical applications. In order to examine the actual status of Kinect developments for rehabilitation, this article presents a systematic review of articles that involve interactive, evaluative, and technical advances related to motor rehabilitation. Systematic research was performed in the IEEE Xplore and PubMed databases using the key word combination "Kinect AND rehabilitation" with the following inclusion criteria: (1) English language, (2) page number >4, (3) Kinect system for assistive interaction or clinical evaluation, or (4) Kinect system for improvement or evaluation of the sensor tracking or movement recognition. Quality assessment was performed by QualSyst standards. In total, 109 articles were found in the database research, from which 31 were included in the review: 13 were focused on the development of assistive systems for rehabilitation, 3 in evaluation, 3 in the applicability category, 7 on validation of Kinect anatomic and clinical evaluation, and 5 on improvement techniques. Quality analysis of all included articles is also presented with their respective QualSyst checklist scores. Research and development possibilities and future works with the Kinect for rehabilitation application are extensive. Methodological improvements when performing studies on this area need to be further investigated.
Olola, C H O; Missinou, M A; Issifou, S; Anane-Sarpong, E; Abubakar, I; Gandi, J N; Chagomerana, M; Pinder, M; Agbenyega, T; Kremsner, P G; Newton, C R J C; Wypij, D; Taylor, T E
2006-01-01
Computers are widely used for data management in clinical trials in developed countries, unlike in developing countries. Dependable systems are vital for data management and medical decision making in clinical research, and monitoring and evaluation of data management is critical. In this paper we describe database structures and procedures of systems used to implement, coordinate, and sustain data management in Africa. We outline major lessons, challenges and successes, and recommendations to improve medical informatics application in biomedical research in sub-Saharan Africa. A consortium of research units at five sites in Africa, experienced in studying children with disease, formed a new clinical trials network, Severe Malaria in African Children. In December 2000, the network introduced an observational study involving these hospital-based sites. After prototyping, relational database management systems were implemented for data entry and verification, data submission, and quality assurance monitoring. Between 2000 and 2005, 25,858 patients were enrolled. Failure to meet the data submission deadline and data entry errors correlated positively (correlation coefficient, r = 0.82), with more errors occurring when data were submitted late. Data submission lateness correlated inversely with hospital admissions (r = -0.62). Developing and sustaining dependable database management systems, with ongoing modifications to optimize data management, is crucial for clinical studies. Monitoring and communication systems are vital in multi-center networks for good data management. Data timeliness is associated with data quality and hospital admissions.
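A small sketch of the kind of monitoring statistic reported above: the Pearson correlation between submission lateness and data-entry errors, computed in plain Python. The example numbers are made up and are not the study's data.

```python
# Pearson correlation between data-submission lateness and data-entry error
# counts per reporting period; values below are illustrative only.

def pearson_r(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

days_late    = [0, 2, 5, 1, 7, 3]     # submission lateness per batch
entry_errors = [1, 3, 9, 2, 12, 4]    # errors flagged during quality checks

print(f"r = {pearson_r(days_late, entry_errors):.2f}")
```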
James Webb Space Telescope - Applying Lessons Learned to I&T
NASA Technical Reports Server (NTRS)
Johns, Alan; Seaton, Bonita; Gal-Edd, Jonathan; Jones, Ronald; Fatig, Curtis; Wasiak, Francis
2008-01-01
The James Webb Space Telescope (JWST) is part of a new generation of spacecraft acquiring large data volumes from remote regions in space. To support a mission such as the JWST, it is imperative that lessons learned from the development of previous missions such as the Hubble Space Telescope and the Earth Observing System mission set be applied throughout the development and operational lifecycles. One example of a key lesson that should be applied is that core components, such as the command and telemetry system and the project database, should be developed early, used throughout development and testing, and evolved into the operational system. The purpose of applying lessons learned is to reap benefits in programmatic or technical parameters such as risk reduction, end product quality, cost efficiency, and schedule optimization. In the cited example, the early development and use of the operational command and telemetry system as well as the establishment of the intended operational database will allow these components to be used by the developers of various spacecraft components such that development, testing, and operations will all use the same core components. This will reduce risk through the elimination of transitions between development and operational components and improve end product quality by extending the verification of those components through continual use. This paper will discuss key lessons learned that have been or are being applied to the JWST Ground Segment integration and test program.
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Soltani, S Abolfazi; Ingolfsson, Armann; Zygun, David A; Stelfox, Henry T; Hartling, Lisa; Featherstone, Robin; Opgenorth, Dawn; Bagshaw, Sean M
2015-11-12
The matching of critical care service supply with demand is fundamental for the efficient delivery of advanced life support to patients in urgent need. Mismatch in this supply/demand relationship contributes to "intensive care unit (ICU) capacity strain," defined as a time-varying disruption in the ability of an ICU to provide well-timed and high-quality intensive care support to any and all patients who are or may become critically ill. ICU capacity strain leads to suboptimal quality of care and may directly contribute to heightened risk of adverse events, premature discharges, unplanned readmissions, and avoidable death. Unrelenting strain on ICU capacity contributes to inefficient health resource utilization and may negatively impact the satisfaction of patients, their families, and frontline providers. It is unknown how to optimally quantify the instantaneous and temporal "stress" an ICU experiences due to capacity strain. We will perform a systematic review to identify, appraise, and evaluate quality and performance measures of strain on ICU capacity and their association with relevant patient-centered, ICU-level, and health system-level outcomes. Electronic databases (i.e., MEDLINE, EMBASE, CINAHL, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Web of Science, and the Agency of Healthcare Research and Quality (AHRQ) - National Quality Measures Clearinghouse (NQMC)) will be searched for original studies of measures of ICU capacity strain. Selected gray literature sources will be searched. Search themes will focus on intensive care, quality, operations management, and capacity. Analysis will be primarily narrative. Each identified measure will be defined, characterized, and evaluated using the criteria proposed by the US Strategic Framework Board for a National Quality Measurement and Reporting System (i.e., importance, scientific acceptability, usability, feasibility). Our systematic review will comprehensively identify, define, and evaluate quality and performance measures of ICU capacity strain. This is a necessary step towards understanding the impact of capacity strain on quality and performance in intensive care and to develop innovative interventions aimed to improve efficiency, avoid waste, and better anticipate impending capacity shortfalls. PROSPERO, CRD42015017931.
Hammond, Davyda; Conlon, Kathryn; Barzyk, Timothy; Chahine, Teresa; Zartarian, Valerie; Schultz, Brad
2011-03-01
Communities are concerned over pollution levels and seek methods to systematically identify and prioritize the environmental stressors in their communities. Geographic information system (GIS) maps of environmental information can be useful tools for communities in their assessment of environmental-pollution-related risks. Databases and mapping tools that supply community-level estimates of ambient concentrations of hazardous pollutants, risk, and potential health impacts can provide relevant information for communities to understand, identify, and prioritize potential exposures and risk from multiple sources. An assessment of existing databases and mapping tools was conducted as part of this study to explore the utility of publicly available databases, and three of these databases were selected for use in a community-level GIS mapping application. Queried data from the U.S. EPA's National-Scale Air Toxics Assessment, Air Quality System, and National Emissions Inventory were mapped at the appropriate spatial and temporal resolutions for identifying risks of exposure to air pollutants in two communities. The maps combine monitored and model-simulated pollutant and health risk estimates, along with local survey results, to assist communities with the identification of potential exposure sources and pollution hot spots. Findings from this case study analysis will provide information to advance the development of new tools to assist communities with environmental risk assessments and hazard prioritization. © 2010 Society for Risk Analysis.
Rimland, Joseph M; Abraha, Iosief; Luchetta, Maria Laura; Cozzolino, Francesco; Orso, Massimiliano; Cherubini, Antonio; Dell'Aquila, Giuseppina; Chiatti, Carlos; Ambrosio, Giuseppe; Montedori, Alessandro
2016-06-01
Healthcare databases are useful sources to investigate the epidemiology of chronic obstructive pulmonary disease (COPD), to assess longitudinal outcomes in patients with COPD, and to develop disease management strategies. However, in order to constitute a reliable source for research, healthcare databases need to be validated. The aim of this protocol is to perform the first systematic review of studies reporting the validation of codes related to COPD diagnoses in healthcare databases. MEDLINE, EMBASE, Web of Science and the Cochrane Library databases will be searched using appropriate search strategies. Studies that evaluated the validity of COPD codes (such as the International Classification of Diseases 9th Revision and 10th Revision system; the Read codes system; or the International Classification of Primary Care) in healthcare databases will be included. Inclusion criteria will be: (1) the presence of a reference standard case definition for COPD; (2) the presence of at least one test measure (eg, sensitivity, positive predictive values, etc); and (3) the use of a healthcare database (including administrative claims databases, electronic healthcare databases or COPD registries) as a data source. Pairs of reviewers will independently abstract data using standardised forms and will assess quality using a checklist based on the Standards for Reporting of Diagnostic accuracy (STARD) criteria. This systematic review protocol has been produced in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocol (PRISMA-P) 2015 statement. Ethics approval is not required. Results of this study will be submitted to a peer-reviewed journal for publication. The results from this systematic review will be used for outcome research on COPD and will serve as a guide to identify appropriate case definitions of COPD, and reference standards, for researchers involved in validating healthcare databases. CRD42015029204. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
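For illustration of the test measures the review will extract, the sketch below computes sensitivity and positive predictive value of a database COPD code against a reference-standard case definition from hypothetical confusion-matrix counts.

```python
# Sensitivity and positive predictive value (PPV) of a database diagnosis code
# against a reference-standard case definition. Counts below are made up.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    return tp / (tp + fp)

# Hypothetical validation counts: code-positive vs reference-standard COPD.
tp, fp, fn = 180, 20, 40

print(f"sensitivity = {sensitivity(tp, fn):.2f}")
print(f"PPV         = {positive_predictive_value(tp, fp):.2f}")
```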
Mobile Source Observation Database (MSOD)
The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).
Using database reports to reduce workplace violence: Perceptions of hospital stakeholders
Arnetz, Judith E.; Hamblin, Lydia; Ager, Joel; Aranyos, Deanna; Essenmacher, Lynnette; Upfal, Mark J.; Luborsky, Mark
2016-01-01
BACKGROUND Documented incidents of violence provide the foundation for any workplace violence prevention program. However, no published research to date has examined stakeholders’ preferences for workplace violence data reports in healthcare settings. If relevant data are not readily available and effectively summarized and presented, the likelihood is low that they will be utilized by stakeholders in targeted efforts to reduce violence. OBJECTIVE To discover and describe hospital system stakeholders’ perceptions of database-generated workplace violence data reports. PARTICIPANTS Eight hospital system stakeholders representing Human Resources, Security, Occupational Health Services, Quality and Safety, and Labor in a large, metropolitan hospital system. METHODS The hospital system utilizes a central database for reporting adverse workplace events, including incidents of violence. A focus group was conducted to identify stakeholders’ preferences and specifications for standardized, computerized reports of workplace violence data to be generated by the central database. The discussion was audio-taped, transcribed verbatim, processed as text, and analyzed using stepwise content analysis. RESULTS Five distinct themes emerged from participant responses: Concerns, Etiology, Customization, Use, and Outcomes. In general, stakeholders wanted data reports to provide “the big picture,” i.e., rates of occurrence; reasons for and details regarding incident occurrence; consequences for the individual employee and/or the workplace; and organizational efforts that were employed to deal with the incident. CONCLUSIONS Exploring stakeholder views regarding workplace violence summary reports provided concrete information on the preferred content, format, and use of workplace violence data. Participants desired both epidemiological and incident-specific data in order to better understand and work to prevent the workplace violence occurring in their hospital system. PMID:25059315
The Danish Nonmelanoma Skin Cancer Dermatology Database.
Lamberg, Anna Lei; Sølvsten, Henrik; Lei, Ulrikke; Vinding, Gabrielle Randskov; Stender, Ida Marie; Jemec, Gregor Borut Ernst; Vestergaard, Tine; Thormann, Henrik; Hædersdal, Merete; Dam, Tomas Norman; Olesen, Anne Braae
2016-01-01
The Danish Nonmelanoma Skin Cancer Dermatology Database was established in 2008. The aim of this database was to collect data on nonmelanoma skin cancer (NMSC) treatment and improve its treatment in Denmark. NMSC is the most common malignancy in the western countries and represents a significant challenge in terms of public health management and health care costs. However, high-quality epidemiological and treatment data on NMSC are sparse. The NMSC database includes patients with the following skin tumors: basal cell carcinoma (BCC), squamous cell carcinoma, Bowen's disease, and keratoacanthoma diagnosed by the participating office-based dermatologists in Denmark. Clinical and histological diagnoses, BCC subtype, localization, size, skin cancer history, skin phototype, and evidence of metastases and treatment modality are the main variables in the NMSC database. Information on recurrence, cosmetic results, and complications are registered at two follow-up visits at 3 months (between 0 and 6 months) and 12 months (between 6 and 15 months) after treatment. In 2014, 11,522 patients with 17,575 tumors were registered in the database. Of tumors with a histological diagnosis, 13,571 were BCCs, 840 squamous cell carcinomas, 504 Bowen's disease, and 173 keratoacanthomas. The NMSC database encompasses detailed information on the type of tumor, a variety of prognostic factors, treatment modalities, and outcomes after treatment. The database has revealed that overall, the quality of care of NMSC in Danish dermatological clinics is high, and the database provides the necessary data for continuous quality assurance.
Kouloulias, V E; Ntasis, E; Poortmans, Ph; Maniatis, T A; Nikita, K S
2003-01-01
Developing web-based platforms for remote collaboration among physicians and technologists is becoming a major challenge. In this paper we describe a web-based radiotherapy treatment planning (WBRTP) system to facilitate decentralized radiotherapy services by allowing remote treatment planning and quality assurance (QA) of treatment delivery. Significant prerequisites are digital storage of relevant data as well as an efficient and reliable telecommunication system between collaborating units. The WBRTP system includes video conferencing, display of medical images (CT scans, dose distributions, etc.), replication of selected data from a common database, remote treatment planning, evaluation of treatment technique, and follow-up of the treated patients. Moreover, the system features real-time remote operations in terms of tele-consulting, such as target volume delineation performed by a team of experts at different and distant units. An appraisal of its possibilities for quality assurance in radiotherapy is also discussed. In conclusion, a WBRTP system would not only be a medium for communication between experts in oncology but mainly a tool for improving QA in radiotherapy.
Sulis, Andrea; Buscarinu, Paola; Soru, Oriana; Sechi, Giovanni M.
2014-01-01
The definition of a synthetic index for classifying the quality of water bodies is a key aspect in integrated planning and management of water resource systems. In previous works [1,2], a water system optimization modeling approach that requires a single quality index for stored water in reservoirs has been applied to a complex multi-reservoir system. Considering the same modeling field, this paper presents an improved quality index estimated both on the basis of the overall trophic state of the water body and on the basis of the density values of the most potentially toxic Cyanobacteria. The implementation of the index into the optimization model makes it possible to reproduce the conditions limiting water use due to excessive nutrient enrichment in the water body and to the health hazard linked to toxic blooms. The analysis of an extended limnological database (1996–2012) in four reservoirs of the Flumendosa-Campidano system (Sardinia, Italy) provides useful insights into the strengths and limitations of the proposed synthetic index. PMID:24759172
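A minimal sketch of how such a synthetic index could combine a trophic-state score with Cyanobacteria density into a single use-limitation class; the thresholds and class labels are invented for illustration and are not the index defined in the paper.

```python
# A hypothetical synthetic reservoir quality index combining an overall
# trophic-state score with the density of potentially toxic Cyanobacteria.
# All thresholds are made up for illustration.

def quality_class(trophic_state_index: float, cyano_cells_per_ml: float) -> int:
    """Return a use-limitation class: 0 = unrestricted ... 3 = most restricted."""
    if cyano_cells_per_ml > 100_000:        # health-hazard override for dense blooms
        return 3
    if trophic_state_index < 40:            # oligo/mesotrophic
        base = 0
    elif trophic_state_index < 60:          # eutrophic
        base = 1
    else:                                   # hypereutrophic
        base = 2
    if cyano_cells_per_ml > 20_000:         # moderate bloom worsens the class by one
        base = min(base + 1, 3)
    return base

print(quality_class(45.0, 5_000))    # -> 1
print(quality_class(45.0, 150_000))  # -> 3
```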
The Clear Creek Envirohydrologic Observatory: From Vision Toward Reality
NASA Astrophysics Data System (ADS)
Just, C.; Muste, M.; Kruger, A.
2007-12-01
As the vision of a fully functional Clear Creek Envirohydrologic Observatory comes closer to reality, the opportunities for significant watershed science advances in the near future become more apparent. As a starting point to approaching this vision, we focused on creating a working example of cyberinfrastructure in the hydrologic and environmental sciences. The system will integrate a broad range of technologies and ideas: wired and wireless sensors, low-power wireless communication, embedded microcontrollers, commodity cellular networks, the internet, unattended quality assurance, metadata, relational databases, machine-to-machine communication, interfaces to hydrologic and environmental models, feedback, and external inputs. Hardware: One accomplishment to date is in-house developed sensor networking electronics that complement commercially available communications equipment. The first of these networkable sensors are dielectric soil moisture probes that are arrayed and equipped with wireless connectivity for communications. Commercially available data logging and telemetry-enabled systems deployed at the Clear Creek testbed include a Campbell Scientific CR1000 datalogger, a Redwing 100 cellular modem, a YA Series yagi antenna, a NP12 rechargeable battery, and a BP SX20U solar panel. This networking equipment has been coupled with Hach DS5X water quality sondes, DTS-12 turbidity probes and MicroLAB nutrient analyzers. Software: Our existing data model is an Arc Hydro-based geodatabase customized with applications for extraction and population of the database with third-party data. The following third-party data are acquired automatically and in real time into the customized Arc Hydro database: 1) geophysical data: 10-m DEM and soil grids, soils; 2) land use/land cover data; and 3) eco-hydrological data: radar-based rainfall estimates, stream gage data, streamlines, and water quality data. New processing software for the analysis of Acoustic Doppler Current Profiler (ADCP) measurements has been finalized. The software package provides mean flow field and turbulence characteristics obtained by operating the ADCP at fixed points or using the moving-boat approach. Current Work: The current development work is focused on extracting and populating the Clear Creek database with in-situ measurements acquired and transmitted in real time with sensors deployed in the Clear Creek watershed.
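As an illustration of the kind of unattended, real-time ingest the observatory describes, the sketch below loads telemetered sensor readings into a relational table and applies a simple range check before storage. This is a minimal Python sketch under assumed table, field, site, and range names; the actual Clear Creek system uses an Arc Hydro-based geodatabase and its own QA procedures.

```python
# Illustrative sketch only: automated ingest of telemetered sensor readings into a
# relational database with a simple unattended range check. Table and field names
# are hypothetical, not those of the Clear Creek geodatabase.
import sqlite3
from datetime import datetime, timezone

VALID_RANGES = {"turbidity_ntu": (0.0, 4000.0), "soil_moisture_vwc": (0.0, 0.6)}

def ingest(conn, site_id, variable, value, timestamp=None):
    """Insert one observation, flagging values outside the plausible range."""
    timestamp = timestamp or datetime.now(timezone.utc).isoformat()
    lo, hi = VALID_RANGES.get(variable, (float("-inf"), float("inf")))
    flag = "ok" if lo <= value <= hi else "suspect"
    conn.execute(
        "INSERT INTO observations (site_id, variable, value, ts, qa_flag) VALUES (?, ?, ?, ?, ?)",
        (site_id, variable, value, timestamp, flag),
    )
    conn.commit()
    return flag

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (site_id TEXT, variable TEXT, value REAL, ts TEXT, qa_flag TEXT)")
print(ingest(conn, "CC-01", "turbidity_ntu", 12.7))    # ok
print(ingest(conn, "CC-01", "turbidity_ntu", 9999.0))  # suspect
```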
NASA Technical Reports Server (NTRS)
Liaw, Morris; Evesson, Donna
1988-01-01
This is a manual for users of the Software Engineering and Ada Database (SEAD). SEAD was developed to provide an information resource to NASA and NASA contractors with respect to Ada-based resources and activities that are available or underway either in NASA or elsewhere in the worldwide Ada community. The sharing of such information will reduce duplication of effort while improving quality in the development of future software systems. The manual describes the organization of the data in SEAD and the user interface from logging in to logging out, and concludes with a ten-chapter tutorial on how to use the information in SEAD. Two appendices provide quick reference for logging into SEAD and using the keyboard of an IBM 3270 or VT100 computer terminal.
Shah, Sachin D.; Quigley, Sean M.
2005-01-01
Air Force Plant 4 (AFP4) and the adjacent Naval Air Station-Joint Reserve Base (NAS-JRB) at Fort Worth, Tex., constitute a government-owned, contractor-operated (GOCO) facility that has been in operation since 1942. Contaminants from the facility, primarily volatile organic compounds (VOCs) and metals, have entered the groundwater-flow system through leakage from waste-disposal sites (landfills and pits) and from manufacturing processes (U.S. Air Force, Aeronautical Systems Center, 1995). The U.S. Geological Survey (USGS), in cooperation with the U.S. Air Force (USAF), Aeronautical Systems Center, Environmental Management Directorate (ASC/ENVR), developed a comprehensive database (or geodatabase) of temporal and spatial environmental information associated with the geology, hydrology, and water quality at AFP4 and NAS-JRB. The database of this report provides information about the AFP4 and NAS-JRB study area including sample location names, identification numbers, locations, historical dates, and various measured hydrologic data. This database does not include every sample location at the site, but is limited to an aggregation of selected digital and hardcopy data of the USAF, USGS, and various consultants who have previously worked or are currently working at the site.
NASA Astrophysics Data System (ADS)
Sheldon, W.; Chamblee, J.; Cary, R. H.
2013-12-01
Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools is provided with the software to support managing data using graphical forms and command-line functions, as well as developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, lightweight solution for environmental data and metadata management, but it can also be used in conjunction with other cyberinfrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.
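The toolbox itself is MATLAB; the following Python fragment is only a conceptual sketch of the metadata-template-driven quality control it describes: rules stored in an editable template are applied to raw logger records and produce qualifier flags that travel with the data. The rule names, flag codes, and variable names are assumptions for illustration, not the GCE Data Toolbox API.

```python
# Conceptual sketch (not the GCE Data Toolbox API): template-driven QC flagging.
qc_template = {
    "water_temp_c": [("range", -2.0, 40.0, "Q"), ("missing", None, None, "M")],
}

def apply_qc(records, template):
    """Return (value, flag) pairs per column; flags: '' ok, 'Q' questionable, 'M' missing."""
    flagged = []
    for rec in records:
        out = {}
        for col, value in rec.items():
            flag = ""
            for rule, lo, hi, code in template.get(col, []):
                if rule == "missing" and value is None:
                    flag = code
                elif rule == "range" and value is not None and not (lo <= value <= hi):
                    flag = code
            out[col] = (value, flag)
        flagged.append(out)
    return flagged

raw = [{"water_temp_c": 18.4}, {"water_temp_c": 55.0}, {"water_temp_c": None}]
for row in apply_qc(raw, qc_template):
    print(row)
```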
Quantitative approach for optimizing e-beam condition of photoresist inspection and measurement
NASA Astrophysics Data System (ADS)
Lin, Chia-Jen; Teng, Chia-Hao; Cheng, Po-Chung; Sato, Yoshishige; Huang, Shang-Chieh; Chen, Chu-En; Maruyama, Kotaro; Yamazaki, Yuichiro
2018-03-01
The severe process margins of advanced semiconductor technology nodes are controlled by e-beam metrology systems and e-beam inspection systems using scanning electron microscopy (SEM) images. With SEM, larger-area images with higher image quality are required to collect massive amounts of data for metrology and to detect defects over a large area for inspection. Although photoresist patterning is one of the critical processes in semiconductor device manufacturing, observing photoresist patterns in SEM images is both crucial and troublesome, especially for large images. The charging effect caused by e-beam irradiation of photoresist patterns degrades image quality, affects CD variation in the metrology system, and makes it difficult to continue defect inspection over long periods and large areas. In this study, we established a quantitative approach for optimizing the e-beam condition with the "Die to Database" algorithm of NGR3500 on photoresist patterns to minimize the charging effect, and we enhanced the performance of measurement and inspection on photoresist patterns by using the optimized e-beam condition. NGR3500 is a geometry verification system based on the "Die to Database" algorithm, which compares SEM images with design data [1]. By comparing SEM images with design data, key performance indicators (KPIs) of the SEM image such as "Sharpness", "S/N", "Gray level variation in FOV", and "Image shift" can be retrieved. These KPIs were analyzed under different e-beam conditions, which consist of "Landing Energy", "Probe Current", "Scanning Speed" and "Scanning Method", and the best e-beam condition could be achieved with maximum image quality, maximum scanning speed and minimum image shift. Through this quantitative approach to optimizing the e-beam condition, we could observe the dependency of photoresist charging on SEM conditions. By using the optimized e-beam condition, measurements could be continued stably on photoresist patterns for over 24 hours. The SEM image KPIs confirmed that image quality remained sufficiently stable during measurement and inspection.
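A hypothetical sketch of such a quantitative ranking is shown below: each candidate e-beam condition carries KPI values (sharpness, S/N, gray-level variation, image shift) that are combined into a single score, and the best-scoring condition is selected. The weights, condition names, and KPI numbers are invented for illustration and are not taken from the NGR3500 study.

```python
# Invented example: score candidate e-beam conditions by weighted KPIs and pick the best.
conditions = [
    {"name": "LE500eV_50pA_fast", "sharpness": 0.72, "snr": 14.1, "gray_var": 0.08, "shift_nm": 3.2},
    {"name": "LE800eV_30pA_fast", "sharpness": 0.81, "snr": 12.5, "gray_var": 0.05, "shift_nm": 1.9},
    {"name": "LE800eV_50pA_slow", "sharpness": 0.85, "snr": 16.0, "gray_var": 0.11, "shift_nm": 4.5},
]

def score(c):
    # Higher sharpness and S/N are better; gray-level variation and image shift are penalties.
    return 1.0 * c["sharpness"] + 0.05 * c["snr"] - 2.0 * c["gray_var"] - 0.1 * c["shift_nm"]

best = max(conditions, key=score)
print(best["name"], round(score(best), 3))
```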
Application of furniture images selection based on neural network
NASA Astrophysics Data System (ADS)
Wang, Yong; Gao, Wenwen; Wang, Ying
2018-05-01
In constructing a database of 2 million furniture images, and to address the problem of low database quality, a combination of a CNN and a metric learning algorithm is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This solves the problems that existing image screening methods are complex, their accuracy is not high, and they are time-consuming. With the improved data quality, the deep learning algorithm achieves excellent image matching performance in actual furniture retrieval applications.
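A minimal sketch of duplicate removal by embedding similarity is given below. The paper combines a CNN with metric learning; here random vectors stand in for learned image embeddings, and any image whose cosine similarity to an already kept image exceeds a threshold is dropped. The threshold and data are assumptions for illustration only.

```python
# Illustrative near-duplicate removal on stand-in embeddings (not the paper's trained model).
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(6, 128))
embeddings[3] = embeddings[1] + 0.01 * rng.normal(size=128)  # a near-duplicate of image 1

def deduplicate(vectors, threshold=0.95):
    kept = []
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    for i, v in enumerate(normed):
        if all(float(v @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

print(deduplicate(embeddings))  # image 3 is dropped as a duplicate of image 1
```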
[Development of Hospital Equipment Maintenance Information System].
Zhou, Zhixin
2015-11-01
A hospital equipment maintenance information system plays an important role in improving the quality and efficiency of medical treatment. Based on a requirement analysis of hospital equipment maintenance, the system function diagram is drawn. By analyzing the input and output data, tables, and reports connected with the equipment maintenance process, relationships between entities and attributes are identified, an E-R diagram is drawn, and relational database tables are established. Software development should meet the actual process requirements of maintenance and provide a friendly user interface and flexible operation. The software can analyze failure causes through statistical analysis.
Economic impact of electronic prescribing in the hospital setting: A systematic review.
Ahmed, Zamzam; Barber, Nick; Jani, Yogini; Garfield, Sara; Franklin, Bryony Dean
2016-04-01
To examine evidence on the economic impact of electronic prescribing (EP) systems in the hospital setting. We conducted a systematic search of MEDLINE, EMBASE, PsycINFO, International Pharmaceutical Abstracts, the NHS Economic Evaluation Database, the European Network of Health Economic Evaluation Database and Web of Science from inception to October 2013. Full and partial economic evaluations of EP or computerized provider order entry were included. We excluded studies assessing prescribing packages for specific drugs, and monetary outcomes that were not related to medicines. A checklist was used to evaluate risk of bias and evidence quality. The search yielded 1160 articles, of which three met the inclusion criteria. Two were full economic evaluations and one a partial economic evaluation. A meta-analysis was not appropriate as the studies were heterogeneous in design, economic evaluation method, interventions and outcome measures. Two studies investigated the financial impact of reducing preventable adverse drug events. The third measured savings related to various aspects of the system, including those related to medication. Two studies reported positive financial effects. However, the overall quality of the economic evidence was low and key details were often not reported. There seems to be some evidence of financial benefits of EP in the hospital setting. However, it is not clear whether this evidence is transferable to other settings. Research is scarce and limited in quality, and reported methods are not always transparent. Further robust, high-quality research is required to establish whether hospital EP is cost effective and thus inform policy makers' decisions. Copyright © 2016. Published by Elsevier Ireland Ltd.
Concentrations of indoor pollutants (CIP) database user's manual (Version 4. 0)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, M.G.; Brown, S.R.; Corradi, C.A.
1990-10-01
This is the latest release of the database and the user manual. The user manual is a tutorial and reference for utilizing the CIP Database system. An installation guide is included to cover various hardware configurations. Numerous examples and explanations of the dialogue between the user and the database program are provided. It is hoped that this resource will, along with on-line help and the menu-driven software, make for a quick and easy learning curve. For the purposes of this manual, it is assumed that the user is acquainted with the goals of the CIP Database, which are: (1) to collect existing measurements of concentrations of indoor air pollutants in a user-oriented database and (2) to provide a repository of references citing measured field results openly accessible to a wide audience of researchers, policy makers, and others interested in the issues of indoor air quality. The database software, as distinct from the data, is contained in two files, CIP.EXE and PFIL.COM. CIP.EXE is made up of a number of programs written in dBase III command code and compiled using Clipper into a single, executable file. PFIL.COM is a program written in Turbo Pascal that handles the output of summary text files and is called from CIP.EXE. Version 4.0 of the CIP Database is current through March 1990.
The USA-NPN Information Management System: A tool in support of phenological assessments
NASA Astrophysics Data System (ADS)
Rosemartin, A.; Vazquez, R.; Wilson, B. E.; Denny, E. G.
2009-12-01
The USA National Phenology Network (USA-NPN) serves science and society by promoting a broad understanding of plant and animal phenology and the relationships among phenological patterns and all aspects of environmental change. Data management and information sharing are central to the USA-NPN mission. The USA-NPN develops, implements, and maintains a comprehensive Information Management System (IMS) to serve the needs of the network, including the collection, storage and dissemination of phenology data, access to phenology-related information, tools for data interpretation, and communication among partners of the USA-NPN. The IMS includes components for data storage, such as the National Phenology Database (NPD), and several online user interfaces to accommodate data entry, data download, data visualization and catalog searches for phenology-related information. The IMS is governed by a set of standards to ensure security, privacy, data access, and data quality. The National Phenology Database is designed to efficiently accommodate large quantities of phenology data, to be flexible to the changing needs of the network, and to provide for quality control. The database stores phenology data from multiple sources (e.g., partner organizations, researchers and citizen observers), and provides for integration with legacy datasets. Several services will be created to provide access to the data, including reports, visualization interfaces, and web services. These services will provide integrated access to phenology and related information for scientists, decision-makers and general audiences. Phenological assessments at any scale will rely on secure and flexible information management systems for the organization and analysis of phenology data. The USA-NPN’s IMS can serve phenology assessments directly, through data management and indirectly as a model for large-scale integrated data management.
ERIC Educational Resources Information Center
Miller, Frank W.; Loeding, Deborah Voigt
1989-01-01
Discussion of technological developments in library reference services focuses on contributions of the H. W. Wilson Company and highlights CD-ROM technology. Topics discussed include online access; menu-driven systems; CD-ROM hardware and software concerns; user response to CD-ROM; quality control of databases; pricing considerations; and future…
Testing in Service-Oriented Environments
2010-03-01
software releases (versions, service packs, vulnerability patches) for one common ESB during the 13-month period from January 1, 2008 through...impact on quality of service: Unlike traditional software components, a single instance of a web service can be used by multiple consumers. Since the...distributed, with heterogeneous hardware and software (SOA infrastructure, services, operating systems, and databases). Because of cost and security, it
Zanganeh Baygi, Mehdi; Seyedin, Hesam
2013-07-01
In recent years, the main focus of health sector reforms in Iran has been the family physician and referral system plan. Fundamental changes in goals and strategies have increased the need to reform the organizational structure. This study reviews and summarizes the literature on the organizational structure of Iran's primary health care system and its challenges. This study was a systematic review of published and grey literature. We searched the relevant databases, the bibliographies of related papers, and laws, using appropriate search strategies and key words. The CASP tool was used by two experts to evaluate the quality of retrieved papers, and inconsistencies were resolved by discussion. After removal of duplicate citations, a total of 52 titles were identified through database searching, among which 30 met the inclusion criteria. Considering the research quality criteria, 14 papers were recognized as qualified, and these were categorized into two groups: articles and policies. The results showed the ineffectiveness of the current organizational structure at different levels. The majority of the papers recommend reforming the system because of changes in goals and strategies. Some also suggest that an appropriate information system be designed within the current structures. Centralization and delegation are the main processes discussed in the studies. Because of fundamental changes in goals and strategies, reforms in the organizational structure of the primary health system in Iran, especially at peripheral levels, are highly recommended.
Moreno, Lilliana I; Brown, Alice L; Callaghan, Thomas F
2017-07-01
Rapid DNA platforms are fully integrated systems capable of producing and analyzing short tandem repeat (STR) profiles from reference sample buccal swabs in less than two hours. The technology requires minimal user interaction and experience, making it possible for high-quality profiles to be generated outside an accredited laboratory. The automated production of point-of-collection reference STR profiles could eliminate the time delay for shipment and analysis of arrestee samples at centralized laboratories. Furthermore, point-of-collection analysis would allow searching against profiles from unsolved crimes during the normal booking process once the infrastructure to immediately search the Combined DNA Index System (CODIS) database from the booking station is established. The DNAscan/ANDE™ Rapid DNA Analysis™ System developed by Network Biosystems was evaluated for robustness and reliability in the production of high-quality reference STR profiles for database enrollment and searching applications. A total of 193 reference samples were assessed for concordance at the 13 CODIS loci. Studies to evaluate contamination, reproducibility, precision, stutter, peak height ratio, noise and sensitivity were also performed. The system proved to be robust, consistent and dependable. Results indicated an overall success rate of 75% for the 13 CODIS core loci and, more importantly, no incorrect calls were identified. The DNAscan/ANDE™ could be confidently used without human interaction in both laboratory and non-laboratory settings to generate reference profiles. Published by Elsevier B.V.
Data analyst technician: an innovative role for the pharmacy technician.
Ervin, K C; Skledar, S; Hess, M M; Ryan, M
2001-10-01
The development of an innovative role for the pharmacy technician is described. The role was based on a needs assessment and the expertise of the pharmacy technician selected. Initial responsibilities of the technician included chart reviews, benchmarking surveys, monthly financial impact analysis, initiative assessment, and quality improvement reporting. As the drug-use and disease-state management (DUDSM) program expanded, pharmacist activities increased, requiring the expansion of data analyst technician (DAT) duties. These new responsibilities included participation in patient assessment, data collection and interpretation, and formulary enforcement. Most recently, the technicians' expanded duties include maintenance of a physician compliance profiling database, quality improvement reporting and graphing, an active role in patient risk assessment and database management for adult vaccination, and support of financial impact monitoring for other institutions within the health system. This pharmacist-technician collaboration resulted in a threefold increase in patient assessments completed per day. In addition, as the DUDSM program continues to expand across the health system, an increase in DAT resources from 0.5 to 1.0 full-time equivalent was obtained. The role of the DAT has increased the efficiency of the DUDSM program and has provided an innovative role for the pharmacy technician.
Considerations to improve functional annotations in biological databases.
Benítez-Páez, Alfonso
2009-12-01
Despite the great effort to design efficient systems allowing the electronic indexation of information concerning genes, proteins, structures, and interactions published daily in scientific journals, some problems are still observed in specific tasks such as functional annotation. The annotation of function is a critical issue for bioinformatic routines such as functional genomics and the prediction of unknown protein function, which are highly dependent on the quality of existing annotations. Some information management systems have evolved to efficiently incorporate information from large-scale projects, but annotation of single records from the literature is often difficult and slow. In this short report, functional characterizations of a representative sample of the entire set of uncharacterized proteins from Escherichia coli K12 were compiled from Swiss-Prot, PubMed, and EcoCyc, demonstrating a functional annotation deficit in biological databases. Some issues are postulated as causes of the lack of annotation, and different solutions are evaluated and proposed to avoid them. The hope is that, as a consequence of these observations, there will be new impetus to improve the speed and quality of functional annotation and ultimately provide updated, reliable information to the scientific community.
Save medical personnel's time by improved user interfaces.
Kindler, H
1997-01-01
Common objectives in the industrialized countries are the improvement of quality of care, clinical effectiveness, and cost control. Cost control, in particular, has been addressed through the introduction of case-mix systems for reimbursement by social-security institutions. More data are required to enable quality improvement and increases in clinical effectiveness, and for juridical reasons. At first glance, this documentation effort contradicts cost reduction. However, integrated services for resource management based on better documentation should help to reduce costs. The clerical effort for documentation should be decreased by providing a co-operative working environment for healthcare professionals that applies sophisticated human-computer interface technology. Additional services, e.g., automatic report generation, increase the efficiency of healthcare personnel. Modelling the medical workflow forms an essential prerequisite for integrated resource management services and for co-operative user interfaces. A user interface aware of the workflow provides intelligent assistance by offering the appropriate tools at the right moment. Nowadays there is a trend towards client/server systems with relational databases or object-oriented databases as the repository. The workflows used for controlling purposes and to steer the user interfaces must be represented in the repository.
Automated data mining of a proprietary database system for physician quality improvement.
Johnstone, Peter A S; Crenshaw, Tim; Cassels, Diane G; Fox, Timothy H
2008-04-01
Physician practice quality improvement is a subject of intense national debate. This report describes the use of a software data acquisition program to mine an existing, commonly used proprietary radiation oncology database to assess physician performance. Between 2003 and 2004, a manual analysis was performed of electronic portal image (EPI) review records. Custom software was recently developed to mine the record-and-verify database and the EPI review process at our institution. In late 2006, a report was developed that allowed for immediate review of physician completeness and speed of EPI review for any prescribed period. The software extracted >46,000 EPIs between 2003 and 2007, providing the EPI review status and time to review for each physician. Between 2003 and 2007, departmental EPI review improved from 77% to 97% (range, 85.4-100%), with a decrease in the mean time to review from 4.2 days to 2.4 days. The initial intervention in 2003 to 2004 was moderately successful in changing EPI review patterns; it was not repeated because of the time required to perform it. However, the implementation in 2006 of the automated review tool yielded a profound change in practice. Using the software, the automated chart review required approximately 1.5 h for mining and extracting the data for the 4-year period. This study quantified the EPI review process as it evolved during a 4-year period at our institution and found that automation of data retrieval and review simplified and facilitated physician quality improvement.
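A minimal sketch of the kind of report described, using a hypothetical record layout, is shown below: for each physician it computes the fraction of electronic portal images reviewed and the mean number of days to review.

```python
# Hypothetical record layout; illustrates completeness and time-to-review reporting only.
from datetime import date
from collections import defaultdict

records = [  # (physician, image date, review date or None if unreviewed)
    ("Dr_A", date(2007, 3, 1), date(2007, 3, 3)),
    ("Dr_A", date(2007, 3, 8), None),
    ("Dr_B", date(2007, 3, 2), date(2007, 3, 4)),
]

stats = defaultdict(lambda: {"total": 0, "reviewed": 0, "days": 0})
for physician, imaged, reviewed in records:
    s = stats[physician]
    s["total"] += 1
    if reviewed is not None:
        s["reviewed"] += 1
        s["days"] += (reviewed - imaged).days

for physician, s in stats.items():
    pct = 100.0 * s["reviewed"] / s["total"]
    mean_days = s["days"] / s["reviewed"] if s["reviewed"] else float("nan")
    print(f"{physician}: {pct:.0f}% reviewed, mean {mean_days:.1f} days to review")
```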
Stoop, Rahel; Clijsen, Ron; Leoni, Diego; Soldini, Emiliano; Castellini, Greta; Redaelli, Valentina; Barbero, Marco
2017-08-01
The methodological quality of controlled clinical trials (CCTs) of physiotherapeutic treatment modalities for myofascial trigger points (MTrPs) has not been investigated yet. To assess the methodological quality of CCTs of physiotherapy treatments for MTrPs and to demonstrate any increase over time. Systematic review. A systematic search was conducted in two databases, the Physiotherapy Evidence Database (PEDro) and the Medical Literature Analysis and Retrieval System Online (MEDLINE), using the same keywords and a selection procedure corresponding to pre-defined inclusion criteria. The methodological quality, assessed by the 11-item PEDro scale, served as the outcome measure. The CCTs had to compare at least two interventions, where one intervention had to lie within the scope of physiotherapy. Participants had to be diagnosed with myofascial pain syndrome or trigger points (active or latent). A total of n = 230 studies were analysed. The cervico-thoracic region was the most frequently treated body part (n = 143). Application of electrophysical agents was the most frequent intervention. The average methodological quality reached 5.5 on the PEDro scale. A total of n = 6 studies scored 9. The average PEDro score increased by 0.7 points per decade between 1978 and 2015. The average PEDro score of CCTs for MTrP treatments does not reach the cut-off of 6 proposed for moderate to high methodological quality. Nevertheless, a promising trend towards an increase in the average methodological quality of CCTs for MTrPs was recorded. More high-quality CCTs with thorough research procedures are recommended to enhance methodological quality. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
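The reported trend of roughly 0.7 PEDro points per decade can be illustrated with an ordinary least-squares slope over (publication year, PEDro score) pairs, as in the sketch below; the year-score pairs are invented for demonstration and are not the review's data.

```python
# Invented data; shows only how a per-decade trend can be estimated from yearly scores.
import numpy as np

years = np.array([1980, 1988, 1995, 2001, 2008, 2014])
scores = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])

slope, intercept = np.polyfit(years, scores, 1)  # least-squares linear fit
print(f"Trend: {slope * 10:.2f} PEDro points per decade")
```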
Protecting water quality in the watershed
DOE Office of Scientific and Technical Information (OSTI.GOV)
James, C.R.; Johnson, K.E.; Stewart, E.H.
1994-08-01
This article highlights the water quality component of a watershed management plan being developed for the San Francisco (CA) Water Department. The physical characteristics of the 63,000-acre watersheds were analyzed for source and transport vulnerability for five groups of water quality parameters--particulates, THM precursors, microorganisms (Giardia and Cryptosporidium), nutrients (nitrogen and phosphorus), and synthetic organic chemicals--and vulnerability zones were mapped. Mapping was achieved through the use of an extensive geographic information system (GIS) database. Each water quality vulnerability zone map was developed based on five watershed physical characteristics--soils, slope, vegetation, wildlife concentration, and proximity to water bodies--and their relationships to each of the five groups of water quality parameters. An approach to incorporate the watershed physical characteristics information into the five water quality vulnerability zone maps was defined and verified. The composite approach was based in part on information gathered from existing watershed management plans.
A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality
NASA Astrophysics Data System (ADS)
Wang, Manyi; Liu, Chaoshun; Gao, Wei
2014-10-01
An online visual analytical system based on Java Web and WebGIS for air quality data for Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. By analyzing the architectures of WebGIS and Java Web, we first designed the overall system architecture, then specified the software and hardware environment and determined the main function modules of the system. The visual system was ultimately built with the DIV + CSS layout method combined with JSP, JavaScript, and other programming languages in the Java programming environment. Moreover, the Struts, Spring, and Hibernate frameworks (SSH) were integrated into the system for ease of maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to enable rapid visualization in the browser. Experimental results indicate that this system can quickly respond to users' requests and efficiently return accurate processing results.
Determinants of quality management systems implementation in hospitals.
Wardhani, Viera; Utarini, Adi; van Dijk, Jitse Pieter; Post, Doeke; Groothoff, Johan Willem
2009-03-01
To identify the problems and facilitating factors in the implementation of quality management systems (QMS) in hospitals through a systematic review. A search strategy was performed on the Medline database for articles written in English published between 1992 and early 2006. Using the thesaurus terms 'Total Quality Management' and 'Quality Assurance Health Care', combined with the terms 'hospital' and 'implement*', we identified 533 publications. The screening process was based on empirical articles describing organization-wide QMS implementation. Fourteen empirical articles fulfilled the inclusion criteria and were reviewed in this paper. An organizational culture emphasizing standards and values associated with affiliation, teamwork and innovation, and an assumption of change and risk taking, is the key success factor in QMS implementation. This culture needs to be supported by sufficient technical competence to apply a scientific problem-solving approach. A clear distribution of QMS functions within the organizational structure is more important than establishing a formal quality structure. In addition to management leadership, physician involvement also plays an important role in implementing QMS. Six supporting and limiting factors determining QMS implementation are identified in this review: organizational culture, design, leadership for quality, physician involvement, quality structure and technical competence.
Shah, Prakesh S.; McDonald, Sarah D.; Barrett, Jon; Synnes, Anne; Robson, Kate; Foster, Jonathan; Pasquier, Jean-Charles; Joseph, K.S.; Piedboeuf, Bruno; Lacaze-Masmonteil, Thierry; O'Brien, Karel; Shivananda, Sandesh; Chaillet, Nils; Pechlivanoglou, Petros
2018-01-01
Background: Preterm birth (birth before 37 wk of gestation) occurs in about 8% of pregnancies in Canada and is associated with high mortality and morbidity rates that substantially affect infants, their families and the health care system. Our overall goal is to create a transdisciplinary platform, the Canadian Preterm Birth Network (CPTBN), where investigators, stakeholders and families will work together to improve childhood outcomes of preterm neonates. Methods: Our national cohort will include 24 maternal-fetal/obstetrical units, 31 neonatal intensive care units and 26 neonatal follow-up programs across Canada with planned linkages to provincial health information systems. Three broad clusters of projects will be undertaken. Cluster 1 will focus on quality-improvement efforts that use the Evidence-based Practice for Improving Quality method to evaluate information from the CPTBN database and review the current literature, then identify potentially better health care practices and implement identified strategies. Cluster 2 will assess the impact of current practices and practice changes in maternal, perinatal and neonatal care on maternal, neonatal and neurodevelopmental outcomes. Cluster 3 will evaluate the effect of preterm birth on babies, their families and the health care system by integrating CPTBN data, parent feedback, and national and provincial database information in order to identify areas where more parental support is needed, and also generate robust estimates of resource use, cost and cost-effectiveness around preterm neonatal care. Interpretation: These collaborative efforts will create a flexible, transdisciplinary, evaluable and informative research and quality-improvement platform that supports programs, projects and partnerships focused on improving outcomes of preterm neonates. PMID:29348260
Ventilator-Related Adverse Events: A Taxonomy and Findings From 3 Incident Reporting Systems.
Pham, Julius Cuong; Williams, Tamara L; Sparnon, Erin M; Cillie, Tam K; Scharen, Hilda F; Marella, William M
2016-05-01
In 2009, researchers from Johns Hopkins University's Armstrong Institute for Patient Safety and Quality; public agencies, including the FDA; and private partners, including the Emergency Care Research Institute and the University HealthSystem Consortium (UHC) Safety Intelligence Patient Safety Organization, sought to form a public-private partnership for the promotion of patient safety (P5S) to advance patient safety through voluntary partnerships. The study objective was to test the concept of the P5S to advance our understanding of safety issues related to ventilator events, to develop a common classification system for categorizing adverse events related to mechanical ventilators, and to perform a comparison of adverse events across different adverse event reporting systems. We performed a cross-sectional analysis of ventilator-related adverse events reported in 2012 from the following incident reporting systems: the Pennsylvania Patient Safety Authority's Patient Safety Reporting System, UHC's Safety Intelligence Patient Safety Organization database, and the FDA's Manufacturer and User Facility Device Experience database. Once each organization had its dataset of ventilator-related adverse events, reviewers read the narrative descriptions of each event and classified it according to the developed common taxonomy. A Pennsylvania Patient Safety Authority, FDA, and UHC search provided 252, 274, and 700 relevant reports, respectively. The 3 event types most commonly reported to the UHC and the Pennsylvania Patient Safety Authority's Patient Safety Reporting System databases were airway/breathing circuit issue, human factor issues, and ventilator malfunction events. The top 3 event types reported to the FDA were ventilator malfunction, power source issue, and alarm failure. Overall, we found that (1) through the development of a common taxonomy, adverse events from 3 reporting systems can be evaluated, (2) the types of events reported in each database were related to the purpose of the database and the source of the reports, resulting in significant differences in reported event categories across the 3 systems, and (3) a public-private collaboration for investigating ventilator-related adverse events under the P5S model is feasible. Copyright © 2016 by Daedalus Enterprises.
Meta-All: a system for managing metabolic pathway information.
Weise, Stephan; Grosse, Ivo; Klukas, Christian; Koschützki, Dirk; Scholz, Uwe; Schreiber, Falk; Junker, Björn H
2006-10-23
Many attempts are being made to understand biological subjects at a systems level. A major resource for these approaches is biological databases, storing manifold information about DNA, RNA and protein sequences including their functional and structural motifs, molecular markers, mRNA expression levels, metabolite concentrations, protein-protein interactions, phenotypic traits or taxonomic relationships. The use of these databases is often hampered by the fact that they are designed for special application areas and thus lack universality. Databases on metabolic pathways, which provide an increasingly important foundation for many analyses of biochemical processes at a systems level, are no exception to the rule. Data stored in central databases such as KEGG, BRENDA or SABIO-RK is often limited to read-only access. If experimentalists want to store their own data, possibly still under investigation, there are two possibilities. They can either develop their own information system for managing their own data, which is very time-consuming and costly, or they can try to store their data in existing systems, which is often restricted. Hence, an out-of-the-box information system for managing metabolic pathway data is needed. We have designed META-ALL, an information system that allows the management of metabolic pathways, including reaction kinetics, detailed locations, environmental factors and taxonomic information. Data can be stored together with quality tags and in different parallel versions. META-ALL uses Oracle DBMS and Oracle Application Express. We provide the META-ALL information system for download and use. In this paper, we describe the database structure and give information about the tools for submitting and accessing the data. As a first application of META-ALL, we show how the information contained in a detailed kinetic model can be stored and accessed. META-ALL is a system for managing information about metabolic pathways. It facilitates the handling of pathway-related data and is designed to help biochemists and molecular biologists in their daily research. It is available on the Web at http://bic-gh.de/meta-all and can be downloaded free of charge and installed locally.
Meta-All: a system for managing metabolic pathway information
Weise, Stephan; Grosse, Ivo; Klukas, Christian; Koschützki, Dirk; Scholz, Uwe; Schreiber, Falk; Junker, Björn H
2006-01-01
Background: Many attempts are being made to understand biological subjects at a systems level. A major resource for these approaches is biological databases, storing manifold information about DNA, RNA and protein sequences including their functional and structural motifs, molecular markers, mRNA expression levels, metabolite concentrations, protein-protein interactions, phenotypic traits or taxonomic relationships. The use of these databases is often hampered by the fact that they are designed for special application areas and thus lack universality. Databases on metabolic pathways, which provide an increasingly important foundation for many analyses of biochemical processes at a systems level, are no exception to the rule. Data stored in central databases such as KEGG, BRENDA or SABIO-RK is often limited to read-only access. If experimentalists want to store their own data, possibly still under investigation, there are two possibilities. They can either develop their own information system for managing their own data, which is very time-consuming and costly, or they can try to store their data in existing systems, which is often restricted. Hence, an out-of-the-box information system for managing metabolic pathway data is needed. Results: We have designed META-ALL, an information system that allows the management of metabolic pathways, including reaction kinetics, detailed locations, environmental factors and taxonomic information. Data can be stored together with quality tags and in different parallel versions. META-ALL uses Oracle DBMS and Oracle Application Express. We provide the META-ALL information system for download and use. In this paper, we describe the database structure and give information about the tools for submitting and accessing the data. As a first application of META-ALL, we show how the information contained in a detailed kinetic model can be stored and accessed. Conclusion: META-ALL is a system for managing information about metabolic pathways. It facilitates the handling of pathway-related data and is designed to help biochemists and molecular biologists in their daily research. It is available on the Web at and can be downloaded free of charge and installed locally. PMID:17059592
NASA Technical Reports Server (NTRS)
ONeil, D. A.; Craig, D. A.; Christensen, C. B.; Gresham, E. C.
2005-01-01
The objective of this Technical Interchange Meeting was to increase the quantity and quality of technical, cost, and programmatic data used to model the impact of investing in different technologies. The focus of this meeting was the Technology Tool Box (TTB), a database of performance, operations, and programmatic parameters provided by technologists and used by systems engineers. The TTB is the data repository used by a system of models known as the Advanced Technology Lifecycle Analysis System (ATLAS). This report describes the result of the November meeting, and also provides background information on ATLAS and the TTB.
Information of urban morphological features at high resolution is needed to properly model and characterize the meteorological and air quality fields in urban areas. We describe a new project called National Urban Database with Access Portal Tool, (NUDAPT) that addresses this nee...
WLN's Database: New Directions.
ERIC Educational Resources Information Center
Ziegman, Bruce N.
1988-01-01
Describes features of the Western Library Network's database, including the database structure, authority control, contents, quality control, and distribution methods. The discussion covers changes in distribution necessitated by increasing telecommunications costs and the development of optical data disk products. (CLB)
In formulating hypothesis related to extrapolations across species and/or chemicals, the ECOTOX database provides researchers a means of locating high quality ecological effects data for a wide-range of terrestrial and aquatic receptors. Currently the database includes more than ...
Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.
Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre
2016-04-01
The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for the automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per lesion and per image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.
The creation, management, and use of data quality information for life cycle assessment.
Edelen, Ashley; Ingwersen, Wesley W
2018-04-01
Despite growing access to data, questions of "best fit" data and the appropriate use of results in supporting decision making still plague the life cycle assessment (LCA) community. This discussion paper addresses revisions to assessing data quality captured in a new US Environmental Protection Agency guidance document as well as additional recommendations on data quality creation, management, and use in LCA databases and studies. Existing data quality systems and approaches in LCA were reviewed and tested. The evaluations resulted in a revision to a commonly used pedigree matrix, for which flow and process level data quality indicators are described, more clarity for scoring criteria, and further guidance on interpretation are given. Increased training for practitioners on data quality application and its limits are recommended. A multi-faceted approach to data quality assessment utilizing the pedigree method alongside uncertainty analysis in result interpretation is recommended. A method of data quality score aggregation is proposed and recommendations for usage of data quality scores in existing data are made to enable improved use of data quality scores in LCA results interpretation. Roles for data generators, data repositories, and data users are described in LCA data quality management. Guidance is provided on using data with data quality scores from other systems alongside data with scores from the new system. The new pedigree matrix and recommended data quality aggregation procedure can now be implemented in openLCA software. Additional ways in which data quality assessment might be improved and expanded are described. Interoperability efforts in LCA data should focus on descriptors to enable user scoring of data quality rather than translation of existing scores. Developing and using data quality indicators for additional dimensions of LCA data, and automation of data quality scoring through metadata extraction and comparison to goal and scope are needed.
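One plausible form of score aggregation is sketched below: pedigree-style indicator scores for each flow are combined into a process-level score by weighting each flow by its contribution to the result. This is an assumption for illustration only, not the specific procedure recommended in the EPA guidance or implemented in openLCA.

```python
# Illustrative contribution-weighted aggregation of pedigree scores (1 = best, 5 = worst).
flows = [
    {"name": "electricity", "contribution": 0.6, "reliability": 2, "temporal": 3},
    {"name": "diesel",      "contribution": 0.3, "reliability": 1, "temporal": 2},
    {"name": "lubricant",   "contribution": 0.1, "reliability": 4, "temporal": 5},
]

def aggregate(flows, indicators=("reliability", "temporal")):
    """Process-level score per indicator, weighting each flow by its contribution."""
    total = sum(f["contribution"] for f in flows)
    return {
        ind: sum(f[ind] * f["contribution"] for f in flows) / total
        for ind in indicators
    }

print(aggregate(flows))
```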
[Communication of psychiatric hospitals' specialization].
Thielscher, Christian; Kox, Andreas; Schütte, Michael
2010-09-01
To analyze whether specialization of psychiatric hospitals results in quality improvement, and whether it can and should be measured and communicated to patients and ambulatory care physicians. In-depth interviews were conducted with key decision makers in the German psychiatric care system. There are several specializations within the system of psychiatric hospital care that can be communicated to patients and physicians; this would facilitate the choice of hospital. There is no national database available yet. Data collection and communication provided by an independent organization would improve knowledge about hospital specialization.
NASA Astrophysics Data System (ADS)
Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Fontaine, Alain
2016-04-01
IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Database Portal (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). The new IAGOS Database Portal was released in December 2015. The main improvement is the implementation of interoperability with international portals and other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the CAMS data center in Jülich (http://join.iek.fz-juelich.de). The CAMS (Copernicus Atmospheric Monitoring Service) project is a prominent user of the IGAS data network. The new portal provides improved and new services such as downloads in NetCDF or NASA Ames formats, plotting tools (maps, time series, vertical profiles, etc.) and user management. Added-value products are available on the portal: back trajectories, origin of air masses, co-location with satellite data, etc. The link with the CAMS data center, through JOIN (Jülich OWS Interface), allows model outputs to be combined with IAGOS data for inter-comparison. Finally, IAGOS metadata have been standardized (ISO 19115) and now provide complete information about data traceability and quality.
Enright, Katherine A; Taback, Nathan; Powis, Melanie Lynn; Gonzalez, Alejandro; Yun, Lingsong; Sutradhar, Rinku; Trudeau, Maureen E; Booth, Christopher M; Krzyzanowska, Monika K
2017-10-01
Purpose: Routine evaluation of quality measures (QMs) can drive improvement in cancer systems by highlighting gaps in care. Targeting quality improvement at QMs that demonstrate substantial variation has the potential to make the largest impact at the population level. We developed an approach that uses both variation in performance and number of patients affected by the QM to set priorities for improving the quality of systemic therapy for women with early-stage breast cancer (EBC). Patients and Methods: Patients with EBC diagnosed from 2006 to 2010 in Ontario, Canada, were identified in the Ontario Cancer Registry and linked deterministically to multiple health care databases. Individual QMs within a panel of 15 QMs previously developed to assess the quality of systemic therapy across four domains (access, treatment delivery, toxicity, and safety) were ranked on interinstitutional variation in performance (using interquartile range) and the number of patients who were affected; then the two rankings were averaged for a summative priority ranking. Results: We identified 28,427 patients with EBC who were treated at 84 institutions. The use of computerized physician electronic order entry for chemotherapy, emergency room visits or hospitalizations during chemotherapy, and timely receipt of chemotherapy were identified as the QMs that had the largest potential to improve quality of care at a system level within this cohort. Conclusion: A simple ranking system based on interinstitutional variation in performance and patient volume can be used to identify high-priority areas for quality improvement from a population perspective. This approach is generalizable to other health care systems that use QMs to drive improvement.
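A small sketch of the described prioritization follows: each quality measure is ranked by the interquartile range of institutional performance and by the number of patients affected, and the two ranks are averaged into a priority score. The QM names and figures are invented for illustration.

```python
# Invented figures; demonstrates the rank-by-IQR plus rank-by-volume averaging only.
import numpy as np

qms = {
    "CPOE use":               {"institution_rates": [0.20, 0.50, 0.90, 0.95], "patients_affected": 20000},
    "Timely chemotherapy":    {"institution_rates": [0.70, 0.75, 0.80, 0.85], "patients_affected": 15000},
    "ER visits during chemo": {"institution_rates": [0.30, 0.40, 0.60, 0.80], "patients_affected": 18000},
}

def rank(values, reverse=True):
    """Rank dict keys by value; rank 1 = largest value when reverse=True."""
    order = sorted(values, key=values.get, reverse=reverse)
    return {k: i + 1 for i, k in enumerate(order)}

iqr = {k: float(np.subtract(*np.percentile(v["institution_rates"], [75, 25]))) for k, v in qms.items()}
vol = {k: v["patients_affected"] for k, v in qms.items()}
iqr_rank, vol_rank = rank(iqr), rank(vol)
priority = {k: (iqr_rank[k] + vol_rank[k]) / 2 for k in qms}
for k in sorted(priority, key=priority.get):  # lowest averaged rank = highest priority
    print(k, priority[k])
```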
Foot and Ankle Fellowship Websites: An Assessment of Accessibility and Quality.
Hinds, Richard M; Danna, Natalie R; Capo, John T; Mroczek, Kenneth J
2017-08-01
The Internet has been reported to be the first informational resource for many fellowship applicants. The objective of this study was to assess the accessibility of orthopaedic foot and ankle fellowship websites and to evaluate the quality of information provided via program websites. The American Orthopaedic Foot and Ankle Society (AOFAS) and the Fellowship and Residency Electronic Interactive Database (FREIDA) fellowship databases were accessed to generate a comprehensive list of orthopaedic foot and ankle fellowship programs. The databases were reviewed for links to fellowship program websites and compared with program websites accessed from a Google search. Accessible fellowship websites were then analyzed for the quality of recruitment and educational content pertinent to fellowship applicants. Forty-seven orthopaedic foot and ankle fellowship programs were identified. The AOFAS database featured direct links to 7 (15%) fellowship websites with the independent Google search yielding direct links to 29 (62%) websites. No direct website links were provided in the FREIDA database. Thirty-six accessible websites were analyzed for content. Program websites featured a mean 44% (range = 5% to 75%) of the total assessed content. The most commonly presented recruitment and educational content was a program description (94%) and description of fellow operative experience (83%), respectively. There is substantial variability in the accessibility and quality of orthopaedic foot and ankle fellowship websites. Recognition of deficits in accessibility and content quality may assist foot and ankle fellowships in improving program information online. Level IV.
Maetens, Arno; De Schreye, Robrecht; Faes, Kristof; Houttekier, Dirk; Deliens, Luc; Gielen, Birgit; De Gendt, Cindy; Lusyne, Patrick; Annemans, Lieven; Cohen, Joachim
2016-10-18
The use of full-population databases is under-explored to study the use, quality and costs of end-of-life care. Using the case of Belgium, we explored: (1) which full-population databases provide valid information about end-of-life care, (2) what procedures are there to use these databases, and (3) what is needed to integrate separate databases. Technical and privacy-related aspects of linking and accessing Belgian administrative databases and disease registries were assessed in cooperation with the database administrators and privacy commission bodies. For all relevant databases, we followed procedures in cooperation with database administrators to link the databases and to access the data. We identified several databases as fitting for end-of-life care research in Belgium: the InterMutualistic Agency's national registry of health care claims data, the Belgian Cancer Registry including data on incidence of cancer, and databases administrated by Statistics Belgium including data from the death certificate database, the socio-economic survey and fiscal data. To obtain access to the data, approval was required from all database administrators, supervisory bodies and two separate national privacy bodies. Two Trusted Third Parties linked the databases via a deterministic matching procedure using multiple encrypted social security numbers. In this article we describe how various routinely collected population-level databases and disease registries can be accessed and linked to study patterns in the use, quality and costs of end-of-life care in the full population and in specific diagnostic groups.
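The deterministic linkage step can be illustrated as follows: a trusted third party replaces the national identifier in each source database with a keyed hash (pseudonym), and records are then joined on the pseudonym. The key, identifiers, and fields below are invented; real linkage follows the approved privacy protocol described above.

```python
# Illustrative deterministic linkage on keyed-hash pseudonyms; all values are invented.
import hashlib, hmac

SECRET_KEY = b"held-by-trusted-third-party"  # hypothetical key known only to the linking party

def pseudonym(national_id: str) -> str:
    return hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256).hexdigest()

claims = {pseudonym("750101-123-45"): {"gp_contacts_last_year": 14}}
deaths = {pseudonym("750101-123-45"): {"place_of_death": "home"}}

# Join the two sources on the shared pseudonym.
linked = {k: {**claims[k], **deaths[k]} for k in claims.keys() & deaths.keys()}
print(linked)
```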
Face antispoofing based on frame difference and multilevel representation
NASA Astrophysics Data System (ADS)
Benlamoudi, Azeddine; Aiadi, Kamal Eddine; Ouafi, Abdelkrim; Samai, Djamel; Oussalah, Mourad
2017-07-01
Due to advances in technology, today's biometric systems have become vulnerable to spoof attacks made with fake faces. These attacks occur when an intruder attempts to fool an established face-based recognition system by presenting a fake face (e.g., print photo or replay attacks) in front of the camera instead of the intruder's genuine face. For this reason, face antispoofing has become a hot topic in the face analysis literature, and several applications addressing the antispoofing task have emerged recently. We propose a solution for distinguishing between real faces and fake ones. Our approach is based on extracting features from the difference between successive frames instead of from individual frames. We also use a multilevel representation that divides the frame difference into blocks at multiple levels. Different texture descriptors (local binary patterns, local phase quantization, and binarized statistical image features) are then applied to each block. After the feature extraction step, a Fisher score is applied to sort the features in ascending order according to the associated weights. Finally, a support vector machine is used to differentiate between real and fake faces. We tested our approach on three publicly available databases: the CASIA Face Antispoofing database, the Replay-Attack database, and the MSU Mobile Face Spoofing database. The proposed approach outperforms other state-of-the-art methods on different media and quality metrics.
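A minimal sketch of the frame-difference / multiblock texture / SVM pipeline described in this abstract, using only the local binary pattern descriptor. The block grid, LBP parameters, number of retained features, and the Fisher-score selection shown here are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def block_lbp_histograms(image, blocks=(4, 4), p=8, r=1):
    """Split an image into blocks and return concatenated uniform-LBP histograms."""
    lbp = local_binary_pattern(image, P=p, R=r, method="uniform")
    n_bins = p + 2                      # number of uniform LBP pattern codes
    h, w = lbp.shape
    feats = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            patch = lbp[i * h // blocks[0]:(i + 1) * h // blocks[0],
                        j * w // blocks[1]:(j + 1) * w // blocks[1]]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def video_features(frames):
    """Features computed on the absolute difference of successive frames."""
    diffs = [np.abs(frames[k + 1].astype(int) - frames[k].astype(int)).astype(np.uint8)
             for k in range(len(frames) - 1)]
    return np.mean([block_lbp_histograms(d) for d in diffs], axis=0)

def fisher_scores(X, y):
    """Fisher score per feature for a two-class (real vs. fake) problem."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

# X_train: stacked video_features(...) rows; y_train: 1 = real face, 0 = attack
# keep = np.argsort(fisher_scores(X_train, y_train))[::-1][:200]   # top-ranked features
# clf = SVC(kernel="rbf").fit(X_train[:, keep], y_train)
```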
Guo, Ye; Chen, Qian; Wu, Wei; Cui, Wei
2015-03-31
To establish a system for monitoring key indicators of quality for inspection (KIQI) on a laboratory information system (LIS), and to achieve better management of KIQI. Clinical samples processed at PUMCH were collected throughout 2014. Interactive input programs were designed to collect data on the disqualification rate of samples, the mistake rate of samples, the occasions of losing samples, etc. A series of time points (sample collection, laboratory receipt, sample testing, sample verification, and response to critical values), namely the trajectory information left on the LIS, was recorded, and the qualification rate of turnaround time (TAT) and the notification rate of critical results were calculated. Finally, quality control information was collected to build an internal quality control database, and KIQI such as the out-of-control rate of quality control and the total error of test items were monitored. The inspection of sample management shows that the disqualification rates in 2014 were all below the target, although the rates in January and February were somewhat high and the rates of four wards were above 2%. The mistake rate of samples was 0.47 cases/10 000 cases, attaining the target (< 2 cases/10 000 cases), and no samples were lost in 2014, also attaining the target. The inspection of laboratory reports shows that the qualification rates of TAT were within the acceptable range (> 95%); however, the rate for routine blood tests in November (94.75%) was out of range, a problem we solved by optimizing the processes. The notification rate of critical results attained the target (≥ 98%), while the rate of timely notification still needs improvement. Quality control inspection shows that the CV of APTT in August (5.02%) rose significantly, exceeding the accepted CV (5.0%); we solved this problem by changing the reagent. The CVs of TT in 2014 were all below the allowable CV, so the allowable CV for the next year was lowered to 10%. Managing KIQI through the database management and information processing capabilities of the LIS is an objective and effective method.
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
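A minimal sketch of one family of "model-based statistical image features" used in this line of work: mean-subtracted contrast-normalized (MSCN) coefficients, a standard natural-scene-statistics representation, followed by a generic regressor mapping features to opinion scores. The Gaussian window size, the tiny summary feature vector, and the use of an SVR in place of the deep belief network are illustrative assumptions, not the authors' exact model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVR

def mscn_coefficients(gray, sigma=7 / 6, c=1.0):
    """MSCN coefficients: locally mean-subtracted, contrast-normalized luminance."""
    gray = gray.astype(float)
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray ** 2, sigma) - mu ** 2
    return (gray - mu) / (np.sqrt(np.clip(var, 0, None)) + c)

def nss_summary(gray):
    """A few summary statistics of the MSCN map, used as a small feature vector."""
    m = mscn_coefficients(gray)
    centered = m - m.mean()
    return np.array([m.mean(), m.var(), (centered ** 3).mean(), (centered ** 4).mean()])

# X: rows of nss_summary(image) over a training set; y: human opinion scores (MOS)
# quality_model = SVR(kernel="rbf").fit(X, y)
# predicted_quality = quality_model.predict(nss_summary(test_image)[None, :])
```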
Olier, Ivan; Springate, David A; Ashcroft, Darren M; Doran, Tim; Reeves, David; Planner, Claire; Reilly, Siobhan; Kontopantelis, Evangelos
2016-01-01
The use of Electronic Health Records databases for medical research has become mainstream. In the UK, increasing use of Primary Care Databases is largely driven by almost complete computerisation and uniform standards within the National Health Service. Electronic Health Records research often begins with the development of a list of clinical codes with which to identify cases with a specific condition. We present a methodology and accompanying Stata and R commands (pcdsearch/Rpcdsearch) to help researchers in this task. We present severe mental illness (SMI) as an example. We used the Clinical Practice Research Datalink, a UK Primary Care Database in which clinical information is largely organised using Read codes, a hierarchical clinical coding system. Pcdsearch is used to identify potentially relevant clinical codes and/or product codes from word-stubs and code-stubs suggested by clinicians. The returned code-lists are reviewed and codes relevant to the condition of interest are selected. The final code-list is then used to identify patients. We identified 270 Read codes linked to SMI and used them to identify cases in the database. We observed that our approach identified cases that would have been missed with a simpler approach using SMI registers defined within the UK Quality and Outcomes Framework. We described a framework for researchers of Electronic Health Records databases, for identifying patients with a particular condition or matching certain clinical criteria. The method is invariant to coding system or database and can be used with SNOMED CT, ICD or other medical classification code-lists.
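A minimal sketch of the code-list building step described in this abstract, written in Python rather than the authors' Stata/R commands. The dictionary file, the column names ("readcode", "description"), and the example stubs are hypothetical; the returned candidate list is meant to be reviewed by clinicians before use, as in the original workflow.

```python
import pandas as pd

def search_code_dictionary(dictionary, word_stubs=(), code_stubs=()):
    """Return dictionary rows whose description contains any word-stub or whose
    code starts with any code-stub (case-insensitive on descriptions)."""
    desc = dictionary["description"].str.lower()
    word_hit = desc.apply(lambda d: any(s.lower() in d for s in word_stubs))
    code_hit = dictionary["readcode"].apply(
        lambda c: bool(code_stubs) and c.startswith(tuple(code_stubs)))
    return dictionary[word_hit | code_hit]

# dictionary = pd.read_csv("read_code_dictionary.csv")             # hypothetical file
# candidates = search_code_dictionary(dictionary,
#                                     word_stubs=["schizophren", "bipolar"],
#                                     code_stubs=["E10", "E11"])   # hypothetical stubs
# candidates.to_csv("smi_candidate_codes.csv", index=False)        # then review by hand
```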
Canadian Operational Air Quality Forecasting Systems: Status, Recent Progress, and Challenges
NASA Astrophysics Data System (ADS)
Pavlovic, Radenko; Davignon, Didier; Ménard, Sylvain; Munoz-Alpizar, Rodrigo; Landry, Hugo; Beaulieu, Paul-André; Gilbert, Samuel; Moran, Michael; Chen, Jack
2017-04-01
ECCC's Canadian Meteorological Centre Operations (CMCO) division runs a number of operational air quality (AQ)-related systems that revolve around the Regional Air Quality Deterministic Prediction System (RAQDPS). The RAQDPS generates 48-hour AQ forecasts and outputs hourly concentration fields of O3, PM2.5, NO2, and other pollutants twice daily on a North American domain with 10-km horizontal grid spacing and 80 vertical levels. A closely related AQ forecast system with near-real-time wildfire emissions, known as FireWork, has been run by CMCO during the Canadian wildfire season (April to October) since 2014. This system became operational in June 2016. CMCO's operational AQ forecast systems also benefit from several support systems, such as a statistical post-processing model called UMOS-AQ that is applied to enhance forecast reliability at point locations with AQ monitors. The Regional Deterministic Air Quality Analysis (RDAQA) system has also been connected to the RAQDPS since February 2013, and hourly surface objective analyses are now available for O3, PM2.5, NO2, PM10, SO2 and, indirectly, the Canadian Air Quality Health Index. As of June 2015, another version of the RDAQA has been connected to FireWork (RDAQA-FW). For verification purposes, CMCO developed a third support system called Verification for Air QUality Models (VAQUM), which has a geospatial relational database core and which enables continuous monitoring of the AQ forecast systems' performance. Urban environments are particularly subject to air pollution. In order to improve the services offered, ECCC has recently been investing effort to develop a high-resolution air quality prediction capability for urban areas in Canada. In this presentation, a comprehensive description of the ECCC AQ systems will be provided, along with a discussion of AQ system performance. Recent improvements, current challenges, and future directions of the Canadian operational AQ program will also be discussed.
Telephony-based voice pathology assessment using automated speech analysis.
Moran, Rosalyn J; Reilly, Richard B; de Chazal, Philip; Lacy, Peter D
2006-03-01
A system for remotely detecting vocal fold pathologies using telephone-quality speech is presented. The system uses a linear classifier, processing measurements of pitch perturbation, amplitude perturbation and harmonic-to-noise ratio derived from digitized speech recordings. Voice recordings from the Disordered Voice Database Model 4337 system were used to develop and validate the system. Results show that while a sustained phonation, recorded in a controlled environment, can be classified as normal or pathologic with accuracy of 89.1%, telephone-quality speech can be classified as normal or pathologic with an accuracy of 74.2%, using the same scheme. Amplitude perturbation features prove most robust for telephone-quality speech. The pathologic recordings were then subcategorized into four groups, comprising normal, neuromuscular pathologic, physical pathologic and mixed (neuromuscular with physical) pathologic. A separate classifier was developed for classifying the normal group from each pathologic subcategory. Results show that neuromuscular disorders could be detected remotely with an accuracy of 87%, physical abnormalities with an accuracy of 78% and mixed pathology voice with an accuracy of 61%. This study highlights the real possibility for remote detection and diagnosis of voice pathology.
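A minimal sketch of the perturbation measurements and linear classifier described in this abstract. Real systems first estimate pitch periods and per-cycle amplitudes from the recorded waveform; here those sequences are assumed to be given, and the choice of linear discriminant analysis is an illustrative stand-in for the paper's linear classifier.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def jitter_percent(periods):
    """Mean absolute difference of consecutive pitch periods, relative to the mean period."""
    periods = np.asarray(periods, float)
    return 100 * np.mean(np.abs(np.diff(periods))) / periods.mean()

def shimmer_percent(amplitudes):
    """Mean absolute difference of consecutive cycle amplitudes, relative to the mean."""
    amplitudes = np.asarray(amplitudes, float)
    return 100 * np.mean(np.abs(np.diff(amplitudes))) / amplitudes.mean()

# X: rows of [jitter_percent, shimmer_percent, harmonic-to-noise ratio] per recording
# y: 0 = normal, 1 = pathologic
# clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```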
The Quality Control of Data in a Clinical Database System—The Patient Identification Problem
Lai, J. Chi-Sang; Covvey, H.D.; Sevcik, K.C.; Wigle, E.D.
1981-01-01
Ensuring the accuracy of patient identification and the linkage of records with the appropriate patient owner is the first level of quality control of data in a clinical database system. Without a unique patient identifier, the fact that patient identity may be recorded at different places and times means that multiple identities may be associated with a given patient and new records associated with any of these identities. Even when a unique patient identifier is utilized, errors introduced in the data handling process can result in the same problems. The outcome is that the retrieval request for a given record may fail, or an erroneously identified record may be retrieved. We have studied each of the ways this fundamental problem occurs and propose a solution based on record linkage techniques to detect errors of this type. Specifically, we propose a patient identification scheme for the situation where no unique health identifier is available and detail a method to find patient records with erroneous identifiers.
MOCAT: a metagenomics assembly and gene prediction toolkit.
Kultima, Jens Roat; Sunagawa, Shinichi; Li, Junhua; Chen, Weineng; Chen, Hua; Mende, Daniel R; Arumugam, Manimozhiyan; Pan, Qi; Liu, Binghang; Qin, Junjie; Wang, Jun; Bork, Peer
2012-01-01
MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/.
The problems and promise of DNA barcodes for species diagnosis of primate biomaterials
Lorenz, Joseph G; Jackson, Whitney E; Beck, Jeanne C; Hanner, Robert
2005-01-01
The Integrated Primate Biomaterials and Information Resource (www.IPBIR.org) provides essential research reagents to the scientific community by establishing, verifying, maintaining, and distributing DNA and RNA derived from primate cell cultures. The IPBIR uses mitochondrial cytochrome c oxidase subunit I sequences to verify the identity of samples for quality control purposes in the accession, cell culture, DNA extraction processes and prior to shipping to end users. As a result, IPBIR is accumulating a database of ‘DNA barcodes’ for many species of primates. However, this quality control process is complicated by taxon specific patterns of ‘universal primer’ failure, as well as the amplification or co-amplification of nuclear pseudogenes of mitochondrial origins. To overcome these difficulties, taxon specific primers have been developed, and reverse transcriptase PCR is utilized to exclude these extraneous sequences from amplification. DNA barcoding of primates has applications to conservation and law enforcement. Depositing barcode sequences in a public database, along with primer sequences, trace files and associated quality scores, makes this species identification technique widely accessible. Reference DNA barcode sequences should be derived from, and linked to, specimens of known provenance in web-accessible collections in order to validate this system of molecular diagnostics. PMID:16214744
Benford's Law for Quality Assurance of Manner of Death Counts in Small and Large Databases.
Daniels, Jeremy; Caetano, Samantha-Jo; Huyer, Dirk; Stephen, Andrew; Fernandes, John; Lytwyn, Alice; Hoppe, Fred M
2017-09-01
To assess whether Benford's law, a mathematical law used for quality assurance in accounting, can be applied as a quality assurance measure for the manner of death determination. We examined a regional forensic pathology service's monthly manner of death counts (N = 2352) from 2011 to 2013, and provincial monthly and weekly death counts from 2009 to 2013 (N = 81,831). We tested whether each dataset's leading digit followed Benford's law via the chi-square test. For each database, we assessed whether the number 1 was the most common leading digit. The first digit of the manner of death counts followed Benford's law in all three datasets. Two of the three datasets had 1 as the most frequent leading digit. The manner of death data in this study showed qualities consistent with Benford's law. The law has potential as a quality assurance metric in the manner of death determination for both small and large databases. © 2017 American Academy of Forensic Sciences.
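A minimal sketch of the check described in this abstract: tabulate the leading digits of the counts, compare them against Benford's expected proportions with a chi-square goodness-of-fit test, and report the most frequent leading digit. The example counts are hypothetical.

```python
import numpy as np
from scipy.stats import chisquare

def leading_digit(n):
    return int(str(abs(int(n)))[0])

def benford_test(counts):
    """Observed leading-digit counts, chi-square statistic, and p-value vs. Benford."""
    digits = np.array([leading_digit(c) for c in counts if int(c) > 0])
    observed = np.array([(digits == d).sum() for d in range(1, 10)])
    expected = np.log10(1 + 1 / np.arange(1, 10)) * len(digits)   # Benford proportions
    stat, p = chisquare(observed, f_exp=expected)
    return observed, stat, p

# monthly_counts = [112, 98, 131, 87, 104, 95, 120, 99, 108, 92, 115, 101]  # hypothetical
# observed, stat, p = benford_test(monthly_counts)
# print("most frequent leading digit:", observed.argmax() + 1, "p =", round(p, 3))
```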
In-process fault detection for textile fabric production: onloom imaging
NASA Astrophysics Data System (ADS)
Neumann, Florian; Holtermann, Timm; Schneider, Dorian; Kulczycki, Ashley; Gries, Thomas; Aach, Til
2011-05-01
Constant and traceable high fabric quality is of high importance both for technical and for high-quality conventional fabrics. Usually, quality inspection is carried out by trained personnel, whose detection rate and maximum period of concentration are limited. Low-resolution automated fabric inspection machines using texture analysis were developed. Since 2003, systems for in-process inspection on weaving machines ("onloom") have been commercially available. With these systems, defects can be detected but not measured precisely and quantitatively. Most systems are also prone to inevitable machine vibrations. Feedback loops for fault prevention are not established. Technology has evolved since 2003: camera and computer prices dropped, resolutions were enhanced, recording speeds increased. These are the preconditions for real-time processing of high-resolution images. So far, these new technological achievements are not used in textile fabric production. For efficient use, a measurement system must be integrated into the weaving process, and new algorithms for defect detection and measurement must be developed. The goal of the joint project is the development of a modern machine vision system for nondestructive onloom fabric inspection. The system consists of a vibration-resistant machine integration, a high-resolution machine vision system, and new, reliable, and robust algorithms with a quality database for defect documentation. The system is meant to detect, measure, and classify at least 80% of economically relevant defects. Concepts for feedback loops into the weaving process will be pointed out.
Exploring Antarctic Land Surface Temperature Extremes Using Condensed Anomaly Databases
NASA Astrophysics Data System (ADS)
Grant, Glenn Edwin
Satellite observations have revolutionized the Earth Sciences and climate studies. However, data and imagery continue to accumulate at an accelerating rate, and efficient tools for data discovery, analysis, and quality checking lag behind. In particular, studies of long-term, continental-scale processes at high spatiotemporal resolutions are especially problematic. The traditional technique of downloading an entire dataset and using customized analysis code is often impractical or consumes too many resources. The Condensate Database Project was envisioned as an alternative method for data exploration and quality checking. The project's premise was that much of the data in any satellite dataset is unneeded and can be eliminated, compacting massive datasets into more manageable sizes. Dataset sizes are further reduced by retaining only anomalous data of high interest. Hosting the resulting "condensed" datasets in high-speed databases enables immediate availability for queries and exploration. Proof of the project's success relied on demonstrating that the anomaly database methods can enhance and accelerate scientific investigations. The hypothesis of this dissertation is that the condensed datasets are effective tools for exploring many scientific questions, spurring further investigations and revealing important information that might otherwise remain undetected. This dissertation uses condensed databases containing 17 years of Antarctic land surface temperature anomalies as its primary data. The study demonstrates the utility of the condensate database methods by discovering new information. In particular, the process revealed critical quality problems in the source satellite data. The results are used as the starting point for four case studies, investigating Antarctic temperature extremes, cloud detection errors, and the teleconnections between Antarctic temperature anomalies and climate indices. The results confirm the hypothesis that the condensate databases are a highly useful tool for Earth Science analyses. Moreover, the quality checking capabilities provide an important method for independent evaluation of dataset veracity.
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search, including setting a threshold, estimating the FDR, and the types of elemental composition databases most reliable for searching, are discussed. Methodology/Principal Findings The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a compound database with fewer but more complete entries. Conclusions/Significance High-accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
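A decoy-style Monte Carlo sketch of the general idea above: estimate how often a random query mass hits the compound database within the instrument tolerance, and use that chance-hit rate together with the observed hit rate to approximate the FDR. This is a generic illustration under stated assumptions, not the authors' exact estimator or their four simulation parameters.

```python
import numpy as np

def estimate_fdr(query_masses, db_masses, ppm_tol=5.0, n_decoys=100_000, seed=0):
    """Rough FDR estimate: fraction of observed hits explainable by chance alone."""
    db = np.sort(np.asarray(db_masses, float))
    tol = lambda m: m * ppm_tol * 1e-6                     # mass tolerance in Da

    def hit(m):
        i = np.searchsorted(db, m)
        neighbors = db[max(i - 1, 0):i + 1]                # nearest database entries
        return np.any(np.abs(neighbors - m) <= tol(m))

    observed_hit_rate = np.mean([hit(m) for m in query_masses])
    rng = np.random.default_rng(seed)
    decoys = rng.uniform(db.min(), db.max(), n_decoys)     # random "false" query masses
    chance_hit_rate = np.mean([hit(m) for m in decoys])
    return min(1.0, chance_hit_rate / max(observed_hit_rate, 1e-12))
```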
ARIANE: integration of information databases within a hospital intranet.
Joubert, M; Aymard, S; Fieschi, D; Volot, F; Staccini, P; Robert, J J; Fieschi, M
1998-05-01
Large information systems handle massive volumes of data stored in heterogeneous sources. Each server has its own model of representation of concepts with regard to its aims. One of the main problems end-users encounter when accessing different servers is matching their own viewpoint on biomedical concepts with the various representations made in the database servers. The aim of the ARIANE project is to provide end-users with easy-to-use and natural means to access and query heterogeneous information databases. The objectives of this research work are to build a conceptual interface using Internet technology inside an enterprise Intranet and to propose a method for realizing it. This method is based on the knowledge sources provided by the Unified Medical Language System (UMLS) project of the US National Library of Medicine. Experiments concern queries to three different information servers: PubMed, a Medline server of the NLM; Thériaque, a French database on drugs implemented in the Hospital Intranet; and a Web site dedicated to Internet resources in gastroenterology and nutrition, located at the Faculty of Medicine of Nice (France). Access to each of these servers differs according to the kind of information delivered and the technology used to query it. In the context of the health care professional workstation, the authors introduced quality criteria into the ARIANE project to provide a homogeneous and efficient way of building a query system that can be integrated into existing information systems and that can integrate existing and new information sources.
Designing for Peta-Scale in the LSST Database
NASA Astrophysics Data System (ADS)
Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.
2007-10-01
The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
Karp, Peter D; Paley, Suzanne; Romero, Pedro
2002-01-01
Bioinformatics requires reusable software tools for creating model-organism databases (MODs). The Pathway Tools is a reusable, production-quality software environment for creating a type of MOD called a Pathway/Genome Database (PGDB). A PGDB such as EcoCyc (see http://ecocyc.org) integrates our evolving understanding of the genes, proteins, metabolic network, and genetic network of an organism. This paper provides an overview of the four main components of the Pathway Tools: The PathoLogic component supports creation of new PGDBs from the annotated genome of an organism. The Pathway/Genome Navigator provides query, visualization, and Web-publishing services for PGDBs. The Pathway/Genome Editors support interactive updating of PGDBs. The Pathway Tools ontology defines the schema of PGDBs. The Pathway Tools makes use of the Ocelot object database system for data management services for PGDBs. The Pathway Tools has been used to build PGDBs for 13 organisms within SRI and by external users.
Takamoto, Shinichi; Motomura, Noboru; Miyata, Hiroaki; Tsukihara, Hiroyuki
2018-01-01
The Japan Cardiovascular Surgery Database (JCVSD) was created in 2000 with the support of the Society of Thoracic Surgeons (STS). The STS database content was translated to Japanese using the same disease criteria and in 2001, data entry for adult cardiac surgeries was initiated online using the University Hospital Medical Information Network (UMIN). In 2008, data entry for congenital heart surgeries was initiated in the congenital section of JCVSD and preoperative expected mortality (JapanSCORE) in adult cardiovascular surgeries was first calculated using the risk model of JCVSD. The Japan Surgical Board system merged with JCVSD in 2011, and all cardiovascular surgical data were registered in the JCVSD from 2012 onward. The reports resulting from the data analyses of the JCVSD will encourage further improvements in the quality of cardiovascular surgeries, patient safety, and medical care in Japan.
The Role of Free/Libre and Open Source Software in Learning Health Systems.
Paton, C; Karopka, T
2017-08-01
Objective: To give an overview of the role of Free/Libre and Open Source Software (FLOSS) in the context of secondary use of patient data to enable Learning Health Systems (LHSs). Methods: We conducted an environmental scan of the academic and grey literature utilising the MedFLOSS database of open source systems in healthcare to inform a discussion of the role of open source in developing LHSs that reuse patient data for research and quality improvement. Results: A wide range of FLOSS is identified that contributes to the information technology (IT) infrastructure of LHSs including operating systems, databases, frameworks, interoperability software, and mobile and web apps. The recent literature around the development and use of key clinical data management tools is also reviewed. Conclusions: FLOSS already plays a critical role in modern health IT infrastructure for the collection, storage, and analysis of patient data. The nature of FLOSS systems to be collaborative, modular, and modifiable may make open source approaches appropriate for building the digital infrastructure for a LHS. Georg Thieme Verlag KG Stuttgart.
NASA Astrophysics Data System (ADS)
Ek, M. B.; Xia, Y.; Ford, T.; Wu, Y.; Quiring, S. M.
2015-12-01
The North American Soil Moisture Database (NASMD) was initiated in 2011 to provide support for developing climate forecasting tools, calibrating land surface models and validating satellite-derived soil moisture algorithms. The NASMD has collected data from over 30 soil moisture observation networks providing millions of in situ soil moisture observations in all 50 states as well as Canada and Mexico. It is recognized that the quality of measured soil moisture in NASMD is highly variable due to the diversity of climatological conditions, land cover, soil texture, and topographies of the stations and differences in measurement devices (e.g., sensors) and installation. It is also recognized that error, inaccuracy and imprecision in the data set can have significant impacts on practical operations and scientific studies. Therefore, developing an appropriate quality control procedure is essential to ensure the data is of the best quality. In this study, an automated quality control approach is developed using the North American Land Data Assimilation System phase 2 (NLDAS-2) Noah soil porosity, soil temperature, and fraction of liquid and total soil moisture to flag erroneous and/or spurious measurements. Overall results show that this approach is able to flag unreasonable values when the soil is partially frozen. A validation example using NLDAS-2 multiple model soil moisture products at the 20 cm soil layer showed that the quality control procedure had a significant positive impact in Alabama, North Carolina, and West Texas. It had a greater impact in colder regions, particularly during spring and autumn. Over 433 NASMD stations have been quality controlled using the methodology proposed in this study, and the algorithm will be implemented to control data quality from the other ~1,200 NASMD stations in the near future.
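A minimal sketch of the automated flagging step described in this abstract: a soil moisture observation is flagged when it is negative, exceeds the model soil porosity, or when the model soil temperature indicates frozen or partially frozen soil. The simple below-freezing criterion shown here is an illustrative simplification of the NLDAS-2 Noah liquid/total soil moisture fraction check.

```python
import numpy as np

def flag_soil_moisture(theta, porosity, soil_temp_c, frozen_temp_c=0.0):
    """Return a boolean array: True where the observation should be flagged."""
    theta = np.asarray(theta, float)
    out_of_range = (theta < 0.0) | (theta > porosity)       # physically implausible values
    frozen = np.asarray(soil_temp_c, float) <= frozen_temp_c
    return out_of_range | frozen

# flags = flag_soil_moisture(theta=[0.21, 0.55, -0.02], porosity=0.45,
#                            soil_temp_c=[4.0, 3.1, -1.5])
# -> array([False,  True,  True])
```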
Land, Michael; Kulongoski, Justin T.; Belitz, Kenneth
2012-01-01
Groundwater quality in the approximately 460-square-mile San Fernando--San Gabriel (FG) study unit was investigated as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The study area is in Los Angeles County and includes Tertiary-Quaternary sedimentary basins situated within the Transverse Ranges of southern California. The GAMA Priority Basin Project is being conducted by the California State Water Resources Control Board in collaboration with the U.S. Geological Survey (USGS) and the Lawrence Livermore National Laboratory. The GAMA FG study was designed to provide a spatially unbiased assessment of the quality of untreated (raw) groundwater in the primary aquifer systems (hereinafter referred to as primary aquifers) throughout California. The assessment is based on water-quality and ancillary data collected in 2005 by the USGS from 35 wells and on water-quality data from the California Department of Public Health (CDPH) database. The primary aquifers were defined by the depth interval of the wells listed in the CDPH database for the FG study unit. The quality of groundwater in primary aquifers may be different from that in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. This study assesses the status of the current quality of the groundwater resource by using data from samples analyzed for volatile organic compounds (VOCs), pesticides, and naturally occurring inorganic constituents, such as major ions and trace elements. This status assessment is intended to characterize the quality of groundwater resources in the primary aquifers of the FG study unit, not the treated drinking water delivered to consumers by water purveyors.
Rhode Island Water Supply System Management Plan Database (WSSMP-Version 1.0)
Granato, Gregory E.
2004-01-01
In Rhode Island, the availability of water of sufficient quality and quantity to meet current and future environmental and economic needs is vital to life and the State's economy. Water suppliers, the Rhode Island Water Resources Board (RIWRB), and other State agencies responsible for water resources in Rhode Island need information about available resources, the water-supply infrastructure, and water use patterns. These decision makers need historical, current, and future water-resource information. In 1997, the State of Rhode Island formalized a system of Water Supply System Management Plans (WSSMPs) to characterize and document relevant water-supply information. All major water suppliers (those that obtain, transport, purchase, or sell more than 50 million gallons of water per year) are required to prepare, maintain, and carry out WSSMPs. An electronic database for this WSSMP information has been deemed necessary by the RIWRB for water suppliers and State agencies to consistently document, maintain, and interpret the information in these plans. Availability of WSSMP data in standard formats will allow water suppliers and State agencies to improve the understanding of water-supply systems and to plan for future needs or water-supply emergencies. In 2002, however, the Rhode Island General Assembly passed a law that classifies some of the WSSMP information as confidential to protect the water-supply infrastructure from potential terrorist threats. Therefore the WSSMP database was designed for an implementation method that will balance security concerns with the information needs of the RIWRB, suppliers, other State agencies, and the public. A WSSMP database was developed by the U.S. Geological Survey in cooperation with the RIWRB. The database was designed to catalog WSSMP information in a format that would accommodate synthesis of current and future information about Rhode Island's water-supply infrastructure. This report documents the design and implementation of the WSSMP database. All WSSMP information in the database is, ultimately, linked to the individual water suppliers and to a WSSMP 'cycle' (which is currently a 5-year planning cycle for compiling WSSMP information). The database file contains 172 tables - 47 data tables, 61 association tables, 61 domain tables, and 3 example import-link tables. This database is currently implemented in the Microsoft Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. Design documentation facilitates current use and potential modification for future use of the database. Information within the structure of the WSSMP database file (WSSMPv01.mdb), a data dictionary file (WSSMPDD1.pdf), a detailed database-design diagram (WSSMPPL1.pdf), and this database-design report (OFR2004-1231.pdf) documents the design of the database. This report includes a discussion of each WSSMP data structure with an accompanying database-design diagram. Appendix 1 of this report is an index of the diagrams in the report and on the plate; this index is organized by table name in alphabetical order. Each of these products is included in digital format on the enclosed CD-ROM to facilitate use or modification of the database.
NASA Astrophysics Data System (ADS)
Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.
2015-12-01
The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected quality assured (Level 2.0) AOD are available after instrument field deployment (Smirnov et al., 2000). The growing need for NRT quality controlled aerosol data has become increasingly important. Applications of AERONET NRT data include the satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergism (e.g., MPLNET), verification of aerosol forecast models and reanalysis (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with similar quality controls as those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud screened (Level 1.5) NRT AOD database can be significantly impacted by data anomalies. The most significant data anomalies include AOD diurnal dependence due to contamination or obstruction of the sensor head windows, anomalous AOD spectral dependence due to problems with filter degradation, instrument gains, or non-linear changes in calibration, and abnormal changes in temperature sensitive wavelengths (e.g., 1020nm) in response to anomalous sensor head temperatures. Other less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and various solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.
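A minimal sketch of one class of automated screening mentioned above: flag retrievals whose spectral dependence is anomalous by computing the Angstrom exponent between two wavelengths and rejecting values outside a plausible physical range. The wavelength pair and bounds are illustrative assumptions, not the actual Version 3 quality-control criteria.

```python
import numpy as np

def angstrom_exponent(aod_short, aod_long, wl_short=440.0, wl_long=870.0):
    """Angstrom exponent from AOD at two wavelengths (nm)."""
    return -np.log(np.asarray(aod_short, float) / np.asarray(aod_long, float)) \
           / np.log(wl_short / wl_long)

def flag_spectral_anomaly(aod440, aod870, ae_bounds=(-1.0, 4.0)):
    """True where the spectral dependence falls outside the assumed physical range."""
    ae = angstrom_exponent(aod440, aod870)
    return (ae < ae_bounds[0]) | (ae > ae_bounds[1])

# flags = flag_spectral_anomaly(aod440=[0.30, 0.05], aod870=[0.15, 0.40])
# the second retrieval (strongly negative Angstrom exponent) would be flagged
```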
The National Eutrophication Survey: lake characteristics and historical nutrient concentrations
NASA Astrophysics Data System (ADS)
Stachelek, Joseph; Ford, Chanse; Kincaid, Dustin; King, Katelyn; Miller, Heather; Nagelkirk, Ryan
2018-01-01
Historical ecological surveys serve as a baseline and provide context for contemporary research, yet many of these records are not preserved in a way that ensures their long-term usability. The National Eutrophication Survey (NES) database is currently only available as scans of the original reports (PDF files) with no embedded character information. This limits its searchability, machine readability, and the ability of current and future scientists to systematically evaluate its contents. The NES data were collected by the US Environmental Protection Agency between 1972 and 1975 as part of an effort to investigate eutrophication in freshwater lakes and reservoirs. Although several studies have manually transcribed small portions of the database in support of specific studies, there have been no systematic attempts to transcribe and preserve the database in its entirety. Here we use a combination of automated optical character recognition and manual quality assurance procedures to make these data available for analysis. The performance of the optical character recognition protocol was found to be linked to variation in the quality (clarity) of the original documents. For each of the four archival scanned reports, our quality assurance protocol found an error rate between 5.9% and 17%. The goal of our approach was to strike a balance between efficiency and data quality by combining entry of data by hand with digital transcription technologies. The finished database contains information on the physical characteristics, hydrology, and water quality of about 800 lakes in the contiguous US (Stachelek et al. (2017), https://doi.org/10.5063/F1639MVD). Ultimately, this database could be combined with more recent studies to generate meta-analyses of water quality trends and spatial variation across the continental US.
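A minimal sketch of the two steps described in this abstract: OCR a scanned report page with Tesseract, then estimate a character error rate for quality assurance by comparing a sample of OCR output against hand-keyed text. The file names are hypothetical, and the Levenshtein-based error rate is a generic QA measure, not necessarily the authors' exact protocol.

```python
from PIL import Image
import pytesseract

def ocr_page(path):
    """Run Tesseract OCR on one scanned page image."""
    return pytesseract.image_to_string(Image.open(path))

def char_error_rate(ocr_text, truth_text):
    """Levenshtein distance between OCR output and hand-keyed text, per true character."""
    m, n = len(ocr_text), len(truth_text)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = min(d[j] + 1, d[j - 1] + 1,
                      prev + (ocr_text[i - 1] != truth_text[j - 1]))
            prev, d[j] = d[j], cur
    return d[n] / max(n, 1)

# page_text = ocr_page("nes_report_page_042.png")               # hypothetical scan
# rate = char_error_rate(page_text[:500], hand_keyed_sample)    # QA on a hand-keyed sample
```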
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Subscribing to Databases: How Important Is Depth and Quality of Indexing?
ERIC Educational Resources Information Center
Delong, Linwood
2007-01-01
This paper compares the subject indexing on articles pertaining to Immanuel Kant, agriculture, and aging that are found simultaneously in Humanities Index, Academic Search Elite (EBSCO) and Periodicals Research II (Micromedia ProQuest), in order to show that there are substantial variations in the depth and quality of indexing in these databases.…