NASA Astrophysics Data System (ADS)
Bliefernicht, Jan; Waongo, Moussa; Annor, Thompson; Laux, Patrick; Lorenz, Manuel; Salack, Seyni; Kunstmann, Harald
2017-04-01
West Africa is a data-sparse region. High-quality, long-term precipitation data are often not readily available for applications in hydrology, agriculture, meteorology and other fields. To close this gap, we use multiple data sources to develop a precipitation database with long-term daily and monthly time series. This database was compiled from 16 archives, including global databases such as the Global Historical Climatology Network (GHCN), databases from research projects (e.g. the AMMA database) and databases of the national meteorological services of some West African countries. The collection consists of more than 2000 precipitation gauges with measurements dating from 1850 to 2015. Due to erroneous measurements (e.g. temporal offsets, unit conversion errors), missing values and inconsistent metadata, merging this precipitation dataset is not straightforward and requires thorough quality control and harmonization. To this end, we developed geostatistical algorithms for quality control of individual databases and for harmonization into a joint database. The algorithms are based on a pairwise comparison of the correspondence of precipitation time series as a function of the distance between stations. They were tested on precipitation time series from gauges located in a rectangular domain covering Burkina Faso, Ghana, Benin and Togo. This harmonized and quality-controlled precipitation database was recently used for several applications, such as the validation of a high-resolution regional climate model and the bias correction of precipitation projections provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX). In this presentation, we will give an overview of the novel daily and monthly precipitation database and the algorithms used for quality control and harmonization. We will also highlight the quality of global and regional archives (e.g. GHCN, GSOD, the AMMA database) in comparison to the precipitation databases provided by the national meteorological services.
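As a rough illustration of the pairwise distance-correlation idea, the sketch below flags stations that correlate poorly with every nearby gauge, a typical symptom of temporal offsets or unit-conversion errors. Function names, thresholds, and the distance approximation are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np
import pandas as pd

def flag_suspect_stations(series: pd.DataFrame, coords: dict,
                          max_km: float = 50.0, min_corr: float = 0.4,
                          min_overlap: int = 365) -> list:
    """Flag stations whose daily series correlate poorly with all close neighbours.
    series: one column per station; coords: station -> (lat, lon)."""
    def dist_km(a, b):
        # crude equirectangular distance; adequate for screening at this scale
        lat1, lon1 = coords[a]
        lat2, lon2 = coords[b]
        x = np.radians(lon2 - lon1) * np.cos(np.radians((lat1 + lat2) / 2))
        y = np.radians(lat2 - lat1)
        return 6371.0 * np.hypot(x, y)

    suspects = []
    for s in series.columns:
        neighbour_corrs = []
        for t in series.columns:
            if t == s or dist_km(s, t) > max_km:
                continue
            pair = series[[s, t]].dropna()
            if len(pair) >= min_overlap:
                neighbour_corrs.append(pair[s].corr(pair[t]))
        # a gauge disagreeing with every nearby gauge is suspect
        if neighbour_corrs and max(neighbour_corrs) < min_corr:
            suspects.append(s)
    return suspects
```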
Information on urban morphological features at high resolution is needed to properly model and characterize the meteorological and air quality fields in urban areas. We describe a new project called the National Urban Database with Access Portal Tool (NUDAPT) that addresses this nee...
Quality and Safety in Health Care, Part XXVI: The Adult Cardiac Surgery Database.
Harolds, Jay A
2017-09-01
The Adult Cardiac Surgery Database of the Society of Thoracic Surgeons has provided highly useful information in quality and safety in general thoracic surgery, including ratings of the surgeons and institutions participating in this type of surgery. The Adult Cardiac Surgery Database information is very helpful for writing guidelines and determining optimal protocols and for many research projects. This article discusses the history and current status of this database.
The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...
Exploring Antarctic Land Surface Temperature Extremes Using Condensed Anomaly Databases
NASA Astrophysics Data System (ADS)
Grant, Glenn Edwin
Satellite observations have revolutionized the Earth Sciences and climate studies. However, data and imagery continue to accumulate at an accelerating rate, and efficient tools for data discovery, analysis, and quality checking lag behind. In particular, studies of long-term, continental-scale processes at high spatiotemporal resolutions are especially problematic. The traditional technique of downloading an entire dataset and using customized analysis code is often impractical or consumes too many resources. The Condensate Database Project was envisioned as an alternative method for data exploration and quality checking. The project's premise was that much of the data in any satellite dataset is unneeded and can be eliminated, compacting massive datasets into more manageable sizes. Dataset sizes are further reduced by retaining only anomalous data of high interest. Hosting the resulting "condensed" datasets in high-speed databases enables immediate availability for queries and exploration. Proof of the project's success relied on demonstrating that the anomaly database methods can enhance and accelerate scientific investigations. The hypothesis of this dissertation is that the condensed datasets are effective tools for exploring many scientific questions, spurring further investigations and revealing important information that might otherwise remain undetected. This dissertation uses condensed databases containing 17 years of Antarctic land surface temperature anomalies as its primary data. The study demonstrates the utility of the condensate database methods by discovering new information. In particular, the process revealed critical quality problems in the source satellite data. The results are used as the starting point for four case studies, investigating Antarctic temperature extremes, cloud detection errors, and the teleconnections between Antarctic temperature anomalies and climate indices. The results confirm the hypothesis that the condensate databases are a highly useful tool for Earth Science analyses. Moreover, the quality checking capabilities provide an important method for independent evaluation of dataset veracity.
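A minimal sketch of the condensing idea described above, assuming a per-cell climatology and a z-score threshold; the threshold value and the table layout are illustrative assumptions, not the project's schema:

```python
import numpy as np
import sqlite3

def condense(lst: np.ndarray, k: float = 3.0):
    """lst: land surface temperatures, shape (time, y, x).
    Returns rows (t, y, x, value, z) for anomalous cells only."""
    clim = np.nanmean(lst, axis=0)             # per-cell climatological mean
    sd = np.nanstd(lst, axis=0)
    anom = (lst - clim) / np.where(sd > 0, sd, np.nan)
    t, y, x = np.where(np.abs(anom) >= k)      # retain only strong anomalies
    return [(int(a), int(b), int(c), float(lst[a, b, c]), float(anom[a, b, c]))
            for a, b, c in zip(t, y, x)]

# host the condensed data in a database for immediate querying
con = sqlite3.connect("condensed_lst.db")
con.execute("CREATE TABLE IF NOT EXISTS anomalies"
            "(t INT, y INT, x INT, value REAL, z REAL)")
con.executemany("INSERT INTO anomalies VALUES (?,?,?,?,?)",
                condense(np.random.default_rng(0).normal(250, 10, (100, 8, 8))))
con.commit()
```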
Matsuda, Fumio; Shinbo, Yoko; Oikawa, Akira; Hirai, Masami Yokota; Fiehn, Oliver; Kanaya, Shigehiko; Saito, Kazuki
2009-01-01
Background In metabolomics research using mass spectrometry (MS), systematic searching of high-resolution mass data against compound databases is often the first step of metabolite annotation, used to determine elemental compositions possessing similar theoretical mass numbers. However, incorrect hits derived from errors in mass analyses will be included in the results of elemental composition searches. To assess the quality of peak annotation information, a novel methodology for false discovery rate (FDR) evaluation is presented in this study. Based on the FDR analyses, several aspects of an elemental composition search are discussed, including setting a threshold, estimating the FDR, and the types of elemental composition databases most reliable for searching. Methodology/Principal Findings The FDR can be determined from one measured value (i.e., the hit rate for search queries) and four parameters determined by Monte Carlo simulation. The results indicate that relatively high FDR values (30–50%) were obtained when searching time-of-flight (TOF)/MS data using the KNApSAcK and KEGG databases. In addition, searches against large all-in-one databases (e.g., PubChem) always produced unacceptable results (FDR >70%). The estimated FDRs suggest that the quality of search results can be improved not only by performing more accurate mass analysis but also by modifying the properties of the compound database. A theoretical analysis indicates that the FDR could be improved by using a compound database with fewer but more complete entries. Conclusions/Significance High-accuracy mass analysis, such as Fourier transform (FT)-MS, is needed for reliable annotation (FDR <10%). In addition, a small, customized compound database is preferable for high-quality annotation of metabolome data. PMID:19847304
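The paper derives the FDR from the observed hit rate plus four simulated parameters that are not given in the abstract, so the sketch below illustrates the general idea with a simpler decoy-query Monte Carlo estimate. All names and tolerances are assumptions, not the paper's estimator:

```python
import random

def hit(mass: float, db_masses: list, tol_ppm: float = 5.0) -> bool:
    """True if any database mass matches the query within the ppm tolerance."""
    return any(abs(mass - m) / m * 1e6 <= tol_ppm for m in db_masses)

def decoy_fdr(query_masses, db_masses, n_decoys=10000, tol_ppm=5.0, seed=0):
    """Estimate FDR by comparing the observed hit rate with the hit rate of
    random (decoy) masses drawn from the same mass range."""
    rng = random.Random(seed)
    lo, hi = min(query_masses), max(query_masses)
    observed = sum(hit(m, db_masses, tol_ppm) for m in query_masses)
    decoy_rate = sum(hit(rng.uniform(lo, hi), db_masses, tol_ppm)
                     for _ in range(n_decoys)) / n_decoys
    expected_false = decoy_rate * len(query_masses)
    return min(1.0, expected_false / observed) if observed else 0.0
```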
In formulating hypothesis related to extrapolations across species and/or chemicals, the ECOTOX database provides researchers a means of locating high quality ecological effects data for a wide-range of terrestrial and aquatic receptors. Currently the database includes more than ...
High Throughput Experimental Materials Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zakutayev, Andriy; Perkins, John; Schwarting, Marcus
The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).
The MAREDAT Global Database of High Performance Liquid Chromatography Marine Pigment Measurements
NASA Technical Reports Server (NTRS)
Peloquin, J.; Swan, C.; Gruber, N.; Vogt, M.; Claustre, H.; Ras, J.; Uitz, J.; Barlow, R.; Behrenfeld, M.; Bidigare, R.;
2013-01-01
A global pigment database consisting of 35,634 pigment suites measured by high performance liquid chromatography was assembled in support of the MARine Ecosystem DATa (MAREDAT) initiative. These data originate from 136 field surveys within the global ocean, were solicited from investigators and databases, compiled, and then quality controlled. Nearly one quarter of the data originates from the Laboratoire d'Océanographie de Villefranche (LOV), with an additional 17% and 19% stemming from the US JGOFS and LTER programs, respectively. The MAREDAT pigment database provides high quality measurements of the major taxonomic pigments including chlorophylls a and b, 19'-butanoyloxyfucoxanthin, 19'-hexanoyloxyfucoxanthin, alloxanthin, divinyl chlorophyll a, fucoxanthin, lutein, peridinin, prasinoxanthin, violaxanthin and zeaxanthin, which may be used in varying combinations to estimate phytoplankton community composition. Quality control measures consisted of flagging samples that had a total chlorophyll a concentration of zero, had fewer than four reported accessory pigments, or exceeded two standard deviations of the log-linear regression of total chlorophyll a with total accessory pigment concentrations. We anticipate the MAREDAT pigment database to be of use in the marine ecology, remote sensing and ecological modeling communities, where it will support model validation and advance our global perspective on marine biodiversity. The original dataset together with quality control flags as well as the gridded MAREDAT pigment data may be downloaded from PANGAEA: http://doi.pangaea.de/10.1594/PANGAEA.793246.
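The three quality-control rules above translate directly into code. A minimal sketch, assuming a pandas DataFrame with a 'total_chla' column and one column per accessory pigment (the column names are illustrative):

```python
import numpy as np
import pandas as pd

def flag_pigment_samples(df: pd.DataFrame, accessory_cols: list) -> pd.Series:
    """Return a boolean flag per sample following the three stated QC rules."""
    total_acc = df[accessory_cols].sum(axis=1)
    n_reported = df[accessory_cols].notna().sum(axis=1)

    flag_zero = df["total_chla"] <= 0          # rule 1: zero total chlorophyll a
    flag_few = n_reported < 4                  # rule 2: fewer than 4 accessory pigments

    # rule 3: residual from the log-linear regression of total chl a on total
    # accessory pigments, flagged beyond two standard deviations
    ok = (df["total_chla"] > 0) & (total_acc > 0)
    x = np.log10(total_acc[ok])
    y = np.log10(df.loc[ok, "total_chla"])
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    flag_reg = pd.Series(False, index=df.index)
    flag_reg[ok] = np.abs(resid) > 2 * resid.std()

    return flag_zero | flag_few | flag_reg
```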
Irwin, Jodi A; Saunier, Jessica L; Strouss, Katharine M; Sturk, Kimberly A; Diegoli, Toni M; Just, Rebecca S; Coble, Michael D; Parson, Walther; Parsons, Thomas J
2007-06-01
In an effort to increase the quantity, breadth and availability of mtDNA databases suitable for forensic comparisons, we have developed a high-throughput process to generate approximately 5000 control region sequences per year from regional US populations, global populations from which the current US population is derived and global populations currently under-represented in available forensic databases. The system utilizes robotic instrumentation for all laboratory steps from pre-extraction through sequence detection, and a rigorous eight-step, multi-laboratory data review process with entirely electronic data transfer. Over the past 3 years, nearly 10,000 control region sequences have been generated using this approach. These data are being made publicly available and should further address the need for consistent, high-quality mtDNA databases for forensic testing.
A Quality-Control-Oriented Database for a Mesoscale Meteorological Observation Network
NASA Astrophysics Data System (ADS)
Lussana, C.; Ranci, M.; Uboldi, F.
2012-04-01
In the operational context of a local weather service, data accessibility and quality related issues must be managed by taking into account a wide set of user needs. This work describes the structure and the operational choices made for the operational implementation of a database system storing data from highly automated observing stations, metadata and information on data quality. Lombardy's environmental protection agency, ARPA Lombardia, manages a highly automated mesoscale meteorological network. A Quality Assurance System (QAS) ensures that reliable observational information is collected and disseminated to the users. The weather unit in ARPA Lombardia, at the same time an important QAS component and an intensive data user, has developed a database specifically aimed at: (1) providing quick access to data for operational activities and (2) ensuring data quality for real-time applications, by means of an Automatic Data Quality Control (ADQC) procedure. Quantities stored in the archive include hourly aggregated observations of: precipitation amount, temperature, wind, relative humidity, pressure, global and net solar radiation. The ADQC performs several independent tests on raw data and compares their results in a decision-making procedure. An important ADQC component is the Spatial Consistency Test based on Optimal Interpolation. Interpolated and Cross-Validation analysis values are also stored in the database, providing further information to human operators and useful estimates in case of missing data. The technical solution adopted is based on a LAMP (Linux, Apache, MySQL and PHP) system, constituting an open source environment suitable for both development and operational practice. The ADQC procedure itself is performed by R scripts directly interacting with the MySQL database. Users and network managers can access the database by using a set of web-based PHP applications.
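A simplified sketch of the Spatial Consistency Test idea: estimate each station from its neighbours and flag large cross-validation residuals. True Optimal Interpolation uses modelled background and observation error covariances; inverse-distance weighting stands in for it here, so this is an assumption-laden illustration rather than ARPA Lombardia's procedure:

```python
import numpy as np

def spatial_consistency_flags(values, xy, k=5, z_max=4.0):
    """values: one hourly observation per station; xy: (n, 2) coordinates in km.
    Returns a boolean array flagging spatially inconsistent observations."""
    n = len(values)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self in cross-validation
    residuals = np.empty(n)
    for i in range(n):
        nbr = np.argsort(d[i])[:k]              # k nearest neighbours
        w = 1.0 / d[i, nbr] ** 2                # inverse-distance weights
        residuals[i] = values[i] - np.average(values[nbr], weights=w)
    z = (residuals - residuals.mean()) / residuals.std()
    return np.abs(z) > z_max                    # inconsistent with the analysis
```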
The Danish Nonmelanoma Skin Cancer Dermatology Database.
Lamberg, Anna Lei; Sølvsten, Henrik; Lei, Ulrikke; Vinding, Gabrielle Randskov; Stender, Ida Marie; Jemec, Gregor Borut Ernst; Vestergaard, Tine; Thormann, Henrik; Hædersdal, Merete; Dam, Tomas Norman; Olesen, Anne Braae
2016-01-01
The Danish Nonmelanoma Skin Cancer Dermatology Database was established in 2008. The aim of this database was to collect data on nonmelanoma skin cancer (NMSC) treatment and improve its treatment in Denmark. NMSC is the most common malignancy in Western countries and represents a significant challenge in terms of public health management and health care costs. However, high-quality epidemiological and treatment data on NMSC are sparse. The NMSC database includes patients with the following skin tumors: basal cell carcinoma (BCC), squamous cell carcinoma, Bowen's disease, and keratoacanthoma diagnosed by the participating office-based dermatologists in Denmark. Clinical and histological diagnoses, BCC subtype, localization, size, skin cancer history, skin phototype, and evidence of metastases and treatment modality are the main variables in the NMSC database. Information on recurrence, cosmetic results, and complications is registered at two follow-up visits at 3 months (between 0 and 6 months) and 12 months (between 6 and 15 months) after treatment. In 2014, 11,522 patients with 17,575 tumors were registered in the database. Of tumors with a histological diagnosis, 13,571 were BCCs, 840 squamous cell carcinomas, 504 Bowen's disease, and 173 keratoacanthomas. The NMSC database encompasses detailed information on the type of tumor, a variety of prognostic factors, treatment modalities, and outcomes after treatment. The database has revealed that, overall, the quality of care of NMSC in Danish dermatological clinics is high, and the database provides the necessary data for continuous quality assurance.
USDA-ARS?s Scientific Manuscript database
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in US Department of Agriculture’s (USDA) food composition databases through the collection and analysis of nationally representative food samples. This manuscript d...
dBBQs: dataBase of Bacterial Quality scores.
Wanchai, Visanu; Patumcharoenpol, Preecha; Nookaew, Intawat; Ussery, David
2017-12-28
It is well known that genome sequencing technologies are becoming significantly cheaper and faster. As a result, the exponential growth of sequencing data in public databases allows us to explore ever-growing collections of genome sequences. However, it is less well known that the majority of sequenced genomes in public databases are not complete, but rather drafts of varying quality. We have calculated quality scores for around 100,000 bacterial genomes from all major genome repositories and put them in a fast and easy-to-use database. Prokaryotic genomic data from all sources were collected and combined to make a non-redundant set of bacterial genomes. The genome quality score for each was calculated from four different measurements: assembly quality, number of rRNA genes, number of tRNA genes, and the occurrence of conserved functional domains. The dataBase of Bacterial Quality scores (dBBQs) was designed to store and retrieve these quality scores. It offers fast searching and download features, and the results can be used for further analysis. In addition, search results are shown in an interactive JavaScript chart framework using DC.js. Analysis of quality scores across major public genome databases finds that around 68% of the genomes are of acceptable quality for many uses. dBBQs (available at http://arc-gem.uams.edu/dbbqs ) provides genome quality scores for all available prokaryotic genome sequences with a user-friendly web interface. These scores can be used as cut-offs to obtain a high-quality set of genomes for testing bioinformatics tools or improving analyses. Moreover, dBBQs stores the data from all four measurements combined into each genome's quality score, which can potentially be used for further analysis. dBBQs will be updated regularly and is free to use for non-commercial purposes.
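The abstract does not give the scoring formula, so the following is a hypothetical combination of the four measurements into a single score in [0, 1]; the weights and normalizations are assumptions, not dBBQs' actual formula:

```python
def genome_quality_score(n50: int, n_rrna: int, n_trna: int,
                         frac_core_domains: float) -> float:
    """Scale each of the four measurements to [0, 1] and average them."""
    assembly = min(1.0, n50 / 100_000)   # contiguity proxy for assembly quality
    rrna = min(1.0, n_rrna / 3)          # expect at least 5S, 16S and 23S
    trna = min(1.0, n_trna / 20)         # expect roughly one tRNA per amino acid
    domains = max(0.0, min(1.0, frac_core_domains))
    return (assembly + rrna + trna + domains) / 4

# e.g. a draft genome with decent contiguity but a single detected rRNA
print(genome_quality_score(n50=80_000, n_rrna=1, n_trna=35, frac_core_domains=0.92))
```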
Application of furniture images selection based on neural network
NASA Astrophysics Data System (ADS)
Wang, Yong; Gao, Wenwen; Wang, Ying
2018-05-01
In the construction of a furniture image database of 2 million images, to address the problem of low database quality, a combination of a CNN and a metric learning algorithm is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This solves the problems that previous image screening methods are complex, their accuracy is not high, and they are time-consuming. With the improved data quality, the deep learning algorithm achieves excellent image matching ability in actual furniture retrieval applications.
Wright, Alexis A; Wassinger, Craig A; Frank, Mason; Michener, Lori A; Hegedus, Eric J
2013-09-01
To systematically review and critique the evidence regarding the diagnostic accuracy of physical examination tests for the scapula in patients with shoulder disorders. A systematic, computerised literature search of PubMed, EMBASE, CINAHL and the Cochrane Library databases (from database inception through January 2012) using keywords related to diagnostic accuracy of physical examination tests of the scapula. The Quality Assessment of Diagnostic Accuracy Studies tool was used to critique the quality of each paper. Eight articles met the inclusion criteria; three were considered to be of high quality. Of the three high-quality studies, two were in reference to a 'diagnosis' of shoulder pain. Only one high-quality article referenced specific shoulder pathology of acromioclavicular dislocation with reported sensitivity of 71% and 41% for the scapular dyskinesis and SICK scapula test, respectively. Overall, no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.
Evaluating Land-Atmosphere Interactions with the North American Soil Moisture Database
NASA Astrophysics Data System (ADS)
Giles, S. M.; Quiring, S. M.; Ford, T.; Chavez, N.; Galvan, J.
2015-12-01
The North American Soil Moisture Database (NASMD) is a high-quality observational soil moisture database that was developed to study land-atmosphere interactions. It includes over 1,800 monitoring stations in the United States, Canada and Mexico. Soil moisture data are collected from multiple sources, quality controlled and integrated into an online database (soilmoisture.tamu.edu). The period of record varies substantially and only a few of these stations have an observation record extending back into the 1990s. Daily soil moisture observations have been quality controlled using the North American Soil Moisture Database QAQC algorithm. The database is designed to facilitate observationally-driven investigations of land-atmosphere interactions, validation of the accuracy of soil moisture simulations in global land surface models, satellite calibration/validation for SMOS and SMAP, and an improved understanding of how soil moisture influences climate on seasonal to interannual timescales. This paper provides some examples of how the NASMD has been utilized to enhance understanding of land-atmosphere interactions in the U.S. Great Plains.
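A generic sketch of daily soil-moisture QC checks of the kind such an algorithm applies (plausible range, spikes, flatlines). The thresholds and check set are illustrative assumptions, not the published NASMD QAQC rules:

```python
import numpy as np

def qc_soil_moisture(vwc: np.ndarray, lo=0.0, hi=0.6,
                     spike=0.15, flatline_days=30) -> np.ndarray:
    """vwc: daily volumetric water content (m3/m3). Returns a flag array."""
    flags = (vwc < lo) | (vwc > hi)                  # physically plausible range
    flags[1:] |= np.abs(np.diff(vwc)) > spike        # implausible day-to-day jumps
    for i in range(len(vwc) - flatline_days):        # stuck-sensor (flatline) check
        window = vwc[i:i + flatline_days]
        if np.all(window == window[0]):
            flags[i:i + flatline_days] = True
    return flags
```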
Does High School Facility Quality Affect Student Achievement? A Two-Level Hierarchical Linear Model
ERIC Educational Resources Information Center
Bowers, Alex J.; Urick, Angela
2011-01-01
The purpose of this study is to isolate the independent effects of high school facility quality on student achievement using a large, nationally representative U.S. database of student achievement and school facility quality. Prior research on linking school facility quality to student achievement has been mixed. Studies that relate overall…
EPA's DSSTox Chemical Database: A Resource for the Non-Targeted Testing Community (EPA NTA workshop)
EPA’s DSSTox database project, which includes coverage of the ToxCast and Tox21 high-throughput testing inventories, provides high-quality chemical-structure files for inventories of toxicological and environmental relevance. A feature of the DSSTox project, which differentiates ...
NASA Astrophysics Data System (ADS)
Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe
1999-07-01
In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serve to enhance pilot spatial awareness by 3-dimensional perspective views of the objects in the environment. Therefore, all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annex 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED76 were established in the concept. They can be differentiated into object-related quality-assessment methods following the keywords accuracy, resolution, timeliness, traceability, assurance level, completeness and format, and GIS-related quality-assessment methods with the keywords system tolerances, logical consistency and visual quality assessment. An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both Flight-Guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation as well. Most airport data are not available. Even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annex 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large amounts of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods. Remotely sensed images can be acquired from satellite or aircraft platforms. To achieve the highest horizontal accuracy requirement stated in ICAO Annex 14 for runway centerlines (0.50 meters), at present only images acquired from aircraft-based sensors can be used as source data. Still, ground referencing by GCPs (Ground Control Points) is obligatory. A DEM (Digital Elevation Model) can be created automatically in the photogrammetric process and used as a highly accurate elevation model for the airport area. The final verification of airport data is accomplished by independently surveyed runway and taxiway control points. The concept of generating airport data by means of remote sensing and photogrammetry was tested with the Stuttgart (Germany) airport. The results proved that the final accuracy was within the accuracy specification defined by ICAO Annex 14.
High-Alpha Handling Qualities Flight Research on the NASA F/A-18 High Alpha Research Vehicle
NASA Technical Reports Server (NTRS)
Wichman, Keith D.; Pahle, Joseph W.; Bahm, Catherine; Davidson, John B.; Bacon, Barton J.; Murphy, Patrick C.; Ostroff, Aaron J.; Hoffler, Keith D.
1996-01-01
A flight research study of high-angle-of-attack handling qualities has been conducted at the NASA Dryden Flight Research Center using the F/A-18 High Alpha Research Vehicle (HARV). The objectives were to create a high-angle-of-attack handling qualities flight database, develop appropriate research evaluation maneuvers, and evaluate high-angle-of-attack handling qualities guidelines and criteria. Using linear and nonlinear simulations and flight research data, the predictions from each criterion were compared with the pilot ratings and comments. Proposed high-angle-of-attack nonlinear design guidelines and proposed handling qualities criteria and guidelines developed using piloted simulation were considered. Recently formulated time-domain Neal-Smith guidelines were also considered for application to high-angle-of-attack maneuvering. Conventional envelope criteria were evaluated for possible extension to the high-angle-of-attack regime. Additionally, the maneuvers were studied as potential evaluation techniques, including a limited validation of the proposed standard evaluation maneuver set. This paper gives an overview of these research objectives through examples and summarizes result highlights. The maneuver development is described briefly, the criteria evaluation is emphasized with example results given, and a brief discussion of the database form and content is presented.
Human Connectome Project Informatics: quality control, database services, and data visualization
Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.
2013-01-01
The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591
High-throughput STR analysis for DNA database using direct PCR.
Sim, Jeong Eun; Park, Su Jeong; Lee, Han Chul; Kim, Se-Yong; Kim, Jong Yeol; Lee, Seung Hwan
2013-07-01
Since the Korean criminal DNA database was launched in 2010, we have focused on establishing an automated DNA database profiling system that analyzes short tandem repeat loci in a high-throughput and cost-effective manner. We established a DNA database profiling system without DNA purification using a direct PCR buffer system. The quality of the direct PCR procedure was compared with that of the conventional PCR system under their respective optimized conditions. The results revealed not only perfect concordance but also an excellent PCR success rate, good electropherogram quality, and an optimal intra-/inter-locus peak height ratio. In particular, the proportion of DNA extractions required due to direct PCR failure could be minimized to <3%. In conclusion, the newly developed direct PCR system can be adopted for automated DNA database profiling systems to replace or supplement conventional PCR systems in a time- and cost-saving manner. © 2013 American Academy of Forensic Sciences. Published 2013. This article is a U.S. Government work and is in the public domain in the U.S.A.
High-quality unsaturated zone hydraulic property data for hydrologic applications
Perkins, Kimberlie; Nimmo, John R.
2009-01-01
In hydrologic studies, especially those using dynamic unsaturated zone moisture modeling, calculations based on property transfer models informed by hydraulic property databases are often used in lieu of measured data from the site of interest. Reliance on database-informed predicted values has become increasingly common with the use of neural networks. High-quality data are needed for databases used in this way and for theoretical and property transfer model development and testing. Hydraulic properties predicted on the basis of existing databases may be adequate in some applications but not others. An obvious problem occurs when the available database has few or no data for samples that are closely related to the medium of interest. The data set presented in this paper includes saturated and unsaturated hydraulic conductivity, water retention, particle-size distributions, and bulk properties. All samples are minimally disturbed, all measurements were performed using the same state-of-the-art techniques, and the environments represented are diverse.
Pedersen, Sidsel Arnspang; Schmidt, Sigrun Alba Johannesdottir; Klausen, Siri; Pottegård, Anton; Friis, Søren; Hölmich, Lisbet Rosenkrantz; Gaist, David
2018-05-01
The nationwide Danish Cancer Registry and the Danish Melanoma Database both record data on melanoma for purposes of monitoring, quality assurance, and research. However, the data quality of the Cancer Registry and the Melanoma Database has not been formally evaluated. We estimated the positive predictive value (PPV) of melanoma diagnosis for random samples of 200 patients from the Cancer Registry (n = 200) and the Melanoma Database (n = 200) during 2004-2014, using the Danish Pathology Registry as "gold standard" reference. We further validated tumor characteristics in the Cancer Registry and the Melanoma Database. Additionally, we estimated the PPV of in situ melanoma diagnoses in the Melanoma Database, and the sensitivity of melanoma diagnoses in 2004-2014. The PPVs of melanoma in the Cancer Registry and the Melanoma Database were 97% (95% CI = 94, 99) and 100%. The sensitivity was 90% in the Cancer Registry and 77% in the Melanoma Database. The PPV of in situ melanomas in the Melanoma Database was 97% and the sensitivity was 56%. In the Melanoma Database, we observed PPVs of ulceration of 75% and Breslow thickness of 96%. The PPV of histologic subtypes varied between 87% and 100% in the Cancer Registry and 93% and 100% in the Melanoma Database. The PPVs for anatomical localization were 83%-95% in the Cancer Registry and 93%-100% in the Melanoma Database. The data quality in both the Cancer Registry and the Melanoma Database is high, supporting their use in epidemiologic studies.
Corbellini, Carlo; Andreoni, Bruno; Ansaloni, Luca; Sgroi, Giovanni; Martinotti, Mario; Scandroglio, Ildo; Carzaniga, Pierluigi; Longoni, Mauro; Foschi, Diego; Dionigi, Paolo; Morandi, Eugenio; Agnello, Mauro
2018-01-01
Measurement and monitoring of the quality of care using a core set of quality measures are increasing in health service research. Although administrative databases include limited clinical data, they offer an attractive source for quality measurement. The purpose of this study, therefore, was to evaluate the completeness of different administrative data sources compared to a clinical survey in evaluating rectal cancer cases. Between May 2012 and November 2014, a clinical survey was done on 498 Lombardy patients who had rectal cancer and underwent surgical resection. These collected data were compared with the information extracted from administrative sources including Hospital Discharge Dataset, drug database, daycare activity data, fee-exemption database, and regional screening program database. The agreement evaluation was performed using a set of 12 quality indicators. Patient complexity was a difficult indicator to measure for lack of clinical data. Preoperative staging was another suboptimal indicator due to the frequent missing administrative registration of tests performed. The agreement between the 2 data sources regarding chemoradiotherapy treatments was high. Screening detection, minimally invasive techniques, length of stay, and unpreventable readmissions were detected as reliable quality indicators. Postoperative morbidity could be a useful indicator but its agreement was lower, as expected. Healthcare administrative databases are large and real-time collected repositories of data useful in measuring quality in a healthcare system. Our investigation reveals that the reliability of indicators varies between them. Ideally, a combination of data from both sources could be used in order to improve usefulness of less reliable indicators.
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper studies and establishes a quality model of FGIDB, formed by the standardization of database construction and quality control, the conformity of data set quality, and the functionality of the database management system, and also designs the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation of FGIDB. This paper designs the quality elements, evaluation items and properties of the Fundamental Geographic Information Database step by step based on the quality model framework. Connected organically, these quality elements and evaluation items constitute the quality model of the Fundamental Geographic Information Database. This model is the foundation for stipulating quality requirements and for quality evaluation of the Fundamental Geographic Information Database, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for quality evaluation technology of the Fundamental Geographic Information Database.
DSSTox and Chemical Information Technologies in Support of Predictive Toxicology
The EPA NCCT Distributed Structure-Searchable Toxicity (DSSTox) Database project initially focused on the curation and publication of high-quality, standardized, chemical structure-annotated toxicity databases for use in structure-activity relationship (SAR) modeling. In recent y...
Chaitanya, Lakshmi; van Oven, Mannis; Brauer, Silke; Zimmermann, Bettina; Huber, Gabriela; Xavier, Catarina; Parson, Walther; de Knijff, Peter; Kayser, Manfred
2016-03-01
The use of mitochondrial DNA (mtDNA) for maternal lineage identification often marks the last resort when investigating forensic and missing-person cases involving highly degraded biological materials. As with all comparative DNA testing, a match between evidence and reference sample requires a statistical interpretation, for which high-quality mtDNA population frequency data are crucial. Here, we determined, under high quality standards, the complete mtDNA control-region sequences of 680 individuals from across the Netherlands sampled at 54 sites, covering the entire country with 10 geographic sub-regions. The complete mtDNA control region (nucleotide positions 16,024-16,569 and 1-576) was amplified with two PCR primers and sequenced with ten different sequencing primers using the EMPOP protocol. Haplotype diversity of the entire sample set was very high at 99.63% and, accordingly, the random-match probability was 0.37%. No population substructure within the Netherlands was detected with our dataset. Phylogenetic analyses were performed to determine mtDNA haplogroups. Inclusion of these high-quality data in the EMPOP database (accession number: EMP00666) will improve its overall data content and geographic coverage in the interest of all EMPOP users worldwide. Moreover, this dataset will serve as (the start of) a national reference database for mtDNA applications in forensic and missing person casework in the Netherlands. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
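The two reported statistics follow from haplotype frequencies via standard estimators: the random match probability as the sum of squared haplotype frequencies, and Nei's haplotype diversity. A minimal sketch (the authors' exact convention may differ slightly):

```python
from collections import Counter

def mtdna_stats(haplotypes):
    """haplotypes: list of control-region sequence strings, one per individual."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    rmp = sum((c / n) ** 2 for c in counts.values())  # random match probability
    diversity = (n / (n - 1)) * (1 - rmp)             # Nei's haplotype diversity
    return rmp, diversity

# With 680 sequences, a dataset in which nearly every haplotype is unique gives
# rmp close to 1/680; shared haplotypes push it toward the reported 0.37%.
```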
Bodner, Martin; Bastisch, Ingo; Butler, John M; Fimmers, Rolf; Gill, Peter; Gusmão, Leonor; Morling, Niels; Phillips, Christopher; Prinz, Mechthild; Schneider, Peter M; Parson, Walther
2016-09-01
The statistical evaluation of autosomal Short Tandem Repeat (STR) genotypes is based on allele frequencies. These are empirically determined from sets of randomly selected human samples, compiled into STR databases that have been established in the course of population genetic studies. There is currently no agreed procedure of performing quality control of STR allele frequency databases, and the reliability and accuracy of the data are largely based on the responsibility of the individual contributing research groups. It has been demonstrated with databases of haploid markers (EMPOP for mitochondrial mtDNA, and YHRD for Y-chromosomal loci) that centralized quality control and data curation is essential to minimize error. The concepts employed for quality control involve software-aided likelihood-of-genotype, phylogenetic, and population genetic checks that allow the researchers to compare novel data to established datasets and, thus, maintain the high quality required in forensic genetics. Here, we present STRidER (http://strider.online), a publicly available, centrally curated online allele frequency database and quality control platform for autosomal STRs. STRidER expands on the previously established ENFSI DNA WG STRbASE and applies standard concepts established for haploid and autosomal markers as well as novel tools to reduce error and increase the quality of autosomal STR data. The platform constitutes a significant improvement and innovation for the scientific community, offering autosomal STR data quality control and reliable STR genotype estimates. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Quantifying Data Quality for Clinical Trials Using Electronic Data Capture
Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.
2008-01-01
Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958
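For clarity, the error-rate unit used above is simply discrepant fields over fields inspected, scaled to 10,000; a trivial sketch with assumed counts:

```python
def errors_per_10k(discrepant_fields: int, fields_inspected: int) -> float:
    """Error rate expressed as errors per 10,000 fields."""
    return 10_000 * discrepant_fields / fields_inspected

print(errors_per_10k(143, 100_000))  # -> 14.3, the rate reported above
```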
Organizing a breast cancer database: data management.
Yi, Min; Hunt, Kelly K
2016-06-01
Developing and organizing a breast cancer database can provide data and serve as valuable research tools for those interested in the etiology, diagnosis, and treatment of cancer. Depending on the research setting, the quality of the data can be a major issue. Assuring that the data collection process does not contribute inaccuracies can help to assure the overall quality of subsequent analyses. Data management is work that involves the planning, development, implementation, and administration of systems for the acquisition, storage, and retrieval of data while protecting it by implementing high security levels. A properly designed database provides you with access to up-to-date, accurate information. Database design is an important component of application design. If you take the time to design your databases properly, you'll be rewarded with a solid application foundation on which you can build the rest of your application.
Brandstätter, Anita; Peterson, Christine T; Irwin, Jodi A; Mpoke, Solomon; Koech, Davy K; Parson, Walther; Parsons, Thomas J
2004-10-01
Large forensic mtDNA databases which adhere to strict guidelines for generation and maintenance, are not available for many populations outside of the United States and western Europe. We have established a high quality mtDNA control region sequence database for urban Nairobi as both a reference database for forensic investigations, and as a tool to examine the genetic variation of Kenyan sequences in the context of known African variation. The Nairobi sequences exhibited high variation and a low random match probability, indicating utility for forensic testing. Haplogroup identification and frequencies were compared with those reported from other published studies on African, or African-origin populations from Mozambique, Sierra Leone, and the United States, and suggest significant differences in the mtDNA compositions of the various populations. The quality of the sequence data in our study was investigated and supported using phylogenetic measures. Our data demonstrate the diversity and distinctiveness of African populations, and underline the importance of establishing additional forensic mtDNA databases of indigenous African populations.
Human Variome Project Quality Assessment Criteria for Variation Databases.
Vihinen, Mauno; Hancock, John M; Maglott, Donna R; Landrum, Melissa J; Schaafsma, Gerard C P; Taschner, Peter
2016-06-01
Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. © 2016 WILEY PERIODICALS, INC.
Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2017-07-11
The objective of this work was to provide easy access to reliable health information based on good quality research that will help health care professionals to learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population: that clinicians provide high-quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.
NASA Astrophysics Data System (ADS)
Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.
2013-09-01
Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying Lateral Boundary Conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2000-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone vertical profiles. The results show performance is largely within uncertainty estimates for the Tropospheric Emission Spectrometer (TES), with some exceptions. The major difference shows a high bias in the upper troposphere along the southern boundary in January. This publication documents the global simulation database, the tool for conversion to LBC, and the fidelity of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.
Marchewka, Artur; Zurawski, Łukasz; Jednoróg, Katarzyna; Grabowska, Anna
2014-06-01
Selecting appropriate stimuli to induce emotional states is essential in affective research. Only a few standardized affective stimulus databases have been created for auditory, language, and visual materials. Numerous studies have extensively employed these databases using both behavioral and neuroimaging methods. However, some limitations of the existing databases have recently been reported, including limited numbers of stimuli in specific categories or poor picture quality of the visual stimuli. In the present article, we introduce the Nencki Affective Picture System (NAPS), which consists of 1,356 realistic, high-quality photographs that are divided into five categories (people, faces, animals, objects, and landscapes). Affective ratings were collected from 204 mostly European participants. The pictures were rated according to the valence, arousal, and approach-avoidance dimensions using computerized bipolar semantic slider scales. Normative ratings for the categories are presented for each dimension. Validation of the ratings was obtained by comparing them to ratings generated using the Self-Assessment Manikin and the International Affective Picture System. In addition, the physical properties of the photographs are reported, including luminance, contrast, and entropy. The new database, with accompanying ratings and image parameters, allows researchers to select a variety of visual stimulus materials specific to their experimental questions of interest. The NAPS system is freely accessible to the scientific community for noncommercial use by request at http://naps.nencki.gov.pl .
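The abstract reports luminance, contrast, and entropy for each photograph. A minimal sketch under common definitions (mean intensity, intensity standard deviation, and Shannon entropy of the grayscale histogram); the NAPS authors' exact definitions may differ:

```python
import numpy as np

def image_parameters(gray: np.ndarray):
    """gray: 8-bit grayscale image array. Returns (luminance, contrast, entropy)."""
    luminance = gray.mean()                     # mean intensity
    contrast = gray.std()                       # RMS contrast
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    p = hist[hist > 0]
    entropy = -np.sum(p * np.log2(p))           # Shannon entropy in bits
    return luminance, contrast, entropy

img = np.random.default_rng(1).integers(0, 256, (480, 640)).astype(np.uint8)
print(image_parameters(img))
```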
Winsor, Geoffrey L; Griffiths, Emma J; Lo, Raymond; Dhillon, Bhavjinder K; Shay, Julie A; Brinkman, Fiona S L
2016-01-04
The Pseudomonas Genome Database (http://www.pseudomonas.com) is well known for the application of community-based annotation approaches for producing a high-quality Pseudomonas aeruginosa PAO1 genome annotation, and facilitating whole-genome comparative analyses with other Pseudomonas strains. To aid analysis of potentially thousands of complete and draft genome assemblies, this database and analysis platform was upgraded to integrate curated genome annotations and isolate metadata with enhanced tools for larger scale comparative analysis and visualization. Manually curated gene annotations are supplemented with improved computational analyses that help identify putative drug targets and vaccine candidates or assist with evolutionary studies by identifying orthologs, pathogen-associated genes and genomic islands. The database schema has been updated to integrate isolate metadata that will facilitate more powerful analysis of genomes across datasets in the future. We continue to place an emphasis on providing high-quality updates to gene annotations through regular review of the scientific literature and using community-based approaches including a major new Pseudomonas community initiative for the assignment of high-quality gene ontology terms to genes. As we further expand from thousands of genomes, we plan to provide enhancements that will aid data visualization and analysis arising from whole-genome comparative studies including more pan-genome and population-based approaches. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Solving the Problem: Genome Annotation Standards before the Data Deluge.
Klimke, William; O'Donovan, Claire; White, Owen; Brister, J Rodney; Clark, Karen; Fedorov, Boris; Mizrachi, Ilene; Pruitt, Kim D; Tatusova, Tatiana
2011-10-15
The promise of genome sequencing was that the vast undiscovered country would be mapped out by comparison of the multitude of sequences available and would aid researchers in deciphering the role of each gene in every organism. Researchers recognize that there is a need for high quality data. However, different annotation procedures, numerous databases, and a diminishing percentage of experimentally determined gene functions have resulted in a spectrum of annotation quality. NCBI in collaboration with sequencing centers, archival databases, and researchers, has developed the first international annotation standards, a fundamental step in ensuring that high quality complete prokaryotic genomes are available as gold standard references. Highlights include the development of annotation assessment tools, community acceptance of protein naming standards, comparison of annotation resources to provide consistent annotation, and improved tracking of the evidence used to generate a particular annotation. The development of a set of minimal standards, including the requirement for annotated complete prokaryotic genomes to contain a full set of ribosomal RNAs, transfer RNAs, and proteins encoding core conserved functions, is an historic milestone. The use of these standards in existing genomes and future submissions will increase the quality of databases, enabling researchers to make accurate biological discoveries.
Roche, Nicolas; Reddel, Helen; Martin, Richard; Brusselle, Guy; Papi, Alberto; Thomas, Mike; Postma, Dirjke; Thomas, Vicky; Rand, Cynthia; Chisholm, Alison; Price, David
2014-02-01
Real-world research can use observational or clinical trial designs, in both cases putting emphasis on high external validity, to complement the classical efficacy randomized controlled trials (RCTs) with high internal validity. Real-world research is made necessary by the variety of factors that can play an important role in modulating effectiveness in real life but are often tightly controlled in RCTs, such as comorbidities and concomitant treatments, adherence, inhalation technique, access to care, strength of doctor-caregiver communication, and socio-economic and other organizational factors. Real-world studies belong to two main categories: pragmatic trials and observational studies, which can be prospective or retrospective. Focusing on comparative database observational studies, the process aimed at ensuring high-quality research can be divided into three parts: preparation of research, analyses and reporting, and discussion of results. Key points include a priori planning of data collection and analyses, identification of appropriate database(s), proper outcomes definition, study registration with commitment to publish, bias minimization through matching and adjustment processes accounting for potential confounders, and sensitivity analyses testing the robustness of results. When these conditions are met, observational database studies can reach a sufficient level of evidence to help create guidelines (i.e., clinical and regulatory decision-making).
This presentation will highlight known challenges with the production of high quality chemical databases and outline recent efforts made to address these challenges. Specific examples will be provided illustrating these challenges within the U.S. Environmental Protection Agency ...
Implementation of Three Text to Speech Systems for Kurdish Language
NASA Astrophysics Data System (ADS)
Bahrampour, Anvar; Barkhoda, Wafa; Azami, Bahram Zahir
Nowadays, the concatenative method is used in most modern TTS systems to produce artificial speech. The most important challenge in this method is choosing an appropriate unit for creating the database. This unit must guarantee smooth and high-quality speech, and creating a database for it must be reasonable and inexpensive. For example, the syllable, phoneme, allophone, and diphone are appropriate units for all-purpose systems. In this paper, we implemented three synthesis systems for the Kurdish language based on the syllable, allophone, and diphone, and compared their quality using subjective testing.
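The unit-lookup-and-concatenate idea described above lends itself to a compact illustration. The following Python sketch builds a toy diphone database and joins units with a short cross-fade; the diphone labels, sample rate, and sine-wave "recordings" are placeholders of our own, not data from the Kurdish systems described.

```python
import numpy as np

# Toy concatenative synthesis: a unit database maps diphones to waveforms,
# and an utterance is produced by lookup plus cross-faded concatenation.
SR = 16000  # sample rate in Hz (assumed)

unit_db = {label: np.sin(2 * np.pi * f0 * np.arange(int(0.08 * SR)) / SR)
           for label, f0 in [("s-a", 220.0), ("a-l", 180.0), ("l-aw", 200.0)]}

def synthesize(diphones, xfade=80):
    # Concatenate units, overlapping `xfade` samples to keep joins smooth.
    out = unit_db[diphones[0]].copy()
    ramp = np.linspace(0.0, 1.0, xfade)
    for label in diphones[1:]:
        unit = unit_db[label]
        out[-xfade:] = out[-xfade:] * (1.0 - ramp) + unit[:xfade] * ramp
        out = np.concatenate([out, unit[xfade:]])
    return out

speech = synthesize(["s-a", "a-l", "l-aw"])
print(len(speech), "samples")
```

The choice of unit trades database size against join quality: shorter units (phonemes) need small databases but many joins, while longer units (syllables) join more smoothly at the cost of a larger inventory.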
Mathis, Alexander; Depaquit, Jérôme; Dvořák, Vit; Tuten, Holly; Bañuls, Anne-Laure; Halada, Petr; Zapata, Sonia; Lehrter, Véronique; Hlavačková, Kristýna; Prudhomme, Jorian; Volf, Petr; Sereno, Denis; Kaufmann, Christian; Pflüger, Valentin; Schaffner, Francis
2015-05-10
Rapid, accurate and high-throughput identification of vector arthropods is of paramount importance in surveillance programmes that are becoming more common due to the changing geographic occurrence and extent of many arthropod-borne diseases. Protein profiling by MALDI-TOF mass spectrometry fulfils these requirements for identification, and reference databases have recently been established for several vector taxa, mostly with specimens from laboratory colonies. We established and validated a reference database containing 20 phlebotomine sand fly (Diptera: Psychodidae, Phlebotominae) species by using specimens from colonies or field collections that had been stored for various periods of time. Identical biomarker mass patterns ('superspectra') were obtained with colony- or field-derived specimens of the same species. In the validation study, high quality spectra (i.e. more than 30 evaluable masses) were obtained with all fresh insects from colonies, and with 55/59 insects deep-frozen (liquid nitrogen/-80 °C) for up to 25 years. In contrast, only 36/52 specimens stored in ethanol could be identified. This resulted in an overall sensitivity of 87 % (140/161); specificity was 100 %. Duration of storage impaired data counts in the high mass range, and thus cluster analyses of closely related specimens might reflect their storage conditions rather than phenotypic distinctness. A major drawback of MALDI-TOF MS is the restricted availability of in-house databases and the fact that mass spectrometers from two companies (Bruker, Shimadzu) are widely used. We have analysed fingerprints of phlebotomine sand flies obtained by an automatic routine procedure on a Bruker instrument by using our database and the software established on a Shimadzu system. The sensitivity with 312 specimens from 8 sand fly species from laboratory colonies when evaluating only high quality spectra was 98.3 %; the specificity was 100 %. The corresponding diagnostic values with 55 field-collected specimens from 4 species were 94.7 % and 97.4 %, respectively. A centralized high-quality database (created by expert taxonomists and experienced users of mass spectrometers) that is easily amenable to customer-oriented identification services is a highly desirable resource. As shown in the present work, spectra obtained from different specimens with different instruments can be analysed using a centralized database, which should be available in the near future via an online platform in a cost-efficient manner.
CARD 2017: expansion and model-centric curation of the Comprehensive Antibiotic Resistance Database
USDA-ARS's Scientific Manuscript database
The Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca) is a manually curated resource containing high quality reference data on the molecular basis of antimicrobial resistance (AMR), with an emphasis on the genes, proteins, and mutations involved in AMR. CARD is ontologi...
Wawrzyniak, Zbigniew M; Paczesny, Daniel; Mańczuk, Marta; Zatoński, Witold A
2011-01-01
Large-scale epidemiologic studies can assess health indicators that differentiate social groups, as well as important health outcomes such as the incidence and mortality of cancer and cardiovascular disease, establishing a solid knowledge base for preventing causes of premature morbidity and mortality. This study presents advanced methods of data collection and data management, with ongoing data quality control and security, to ensure high-quality assessment of health indicators in the large epidemiologic PONS study (The Polish-Norwegian Study). The material for the experiment is the data management design of this large-scale population study in Poland, and the managed processes are applied to establishing a high-quality, solid knowledge base. The functional requirements of PONS data collection, supported by advanced web-based IT methods, are fulfilled by the IT system, which delivers medical data of high quality together with data security, quality assessment, process control and evolution monitoring. Data from disparate, distributed sources are integrated into databases via software interfaces and archived by a multi-task secure server. The implemented solution of modern database technologies and a remote software/hardware structure successfully supports the research of the large PONS study project. Follow-up control of the consistency and quality of data analysis across the PONS sub-databases shows excellent measurement properties, with data consistency above 99%. Through its tailored hardware/software application, the project demonstrates the positive impact of Quality Assurance (QA) on the quality of outcome analyses and on effective data management within a shorter time. This efficiency safeguards the quality of the epidemiological data and health indicators by eliminating common errors in research questionnaires and medical measurements.
Depth image enhancement using perceptual texture priors
NASA Astrophysics Data System (ADS)
Bang, Duhyeon; Shim, Hyunjung
2015-03-01
A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras suffer from severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perception-based depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect this work to be effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.
Colliers, Annelies; Bartholomeeusen, Stefaan; Remmen, Roy; Coenen, Samuel; Michiels, Barbara; Bastiaens, Hilde; Van Royen, Paul; Verhoeven, Veronique; Holmgren, Philip; De Ruyck, Bernard; Philips, Hilde
2016-05-04
Primary out-of-hours care is developing throughout Europe. High-quality databases with linked data from primary health services can help to improve research and future health services. In 2014, a central clinical research database infrastructure was established (iCAREdata: Improving Care And Research Electronic Data Trust Antwerp, www.icaredata.eu) for primary and interdisciplinary health care at the University of Antwerp, linking data from General Practice Cooperatives, Emergency Departments and Pharmacies during out-of-hours care. Medical data are pseudonymised using the services of a Trusted Third Party, which encodes private information about patients and physicians before data is sent to iCAREdata. iCAREdata provides many new research opportunities in the fields of clinical epidemiology, health care management and quality of care. A key aspect will be to ensure the quality of data registration by all health care providers. This article describes the establishment of a research database and the possibilities of linking data from different primary out-of-hours care providers, with the potential to help to improve research and the quality of health care services.
Kılıç, Sefa; Sagitova, Dinara M; Wolfish, Shoshannah; Bely, Benoit; Courtot, Mélanie; Ciufo, Stacy; Tatusova, Tatiana; O'Donovan, Claire; Chibucos, Marcus C; Martin, Maria J; Erill, Ivan
2016-01-01
Domain-specific databases are essential resources for the biomedical community, leveraging expert knowledge to curate published literature and provide access to referenced data and knowledge. The limited scope of these databases, however, poses important challenges for their infrastructure, visibility, funding and usefulness to the broader scientific community. CollecTF is a community-oriented database documenting experimentally validated transcription factor (TF)-binding sites in the Bacteria domain. In its quest to become a community resource for the annotation of transcriptional regulatory elements in bacterial genomes, CollecTF aims to move away from the conventional data-repository paradigm of domain-specific databases. Through the adoption of well-established ontologies, identifiers and collaborations, CollecTF has also progressively become a portal for the annotation and submission of information on transcriptional regulatory elements to major biological sequence resources (RefSeq, UniProtKB and the Gene Ontology Consortium). This fundamental change in database conception capitalizes on the domain-specific knowledge of contributing communities to provide high-quality annotations, while leveraging the availability of stable information hubs to promote long-term access and provide high visibility for the data. As a submission portal, CollecTF generates TF-binding site information through direct annotation of RefSeq genome records, definition of TF-based regulatory networks in UniProtKB entries and submission of functional annotations to the Gene Ontology. As a database, CollecTF provides enhanced search and browsing, targeted data exports, binding motif analysis tools and integration with motif discovery and search platforms. This innovative approach will allow CollecTF to focus its limited resources on the generation of high-quality information and the provision of specialized access to the data. Database URL: http://www.collectf.org/. © The Author(s) 2016. Published by Oxford University Press.
Arlet, Vincent; Shilt, Jeffrey; Bersusky, Ernesto; Abel, Mark; Ouellet, Jean Albert; Evans, Davis; Menon, K V; Kandziora, Frank; Shen, Frank; Lamartina, Claudio; Adams, Marc; Reddi, Vasantha
2008-11-01
Considerable variability exists in the surgical treatment and outcomes of adolescent idiopathic scoliosis (AIS). This is due to the lack of evidence-based treatment guidelines and outcome measures. Although clinical trials have been extolled as the highest form of evidence for evaluating treatment efficacy, the disadvantages of cost, time, lack of feasibility, and ethical considerations indicate a need for a new paradigm for evidence-based research in this spinal deformity. High quality clinical databases offer an alternative approach for evidence-based research in medicine. We therefore developed and established Scolisoft, an international, multidimensional and relational database designed to be a repository of surgical cases for AIS and an active vehicle for standardized surgical information, in a format that permits qualitative and quantitative research and analysis. Here, we describe and discuss the utility of Scolisoft as a new paradigm for evidence-based research on AIS. Scolisoft was developed using the .NET platform and SQL Server from Microsoft. All data are de-identified to protect patient privacy. Scolisoft can be accessed at www.scolisoft.org. Collection of high quality data on surgical cases of AIS is a priority, and processes continue to improve the database quality. The database currently has 67 registered users from 21 countries. To date, Scolisoft has 200 detailed surgical cases with pre-, post-, and follow-up data. Scolisoft provides a structured process and practical information for surgeons to benchmark their treatment methods against other like treatments. Scolisoft is multifaceted: its uses extend to the education of health care providers in training and of patients, the mining of important data to stimulate research, and the quality improvement initiatives of healthcare organizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, D
Purpose: A unified database system was developed to allow accumulation, review and analysis of quality assurance (QA) data for measurement, treatment, imaging and simulation equipment in our department. Recording these data in a database allows a unified and structured approach to review and analysis of data gathered using commercial database tools. Methods: A clinical database was developed to track records of quality assurance operations on linear accelerators, a computed tomography (CT) scanner, a high dose rate (HDR) afterloader and imaging systems such as on-board imaging (OBI) and Calypso in our department. The database was developed using the Microsoft Access database and Visual Basic for Applications (VBA) programming interface. Separate modules were written for accumulation, review and analysis of daily, monthly and annual QA data. All modules were designed to use structured query language (SQL) as the basis of data accumulation and review. The SQL strings are dynamically re-written at run time. The database also features embedded documentation, storage of documents produced during QA activities and the ability to annotate all data within the database. Tests are defined in a set of tables that define test type, specific value, and schedule. Results: Daily, monthly and annual QA data have been taken in parallel with established procedures to test MQA. The database has been used to aggregate data across machines to examine the consistency of machine parameters and operations within the clinic for several months. Conclusion: The MQA application has been developed as an interface to a commercially available SQL engine (JET 5.0) and a standard database back-end. The MQA system has been used for several months for routine data collection. The system is robust, relatively simple to extend and can be migrated to a commercial SQL server.
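The abstract's core design, test definitions stored in tables and SQL strings assembled at run time, can be illustrated compactly. The sketch below uses Python's built-in sqlite3 rather than the Access/VBA stack of the MQA system, and the table and column names are hypothetical.

```python
import sqlite3

# Hypothetical schema: test definitions (name, expected value, tolerance,
# schedule) live in a table, mirroring the design described in the abstract.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE test_def (test_id INTEGER PRIMARY KEY, name TEXT,
                           expected REAL, tolerance REAL, schedule TEXT);
    CREATE TABLE qa_result (test_id INTEGER, taken_on TEXT, value REAL);
    INSERT INTO test_def VALUES (1, 'output_6MV', 1.000, 0.02, 'daily');
    INSERT INTO qa_result VALUES (1, '2015-03-02', 1.031);
""")

def review_sql(conditions):
    # Conditions are appended at run time, echoing the dynamically
    # re-written SQL strings of the MQA modules.
    base = ("SELECT d.name, r.taken_on, r.value FROM qa_result r "
            "JOIN test_def d ON r.test_id = d.test_id WHERE "
            "ABS(r.value - d.expected) > d.tolerance")
    return base + "".join(f" AND {c}" for c in conditions)

for row in conn.execute(review_sql(["d.schedule = ?"]), ("daily",)):
    print("out of tolerance:", row)
```

Keeping the tolerances in data rather than code is what lets new tests be added, and review queries reshaped, without touching the application modules.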
Guidelines for establishing and maintaining construction quality databases : tech brief.
DOT National Transportation Integrated Search
2006-12-01
Construction quality databases contain a variety of construction-related data that characterize the quality of materials and workmanship. The primary purpose of construction quality databases is to help State highway agencies (SHAs) assess the qualit...
Monitoring outcomes with relational databases: does it improve quality of care?
Clemmer, Terry P
2004-12-01
There are 3 key ingredients in improving the quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and has the potential to be harmful. This article explores examples of these concepts.
Spatiotemporal database of US congressional elections, 1896–2014
Wolf, Levi John
2017-01-01
High-quality historical data about US Congressional elections has long provided common ground for electoral studies. However, advances in geographic information science have recently made it efficient to compile, distribute, and analyze large spatio-temporal data sets on the structure of US Congressional districts. A single spatio-temporal data set that relates US Congressional election results to the spatial extent of the constituencies has not yet been developed. To address this, existing high-quality data sets of elections returns were combined with a spatiotemporal data set on Congressional district boundaries to generate a new spatio-temporal database of US Congressional election results that are explicitly linked to the geospatial data about the districts themselves. PMID:28809849
Stewart, Moira; Thind, Amardeep; Terry, Amanda L; Chevendra, Vijaya; Marshall, J Neil
2009-11-01
Electronic medical records (EMRs) are posited as a tool for improving practice, policy and research in primary healthcare. This paper describes the Deliver Primary Healthcare Information (DELPHI) Project at the Department of Family Medicine at the University of Western Ontario, focusing on its development, current status and research potential in order to share experiences with researchers in similar contexts. The project progressed through four stages: (a) participant recruitment, (b) EMR software modification and implementation, (c) database creation and (d) data quality assessment. Currently, the DELPHI database holds more than two years of high-quality, de-identified data from 10 practices, with 30,000 patients and nearly a quarter of a million encounters.
Sample size determination for bibliographic retrieval studies
Yao, Xiaomei; Wilczynski, Nancy L; Walter, Stephen D; Haynes, R Brian
2008-01-01
Background Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. Methods The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. Results For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals were adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). Conclusion The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach. PMID:18823538
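The sample-size logic summarized above can be reproduced with the standard normal-approximation (Wald) interval for a proportion; the authors' exact interval method is not stated in the abstract, so treat this Python sketch as an illustrative reconstruction, and the 0.93 sensitivity as an assumed value.

```python
import math

def articles_needed(sensitivity: float, width: float, z: float = 1.96) -> int:
    """Number of high-quality (gold standard) articles needed so that a
    Wald 95% CI around `sensitivity` has total width `width`.

    Solves width = 2*z*sqrt(p*(1-p)/n) for n; a sketch of the sample-size
    reasoning in the abstract, not the authors' exact method.
    """
    p = sensitivity
    return math.ceil((2 * z) ** 2 * p * (1 - p) / width ** 2)

# With an illustrative lowest sensitivity of 0.93 (not stated in the
# abstract) and the desired width W = 10%, roughly 100 articles are needed,
# in line with the "at least 99 high-quality articles" reported.
print(articles_needed(0.93, 0.10))  # -> 100
```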
Li, Mouduo; Qiao, Cuixia; Qin, Liping; Zhang, Junyong; Ling, Changquan
2012-09-01
To investigate the application of Traditional Chinese Medicine Injections (TCMIs) for the treatment of primary liver cancer (PLC). A literature review was conducted using PubMed/Medline, the Cochrane Library Controlled Clinical Trials Database, China National Knowledge Infrastructure (CNKI), China Scientific Journal Database (CSJD) and China Biology Medicine (CBM). Online websites including journal websites and databases of ongoing trials, as well as some Traditional Chinese Medicine journals that are not indexed in the electronic databases, were also searched. TCMIs as adjunctive medication for the treatment of PLC could regulate patient immunity, reduce bone marrow suppression, relieve clinical symptoms, and improve quality of life, as well as control disease progression and prolong survival time. Within the limitations of this review, we conclude that application of TCMIs as adjunctive medication may provide benefits for patients with PLC. Further large, high-quality trials are warranted.
The Application and Future of Big Database Studies in Cardiology: A Single-Center Experience.
Lee, Kuang-Tso; Hour, Ai-Ling; Shia, Ben-Chang; Chu, Pao-Hsien
2017-11-01
As medical research techniques and quality have improved, it has become apparent that cardiovascular problems could be better resolved by stricter experimental design. In fact, substantial time and resources must be expended to fulfill the requirements of high quality studies. Many worthy ideas and hypotheses could not be verified or proven due to ethical or economic limitations. In recent years, new and varied applications of databases have received increasing attention. Important information regarding issues such as rare cardiovascular diseases, women's heart health, post-marketing analysis of different medications, or a combination of clinical and regional cardiac features can be obtained through rigorous statistical methods. However, limitations exist among all databases. A key essential in creating and correctly addressing this research is a reliable process for analyzing and interpreting these cardiologic databases.
Reactome graph database: Efficient access to complex pathway data
Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
Reactome graph database: Efficient access to complex pathway data.
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
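For readers unfamiliar with Cypher, a minimal Python sketch of the kind of traversal query the abstract credits with the speed-up is shown below, using the official neo4j driver. The connection details are placeholders, and while the Pathway label, hasEvent relationship and stId/speciesName properties follow Reactome's published graph data model, they should be verified against the release actually loaded.

```python
from neo4j import GraphDatabase

# Placeholder connection details for a locally loaded Reactome graph.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

# Variable-length traversal of the event hierarchy: exactly the kind of
# highly interconnected query that is slow over relational joins but
# efficient in a graph database.
QUERY = """
MATCH (p:Pathway {speciesName: 'Homo sapiens'})-[:hasEvent*]->(r:Reaction)
RETURN p.stId AS pathway, p.displayName AS name, count(r) AS reactions
ORDER BY reactions DESC LIMIT 5
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["pathway"], record["name"], record["reactions"])
driver.close()
```

The `[:hasEvent*]` pattern follows pathway-to-subpathway-to-reaction chains of arbitrary depth in a single clause, which in SQL would require recursive joins; this is the access pattern behind the reported 93% reduction in average query time.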
Case-based learning in education of Traditional Chinese Medicine: a systematic review.
Chen, Ji; Li, Ying; Tang, Yong; Zeng, Fang; Wu, Xi; Liang, Fanrong
2013-10-01
To assess the effect of case-based learning (CBL) in the education of Traditional Chinese Medicine (TCM). Studies concerning TCM courses designed with CBL were included by searching the databases of EBSCO, PubMed, Science Citation Index, China National Knowledge Infrastructure, and the Chongqing VIP database. Valid data were extracted in accordance with the inclusion criteria. The quality of the studies was assessed with the instrument of Gemma Flores-Mateo. A total of 22 articles that met the selection criteria were retrieved: one was of high quality; two were of low quality; the rest were categorized as moderate quality. The majority of the studies demonstrated a better effect produced by CBL, while a few studies showed no difference compared with the didactic format. All included studies confirmed a favorable effect on learners' attitudes, skills and ability. CBL showed desirable results in achieving the goal of learning. Compared with the didactic approach, it played a more active role in promoting students' competency. Since the quality of the articles on which this study was based was not high, the findings need further research to be substantiated.
Space Launch System Ascent Static Aerodynamic Database Development
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T.; Bennett, David W.; Blevins, John A.; Erickson, Gary E.; Favaregh, Noah M.; Houlden, Heather P.; Tomek, William G.
2014-01-01
This paper describes the wind tunnel testing work and data analysis required to characterize the static aerodynamic environment of the ascent portion of flight of NASA's Space Launch System (SLS). Scaled models of the SLS have been tested in transonic and supersonic wind tunnels to gather the high-fidelity data that are used to build aerodynamic databases. A detailed description of the wind tunnel test that was conducted to produce the latest version of the database is presented, and a representative set of aerodynamic data is shown. The wind tunnel data quality remains very high; however, some concerns with wall interference effects through transonic Mach numbers are also discussed. Post-processing and analysis of the wind tunnel dataset are crucial for the development of a formal ascent aerodynamics database.
NASA Astrophysics Data System (ADS)
Henderson, B. H.; Akhtar, F.; Pye, H. O. T.; Napelenok, S. L.; Hutzell, W. T.
2014-02-01
Transported air pollutants receive increasing attention as regulations tighten and global concentrations increase. The need to represent international transport in regional air quality assessments requires improved representation of boundary concentrations. Currently available observations are too sparse vertically to provide boundary information, particularly for ozone precursors, but global simulations can be used to generate spatially and temporally varying lateral boundary conditions (LBC). This study presents a public database of global simulations designed and evaluated for use as LBC for air quality models (AQMs). The database covers the contiguous United States (CONUS) for the years 2001-2010 and contains hourly varying concentrations of ozone, aerosols, and their precursors. The database is complemented by a tool for configuring the global results as inputs to regional scale models (e.g., Community Multiscale Air Quality or Comprehensive Air quality Model with extensions). This study also presents an example application based on the CONUS domain, which is evaluated against satellite retrieved ozone and carbon monoxide vertical profiles. The results show performance is largely within uncertainty estimates for ozone from the Ozone Monitoring Instrument and carbon monoxide from the Measurements Of Pollution In The Troposphere (MOPITT), but there were some notable biases compared with Tropospheric Emission Spectrometer (TES) ozone. Compared with TES, our ozone predictions are high-biased in the upper troposphere, particularly in the south during January. This publication documents the global simulation database, the tool for conversion to LBC, and the evaluation of concentrations on the boundaries. This documentation is intended to support applications that require representation of long-range transport of air pollutants.
Large scale database scrubbing using object oriented software components.
Herting, R L; Barnes, M R
1998-01-01
Now that case managers, quality improvement teams, and researchers use medical databases extensively, the ability to share and disseminate such databases while maintaining patient confidentiality is paramount. A process called scrubbing addresses this problem by removing personally identifying information while keeping the integrity of the medical information intact. Scrubbing entire databases, containing multiple tables, requires that the implicit relationships between data elements in different tables of the database be maintained. To address this issue we developed DBScrub, a Java program that interfaces with any JDBC compliant database and scrubs the database while maintaining the implicit relationships within it. DBScrub uses a small number of highly configurable object-oriented software components to carry out the scrubbing. We describe the structure of these software components and how they maintain the implicit relationships within the database.
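One common way to preserve implicit relationships while scrubbing is consistent pseudonymization of key columns, sketched below in Python. This is an illustrative scheme under stated assumptions, not DBScrub's documented algorithm (DBScrub itself is a Java/JDBC tool); the schema and the keyed-hash choice are assumptions.

```python
import hashlib
import hmac
import sqlite3

# Secret key for the keyed hash; in practice this must be generated,
# rotated, and stored outside the scrubbed database.
SECRET = b"rotate-and-store-this-key-securely"

def pseudonym(value: str) -> str:
    # The same input always maps to the same token, so a patient ID that
    # appears in several tables still joins correctly after scrubbing.
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:12]

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (mrn TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE visit (mrn TEXT, visit_date TEXT, diagnosis TEXT);
    INSERT INTO patient VALUES ('12345', 'Jane Doe');
    INSERT INTO visit VALUES ('12345', '1998-02-17', 'asthma');
""")

# Replace the identifying key everywhere it occurs, table by table.
for table, column in [("patient", "mrn"), ("visit", "mrn")]:
    for (old,) in conn.execute(f"SELECT DISTINCT {column} FROM {table}"):
        conn.execute(f"UPDATE {table} SET {column} = ? WHERE {column} = ?",
                     (pseudonym(old), old))
conn.execute("UPDATE patient SET name = NULL")  # drop direct identifiers

row = conn.execute("SELECT p.mrn, v.diagnosis FROM patient p "
                   "JOIN visit v ON p.mrn = v.mrn").fetchone()
print(row)  # the pseudonymised key still joins across tables
```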
Winsor, Geoffrey L; Van Rossum, Thea; Lo, Raymond; Khaira, Bhavjinder; Whiteside, Matthew D; Hancock, Robert E W; Brinkman, Fiona S L
2009-01-01
Pseudomonas aeruginosa is a well-studied opportunistic pathogen that is particularly known for its intrinsic antimicrobial resistance, diverse metabolic capacity, and its ability to cause life threatening infections in cystic fibrosis patients. The Pseudomonas Genome Database (http://www.pseudomonas.com) was originally developed as a resource for peer-reviewed, continually updated annotation for the Pseudomonas aeruginosa PAO1 reference strain genome. In order to facilitate cross-strain and cross-species genome comparisons with other Pseudomonas species of importance, we have now expanded the database capabilities to include all Pseudomonas species, and have developed or incorporated methods to facilitate high quality comparative genomics. The database contains robust assessment of orthologs, a novel ortholog clustering method, and incorporates five views of the data at the sequence and annotation levels (Gbrowse, Mauve and custom views) to facilitate genome comparisons. A choice of simple and more flexible user-friendly Boolean search features allows researchers to search and compare annotations or sequences within or between genomes. Other features include more accurate protein subcellular localization predictions and a user-friendly, Boolean searchable log file of updates for the reference strain PAO1. This database aims to continue to provide a high quality, annotated genome resource for the research community and is available under an open source license.
Systematic review of scope and quality of electronic patient record data in primary care
Thiru, Krish; Hassey, Alan; Sullivan, Frank
2003-01-01
Objective: To systematically review measures of data quality in electronic patient records (EPRs) in primary care. Design: Systematic review of English language publications, 1980-2001. Data sources: Bibliographic searches of medical databases, specialist medical informatics databases, conference proceedings, and institutional contacts. Study selection: Studies selected according to a predefined framework for categorising review papers. Data extraction: Reference standards and measurements used to judge quality. Results: Bibliographic searches identified 4589 publications. After primary exclusions 174 articles were classified, 52 of which met the inclusion criteria for review. Selected studies were primarily descriptive surveys. Variability in methods prevented meta-analysis of results. Forty eight publications were concerned with diagnostic data, 37 studies measured data quality, and 15 scoped EPR quality. Reliability of data was assessed with rate comparison. Measures of sensitivity were highly dependent on the element of EPR data being investigated, while the positive predictive value was consistently high, indicating good validity. Prescribing data were generally of better quality than diagnostic or lifestyle data. Conclusion: The lack of standardised methods for assessment of quality of data in electronic patient records makes it difficult to compare results between studies. Studies should present data quality measures with clear numerators, denominators, and confidence intervals. Ambiguous terms such as “accuracy” should be avoided unless precisely defined. PMID:12750210
Read-across predictions require high quality measured data for source analogues. These data are typically retrieved from structured databases, but biomedical literature data are often untapped because current literature mining approaches are resource intensive. Our high-throughpu...
The need for high-quality whole-genome sequence databases in microbial forensics.
Sjödin, Andreas; Broman, Tina; Melefors, Öjar; Andersson, Gunnar; Rasmusson, Birgitta; Knutsson, Rickard; Forsman, Mats
2013-09-01
Microbial forensics is an important part of a strengthened capability to respond to biocrime and bioterrorism incidents to aid in the complex task of distinguishing between natural outbreaks and deliberate acts. The goal of a microbial forensic investigation is to identify and criminally prosecute those responsible for a biological attack, and it involves a detailed analysis of the weapon, that is, the pathogen. The recent development of next-generation sequencing (NGS) technologies has greatly increased the resolution that can be achieved in microbial forensic analyses. It is now possible to identify, quickly and in an unbiased manner, previously undetectable genome differences between closely related isolates. This development is particularly relevant for the most deadly bacterial diseases that are caused by bacterial lineages with extremely low levels of genetic diversity. Whole-genome analysis of pathogens is envisaged to be increasingly essential for this purpose. In a microbial forensic context, whole-genome sequence analysis is the ultimate method for strain comparisons, as it is informative during identification, characterization, and attribution (all 3 major stages of the investigation) and at all levels of microbial strain identity resolution (i.e., it resolves the full spectrum from family to isolate). Given these capabilities, one bottleneck in microbial forensics investigations is the availability of high-quality reference databases of bacterial whole-genome sequences. To be of high quality, databases need to be curated and accurate in terms of sequences, metadata, and genetic diversity coverage. The development of whole-genome sequence databases will be instrumental in successfully tracing pathogens in the future.
Planas, M; Rodríguez, T; Lecha, M
2004-01-01
Decisions have to be made about what data on patient characteristics, processes and outcomes need to be collected, and standard definitions of these data items need to be developed, to identify data quality concerns as promptly as possible and to establish ways to improve data quality. The usefulness of any clinical database depends strongly on the quality of the collected data. If the data quality is poor, the results of studies using the database might be biased and unreliable. Furthermore, if the quality of the database has not been verified, the results might be given little credence, especially if they are unwelcome or unexpected. To assure the quality of a clinical database, a clear definition of the uses to which the database is going to be put is essential; the database should be developed to be comprehensive in terms of its usefulness but limited in its size.
Asadi, S S; Vuppala, Padmaja; Reddy, M Anji
2005-01-01
A preliminary survey of the area under Zone-III of MCH was undertaken to assess the ground water quality, demonstrate its spatial distribution and correlate it with land use patterns using advanced techniques of remote sensing and geographic information systems (GIS). Twenty-seven ground water samples were collected and chemically analysed to form the attribute database. A water quality index was calculated from the measured parameters, based on which the study area was classified into five groups with respect to the suitability of water for drinking purposes. Thematic maps, viz., base map, road network, drainage and land use/land cover, were prepared from IRS 1D PAN + LISS III merged satellite imagery, forming the spatial database. The attribute database was integrated with the spatial sampling locations map in Arc/Info, and maps showing the spatial distribution of water quality parameters were prepared in ArcView. Results indicated that high concentrations of total dissolved solids (TDS), nitrates, fluorides and total hardness were observed in a few industrial and densely populated areas, indicating deteriorated water quality, while the other areas exhibited moderate to good water quality.
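A weighted-arithmetic water quality index is one common formulation of the index mentioned above; the abstract does not specify which variant was used, so the parameters, standards and weights in this Python sketch are illustrative only.

```python
# Weighted-arithmetic WQI sketch. Drinking-water standards below are
# illustrative placeholders (mg/L); substitute the standards actually used.
STANDARDS = {"TDS": 500.0, "nitrate": 45.0, "fluoride": 1.0, "hardness": 300.0}
WEIGHTS = {p: 1.0 / s for p, s in STANDARDS.items()}  # weight ~ 1/standard

def wqi(sample: dict) -> float:
    # Quality rating q_i = 100 * measured / standard; WQI = sum(w*q)/sum(w).
    num = sum(WEIGHTS[p] * 100.0 * sample[p] / STANDARDS[p] for p in STANDARDS)
    return num / sum(WEIGHTS.values())

sample = {"TDS": 820.0, "nitrate": 52.0, "fluoride": 1.4, "hardness": 410.0}
print(round(wqi(sample), 1))  # > 100 flags water unsuitable for drinking
```

Classifying each sampled location by its WQI value and joining the result to the GIS sampling-locations layer is what produces the five suitability groups described in the abstract.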
Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai
2015-10-01
Quality assessment of 3D images encounters more challenges than its 2D counterpart. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment method for stereoscopic images that learns binocular receptive field properties so as to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute a sparse feature similarity index based on the estimated sparse coefficient vectors, considering their phase difference and amplitude difference, and compute a global luminance similarity index that accounts for luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that, in comparison with the most closely related existing methods, the devised algorithm achieves high consistency with subjective assessment.
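A rough Python sketch of the two-phase scheme, offline dictionary learning followed by comparison of sparse codes, is given below using scikit-learn. The similarity formula is a generic amplitude-style comparison in the spirit of the abstract, not the authors' exact index, and the patch data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Training phase: learn a (single-scale, for brevity) dictionary so patches
# are represented as sparse combinations of basis vectors.
rng = np.random.default_rng(0)
train_patches = rng.standard_normal((500, 64))   # stand-ins for 8x8 patches
dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                          transform_algorithm="lasso_lars", random_state=0)
dico.fit(train_patches)

# Quality-estimation phase: sparse-code a reference patch and a distorted
# copy, then compare the coefficient vectors component-wise.
ref = rng.standard_normal((1, 64))
dist = ref + 0.1 * rng.standard_normal((1, 64))
c_ref, c_dist = dico.transform(ref)[0], dico.transform(dist)[0]

eps = 1e-8  # stabilizer, as in SSIM-style similarity measures
similarity = np.mean((2 * np.abs(c_ref) * np.abs(c_dist) + eps)
                     / (c_ref**2 + c_dist**2 + eps))
print(f"sparse feature similarity ~ {similarity:.3f}")  # ~1 for close codes
```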
Takahashi, Arata; Kumamaru, Hiraku; Tomotaki, Ai; Matsumura, Goki; Fukuchi, Eriko; Hirata, Yasutaka; Murakami, Arata; Hashimoto, Hideki; Ono, Minoru; Miyata, Hiroaki
2018-03-01
The Japan Congenital Cardiovascular Surgical Database (JCCVSD) is a nationwide registry whose data are used for health quality assessment and clinical research in Japan. We evaluated the completeness of case registration and the accuracy of recorded data components, including postprocedural mortality and complications, in the database via on-site data adjudication. We validated the records from JCCVSD 2010 to 2012 containing congenital cardiovascular surgery data performed in 111 facilities throughout Japan. We randomly chose nine facilities for site visits by the auditor team and conducted on-site data adjudication. We assessed whether the records in JCCVSD matched the data in the source materials. We identified 1,928 cases of eligible surgeries performed at the facilities, of which 1,910 were registered (99.1% completeness), with 6 cases of duplication and 1 inappropriate case registration. Data components including gender, age, and surgery time (hours) were highly accurate, with 98% to 100% concordance. Mortality at discharge and at 30 and 90 postoperative days was 100% accurate. Among the five complications studied, reoperation was the most frequently observed, with 16 and 21 cases recorded in the database and source materials, respectively, yielding a sensitivity of 0.67 and a specificity of 0.99. Validation of the JCCVSD database showed high registration completeness and high accuracy, especially in the categorical data components. Adjudicated mortality was 100% accurate. While limited in numbers, the recorded cases of postoperative complications all had high specificities but lower sensitivities (0.67-1.00). Continued activities for data quality improvement and assessment are necessary for optimizing the utility of these registries.
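The validation arithmetic treats the audited source materials as ground truth and the registry entries as the test. In the Python sketch below, the 2x2 cell counts are hypothetical, chosen only to be consistent with the reported totals (16 registry vs. 21 source reoperations) and the stated sensitivity of 0.67 and specificity of 0.99.

```python
def sens_spec(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity = tp/(tp+fn); specificity = tn/(tn+fp)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cells: tp+fp = 16 registry cases, tp+fn = 21 source cases;
# tn is an assumption sized to reproduce the reported specificity.
tp, fp, fn, tn = 14, 2, 7, 198
sensitivity, specificity = sens_spec(tp, fp, fn, tn)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
# -> sensitivity=0.67, specificity=0.99
```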
Ho, Robin S T; Wu, Xinyin; Yuan, Jinqiu; Liu, Siya; Lai, Xin; Wong, Samuel Y S; Chung, Vincent C H
2015-01-08
Meta-analysis (MA) of randomised trials is considered to be one of the best approaches for summarising high-quality evidence on the efficacy and safety of treatments. However, methodological flaws in MAs can reduce the validity of conclusions, subsequently impairing the quality of decision making. To assess the methodological quality of MAs on COPD treatments. A cross-sectional study on MAs of COPD trials. MAs published during 2000-2013 were sampled from the Cochrane Database of Systematic Reviews and Database of Abstracts of Reviews of Effect. Methodological quality was assessed using the validated AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. Seventy-nine MAs were sampled. Only 18% considered the scientific quality of primary studies when formulating conclusions and 49% used appropriate meta-analytic methods to combine findings. The problems were particularly acute among MAs on pharmacological treatments. In 48% of MAs the authors did not report conflict of interest. Fifty-eight percent reported harmful effects of treatment. Publication bias was not assessed in 65% of MAs, and only 10% had searched non-English databases. The methodological quality of the included MAs was disappointing. Consideration of scientific quality when formulating conclusions should be made explicit. Future MAs should improve on reporting conflict of interest and harm, assessment of publication bias, prevention of language bias and use of appropriate meta-analytic methods.
Ho, Robin ST; Wu, Xinyin; Yuan, Jinqiu; Liu, Siya; Lai, Xin; Wong, Samuel YS; Chung, Vincent CH
2015-01-01
Background: Meta-analysis (MA) of randomised trials is considered to be one of the best approaches for summarising high-quality evidence on the efficacy and safety of treatments. However, methodological flaws in MAs can reduce the validity of conclusions, subsequently impairing the quality of decision making. Aims: To assess the methodological quality of MAs on COPD treatments. Methods: A cross-sectional study on MAs of COPD trials. MAs published during 2000–2013 were sampled from the Cochrane Database of Systematic Reviews and Database of Abstracts of Reviews of Effect. Methodological quality was assessed using the validated AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool. Results: Seventy-nine MAs were sampled. Only 18% considered the scientific quality of primary studies when formulating conclusions and 49% used appropriate meta-analytic methods to combine findings. The problems were particularly acute among MAs on pharmacological treatments. In 48% of MAs the authors did not report conflict of interest. Fifty-eight percent reported harmful effects of treatment. Publication bias was not assessed in 65% of MAs, and only 10% had searched non-English databases. Conclusions: The methodological quality of the included MAs was disappointing. Consideration of scientific quality when formulating conclusions should be made explicit. Future MAs should improve on reporting conflict of interest and harm, assessment of publication bias, prevention of language bias and use of appropriate meta-analytic methods. PMID:25569783
A comprehensive global genotype-phenotype database for rare diseases.
Trujillano, Daniel; Oprea, Gabriela-Elena; Schmitz, Yvonne; Bertoli-Avella, Aida M; Abou Jamra, Rami; Rolfs, Arndt
2017-01-01
The ability to discover genetic variants in a patient runs far ahead of the ability to interpret them. Databases with accurate descriptions of the causal relationship between the variants and the phenotype are valuable since these are critical tools in clinical genetic diagnostics. Here, we introduce a comprehensive and global genotype-phenotype database focusing on rare diseases. This database (CentoMD®) is a browser-based tool that enables access to a comprehensive, independently curated system utilizing stringent high-quality criteria and a quickly growing repository of genetic and Human Phenotype Ontology (HPO)-based clinical information. Its main goals are to aid the evaluation of genetic variants, to enhance the validity of the genetic analytical workflow, to increase the quality of genetic diagnoses, and to improve evaluation of treatment options for patients with hereditary diseases. The database software correlates clinical information from consented patients and probands of different geographical backgrounds with a large dataset of genetic variants and, when available, biomarker information. An automated follow-up tool is incorporated that informs all users whenever a variant classification has changed. These unique features, fully embedded in a CLIA/CAP-accredited quality management system, allow appropriate data quality and enhanced patient safety. More than 100,000 genetically screened individuals are documented in the database, resulting in more than 470 million variant detections. Approximately 57% of the clinically relevant and uncertain variants in the database are novel. Notably, 3% of the genetic variants identified and previously reported in the literature as being associated with a particular rare disease were reclassified, based on internal evidence, as clinically irrelevant. The database offers a comprehensive summary of the clinical validity and causality of detected gene variants with their associated phenotypes, and is a valuable tool for identifying new disease genes through the correlation of novel genetic variants with specific, well-defined phenotypes.
NASA Technical Reports Server (NTRS)
Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William
2010-01-01
A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.
Tao, Huan; Zhang, Yueyuan; Li, Qian; Chen, Jin
2017-11-01
To assess the methodological quality of systematic reviews (SRs) and meta-analyses concerning the predictive value of ERCC1 in platinum chemotherapy of non-small cell lung cancer. We searched PubMed, EMbase, the Cochrane Library, the International Prospective Register of Systematic Reviews, the Chinese BioMedical Literature Database, China National Knowledge Infrastructure, and the Wan Fang and VIP databases for SRs or meta-analyses. The methodological quality of the included literature was evaluated with the Risk Of Bias In Systematic reviews (ROBIS) tool. Nineteen eligible SRs/meta-analyses were included. The most frequently searched databases were EMbase (74%), PubMed, Medline and CNKI. Fifteen SRs did additional manual retrieval, but none of them searched a registration platform. Forty-seven percent described a two-reviewer model for screening eligible original articles, and seven SRs described two reviewers extracting data. In the methodological quality assessment, the inter-rater reliability (Kappa) between the two reviewers was 0.87. The research questions were well addressed in all SRs in phase 1, the eligibility criteria were suitable for each SR, and these were rated as 'low' risk of bias. However, 'high' risk of bias existed in all the SRs regarding the methods used to identify and/or select studies, and regarding data collection and study appraisal. More than two-thirds of the SRs or meta-analyses were finished with a high risk of bias in the synthesis, findings, and the final phase. The study demonstrated poor methodological quality of SRs/meta-analyses assessing the predictive value of ERCC1 in chemotherapy among NSCLC patients, especially high performance bias. Registration or publishing the protocol is recommended for future research.
Database for chemical contents of streams on the White Mountain National Forest.
James W. Hornbeck; Michelle M. Alexander; Christopher Eagar; Joan Y. Carlson; Robert B. Smith
2001-01-01
Producing and protecting high-quality streamwater requires background or baseline data from which one can evaluate the impacts of natural and human disturbances. A database was created for chemical analyses of streamwater samples collected during the past several decades from 446 locations on the White Mountain National Forest (304,000 ha in New Hampshire and Maine)....
Protein sequence annotation in the genome era: the annotation concept of SWISS-PROT+TREMBL.
Apweiler, R; Gateau, A; Contrino, S; Martin, M J; Junker, V; O'Donovan, C; Lang, F; Mitaritonna, N; Kappus, S; Bairoch, A
1997-01-01
SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation, a minimal level of redundancy and a high level of integration with other databases. Ongoing genome sequencing projects have dramatically increased the number of protein sequences to be incorporated into SWISS-PROT. Since we do not want to dilute the quality standards of SWISS-PROT by incorporating sequences without proper sequence analysis and annotation, we cannot speed up the incorporation of new incoming data indefinitely. However, as we also want to make the sequences available as fast as possible, we introduced TREMBL (TRanslation of EMBL nucleotide sequence database), a supplement to SWISS-PROT. TREMBL consists of computer-annotated entries in SWISS-PROT format derived from the translation of all coding sequences (CDS) in the EMBL nucleotide sequence database, except for CDS already included in SWISS-PROT. While TREMBL is already of immense value, its computer-generated annotation does not match the quality of SWISS-PROT's. The main difference is in the protein functional information attached to sequences. With this in mind, we are dedicating substantial effort to develop and apply computer methods to enhance the functional information attached to TREMBL entries.
The impact of database quality on keystroke dynamics authentication
NASA Astrophysics Data System (ADS)
Panasiuk, Piotr; Rybnik, Mariusz; Saeed, Khalid; Rogowski, Marcin
2016-06-01
This paper concerns keystroke dynamics, also partially in the context of touchscreen devices. The authors concentrate on the impact of database quality and propose an algorithm to test database quality issues. The algorithm is used on their own database.
Haytowitz, David B; Pehrsson, Pamela R
2018-01-01
For nearly 20 years, the National Food and Nutrient Analysis Program (NFNAP) has expanded and improved the quantity and quality of data in the US Department of Agriculture's (USDA) food composition databases (FCDB) through the collection and analysis of nationally representative food samples. NFNAP employs statistically valid sampling plans, the Key Foods approach to identify and prioritize foods and nutrients, comprehensive quality control protocols, and analytical oversight to generate new and updated analytical data for food components. NFNAP has allowed the Nutrient Data Laboratory to keep up with the dynamic US food supply and emerging scientific research. Recently generated results for nationally representative food samples show marked changes compared with previous database values for selected nutrients. Monitoring changes in the composition of foods is critical for keeping FCDB up to date, so that they remain a vital tool in assessing the nutrient intake of national populations, as well as for providing dietary advice. Published by Elsevier Ltd.
Proposal for a unified selection to medical residency programs.
Toffoli, Sônia Ferreira Lopes; Ferreira Filho, Olavo Franco; Andrade, Dalton Francisco de
2013-01-01
This paper proposes the unification of entrance exams to medical residency programs (MRPs) in Brazil. Problems related to MRPs and their interface with public health problems in Brazil are highlighted, along with how this proposal can help to solve them. The proposal is to create a database of items to be used in unified MRP exams. Some advantages of using Item Response Theory (IRT) in this database are highlighted. MRP entrance exams are currently developed and applied in a decentralized manner, with each school responsible for its own examination. The quality of these exams is questionable: reviews of item quality and of the validity and reliability of the instruments are not commonly disclosed. Evaluation is important in every education system, bringing about required changes and control of teaching and learning. The proposed unification of MRP entrance exams, besides offering high-quality exams to participating institutions, could serve as an extra source for rating medical schools and prompting improvements, provide studies with a database, and allow regional mobility. Copyright © 2013 Elsevier Editora Ltda. All rights reserved.
Frost, Rachael; Levati, Sara; McClurg, Doreen; Brady, Marian; Williams, Brian
2017-06-01
To systematically review methods for measuring adherence used in home-based rehabilitation trials and to evaluate their validity, reliability, and acceptability. In phase 1 we searched the CENTRAL database, NHS Economic Evaluation Database, and Health Technology Assessment Database (January 2000 to April 2013) to identify adherence measures used in randomized controlled trials of allied health professional home-based rehabilitation interventions. In phase 2 we searched the databases of MEDLINE, Embase, CINAHL, Allied and Complementary Medicine Database, PsycINFO, CENTRAL, ProQuest Nursing and Allied Health, and Web of Science (inception to April 2015) for measurement property assessments for each measure. Studies assessing the validity, reliability, or acceptability of adherence measures. Two reviewers independently extracted data on participant and measure characteristics, measurement properties evaluated, evaluation methods, and outcome statistics and assessed study quality using the COnsensus-based Standards for the selection of health Measurement INstruments checklist. In phase 1 we included 8 adherence measures (56 trials). In phase 2, from the 222 measurement property assessments identified in 109 studies, 22 high-quality measurement property assessments were narratively synthesized. Low-quality studies were used as supporting data. StepWatch Activity Monitor validly and acceptably measured short-term step count adherence. The Problematic Experiences of Therapy Scale validly and reliably assessed adherence to vestibular rehabilitation exercises. Adherence diaries had moderately high validity and acceptability across limited populations. The Borg 6 to 20 scale, Bassett and Prapavessis scale, and Yamax CW series had insufficient validity. Low-quality evidence supported use of the Joint Protection Behaviour Assessment. Polar A1 series heart monitors were considered acceptable by 1 study. Current rehabilitation adherence measures are limited. Some possess promising validity and acceptability for certain parameters of adherence, situations, and populations and should be used in these situations. Rigorous evaluation of adherence measures in a broader range of populations is needed. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Risk model of valve surgery in Japan using the Japan Adult Cardiovascular Surgery Database.
Motomura, Noboru; Miyata, Hiroaki; Tsukihara, Hiroyuki; Takamoto, Shinichi
2010-11-01
Risk models of cardiac valve surgery using a large database are useful for improving surgical quality. In order to obtain accurate, high-quality assessments of surgical outcome, each geographic area should maintain its own database. The study aim was to collect Japanese data and to prepare a risk stratification of cardiac valve procedures, using the Japan Adult Cardiovascular Surgery Database (JACVSD). A total of 6562 valve procedure records from 97 participating sites throughout Japan were analyzed, using a 255-variable data entry form submitted to the JACVSD office through a web-based data collection system. The statistical model was constructed using multiple logistic regression. Model discrimination was tested using the area under the receiver operating characteristic curve (C-index). Model calibration was tested using the Hosmer-Lemeshow (H-L) test. Among the 6562 operated cases, 15% had diabetes mellitus, 5% were urgent, and 12% involved preoperative renal failure. The observed 30-day and operative mortality rates were 2.9% and 4.0%, respectively. Significant variables with high odds ratios included emergent or salvage status (3.83), reoperation (3.43), and left ventricular dysfunction (3.01). The H-L test and C-index values for 30-day mortality were satisfactory (0.44 and 0.80, respectively). The results obtained in Japan were at least as good as those reported elsewhere. The performance of this risk model also matched that of the STS National Adult Cardiac Database and the European Society Database.
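The two model checks named here, discrimination by C-index and calibration by the Hosmer-Lemeshow test, are standard and easy to reproduce. A minimal sketch (not the JACVSD code; the simulated risks, outcome draws, and decile binning are illustrative assumptions):

```python
# Discrimination (C-index) and calibration (Hosmer-Lemeshow) checks for a
# mortality risk model. A minimal sketch with simulated data.
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, y_prob, n_bins=10):
    """Chi-square H-L statistic over risk deciles; returns (stat, p)."""
    order = np.argsort(y_prob)
    bins = np.array_split(order, n_bins)      # roughly equal-size risk groups
    stat = 0.0
    for idx in bins:
        obs = y_true[idx].sum()               # observed deaths in the group
        exp = y_prob[idx].sum()               # expected deaths in the group
        n, p_bar = len(idx), y_prob[idx].mean()
        stat += (obs - exp) ** 2 / (n * p_bar * (1 - p_bar))
    p_value = chi2.sf(stat, df=n_bins - 2)    # conventional df = groups - 2
    return stat, p_value

rng = np.random.default_rng(0)
p = rng.uniform(0.005, 0.3, size=6562)        # predicted 30-day risks
y = rng.binomial(1, p)                        # simulated observed outcomes
print("C-index:", roc_auc_score(y, p))
print("H-L p-value:", hosmer_lemeshow(y, p)[1])
```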
Lauricella, Leticia L; Costa, Priscila B; Salati, Michele; Pego-Fernandes, Paulo M; Terra, Ricardo M
2018-06-01
Database quality measurement should be considered a mandatory step to ensure an adequate level of confidence in data used for research and quality improvement. Several metrics have been described in the literature, but no standardized approach has been established. We aimed to describe a methodological approach applied to measure the quality and inter-rater reliability of a regional multicentric thoracic surgical database (Paulista Lung Cancer Registry). Data from the first 3 years of the Paulista Lung Cancer Registry underwent an audit process with 3 metrics: completeness, consistency, and inter-rater reliability. The first 2 methods were applied to the whole data set, and the last method was calculated using 100 cases randomized for direct auditing. Inter-rater reliability was evaluated using percentage of agreement between the data collector and auditor and through calculation of Cohen's κ and intraclass correlation. The overall completeness per section ranged from 0.88 to 1.00, and the overall consistency was 0.96. Inter-rater reliability showed many variables with high disagreement (>10%). For numerical variables, intraclass correlation was a better metric than inter-rater reliability. Cohen's κ showed that most variables had moderate to substantial agreement. The methodological approach applied to the Paulista Lung Cancer Registry showed that completeness and consistency metrics did not sufficiently reflect the real quality status of a database. The inter-rater reliability associated with κ and intraclass correlation was a better quality metric than completeness and consistency metrics because it could determine the reliability of specific variables used in research or benchmark reports. This report can be a paradigm for future studies of data quality measurement. Copyright © 2018 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
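Completeness and chance-corrected agreement are simple to compute once the collector's and auditor's extractions sit side by side. A toy sketch under assumed field names (not the Paulista Lung Cancer Registry schema):

```python
# Completeness and inter-rater agreement for an audited registry sample.
# The DataFrames and field names are hypothetical illustrations.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

collector = pd.DataFrame({"histology": ["adeno", "squamous", "adeno", None],
                          "stage":     ["IA",    "IIB",      "IA",   "IIIA"]})
auditor   = pd.DataFrame({"histology": ["adeno", "adeno",    "adeno", "large"],
                          "stage":     ["IA",    "IIB",      "IB",   "IIIA"]})

# Completeness: share of non-missing values per variable (0 to 1).
print(1 - collector.isna().mean())

# Inter-rater reliability for a categorical variable: raw percentage
# agreement, and Cohen's kappa, which corrects that agreement for chance.
pairs = collector.join(auditor, lsuffix="_c", rsuffix="_a").dropna()
agree = (pairs["stage_c"] == pairs["stage_a"]).mean()
kappa = cohen_kappa_score(pairs["stage_c"], pairs["stage_a"])
print(f"agreement={agree:.2f}, kappa={kappa:.2f}")
```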
Quality Analysis of Open Street Map Data
NASA Astrophysics Data System (ADS)
Wang, M.; Li, Q.; Hu, Q.; Zhou, M.
2013-05-01
Crowd-sourced geographic data are open geographic data contributed by large numbers of non-professionals and provided to the public. Typical crowd-sourced geographic data include GPS track data such as OpenStreetMap, collaborative map data such as Wikimapia, social websites such as Twitter and Facebook, POIs tagged by Jiepang users, and so on. After processing, these data can provide canonical geographic information to the public. Compared with conventional geographic data collection and update methods, crowd-sourced geographic data from non-professionals have the advantages of large data volume, high currency, rich information, and low cost, and they have become a research hotspot in international geographic information science in recent years. Large-volume, highly current crowd-sourced geographic data provide a new solution for updating geospatial databases, but the quality of data obtained from non-professionals must first be addressed. In this paper, a quality analysis model for OpenStreetMap (OSM) crowd-sourced geographic data is proposed. Firstly, a quality analysis framework is designed based on an analysis of the characteristics of OSM data. Secondly, a quality assessment model for OSM data is presented using three quality elements: completeness, thematic accuracy, and positional accuracy. Finally, taking the OSM data of Wuhan as an example, the paper analyses and assesses the quality of OSM data against a 2011 navigation map as reference. The results show that the high-level roads and urban traffic network in the OSM data have high positional accuracy and completeness, so these OSM data can be used for updating urban road network databases.
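For the positional-accuracy element, a common building block is the distance between matched OSM and reference features. A toy sketch assuming already-matched point pairs (real assessments match whole road geometries, for example with buffer analysis):

```python
# Positional accuracy from matched point pairs (OSM vs. reference map).
# The coordinate pairs are invented for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

matched = [((30.5928, 114.3055), (30.5930, 114.3057)),   # (osm, reference)
           ((30.5800, 114.2700), (30.5801, 114.2698))]
offsets = [haversine_m(*osm, *ref) for osm, ref in matched]
print(f"mean positional offset: {sum(offsets) / len(offsets):.1f} m")
```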
Customer and household matching: resolving entity identity in data warehouses
NASA Astrophysics Data System (ADS)
Berndt, Donald J.; Satterfield, Ronald K.
2000-04-01
The data preparation and cleansing tasks necessary to ensure high quality data are among the most difficult challenges faced in data warehousing and data mining projects. The extraction of source data, transformation into new forms, and loading into a data warehouse environment are all time-consuming tasks that can be supported by methodologies and tools. This paper focuses on the problem of record linkage or entity matching, tasks that can be very important in providing high quality data. Merging two or more large databases into a single integrated system is a difficult problem in many industries, especially in the wake of acquisitions. For example, managing customer lists can be challenging when duplicate entries, data entry problems, and changing information conspire to make data quality an elusive target. Common tasks with regard to customer lists include customer matching to reduce duplicate entries and household matching to group customers. These often O(n²) problems can consume significant resources, both in computing infrastructure and human oversight, and the goal of high accuracy in the final integrated database can be difficult to assure. This paper distinguishes between attribute corruption and entity corruption, discussing the various impacts on quality. A metajoin operator is proposed and used to organize past and current entity matching techniques. Finally, a logistic regression approach to implementing the metajoin operator is discussed and illustrated with an example. The metajoin can be used to determine whether two records match, don't match, or require further evaluation by human experts. Properly implemented, the metajoin operator could allow the integration of individual databases with greater accuracy and lower cost.
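The three-way outcome described here (match, non-match, refer to a human) maps naturally onto two probability thresholds around a pairwise classifier. A hedged sketch of that idea, with invented similarity features and thresholds rather than the paper's own:

```python
# A three-way match decision over pairwise similarity features, in the
# spirit of a logistic-regression metajoin. Features, labels, and
# thresholds below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: similarity features for one candidate record pair
# [name_similarity, address_similarity, same_zip].
X = np.array([[0.95, 0.90, 1], [0.20, 0.10, 0], [0.88, 0.30, 1],
              [0.15, 0.80, 0], [0.99, 0.97, 1], [0.05, 0.05, 0]])
y = np.array([1, 0, 1, 0, 1, 0])              # labelled match / non-match

model = LogisticRegression().fit(X, y)

def metajoin_decision(pair, lo=0.2, hi=0.8):
    """Return 'match', 'non-match', or 'review' for one feature vector."""
    p = model.predict_proba([pair])[0, 1]
    return "match" if p >= hi else "non-match" if p <= lo else "review"

print(metajoin_decision([0.90, 0.85, 1]))     # confident pair
print(metajoin_decision([0.60, 0.40, 1]))     # ambiguous pair, sent to humans
```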
Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu
2015-01-01
The microbial genome database for comparative analysis (MBGD) (available at http://mbgd.genome.ad.jp/) is a comprehensive ortholog database for flexible comparative analysis of microbial genomes, where the users are allowed to create an ortholog table among any specified set of organisms. Because of the rapid increase in microbial genome data owing to the next-generation sequencing technology, it becomes increasingly challenging to maintain high-quality orthology relationships while allowing the users to incorporate the latest genomic data available into an analysis. Because many of the recently accumulating genomic data are draft genome sequences for which some complete genome sequences of the same or closely related species are available, MBGD now stores draft genome data and allows the users to incorporate them into a user-specific ortholog database using the MyMBGD functionality. In this function, draft genome data are incorporated into an existing ortholog table created only from the complete genome data in an incremental manner to prevent low-quality draft data from affecting clustering results. In addition, to provide high-quality orthology relationships, the standard ortholog table containing all the representative genomes, which is first created by the rapid classification program DomClust, is now refined using DomRefine, a recently developed program for improving domain-level clustering using multiple sequence alignment information. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
USDA-ARS?s Scientific Manuscript database
High frequency in situ measurements of nitrate can greatly reduce the uncertainty in nitrate flux estimates. Water quality databases maintained by various federal and state agencies often consist of pollutant concentration data obtained from periodic grab samples collected from gauged reaches of a s...
ACToR Chemical Structure processing using Open Source ...
ACToR (Aggregated Computational Toxicology Resource) is a centralized database repository developed by the National Center for Computational Toxicology (NCCT) at the U.S. Environmental Protection Agency (EPA). Free and open source tools were used to compile toxicity data from over 1,950 public sources. ACToR contains chemical structure information and toxicological data for over 558,000 unique chemicals. The database primarily includes data from NCCT research programs, in vivo toxicity data from ToxRef, human exposure data from ExpoCast, high-throughput screening data from ToxCast and high quality chemical structure information from the EPA DSSTox program. The DSSTox database is a chemical structure inventory for the NCCT programs and currently has about 16,000 unique structures. Included are also data from PubChem, ChemSpider, USDA, FDA, NIH and several other public data sources. ACToR has been a resource to various international and national research groups. Most of our recent efforts on ACToR are focused on improving the structural identifiers and Physico-Chemical properties of the chemicals in the database. Organizing this huge collection of data and improving the chemical structure quality of the database has posed some major challenges. Workflows have been developed to process structures, calculate chemical properties and identify relationships between CAS numbers. The Structure processing workflow integrates web services (PubChem and NIH NCI Cactus) to d
Ginkgo Biloba extract for angina pectoris: a systematic review.
Sun, Tian; Wang, Xian; Xu, Hao
2015-07-01
To evaluate the efficacy and safety of Ginkgo Biloba extract for patients with angina pectoris according to the available evidence. Electronic databases were searched for all of the randomized controlled trials (RCTs) of angina pectoris treatments with Ginkgo Biloba extract, either alone or combined with routine Western medicine (RWM), and controlled by no treatment, placebo, Chinese patent medicine, or RWM treatment. The RCTs were retrieved from the following electronic databases: PubMed/MEDLINE, ProQuest Health and Medical Complete, Springer, Elsevier, ProQuest Dissertations and Theses, Wanfang Data, China National Knowledge Infrastructure (CNKI), the VIP database, China Biology Medicine (CBM), and the Chinese Medical Citation Index (CMCI), from the earliest database records to December 2012. No language restriction was applied. Study selection, data extraction, quality assessment, and data analyses were conducted according to the Cochrane standards. The data were analysed using RevMan 5.1.0, provided by the Cochrane Collaboration. A total of 23 RCTs (involving 2,529 patients) were included, and the methodological quality was evaluated as generally low. Ginkgo Biloba extract with RWM was more effective in angina relief and electrocardiogram improvement than RWM alone. Reported adverse events included epigastric discomfort, nausea, gastrointestinal reaction, and bitter taste. Ginkgo Biloba extract may have beneficial effects on patients with angina pectoris, although the low quality of existing trials makes it difficult to draw a satisfactory conclusion. More rigorous, high-quality clinical trials are needed to provide conclusive evidence.
Development of forensic-quality full mtGenome haplotypes: success rates with low template specimens.
Just, Rebecca S; Scheible, Melissa K; Fast, Spence A; Sturk-Andreaggi, Kimberly; Higginbotham, Jennifer L; Lyons, Elizabeth A; Bush, Jocelyn M; Peck, Michelle A; Ring, Joseph D; Diegoli, Toni M; Röck, Alexander W; Huber, Gabriela E; Nagl, Simone; Strobl, Christina; Zimmermann, Bettina; Parson, Walther; Irwin, Jodi A
2014-05-01
Forensic mitochondrial DNA (mtDNA) testing requires appropriate, high quality reference population data for estimating the rarity of questioned haplotypes and, in turn, the strength of the mtDNA evidence. Available reference databases (SWGDAM, EMPOP) currently include information from the mtDNA control region; however, novel methods that quickly and easily recover mtDNA coding region data are becoming increasingly available. Though these assays promise to both facilitate the acquisition of mitochondrial genome (mtGenome) data and maximize the general utility of mtDNA testing in forensics, the appropriate reference data and database tools required for their routine application in forensic casework are lacking. To address this deficiency, we have undertaken an effort to: (1) increase the large-scale availability of high-quality entire mtGenome reference population data, and (2) improve the information technology infrastructure required to access/search mtGenome data and employ them in forensic casework. Here, we describe the application of a data generation and analysis workflow to the development of more than 400 complete, forensic-quality mtGenomes from low DNA quantity blood serum specimens as part of a U.S. National Institute of Justice funded reference population databasing initiative. We discuss the minor modifications made to a published mtGenome Sanger sequencing protocol to maintain a high rate of throughput while minimizing manual reprocessing with these low template samples. The successful use of this semi-automated strategy on forensic-like samples provides practical insight into the feasibility of producing complete mtGenome data in a routine casework environment, and demonstrates that large (>2 kb) mtDNA fragments can regularly be recovered from high quality but very low DNA quantity specimens. Further, the detailed empirical data we provide on the amplification success rates across a range of DNA input quantities will be useful moving forward as PCR-based strategies for mtDNA enrichment are considered for targeted next-generation sequencing workflows. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The widespread adoption of digital technology in the medical field has led to demand for high-quality, high-speed, and user-friendly digital image presentation systems for daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270 GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90 GB RAID, and a database server workstation. A 100 Mbps Ethernet LAN interconnects all the sub-systems. Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size and found them applicable to actual medical diagnosis. They also appreciated the short image-switching time, which contributed to smooth presentations. Thus, we confirmed that the system's characteristics met the requirements.
CVD2014-A Database for Evaluating No-Reference Video Quality Assessment Algorithms.
Nuutinen, Mikko; Virtanen, Toni; Vaahteranoksa, Mikko; Vuori, Tero; Oittinen, Pirkko; Hakkinen, Jukka
2016-07-01
In this paper, we present a new video database: CVD2014-Camera Video Database. In contrast to previous video databases, this database uses real cameras rather than introducing distortions via post-processing, which results in a complex distortion space in regard to the video acquisition process. CVD2014 contains a total of 234 videos that are recorded using 78 different cameras. Moreover, this database contains the observer-specific quality evaluation scores rather than only providing mean opinion scores. We have also collected open-ended quality descriptions that are provided by the observers. These descriptions were used to define the quality dimensions for the videos in CVD2014. The dimensions included sharpness, graininess, color balance, darkness, and jerkiness. At the end of this paper, a performance study of image and video quality algorithms for predicting the subjective video quality is reported. For this performance study, we proposed a new performance measure that accounts for observer variance. The performance study revealed that there is room for improvement regarding the video quality assessment algorithms. The CVD2014 video database has been made publicly available for the research community. All video sequences and corresponding subjective ratings can be obtained from the CVD2014 project page (http://www.helsinki.fi/psychology/groups/visualcognition/).
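For context, the usual way such a performance study compares algorithm outputs with subjective scores is rank and linear correlation; the observer-variance-aware measure proposed in the paper is not reproduced here. A generic sketch with invented scores:

```python
# Comparing an objective quality metric against subjective opinion scores.
# The MOS and prediction values below are invented for illustration.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos = np.array([62.1, 45.3, 70.8, 30.2, 55.0])     # mean opinion scores
pred = np.array([0.61, 0.48, 0.74, 0.35, 0.52])    # algorithm outputs

srocc, _ = spearmanr(pred, mos)                    # monotonic agreement
plcc, _ = pearsonr(pred, mos)                      # linear agreement
print(f"SROCC={srocc:.3f} PLCC={plcc:.3f}")
```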
Amaratunga, Thelina; Dobranowski, Julian
2016-09-01
Preventable yet clinically significant rates of medical error remain systemic, while health care spending is at a historic high. Industry-based quality improvement (QI) methodologies show potential for utility in health care and radiology because they use an empirical approach to reduce variability and improve workflow. The aim of this review was to systematically assess the literature with regard to the use and efficacy of Lean and Six Sigma (the most popular of the industrial QI methodologies) within radiology. MEDLINE, the Allied & Complementary Medicine Database, Embase Classic + Embase, Health and Psychosocial Instruments, and the Ovid HealthStar database, alongside the Cochrane Library databases, were searched in June 2015. Empirical studies in peer-reviewed journals were included if they assessed the use of Lean, Six Sigma, or Lean Six Sigma with regard to their ability to improve a variety of quality metrics in a radiology-centered clinical setting. Of the 278 articles returned, 23 studies were suitable for inclusion. Of these, 10 assessed Six Sigma, 7 assessed Lean, and 6 assessed Lean Six Sigma. The diverse range of measured outcomes can be organized into 7 common aims: cost savings, reducing appointment wait time, reducing in-department wait time, increasing patient volume, reducing cycle time, reducing defects, and increasing staff and patient safety and satisfaction. All of the included studies demonstrated improvements across a variety of outcomes. However, there were high rates of systematic bias and imprecision as per the Grading of Recommendations Assessment, Development and Evaluation guidelines. Lean and Six Sigma QI methodologies have the potential to reduce error and costs and improve quality within radiology. However, there is a pressing need to conduct high-quality studies in order to realize the true potential of these QI methodologies in health care and radiology. Recommendations on how to improve the quality of the literature are proposed. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Unlicensed pharmaceutical preparations for clinical patient care: Ensuring safety.
de Wilde, Sofieke; de Jong, Maria G H; Le Brun, Paul P H; Guchelaar, Henk-Jan; Schimmel, Kirsten J M
2018-01-01
Most medicinal products dispensed to patients have marketing authorization (MA) to ensure high quality of the product, safety, and efficacy. However, in daily practice, to treat patients adequately, there is a medical need for drugs that do not hold MA. To meet this medical need, medicinal products are used in clinical care without MA (unlicensed), such as products prepared by (local) pharmacies: the pharmaceutical preparations. Three types of pharmaceutical preparations are distinguished: (i) reconstitution in excess of summary of product characteristics; (ii) adaptation of a licensed medicinal product (outside its official labeling); (iii) medicinal products from an active pharmaceutical ingredient. Although unlicensed, patients may expect the same quality for these unlicensed pharmaceutical preparations as for the licensed medicinal products. To assure this quality, a proper risk-benefit assessment and proper documentation in (centralized) patient registries and linking to a national pharmacovigilance database should be in place. Based on a risk assessment matrix, requirements for quality assurance can be determined, which has impact on the level of documentation of a pharmaceutical preparation. In this paper, the approach for good documentation including quality assurance and benefit-risk assessment will be discussed and possibilities for patient registries are described to make these crucial preparations available for regular patient care. KEY POINTS Ensuring pharmaceutical quality and performing a proper benefit-risk assessment will guarantee safe use of pharmaceutical preparations. Good documentation of (ultra-)orphan treatments can be collected in centralized patient registries and should be combined with existing information in (inter)national databases and self-reflection of patients. Linking patient registries to a centralized database for adverse drug events is highly recommended as it increases safety control of the (ultra) orphan pharmaceutical preparations. Copyright © 2017 John Wiley & Sons, Ltd.
Sensory re-education after nerve injury of the upper limb: a systematic review.
Oud, Tanja; Beelen, Anita; Eijffinger, Elianne; Nollet, Frans
2007-06-01
To systematically review the available evidence for the effectiveness of sensory re-education to improve the sensibility of the hand in patients with a peripheral nerve injury of the upper limb. Studies were identified by an electronic search in the databases MEDLINE, Cumulative Index to Nursing & Allied Health Literature (CINAHL), EMBASE, the Cochrane Library, the Physiotherapy Evidence Database (PEDro), and the database of the Dutch National Institute of Allied Health Professions (Doconline) and by screening the reference lists of relevant articles. Two reviewers selected studies that met the following inclusion criteria: all designs except case reports, adults with impaired sensibility of the hand due to a peripheral nerve injury of the upper limb, and sensibility and functional sensibility as outcome measures. The methodological quality of the included studies was independently assessed by two reviewers. A best-evidence synthesis was performed, based on design, methodological quality and significant findings on outcome measures. Seven studies, with sample sizes ranging from 11 to 49, were included in the systematic review and appraised for content. Five of these studies were of poor methodological quality. One uncontrolled study (N = 13) was considered to be of sufficient methodological quality, and one randomized controlled trial (N = 49) was of high methodological quality. Best-evidence synthesis showed that there is limited evidence for the effectiveness of sensory re-education, provided by a statistically significant improvement in sensibility found in one high-quality randomized controlled trial. There is a need for further well-defined clinical trials to assess the effectiveness of sensory re-education of patients with impaired sensibility of the hand due to a peripheral nerve injury.
Compilation of historical water-quality data for selected springs in Texas, by ecoregion
Heitmuller, Franklin T.; Williams, Iona P.
2006-01-01
Springs are important hydrologic features in Texas. A database of about 2,000 historically documented springs and available spring-flow measurements previously has been compiled and published, but water-quality data remain scattered in published sources. This report by the U.S. Geological Survey, in cooperation with the Texas Parks and Wildlife Department, documents the compilation of data for 232 springs in Texas on the basis of a set of criteria and the development of a water-quality database for the selected springs. The selection of springs for compilation of historical water-quality data in Texas was made using existing digital and hard-copy data, responses to mailed surveys, selection criteria established by various stakeholders, geographic information systems, and digital database queries. Most springs were selected by computing the highest mean spring flows for each Texas level III ecoregion. A brief assessment of the water-quality data for springs in Texas shows that few data are available in the Arizona/New Mexico Mountains, High Plains, East Central Texas Plains, Western Gulf Coastal Plain, and South Central Plains ecoregions. Water-quality data are more abundant for the Chihuahuan Deserts, Edwards Plateau, and Texas Blackland Prairies ecoregions. Selected constituent concentrations in Texas springs, including silica, calcium, magnesium, sodium, potassium, strontium, sulfate, chloride, fluoride, nitrate (nitrogen), dissolved solids, and hardness (as calcium carbonate) are comparatively high in the Chihuahuan Deserts, Southwestern Tablelands, Central Great Plains, and Cross Timbers ecoregions, mostly as a result of subsurface geology. Comparatively low concentrations of selected constituents in Texas springs are associated with the Arizona/New Mexico Mountains, Southern Texas Plains, East Central Texas Plains, and South Central Plains ecoregions.
1998-01-01
sand and gravel outcrops - led to a database of hydraulic conductivities, porosities and kinetic parameters for each lithological facies present in...sedimentological methods. The resulting 2D high-resolution data sets represent a very detailed database of excellent quality. On the basis of one example...from an outcrop in southwest Germany the process of building up the database is explained and the results of modelling of transport kinetics in such
NASA Astrophysics Data System (ADS)
Gentry, Jeffery D.
2000-05-01
A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered, including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web-based client/server architectures are discussed in the context of composite material manufacturing.
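A minimal illustration of the idea, with hypothetical table and column names rather than those from the paper: process variables and quality measurements keyed by lot number, then joined for correlative analysis.

```python
# Process variables and QA measurements related through a lot-number key,
# sketched with sqlite3. Schema and values are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE process (lot_no TEXT PRIMARY KEY, cure_temp_c REAL, cure_time_min REAL);
CREATE TABLE qa      (lot_no TEXT REFERENCES process(lot_no), void_content_pct REAL);
INSERT INTO process VALUES ('L001', 177.0, 120), ('L002', 182.5, 110);
INSERT INTO qa      VALUES ('L001', 0.8), ('L002', 1.9);
""")

# Join relates a process variable to a quality measurement across lots.
for row in con.execute("""
        SELECT p.lot_no, p.cure_temp_c, q.void_content_pct
        FROM process p JOIN qa q ON q.lot_no = p.lot_no
        ORDER BY p.cure_temp_c"""):
    print(row)
```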
Salati, Michele; Pompili, Cecilia; Refai, Majed; Xiumè, Francesco; Sabbatini, Armando; Brunelli, Alessandro
2014-06-01
The aim of the present study was to verify whether the implementation of an electronic health record (EHR) in our thoracic surgery unit allows creation of a high-quality clinical database while saving time and costs. Before August 2011, multiple individuals compiled the on-paper documents/records and a single data manager inputted selected data into the database (traditional database, tDB). Since the adoption of an EHR in August 2011, multiple individuals have been responsible for compiling the EHR, which automatically generates a real-time database (EHR-based database, eDB), without the need for a data manager. During the initial period of implementation of the EHR, periodic meetings were held with all physicians involved in the use of the EHR in order to monitor and standardize the data registration process. Data quality of the first 100 anatomical lung resections recorded in the eDB was assessed by measuring the total number of missing values (MVs: existing but unreported values) and inaccurate values (wrong data) occurring in 95 core variables. The average MV of the eDB was compared with that of the same variables in the last 100 records registered in the tDB. A learning curve was constructed by plotting the number of MVs in the eDB and tDB with the patients arranged by date of registration. The tDB and eDB had similar MVs (0.74 vs 1, P = 0.13). The learning curve showed an initial phase including about 35 records, where the MV in the eDB was higher than that in the tDB (1.9 vs 0.74, P = 0.03), and a subsequent phase, where the MV was similar in the two databases (0.7 vs 0.74, P = 0.6). The inaccuracy rate across these two phases in the eDB was stable (0.5 vs 0.3, P = 0.3). Using the EHR saved an average of 9 min per patient, totalling 15 h saved for obtaining a dataset of 100 patients with respect to the tDB. The implementation of the EHR allowed streamlining of the process of clinical data recording. It saved time and human resource costs, without compromising the quality of data. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Timing of high-quality child care and cognitive, language, and preacademic development.
Li, Weilin; Farkas, George; Duncan, Greg J; Burchinal, Margaret R; Vandell, Deborah Lowe
2013-08-01
The effects of high- versus low-quality child care during 2 developmental periods (infant-toddlerhood and preschool) were examined using data from the National Institute of Child Health and Human Development Study of Early Child Care. Propensity score matching was used to account for differences in families who used different combinations of child care quality during the 2 developmental periods. Findings indicated that cognitive, language, and preacademic skills prior to school entry were highest among children who experienced high-quality care in both the infant-toddler and preschool periods, somewhat lower among children who experienced high-quality child care during only 1 of these periods, and lowest among children who experienced low-quality care during both periods. Irrespective of the care received during infancy-toddlerhood, high-quality preschool care was related to better language and preacademic outcomes at the end of the preschool period; high-quality infant-toddler care, irrespective of preschool care, was related to better memory skills at the end of the preschool period. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
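Propensity score matching itself is straightforward to sketch: model the probability of the "treatment" (here, receiving high-quality care) from covariates, then pair treated and control cases with similar scores. A toy sketch with invented covariates, not the NICHD study data:

```python
# Propensity score matching in miniature: estimate treatment probability
# from covariates, then 1:1 nearest-neighbour match on that score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                          # family covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # selection depends on X

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Match each treated case to the control with the closest score
# (no caliper or replacement restriction, for brevity).
t_idx = np.flatnonzero(treated == 1)
c_idx = np.flatnonzero(treated == 0)
matches = {i: c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))] for i in t_idx}
print(f"{len(matches)} treated cases matched to controls")
```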
Factors associated with high-quality/low-cost hospital performance.
Jiang, H Joanna; Friedman, Bernard; Begun, James W
2006-01-01
This study explores organizational and market characteristics associated with superior hospital performance in both quality and cost of care, using the Healthcare Cost and Utilization Project State Inpatient Databases for ten states in 1997 and 2001. After controlling for a variety of patient factors, we found that for-profit ownership, hospital competition, and the number of HMOs were positively associated with the likelihood of attaining high-quality/low-cost performance. Furthermore, we examined interactions between organizational and market characteristics and identified a number of significant interactions. For example, the positive likelihood associated with for-profit hospitals diminished in markets with high HMO penetration.
Study of Temporal Effects on Subjective Video Quality of Experience.
Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad
2017-11-01
HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.
Childs, Kevin L; Konganti, Kranti; Buell, C Robin
2012-01-01
Major feedstock sources for future biofuel production are likely to be high biomass producing plant species such as poplar, pine, switchgrass, sorghum and maize. One active area of research in these species is genome-enabled improvement of lignocellulosic biofuel feedstock quality and yield. To facilitate genomic-based investigations in these species, we developed the Biofuel Feedstock Genomic Resource (BFGR), a database and web-portal that provides high-quality, uniform and integrated functional annotation of gene and transcript assembly sequences from species of interest to lignocellulosic biofuel feedstock researchers. The BFGR includes sequence data from 54 species and permits researchers to view, analyze and obtain annotation at the gene, transcript, protein and genome level. Annotation of biochemical pathways permits the identification of key genes and transcripts central to the improvement of lignocellulosic properties in these species. The integrated nature of the BFGR in terms of annotation methods, orthologous/paralogous relationships and linkage to seven species with complete genome sequences allows comparative analyses for biofuel feedstock species with limited sequence resources. Database URL: http://bfgr.plantbiology.msu.edu.
Moran, Jean M; Feng, Mary; Benedetti, Lisa A; Marsh, Robin; Griffith, Kent A; Matuszak, Martha M; Hess, Michael; McMullen, Matthew; Fisher, Jennifer H; Nurushev, Teamour; Grubb, Margaret; Gardner, Stephen; Nielsen, Daniel; Jagsi, Reshma; Hayman, James A; Pierce, Lori J
A database in which patient data are compiled allows analytic opportunities for continuous improvements in treatment quality and comparative effectiveness research. We describe the development of a novel, web-based system that supports the collection of complex radiation treatment planning information from centers that use diverse techniques, software, and hardware for radiation oncology care in a statewide quality collaborative, the Michigan Radiation Oncology Quality Consortium (MROQC). The MROQC database seeks to enable assessment of physician- and patient-reported outcomes and quality improvement as a function of treatment planning and delivery techniques for breast and lung cancer patients. We created tools to collect anonymized data based on all plans. The MROQC system representing 24 institutions has been successfully deployed in the state of Michigan. Since 2012, dose-volume histogram and Digital Imaging and Communications in Medicine-radiation therapy plan data and information on simulation, planning, and delivery techniques have been collected. Audits indicated >90% accurate data submission and spurred refinements to data collection methodology. This model web-based system captures detailed, high-quality radiation therapy dosimetry data along with patient- and physician-reported outcomes and clinical data for a radiation therapy collaborative quality initiative. The collaborative nature of the project has been integral to its success. Our methodology can be applied to setting up analogous consortiums and databases. Copyright © 2016 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Pölkki, Tarja; Kanste, Outi; Kääriäinen, Maria; Elo, Satu; Kyngäs, Helvi
2014-02-01
To analyse systematic review articles published in the top 10 nursing journals to determine the quality of the methods employed within them. Systematic review is defined as a scientific research method that synthesises high-quality scientific knowledge on a given topic. The number of such reviews in nursing science has increased dramatically during recent years, but their methodological quality has not previously been assessed. A review of the literature using a narrative approach. Ranked impact factor scores for nursing journals were obtained from the Journal Citation Report database of the Institute of Scientific Information (ISI Web of Knowledge). All issues from the years 2009 and 2010 of the top 10 ranked journals were included. CINAHL and MEDLINE databases were searched to locate studies using the search terms 'systematic review' and 'systematic literature review'. A total of 39 eligible studies were identified. Their methodological quality was evaluated through the specific criteria of quality assessment, description of synthesis and strengths and weaknesses reported in the included studies. Most of the eligible systematic reviews included several different designs or types of quantitative study. The majority included a quality assessment, and a total of 17 different criteria were identified. The method of synthesis was mentioned in about half of the reviews, the most common being narrative synthesis. The weaknesses of reviews were discussed, while strengths were rarely highlighted. The methodological quality of the systematic reviews examined varied considerably, although they were all published in nursing journals with a high-impact factor. Despite the fact that systematic reviews are considered the most robust source of research evidence, they vary in methodological quality. This point is important to consider in clinical practice when applying the results to patient care. © 2013 Blackwell Publishing Ltd.
The Facility Registry System (FRS) is a centrally managed database that identifies facilities, sites or places subject to environmental regulations or of environmental interest. FRS creates high-quality, accurate, and authoritative facility identification records through rigorous...
Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient.
Brossier, David; El Taani, Redha; Sauthier, Michael; Roumeliotis, Nadia; Emeriaud, Guillaume; Jouvet, Philippe
2018-04-01
Our objective was to construct a prospective high-quality and high-frequency database combining patient therapeutics and clinical variables in real time, automatically fed by the information system and network architecture available through fully electronic charting in our PICU. The purpose of this article is to describe the data acquisition process from bedside to the research electronic database. Descriptive report and analysis of a prospective database. A 24-bed PICU, medical ICU, surgical ICU, and cardiac ICU in a tertiary care free-standing maternal child health center in Canada. All patients less than 18 years old were included at admission to the PICU. None. Between May 21, 2015, and December 31, 2016, 1,386 consecutive PICU stays from 1,194 patients were recorded in the database. Data were prospectively collected from admission to discharge, every 5 seconds from monitors and every 30 seconds from mechanical ventilators and infusion pumps. These data were linked to the patient's electronic medical record. The database total volume was 241 GB. The patients' median age was 2.0 years (interquartile range, 0.0-9.0). Data were available for all mechanically ventilated patients (n = 511; recorded duration, 77,678 hr), and respiratory failure was the most frequent reason for admission (n = 360). The complete pharmacologic profile was synched to database for all PICU stays. Following this implementation, a validation phase is in process and several research projects are ongoing using this high-fidelity database. Using the existing bedside information system and network architecture of our PICU, we implemented an ongoing high-fidelity prospectively collected electronic database, preventing the continuous loss of scientific information. This offers the opportunity to develop research on clinical decision support systems and computational models of cardiorespiratory physiology for example.
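Schematically, such a pipeline is a timed acquisition loop writing device samples into a store keyed by patient and timestamp. A highly simplified sketch; the device read below is a placeholder, since a real PICU feed arrives over the clinical network rather than a function call:

```python
# Fixed-interval bedside capture: poll a monitor every 5 s and persist
# the samples. Table layout and the read_monitor() stub are assumptions.
import sqlite3
import time
from datetime import datetime, timezone

def read_monitor(patient_id):
    """Placeholder for a device-network read; returns one vital sample."""
    return {"hr": 112, "spo2": 98}

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE IF NOT EXISTS vitals
               (patient_id TEXT, ts TEXT, name TEXT, value REAL)""")

for _ in range(3):                        # three 5-second cycles for the demo
    ts = datetime.now(timezone.utc).isoformat()
    for name, value in read_monitor("P-0001").items():
        con.execute("INSERT INTO vitals VALUES (?, ?, ?, ?)",
                    ("P-0001", ts, name, value))
    con.commit()                          # a real collector runs continuously
    time.sleep(5)
print(con.execute("SELECT COUNT(*) FROM vitals").fetchone()[0], "samples stored")
```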
A review of data quality assessment methods for public health information systems.
Chen, Hong; Hailey, David; Wang, Ning; Yu, Ping
2014-05-14
High quality data and effective data quality assessment are required for accurately evaluating the impact of public health interventions and measuring public health outcomes. Data, data use, and data collection process, as the three dimensions of data quality, all need to be assessed for overall data quality assessment. We reviewed current data quality assessment methods. The relevant study was identified in major databases and well-known institutional websites. We found the dimension of data was most frequently assessed. Completeness, accuracy, and timeliness were the three most-used attributes among a total of 49 attributes of data quality. The major quantitative assessment methods were descriptive surveys and data audits, whereas the common qualitative assessment methods were interview and documentation review. The limitations of the reviewed studies included inattentiveness to data use and data collection process, inconsistency in the definition of attributes of data quality, failure to address data users' concerns and a lack of systematic procedures in data quality assessment. This review study is limited by the coverage of the databases and the breadth of public health information systems. Further research could develop consistent data quality definitions and attributes. More research efforts should be given to assess the quality of data use and the quality of data collection process.
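The three leading attributes the review identifies are easy to operationalize on a record set. A toy sketch with invented surveillance fields and an assumed 7-day timeliness rule:

```python
# Completeness, accuracy, and timeliness over toy surveillance records.
# Field names, the audit "gold" values, and the 7-day rule are assumptions.
import pandas as pd

records = pd.DataFrame({
    "case_id":   [1, 2, 3, 4],
    "diagnosis": ["measles", None, "measles", "rubella"],
    "onset":     pd.to_datetime(["2014-03-01", "2014-03-02", None, "2014-03-05"]),
    "reported":  pd.to_datetime(["2014-03-04", "2014-03-20", "2014-03-09", "2014-03-06"]),
})
gold = ["measles", "measles", "mumps", "rubella"]   # audited true diagnoses

completeness = 1 - records[["diagnosis", "onset"]].isna().mean().mean()
accuracy = (records["diagnosis"] == gold).mean()    # exact match vs. audit
timeliness = ((records["reported"] - records["onset"]).dt.days <= 7).mean()
print(f"completeness={completeness:.2f} accuracy={accuracy:.2f} "
      f"timeliness={timeliness:.2f}")
```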
CEBS: a comprehensive annotated database of toxicological data
Lea, Isabel A.; Gong, Hui; Paleja, Anand; Rashid, Asif; Fostel, Jennifer
2017-01-01
The Chemical Effects in Biological Systems database (CEBS) is a comprehensive and unique toxicology resource that compiles individual and summary animal data from the National Toxicology Program (NTP) testing program and other depositors into a single electronic repository. CEBS has undergone significant updates in recent years and currently contains over 11 000 test articles (exposure agents) and over 8000 studies including all available NTP carcinogenicity, short-term toxicity and genetic toxicity studies. Study data provided to CEBS are manually curated, accessioned and subject to quality assurance review prior to release to ensure high quality. The CEBS database has two main components: data collection and data delivery. To accommodate the breadth of data produced by NTP, the CEBS data collection component is an integrated relational design that allows the flexibility to capture any type of electronic data (to date). The data delivery component of the database comprises a series of dedicated user interface tables containing pre-processed data that support each component of the user interface. The user interface has been updated to include a series of nine Guided Search tools that allow access to NTP summary and conclusion data and larger non-NTP datasets. The CEBS database can be accessed online at http://www.niehs.nih.gov/research/resources/databases/cebs/. PMID:27899660
A comprehensive database of quality-rated fossil ages for Sahul's Quaternary vertebrates.
Rodríguez-Rey, Marta; Herrando-Pérez, Salvador; Brook, Barry W; Saltré, Frédérik; Alroy, John; Beeton, Nicholas; Bird, Michael I; Cooper, Alan; Gillespie, Richard; Jacobs, Zenobia; Johnson, Christopher N; Miller, Gifford H; Prideaux, Gavin J; Roberts, Richard G; Turney, Chris S M; Bradshaw, Corey J A
2016-07-19
The study of palaeo-chronologies using fossil data provides evidence for past ecological and evolutionary processes, and is therefore useful for predicting patterns and impacts of future environmental change. However, the robustness of inferences made from fossil ages relies heavily on both the quantity and quality of available data. We compiled Quaternary non-human vertebrate fossil ages from Sahul published up to 2013. This, the FosSahul database, includes 9,302 fossil records from 363 deposits, for a total of 478 species within 215 genera, of which 27 are from extinct and extant megafaunal species (2,559 records). We also provide a rating of reliability of individual absolute age based on the dating protocols and association between the dated materials and the fossil remains. Our proposed rating system identified 2,422 records with high-quality ages (i.e., a reduction of 74%). There are many applications of the database, including disentangling the confounding influences of hypothetical extinction drivers, better spatial distribution estimates of species relative to palaeo-climates, and potentially identifying new areas for fossil discovery.
... compound (VOC) emissions, and more. U.S. Department of Agriculture (USDA) Water Quality Information Center Databases : online databases that may be related to water and agriculture. National Park Service (NPS) Water Quality Program : NPS ...
Nørgaard, M; Johnsen, S P
2016-02-01
In Denmark, the need for monitoring of clinical quality and patient safety with feedback to the clinical, administrative and political systems has resulted in the establishment of a network of more than 60 publicly financed nationwide clinical quality databases. Although primarily devoted to monitoring and improving quality of care, the potential of these databases as data sources in clinical research is increasingly being recognized. In this review, we describe these databases focusing on their use as data sources for clinical research, including their strengths and weaknesses as well as future concerns and opportunities. The research potential of the clinical quality databases is substantial but has so far only been explored to a limited extent. Efforts related to technical, legal and financial challenges are needed in order to take full advantage of this potential. © 2016 The Association for the Publication of the Journal of Internal Medicine.
Tacutu, Robi; Craig, Thomas; Budovsky, Arie; Wuttke, Daniel; Lehmann, Gilad; Taranukha, Dmitri; Costa, Joana; Fraifeld, Vadim E.; de Magalhães, João Pedro
2013-01-01
The Human Ageing Genomic Resources (HAGR, http://genomics.senescence.info) is a freely available online collection of research databases and tools for the biology and genetics of ageing. HAGR features now several databases with high-quality manually curated data: (i) GenAge, a database of genes associated with ageing in humans and model organisms; (ii) AnAge, an extensive collection of longevity records and complementary traits for >4000 vertebrate species; and (iii) GenDR, a newly incorporated database, containing both gene mutations that interfere with dietary restriction-mediated lifespan extension and consistent gene expression changes induced by dietary restriction. Since its creation about 10 years ago, major efforts have been undertaken to maintain the quality of data in HAGR, while further continuing to develop, improve and extend it. This article briefly describes the content of HAGR and details the major updates since its previous publications, in terms of both structure and content. The completely redesigned interface, more intuitive and more integrative of HAGR resources, is also presented. Altogether, we hope that through its improvements, the current version of HAGR will continue to provide users with the most comprehensive and accessible resources available today in the field of biogerontology. PMID:23193293
Current databases on biological variation: pros, cons and progress.
Ricós, C; Alvarez, V; Cava, F; García-Lario, J V; Hernández, A; Jiménez, C V; Minchinela, J; Perich, C; Simón, M
1999-11-01
A database with reliable information to derive definitive analytical quality specifications for a large number of clinical laboratory tests was prepared in this work. This was achieved by comparing and correlating descriptive data and relevant observations with the biological variation information, an approach that had not been used in the previous efforts of this type. The material compiled in the database was obtained from published articles referenced in BIOS, CURRENT CONTENTS, EMBASE and MEDLINE using "biological variation & laboratory medicine" as key words, as well as books and doctoral theses provided by their authors. The database covers 316 quantities and reviews 191 articles, fewer than 10 of which had to be rejected. The within- and between-subject coefficients of variation and the subsequent desirable quality specifications for precision, bias and total error for all the quantities accepted are presented. Sex-related stratification of results was justified for only four quantities and, in these cases, quality specifications were derived from the group with lower within-subject variation. For certain quantities, biological variation in pathological states was higher than in the healthy state. In these cases, quality specifications were derived only from the healthy population (most stringent). Several quantities (particularly hormones) have been treated in very few articles and the results found are highly discrepant. Therefore, professionals in laboratory medicine should be strongly encouraged to study the quantities for which results are discrepant, the 90 quantities described in only one paper and the numerous quantities that have not been the subject of study.
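The quality specifications such a database supports are conventionally derived from the within-subject (CVi) and between-subject (CVg) variation with a small set of formulas. The sketch below uses the widely cited "desirable" tier; treat the constants as the conventional choices in the biological-variation literature rather than values quoted from this paper:

```python
# Desirable analytical quality specifications from biological variation:
# imprecision from CVi, bias from CVi and CVg, total error combining both.
import math

def desirable_specs(cv_i, cv_g):
    """cv_i / cv_g: within- / between-subject CVs (%); returns specs (%)."""
    precision = 0.5 * cv_i
    bias = 0.25 * math.sqrt(cv_i ** 2 + cv_g ** 2)
    total_error = 1.65 * precision + bias     # 1.65: one-sided 95% coverage
    return precision, bias, total_error

# Example: serum cholesterol, with roughly CVi ~ 6% and CVg ~ 15%.
p, b, te = desirable_specs(6.0, 15.0)
print(f"CVa<={p:.1f}%  bias<={b:.1f}%  TE<={te:.1f}%")
```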
Sousa Nanji, Liliana; Torres Cardoso, André; Costa, João; Vaz-Carneiro, António
2015-01-01
Impairment of the upper limbs is quite frequent after stroke, making rehabilitation an essential step towards clinical recovery and patient empowerment. This review aimed to synthesize existing evidence regarding interventions for upper limb function improvement after stroke and to assess which of them bring some benefit. The Cochrane Database of Systematic Reviews, the Database of Reviews of Effects and the PROSPERO database were searched until June 2013, and 40 reviews have been included, covering 503 studies, 18,078 participants and 18 interventions, as well as different doses and settings of interventions. The main results were: 1- Information currently available is insufficient to assess the effectiveness of each intervention and to enable comparison of interventions; 2- Transcranial direct current stimulation brings no benefit for outcomes of activities of daily living; 3- Moderate-quality evidence showed a beneficial effect of constraint-induced movement therapy, mental practice, mirror therapy, interventions for sensory impairment, virtual reality and repetitive task practice; 4- Unilateral arm training may be more effective than bilateral arm training; 5- Moderate-quality evidence showed a beneficial effect of robotics on measures of impairment and ADLs; 6- There is no evidence of benefit or harm for techniques such as repetitive transcranial magnetic stimulation, music therapy, pharmacological interventions, electrical stimulation and other therapies. Currently available evidence is insufficient and of low quality, not supporting clear clinical decisions. High-quality studies are still needed.
Gómez-García, Francisco; Ruano, Juan; Gay-Mimbrera, Jesus; Aguilar-Luque, Macarena; Sanz-Cabanillas, Juan Luis; Alcalde-Mellado, Patricia; Maestre-López, Beatriz; Carmona-Fernández, Pedro Jesús; González-Padilla, Marcelino; García-Nieto, Antonio Vélez; Isla-Tejera, Beatriz
2017-12-01
No gold standard exists to assess methodological quality of systematic reviews (SRs). Although Assessing the Methodological Quality of Systematic Reviews (AMSTAR) is widely accepted for analyzing quality, the ROBIS instrument has recently been developed. This study aimed to compare the capacity of both instruments to capture the quality of SRs concerning psoriasis interventions. Systematic literature searches were undertaken on relevant databases. For each review, methodological quality and bias risk were evaluated using the AMSTAR and ROBIS tools. Descriptive and principal component analyses were conducted to describe similarities and discrepancies between both assessment tools. We classified 139 intervention SRs as displaying high/moderate/low methodological quality and as high/low risk of bias. A high risk of bias was detected for most SRs classified as displaying high or moderate methodological quality by AMSTAR. When comparing ROBIS result profiles, responses to domain 4 signaling questions showed the greatest differences between bias risk assessments, whereas domain 2 items showed the least. When considering SRs published about psoriasis, methodological quality remains suboptimal, and the risk of bias is elevated, even for SRs exhibiting high methodological quality. Furthermore, the AMSTAR and ROBIS tools may be considered as complementary when conducting quality assessment of SRs. Copyright © 2017 Elsevier Inc. All rights reserved.
High-integrity databases for helicopter operations
NASA Astrophysics Data System (ADS)
Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg
2009-05-01
Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation time, operations in low-level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper gives a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.
[Quality management and participation into clinical database].
Okubo, Suguru; Miyata, Hiroaki; Tomotaki, Ai; Motomura, Noboru; Murakami, Arata; Ono, Minoru; Iwanaka, Tadashi
2013-07-01
Quality management is necessary for establishing a useful clinical database in cooperation with healthcare professionals and facilities. The main management activities are 1) progress management of data entry, 2) liaison with database participants (healthcare professionals), and 3) modification of data collection forms. In addition, healthcare facilities should consider ethical issues and information security before joining clinical databases. Database participants should check with ethical review boards and consultation services for patients.
The NLCD-MODIS land cover-albedo database integrates high-quality MODIS albedo observations with areas of homogeneous land cover from NLCD. The spatial resolution (pixel size) of the database is 480 m × 480 m, aligned to the standard USGS Albers Equal-Area projection. The spatial extent of the database is the continental United States. This dataset is associated with the following publication: Wickham, J., C.A. Barnes, and T. Wade. Combining NLCD and MODIS to Create a Land Cover-Albedo Dataset for the Continental United States. REMOTE SENSING OF ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 170(0): 143-153, (2015).
NASA Technical Reports Server (NTRS)
Holben, Brent; Slutsker, Ilya; Giles, David; Eck, Thomas; Smirnov, Alexander; Sinyuk, Aliaksandr; Schafer, Joel; Sorokin, Mikhail; Rodriguez, Jon; Kraft, Jason;
2016-01-01
Aerosols are highly variable in space, time and properties. Global assessment from satellite platforms and model predictions rely on validation from AERONET, a highly accurate ground-based network. Ver. 3 represents a significant improvement in accuracy and quality.
An evaluation of information retrieval accuracy with simulated OCR output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, W.B.; Harding, S.M.; Taghva, K.
Optical Character Recognition (OCR) is a critical part of many text-based applications. Although some commercial systems use the output from OCR devices to index documents without editing, there is very little quantitative data on the impact of OCR errors on the accuracy of a text retrieval system. Because of the difficulty of constructing test collections to obtain this data, we have carried out an evaluation using simulated OCR output on a variety of databases. The results show that high quality OCR devices have little effect on the accuracy of retrieval, but low quality devices used with databases of short documents can result in significant degradation.
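As a rough illustration of the simulation idea (the authors' actual degradation model is not described here and may differ), one can inject random character substitutions at a chosen error rate before indexing:

```python
import random
import string

def simulate_ocr(text, error_rate=0.05, seed=0):
    """Corrupt alphanumeric characters at the given rate to mimic OCR
    substitution errors (a simplified stand-in for a real OCR error model)."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.isalnum() and rng.random() < error_rate:
            out.append(rng.choice(string.ascii_lowercase))  # substitution error
        else:
            out.append(ch)
    return "".join(out)

print(simulate_ocr("retrieval accuracy with simulated OCR output"))
```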
EQUIP: A European Survey of Quality Criteria for the Evaluation of Databases.
ERIC Educational Resources Information Center
Wilson, T. D.
1998-01-01
Reports on two stages of an investigation into the perceived quality of online databases. Presents data from 989 questionnaires from 600 database users in 12 European and Scandinavian countries and results of a test of the SERVQUAL methodology for identifying user expectations about database services. Lists statements used in the SERVQUAL survey.…
Waiho, Khor; Fazhan, Hanafiah; Shahreza, Md Sheriff; Moh, Julia Hwei Zhong; Noorbaiduri, Shaibani; Wong, Li Lian; Sinnasamy, Saranya
2017-01-01
Adequate genetic information is essential for sustainable crustacean fisheries and aquaculture management. The commercially important orange mud crab, Scylla olivacea, is prevalent in the Southeast Asia region and is highly sought after. Although it is a suitable aquaculture candidate, full domestication of this species is hampered by the lack of knowledge about the sexual maturation process and the molecular mechanisms behind it, especially in males. No whole-genome data have yet been reported for S. olivacea, and the transcriptome data published previously on this species focus primarily on females and the role of the central nervous system in reproductive development. De novo transcriptome sequencing of the testes of S. olivacea at immature, maturing and mature stages was performed. A total of approximately 144 million high-quality reads were generated and de novo assembled into 160,569 transcripts with a total length of 142.2 Mb. Approximately 15-23% of the total assembled transcripts were annotated against public protein sequence databases (i.e. the UniProt, InterPro, Pfam and Drosophila melanogaster protein databases) and categorised with Gene Ontology (GO) terms. A total of 156,181 high-quality single-nucleotide polymorphisms (SNPs) were mined from the transcriptome data of the present study. Transcriptome comparison among the testes of different maturation stages revealed one gene (a beta crystallin-like gene) with the most significant differential expression, up-regulated in the immature stage and down-regulated in the maturing and mature stages. This was further validated by qRT-PCR. In conclusion, a comprehensive transcriptome of the testis of the orange mud crab across maturation stages was obtained. This report provides an invaluable resource for enhancing our understanding of this species' genome structure and biology, as expressed and controlled by their gonads. PMID:28135340
The Influence of Hospital Market Competition on Patient Mortality and Total Performance Score.
Haley, Donald Robert; Zhao, Mei; Spaulding, Aaron; Hamadi, Hanadi; Xu, Jing; Yeomans, Katelyn
2016-01-01
The Affordable Care Act of 2010's launch of Medicare Value-Based Purchasing has become the platform for payment reform. It is a mechanism by which buyers of health care services hold providers accountable for high-quality and cost-effective care. The objective of the study was to examine the relationship between quality of hospital care and hospital competition using the quality-quantity behavioral model of hospital behavior. The quality-quantity behavioral model of hospital behavior was used as the conceptual framework for this study. Data from the American Hospital Association database, the Hospital Compare database, and the Area Health Resources Files database were used. Multivariate regression analysis was used to examine the effect of hospital competition on patient mortality. Hospital market competition was significantly and negatively related to the 3 mortality rates. Consistent with the literature, hospitals located in more competitive markets had lower mortality rates for patients with acute myocardial infarction, heart failure, and pneumonia. The results suggest that hospitals in competitive markets may compete more readily on quality of care and patient outcomes. The findings are important because policies that seek to control and limit a competitive hospital environment, such as Certificate of Need legislation, may negatively affect patient mortality rates. Therefore, policymakers should encourage the development of policies that facilitate a more competitive and transparent health care marketplace to potentially and significantly improve patient mortality.
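A hedged sketch of the kind of multivariate model the study describes; the variable names and data below are hypothetical, not the authors' dataset:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical hospital-level data; hhi is a market-concentration index
# (higher HHI = less competition), ami_mortality a 30-day rate in percent.
df = pd.DataFrame({
    "ami_mortality": [14.2, 15.8, 13.1, 16.4, 15.0],
    "hhi": [0.12, 0.55, 0.08, 0.71, 0.33],
    "beds": [420, 180, 610, 95, 240],
    "teaching": [1, 0, 1, 0, 0],
})

# A positive coefficient on hhi would mirror the paper's finding that
# more competition (lower HHI) is associated with lower mortality.
model = smf.ols("ami_mortality ~ hhi + beds + teaching", data=df).fit()
print(model.params)
```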
Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N
2016-08-05
ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app.
Five Librarians Talk about Quality Control and the OCLC Database.
ERIC Educational Resources Information Center
Helge, Brian; And Others
1987-01-01
Five librarians considered authorities on quality cataloging in the OCLC Online Union Catalog were interviewed to obtain their views on the current level of quality control in the OCLC database, the responsibilities of OCLC and individual libraries in improving the quality of records, and the consequences of quality control problems. (CLB)
Vanhoorne, Bart; Decock, Wim; Vranken, Sofie; Lanssens, Thomas; Dekeyzer, Stefanie; Verfaille, Kevin; Horton, Tammy; Kroh, Andreas; Hernandez, Francisco; Mees, Jan
2018-01-01
The World Register of Marine Species (WoRMS) celebrated its 10th anniversary in 2017. WoRMS is a unique database: there is no comparable global database for marine species. It is driven by a large, global expert community, supported by a Data Management Team, and can rely on a permanent host institute dedicated to keeping WoRMS online. Over the past ten years, the content of WoRMS has grown steadily, and the system currently contains more than 242,000 accepted marine species. WoRMS has not yet reached completeness: approximately 2,000 newly described species per year are added, and editors also enter the remaining missing older names, both accepted and unaccepted, an effort amounting to approximately 20,000 taxon name additions per year. WoRMS is used extensively, through different channels, indicating that it is recognized as a high-quality database on marine species information. It is updated on a daily basis by its Editorial Board, which currently consists of 490 taxonomic and thematic experts located around the world. Owing to its unique qualities, WoRMS has become a partner in many large-scale initiatives including OBIS, LifeWatch and the Catalogue of Life, where it is recognized as a high-quality and reliable source of information for marine taxonomy. PMID:29624577
Blind prediction of natural video quality.
Saad, Michele A; Bovik, Alan C; Charrier, Christophe
2014-03-01
We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
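To give a flavor of DCT-domain spatio-temporal statistics (the actual Video BLIINDS features are more elaborate than this sketch), one can pool block-DCT coefficients of a temporal frame difference and summarize their distribution:

```python
import numpy as np
from scipy.fftpack import dct

def frame_diff_dct_kurtosis(frame_a, frame_b, block=8):
    """Pooled kurtosis of 2-D block-DCT coefficients of a frame difference;
    an illustrative spatio-temporal statistic, not the published feature set."""
    diff = frame_b.astype(float) - frame_a.astype(float)
    h, w = diff.shape
    coeffs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = diff[i:i + block, j:j + block]
            c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
            coeffs.append(c.ravel()[1:])  # drop the DC coefficient
    coeffs = np.concatenate(coeffs)
    m, s = coeffs.mean(), coeffs.std()
    return np.mean((coeffs - m) ** 4) / (s ** 4 + 1e-12)

a = np.random.rand(64, 64)            # stand-in consecutive frames
b = a + 0.05 * np.random.rand(64, 64)
print(frame_diff_dct_kurtosis(a, b))
```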
A database for assessment of effect of lossy compression on digital mammograms
NASA Astrophysics Data System (ADS)
Wang, Jiheng; Sahiner, Berkman; Petrick, Nicholas; Pezeshk, Aria
2018-03-01
With widespread use of screening digital mammography, efficient storage of the vast amounts of data has become a challenge. While lossless image compression causes no risk to the interpretation of the data, it does not allow for high compression rates. Lossy compression and the associated higher compression ratios are therefore more desirable. The U.S. Food and Drug Administration (FDA) currently interprets the Mammography Quality Standards Act as prohibiting lossy compression of digital mammograms for primary image interpretation, image retention, or transfer to the patient or her designated recipient. Previous work has used reader studies to determine proper usage criteria for evaluating lossy image compression in mammography, and utilized different measures and metrics to characterize medical image quality. The drawback of such studies is that they rely on a threshold on compression ratio as the fundamental criterion for preserving the quality of images. However, compression ratio is not a useful indicator of image quality. On the other hand, many objective image quality metrics (IQMs) have shown excellent performance for natural image content for consumer electronic applications. In this paper, we create a new synthetic mammogram database with several unique features. We compare and characterize the impact of image compression on several clinically relevant image attributes such as perceived contrast and mass appearance for different kinds of masses. We plan to use this database to develop a new objective IQM for measuring the quality of compressed mammographic images to help determine the allowed maximum compression for different kinds of breasts and masses in terms of visual and diagnostic quality.
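For reference, two widely used objective IQMs of the kind the paper alludes to can be computed with scikit-image; the arrays below are random stand-ins, not mammograms:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
original = rng.random((256, 256))
# Additive noise stands in for lossy-compression artifacts in this sketch.
compressed = original + rng.normal(0, 0.02, original.shape)

print(peak_signal_noise_ratio(original, compressed, data_range=1.0))
print(structural_similarity(original, compressed, data_range=1.0))
```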
The European general thoracic surgery database project.
Falcoz, Pierre Emmanuel; Brunelli, Alessandro
2014-05-01
The European Society of Thoracic Surgeons (ESTS) Database is a free registry created by the ESTS in 2001. The current online version was launched in 2007. It currently runs on a Dendrite platform with extensive data security and frequent backups. The main features are a specialty-specific, procedure-specific, prospectively maintained, periodically audited, web-based electronic database, designed for quality control and performance monitoring, which allows for the collection of all general thoracic procedures. Data collection is the "backbone" of the ESTS database. It includes many risk factors, processes of care and outcomes, which are specially designed for quality control and performance audit. Users can download and export their own data and use them for internal analyses and quality control audits. The ESTS database represents the gold standard of clinical data collection for European general thoracic surgery. Over the past years, the ESTS database has achieved many accomplishments. In particular, the database hit two major milestones: it now includes more than 235 participating centers and 70,000 surgical procedures. The ESTS database is a snapshot of surgical practice that aims at improving patient care. In other words, data capture should become integral to routine patient care, with the final objective of improving quality of care within Europe.
Epstein, Richard H; Dexter, Franklin
2018-07-01
For this special article, we reviewed the computer code used to extract the data and the text of all 47 studies published between January 2006 and August 2017 using anesthesia information management system (AIMS) data from Thomas Jefferson University Hospital (TJUH). Data from this institution were used in the largest number (P = .0007) of papers describing the use of AIMS published in this time frame. The AIMS was replaced in April 2017, making this sample finite and complete. The objective of the current article was to identify factors that made TJUH successful in publishing anesthesia informatics studies. We examined the structured query language used for each study to examine the extent to which databases outside of the AIMS were used. We examined data quality from the perspectives of completeness, correctness, concordance, plausibility, and currency. Our results were that most could not have been completed without external database sources (36/47, 76.6%; P = .0003 compared with 50%). The operating room management system was linked to the AIMS and was used significantly more frequently (26/36, 72%) than other external sources. Access to these external data sources was provided, allowing exploration of data quality. The TJUH AIMS used high-resolution timestamps (to the nearest 3 milliseconds) and created audit tables to track changes to clinical documentation. Automatically recorded data were captured at 1-minute intervals and were not editable; data cleaning occurred during analysis. Few paired events with an expected order were out of sequence. Although most data elements were of high quality, there were notable exceptions, such as frequent missing values for estimated blood loss, height, and weight. Some values were duplicated with different units, and others were stored in varying locations. Our conclusions are that linking the TJUH AIMS to the operating room management system was a critical step in enabling publication of multiple studies using AIMS data. Access to this and other external databases by analysts with a high degree of anesthesia domain knowledge was necessary to be able to assess the quality of the AIMS data and ensure that the data pulled for studies were appropriate. For anesthesia departments seeking to increase their academic productivity using their AIMS as a data source, our experiences may provide helpful guidance.
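One of the described quality checks, verifying that paired events occur in the expected order, can be sketched as follows; the table and column names are hypothetical, not the TJUH schema:

```python
import pandas as pd

# Hypothetical paired events: anesthesia induction should precede incision.
cases = pd.DataFrame({
    "case_id": [1, 2, 3],
    "induction": pd.to_datetime(
        ["2016-05-01 07:31", "2016-05-01 08:02", "2016-05-01 09:15"]),
    "incision": pd.to_datetime(
        ["2016-05-01 07:55", "2016-05-01 07:58", "2016-05-01 09:40"]),
})

# Flag cases whose events are out of the expected sequence.
out_of_sequence = cases[cases["incision"] <= cases["induction"]]
print(out_of_sequence)  # case 2 would flag a documentation error
```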
Analysis of high accuracy, quantitative proteomics data in the MaxQB database.
Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias
2012-03-01
MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although contamination with low quality identifications has been addressed, for example, by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins had negative correlations. This information can be used to pinpoint false protein identifications, independently of peptide database scores. The information contained in MaxQB, including high resolution fragment spectra, is accessible to the community via a user-friendly web interface at http://www.biochem.mpg.de/maxqb.
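The reproducibility measure described, Spearman rank correlation between peptide intensity and detection probability, is computed directly with SciPy; the values below are illustrative, not MaxQB data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical peptides of one protein: median intensity across runs and
# the fraction of runs in which each peptide was detected.
intensity = np.array([2.3e6, 8.1e5, 5.5e7, 1.2e6, 9.4e5])
detection_prob = np.array([0.95, 0.40, 1.00, 0.60, 0.35])

rho, p = spearmanr(intensity, detection_prob)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A strongly negative rho across a protein's peptides could flag a
# questionable identification, per the approach described above.
```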
National Water Quality Standards Database (NWQSD)
The National Water Quality Standards Database (NWQSD) provides access to EPA and state water quality standards (WQS) information in text, tables, and maps. This data source was last updated in December 2007 and will no longer be updated.
NASA Astrophysics Data System (ADS)
Verdoodt, Ann; Baert, Geert; Van Ranst, Eric
2014-05-01
Central African soil resources are characterised by a large variability, ranging from stony, shallow or sandy soils with poor life-sustaining capabilities to highly weathered soils that recycle and support large amounts of biomass. Socio-economic drivers within this largely rural region foster inappropriate land use and management, threaten soil quality and ultimately culminate in declining soil productivity and increasing food insecurity. For the development of sustainable land use strategies targeting development planning and natural hazard mitigation, decision makers often rely on legacy soil maps and soil profile databases. Recent projects financed through development cooperation led to the design of soil information systems for Rwanda, D.R. Congo, and (ongoing) Burundi. A major challenge is to exploit these existing soil databases and convert them into soil inference systems through an optimal combination of digital soil mapping techniques, land evaluation tools, and biogeochemical models. This presentation aims at (1) highlighting some key characteristics of typical Central African soils, (2) assessing the positional, geographic and semantic quality of the soil information systems, and (3) revealing their potential impacts on the use of these datasets for thematic mapping of soil ecosystem services (e.g. organic carbon storage, pH buffering capacity). Soil map quality is assessed considering positional and semantic quality, as well as geographic completeness. Descriptive statistics, decision tree classification and linear regression techniques are used to mine the soil profile databases. Geo-matching as well as class-matching approaches are considered when developing thematic maps. Variability in inherent as well as dynamic soil properties within the soil taxonomic units is highlighted. It is hypothesized that within-unit variation in soil properties strongly affects the use and interpretation of thematic maps for ecosystem services mapping. Results will mainly be based on analyses done in Rwanda, but can be complemented with ongoing research results or prospects for Burundi.
Reduced reference image quality assessment via sub-image similarity based redundancy measurement
NASA Astrophysics Data System (ADS)
Mou, Xuanqin; Xue, Wufeng; Zhang, Lei
2012-03-01
Reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its consistency with human perception and its flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented, whose novelty lies in two aspects. Firstly, it measures image redundancy by calculating the so-called Sub-image Similarity (SIS), and image quality is measured by comparing the SIS between the reference image and the test image. Secondly, the SIS is computed from the ratios of Non-Shift Edges (NSE) between pairs of sub-images. Experiments on two IQA databases (the LIVE and CSIQ databases) show that by using only 6 features, the proposed metric works very well, with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.
Baxter, Siyan; Sanderson, Kristy; Venn, Alison J; Blizzard, C Leigh; Palmer, Andrew J
2014-01-01
To determine the relationship between return on investment (ROI) and quality of study methodology in workplace health promotion programs. Data were obtained through a systematic literature search of the National Health Service Economic Evaluation Database (NHS EED), the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment database (HTA), the Cost Effectiveness Analysis (CEA) Registry, EconLit, PubMed, Embase, Wiley, and Scopus. Included were articles written in English or German reporting cost(s) and benefit(s) of single or multicomponent health promotion programs in working adults. Return-to-work and workplace injury prevention studies were excluded. Methodological quality was graded using the British Medical Journal Economic Evaluation Working Party checklist. Economic outcomes were presented as ROI. ROI was calculated as ROI = (benefits - costs of program)/costs of program. Results were weighted by study size and combined using meta-analysis techniques. Sensitivity analysis was performed using two additional methodological quality checklists. The influences of quality score and important study characteristics on ROI were explored. Fifty-one studies (61 intervention arms) published between 1984 and 2012 included 261,901 participants and 122,242 controls from nine industry types across 12 countries. Methodological quality scores were highly correlated between checklists (r = .84-.93). Methodological quality improved over time. Overall weighted ROI [mean ± standard deviation (confidence interval)] was 1.38 ± 1.97 (1.38-1.39), which indicated a 138% return on investment. When accounting for methodological quality, an inverse relationship to ROI was found. High-quality studies (n = 18) had a smaller mean ROI, 0.26 ± 1.74 (.23-.30), compared to moderate (n = 16) 0.90 ± 1.25 (.90-.91) and low-quality (n = 27) 2.32 ± 2.14 (2.30-2.33) studies. Randomized controlled trials (RCTs) (n = 12) exhibited negative ROI, -0.22 ± 2.41 (-.27 to -.16). Financial returns became increasingly positive across quasi-experimental, nonexperimental, and modeled studies: 1.12 ± 2.16 (1.11-1.14), 1.61 ± 0.91 (1.56-1.65), and 2.05 ± 0.88 (2.04-2.06), respectively. Overall, mean weighted ROI in workplace health promotion demonstrated a positive ROI. Higher methodological quality studies provided evidence of smaller financial returns. Methodological quality and study design are important determinants.
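The review's ROI definition and size-weighted pooling reduce to simple arithmetic; a sketch with hypothetical numbers:

```python
import numpy as np

def roi(benefits, costs):
    """ROI as defined in the review: (benefits - costs) / costs."""
    return (benefits - costs) / costs

# Hypothetical program: $180k in benefits on $75k in costs
print(roi(180_000, 75_000))  # 1.4, i.e. a 140% return

# Size-weighted mean ROI across hypothetical studies
rois = np.array([2.1, 0.3, -0.2, 1.6])
sizes = np.array([1200, 400, 2500, 800])  # participants per study
print(np.average(rois, weights=sizes))
```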
Development and application of a basis database for materials life cycle assessment in China
NASA Astrophysics Data System (ADS)
Li, Xiaoqing; Gong, Xianzheng; Liu, Yu
2017-03-01
Because life cycle assessment is a data-intensive method, high-quality environmental burden data are a prerequisite for carrying out materials life cycle assessment (MLCA), and data reliability directly determines the reliability of the assessment results and their applications. Building a Chinese MLCA database therefore provides the basic data and technical support for carrying out and improving LCA practice. First, recent progress on databases related to MLCA research and development is reviewed. Second, following the requirements of the ISO 14040 series of standards, the database framework and the main datasets for materials life cycle assessment are described. Third, an MLCA data platform based on big data is developed. Finally, future research directions are proposed and discussed.
Bohl, Daniel D; Russo, Glenn S; Basques, Bryce A; Golinvaux, Nicholas S; Fu, Michael C; Long, William D; Grauer, Jonathan N
2014-12-03
There has been an increasing use of national databases to conduct orthopaedic research. Questions regarding the validity and consistency of these studies have not been fully addressed. The purpose of this study was to test for similarity in reported measures between two national databases commonly used for orthopaedic research. A retrospective cohort study of patients undergoing lumbar spinal fusion procedures during 2009 to 2011 was performed in two national databases: the Nationwide Inpatient Sample and the National Surgical Quality Improvement Program. Demographic characteristics, comorbidities, and inpatient adverse events were directly compared between databases. The total numbers of patients included were 144,098 from the Nationwide Inpatient Sample and 8,434 from the National Surgical Quality Improvement Program. There were only small differences in demographic characteristics between the two databases. There were large differences between databases in the rates at which specific comorbidities were documented. Non-morbid obesity was documented at rates of 9.33% in the Nationwide Inpatient Sample and 36.93% in the National Surgical Quality Improvement Program (relative risk, 0.25; p < 0.05). Peripheral vascular disease was documented at rates of 2.35% in the Nationwide Inpatient Sample and 0.60% in the National Surgical Quality Improvement Program (relative risk, 3.89; p < 0.05). Similarly, there were large differences between databases in the rates at which specific inpatient adverse events were documented. Sepsis was documented at rates of 0.38% in the Nationwide Inpatient Sample and 0.81% in the National Surgical Quality Improvement Program (relative risk, 0.47; p < 0.05). Acute kidney injury was documented at rates of 1.79% in the Nationwide Inpatient Sample and 0.21% in the National Surgical Quality Improvement Program (relative risk, 8.54; p < 0.05). As database studies become more prevalent in orthopaedic surgery, authors, reviewers, and readers should view these studies with caution. This study shows that two commonly used databases can identify demographically similar patients undergoing a common orthopaedic procedure; however, the databases document markedly different rates of comorbidities and inpatient adverse events. The differences are likely the result of the very different mechanisms through which the databases collect their comorbidity and adverse event data. Findings highlight concerns regarding the validity of orthopaedic database research. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.
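The reported relative risks are ratios of documentation rates between the two databases; a sketch using the paper's obesity figures, with a standard normal-approximation confidence interval (not necessarily the authors' exact method):

```python
import math

def relative_risk(p1, n1, p2, n2):
    """RR = p1/p2 with a 95% CI via the usual log-RR normal approximation."""
    rr = p1 / p2
    se = math.sqrt((1 - p1) / (p1 * n1) + (1 - p2) / (p2 * n2))
    lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
    return rr, (lo, hi)

# Non-morbid obesity: 9.33% of 144,098 (NIS) vs 36.93% of 8,434 (NSQIP)
print(relative_risk(0.0933, 144_098, 0.3693, 8_434))  # RR ~ 0.25
```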
Rangé, G; Chassaing, S; Marcollet, P; Saint-Étienne, C; Dequenne, P; Goralski, M; Bardiére, P; Beverilli, F; Godillon, L; Sabine, B; Laure, C; Gautier, S; Hakim, R; Albert, F; Angoulvant, D; Grammatico-Guillon, L
2018-05-01
To assess the reliability and cost of a computerized interventional cardiology (IC) registry designed to prospectively and systematically collect high-quality data for all consecutive coronary patients referred for coronary angiogram and/or coronary angioplasty. Rigorous clinical practice assessment is a key factor in improving prognosis in IC. A prospective and permanent registry could achieve this goal but, presumably, at high cost and with a low level of data quality. One multicentric IC registry (the CRAC registry), fully integrated into the usual coronary activity report software, started in the Centre-Val de Loire (CVL) French region in 2014. Quality assessment of the CRAC registry was conducted in five IC cath labs of the CVL region, from January 1st to December 31st 2014. Quality of collected data was evaluated by measuring procedure exhaustivity (compared with data from the hospital information system), data completeness (quality controls) and data consistency (by checking complete medical charts as the gold standard). Cost per procedure (global registry operating cost/number of collected procedures) was also estimated. The CRAC model provided a high level of quality, with 98.2% procedure exhaustivity, 99.6% data completeness and 89% data consistency. The operating cost per procedure was €14.70 ($16.51) for data collection and quality control, including ST-segment elevation myocardial infarction (STEMI) preadmission information and one-year follow-up after angioplasty. This integrated computerized IC registry led to the construction of an exhaustive, reliable and inexpensive database, including all coronary patients entering the participating IC centers in the CVL region. This solution will be developed in other French regions, setting up a national IC database for coronary patients in 2020: France PCI. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Clinical research in a hospital--from the lone rider to teamwork.
Hannisdal, E
1996-01-01
Clinical research of a high international standard is very demanding and requires clinical data of high quality, software, hardware and competence in research design and statistical treatment of data. Most busy clinicians have little time allocated for clinical research, and this increases the need for a potent infrastructure. This paper describes how the Norwegian Radium Hospital, a specialized cancer hospital, has reorganized the clinical research process. This includes a new department, the Clinical Research Office, which provides the formal framework, a central Diagnosis Registry, clinical databases and multicentre studies. The department assists about 120 users, mainly clinicians. Installation of a network software package with over 10 programs has provided strong internal standardization, reduced costs and saved clinicians a great deal of time. The hospital is building up about 40 diagnosis-specific clinical databases with up to 200 variables registered. These databases are shared by the treatment groups and appear to be important tools for quality assurance. We conclude that the clinical research process benefits from a firm infrastructure facilitating teamwork through extensive use of modern information technology. We are now ready for the next phase, which is to work for a better external technical framework for cooperation with other institutions throughout the world.
Janamian, Tina; Upham, Susan J; Crossland, Lisa; Jackson, Claire L
2016-04-18
To conduct a systematic review of the literature to identify existing online primary care quality improvement tools and resources to support organisational improvement related to the seven elements in the Primary Care Practice Improvement Tool (PC-PIT), with the identified tools and resources to progress to a Delphi study for further assessment of relevance and utility. Systematic review of the international published and grey literature. CINAHL, Embase and PubMed databases were searched in March 2014 for articles published between January 2004 and December 2013. GreyNet International and other relevant websites and repositories were also searched in March-April 2014 for documents dated between 1992 and 2012. All citations were imported into a bibliographic database. Published and unpublished tools and resources were included in the review if they were in English, related to primary care quality improvement and addressed any of the seven PC-PIT elements of a high-performing practice. Tools and resources that met the eligibility criteria were then evaluated for their accessibility, relevance, utility and comprehensiveness using a four-criteria appraisal framework. We used a data extraction template to systematically extract information from eligible tools and resources. A content analysis approach was used to explore the tools and resources and collate relevant information: name of the tool or resource, year and country of development, author, name of the organisation that provided access and its URL, accessibility information or problems, overview of each tool or resource and the quality improvement element(s) it addresses. If available, a copy of the tool or resource was downloaded into the bibliographic database, along with supporting evidence (published or unpublished) on its use in primary care. This systematic review identified 53 tools and resources that can potentially be provided as part of a suite of tools and resources to support primary care practices in improving the quality of their practice, to achieve improved health outcomes.
A comprehensive and scalable database search system for metaproteomics.
Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W
2016-08-16
Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.
Taraban, Lindsay; Shaw, Daniel S; Leve, Leslie D; Wilson, Melvin N; Dishion, Thomas J; Natsuaki, Misaki N; Neiderhiser, Jenae M; Reiss, David
2017-03-01
Marital quality and social support satisfaction were tested as moderators of the association between maternal depressive symptoms and parenting during early childhood (18-36 months) among 2 large, divergent, longitudinal samples (n = 526; n = 570). Unexpectedly, in both samples the association between maternal depressive symptoms and reduced parenting quality was strongest in the context of high marital quality and high social support, and largely nonsignificant in the context of low marital quality and low social support. Possible explanations for these surprising findings are discussed. Results point to the importance of accounting for factors in the broader family context in predicting the association between depressive symptoms and maternal parenting. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
ERIC Educational Resources Information Center
Gans, Jeremy; Falco, Mathea; Schackman, Bruce R.; Winters, Ken C.
2010-01-01
Aims: To examine the quality of screening and assessment practices at some of the most highly regarded adolescent substance use treatment programs in the United States. Methods: Between March and September 2005, telephone surveys were administered to directors of highly regarded programs. Several different publications and databases were then used…
EFICAz2.5: application of a high-precision enzyme function predictor to 396 proteomes.
Kumar, Narendra; Skolnick, Jeffrey
2012-10-15
High-quality enzyme function annotation is essential for understanding the biochemistry, metabolism and disease processes of organisms. Previously, we developed a multi-component high-precision enzyme function predictor, EFICAz2 (enzyme function inference by a combined approach). Here, we present an updated and improved version, EFICAz2.5, that is trained on a significantly larger data set of enzyme sequences and PROSITE patterns. We also present the results of the application of EFICAz2.5 to the enzyme reannotation of 396 genomes cataloged in the ENSEMBL database. The EFICAz2.5 server and database are freely available with a user-friendly interface at http://cssb.biology.gatech.edu/EFICAz2.5.
Use of national surgical quality improvement program data as a catalyst for quality improvement.
Rowell, Katherine S; Turrentine, Florence E; Hutter, Matthew M; Khuri, Shukri F; Henderson, William G
2007-06-01
Semiannually, the National Surgical Quality Improvement Program (NSQIP) provides its participating sites with observed-to-expected (O/E) ratios for 30-day postoperative mortality and morbidity. At each reporting period, there is typically a small group of hospitals with statistically significantly high O/E ratios, meaning that their patients have experienced more adverse events than would be expected on the basis of the population characteristics. An important issue is to determine which actions a surgical service should take in the presence of a high O/E ratio. This article reviews case studies of how some of the Department of Veterans Affairs and private-sector NSQIP participating sites used the clinically rich NSQIP database for local quality improvement efforts. Data on postoperative adverse events before and after these local quality improvement efforts are presented. After local quality improvement efforts, wound complication rates were reduced at the Salt Lake City Veterans Affairs medical center by 47%, surgical site infections in patients undergoing intraabdominal surgery were reduced at the University of Virginia by 36%, and urinary tract infections in vascular patients were reduced at the Massachusetts General Hospital by 74%. At some sites participating in the NSQIP, notably the Massachusetts General Hospital and the University of Virginia, the NSQIP has served as the basis for surgical service-wide outcomes research and quality improvement programs. The NSQIP not only provides participating sites with risk-adjusted surgical mortality and morbidity outcomes semiannually, but the clinically rich NSQIP database can also serve as a catalyst for local quality improvement programs to significantly reduce postoperative adverse event rates.
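An O/E ratio with an interval for flagging statistically significant outliers can be sketched as follows; the exact Poisson interval used here is illustrative, as NSQIP's own risk-adjustment models are more involved:

```python
from scipy.stats import chi2

def oe_ratio(observed, expected):
    """Observed-to-expected ratio with an exact Poisson 95% CI on O,
    divided by the (risk-model) expected count."""
    oe = observed / expected
    lo = chi2.ppf(0.025, 2 * observed) / (2 * expected)
    hi = chi2.ppf(0.975, 2 * (observed + 1)) / (2 * expected)
    return oe, (lo, hi)

# e.g. 24 observed deaths where risk adjustment predicts 15.2
print(oe_ratio(24, 15.2))  # O/E ~ 1.58; a CI excluding 1 flags a high outlier
```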
NGSmethDB 2017: enhanced methylomes and differential methylation
Lebrón, Ricardo; Gómez-Martín, Cristina; Carpena, Pedro; Bernaola-Galván, Pedro; Barturen, Guillermo; Hackenberg, Michael; Oliver, José L.
2017-01-01
The 2017 update of NGSmethDB stores whole-genome methylomes generated from short-read data sets obtained by whole-genome bisulfite sequencing (WGBS) technology. To generate high-quality methylomes, stringent quality controls were integrated with third-party software, adding also a two-step mapping process to exploit the advantages of the new genome assembly models. The samples were all profiled under constant parameter settings, thus enabling comparative downstream analyses. Besides a significant increase in the number of samples, NGSmethDB now includes two additional data types, which are a valuable resource for the discovery of methylation epigenetic biomarkers: (i) differentially methylated single cytosines; and (ii) methylation segments (i.e. genome regions of homogeneous methylation). The NGSmethDB back-end is now based on MongoDB, a NoSQL hierarchical database using JSON-formatted documents and dynamic schemas, thus accelerating sample comparative analyses. Besides conventional database dumps, track hubs were implemented, which improve database access, visualization in genome browsers and comparative analyses against third-party annotations. In addition, the database can also be accessed through a RESTful API. Lastly, a Python client and a multiplatform virtual machine allow for program-driven access from the user desktop. This way, private methylation data can be compared with NGSmethDB without the need to upload them to public servers. Database website: http://bioinfo2.ugr.es/NGSmethDB. PMID:27794041
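A sketch of the kind of query a document-oriented back-end enables; the collection and field names below are hypothetical, not NGSmethDB's actual schema, and a running MongoDB instance is assumed:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["methylomes"]

# Find single cytosines differentially methylated between two samples,
# filtered by a significance threshold (hypothetical schema).
hits = db.dm_cytosines.find(
    {"sample_a": "liver", "sample_b": "cortex", "q_value": {"$lt": 0.01}},
    {"chrom": 1, "pos": 1, "delta_meth": 1, "_id": 0},
).limit(5)

for h in hits:
    print(h)
```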
Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana
2002-01-01
Computer-readable databases have become an integral part of chemical research, from planning data acquisition to interpreting the information generated. The databases available today are numerical, spectral and bibliographic. Data representation by different schemes (relational, hierarchical and object-based) is demonstrated. A quality index (QI) throws light on the quality of data. The objective, prospects and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond a manageable number, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.
Flow unsteadiness effects on boundary layers
NASA Technical Reports Server (NTRS)
Murthy, Sreedhara V.
1989-01-01
The development of boundary layers at high subsonic speeds in the presence of either mass flux fluctuations or acoustic disturbances (the two most important parameters in the unsteadiness environment affecting the aerodynamics of a flight vehicle) was investigated. A high-quality database for generating detailed information concerning free-stream flow unsteadiness effects on boundary layer growth and transition at high subsonic and transonic speeds is described. The database will be generated with a two-pronged approach: (1) from a detailed review of the existing literature on research and wind tunnel calibration databases, and (2) from detailed tests in the Boundary Layer Apparatus for Subsonic and Transonic flow Affected by Noise Environment (BLASTANE). Special instrumentation, including hot wire anemometry, the buried wire gage technique, and laser velocimetry, was used to obtain skin friction and turbulent shear stress data along the entire boundary layer for various free-stream noise levels, turbulence content, and pressure gradients. This database will be useful for improving the correction methodology of applying wind tunnel test data to flight predictions and will be helpful for making improvements in turbulence modeling laws.
Development of a Multidisciplinary and Telemedicine Focused System Database.
Paštěka, Richard; Forjan, Mathias; Sauermann, Stefan
2017-01-01
Tele-rehabilitation at home is one of the promising approaches to increasing rehabilitative success while simultaneously decreasing the financial burden on the healthcare system. Novel and mostly mobile devices are already in use, and shall in future be used to a greater extent to allow at-home rehabilitation processes at a high quality level. The combination of exercises, assessments and available equipment is the basic objective of the presented database. The database has been structured to allow easy and fast access for the three main user groups: therapists, looking for exercise and equipment combinations; patients, rechecking their tasks for home exercises; and manufacturers, entering their equipment for specific use cases. The database has been evaluated by a proof-of-concept study and shows a high degree of applicability for the field of rehabilitative medicine. It currently contains 110 exercises/assessments and 111 equipment/systems. The foundations of the presented database are already established in the rehabilitative field of application, but its functionality can and will be enhanced to be usable for a greater variety of medical fields and specializations.
Suh, Chang-Ok; Oh, Se Jeong; Hong, Sung-Tae
2013-05-01
The article overviews some achievements and problems of Korean medical journals published in the highly competitive journal environment. Activities of the Korean Association of Medical Journal Editors (KAMJE) are viewed as instrumental for improving the quality of Korean articles, indexing a large number of local journals in prestigious bibliographic databases and launching new abstract and citation tracking databases or platforms (e.g. KoreaMed, KoreaMed Synapse, the Western Pacific Regional Index Medicus [WPRIM]). KAMJE encourages its member journals to upgrade science editing standards and to legitimately increase citation rates, primarily by publishing more great articles with global influence. Experience gained by KAMJE and problems faced by Korean editors may have global implications.
U.S. Geological Survey coal quality (COALQUAL) database; version 2.0
Bragg, L.J.; Oman, J.K.; Tewalt, S.J.; Oman, C.L.; Rega, N.H.; Washington, P.M.; Finkelman, R.B.
1997-01-01
The USGS Coal Quality database is an interactive, computerized component of the NCRDS. It contains comprehensive analyses of more than 13,000 samples of coal and associated rocks from every major coal-bearing basin and coal bed in the U.S. The data in the coal quality database represent analyses of the coal as it exists in the ground. The data commonly are presented on an as-received whole-coal basis.
Brimhall, Bradley B; Hall, Timothy E; Walczak, Steven
2006-01-01
A hospital laboratory relational database, developed over eight years, has demonstrated significant cost savings and a substantial financial return on investment (ROI). In addition, the database has been used to measurably improve laboratory operations and the quality of patient care.
Promise and Limitations of Big Data Research in Plastic Surgery.
Zhu, Victor Zhang; Tuggle, Charles Thompson; Au, Alexander Francis
2016-04-01
The use of "Big Data" in plastic surgery outcomes research has increased dramatically in the last 5 years. This article addresses some of the benefits and limitations of such research. This is a narrative review of large database studies in plastic surgery. There are several benefits to database research as compared with traditional forms of research, such as randomized controlled studies and cohort studies. These include the ease in patient recruitment, reduction in selection bias, and increased generalizability. As such, the types of outcomes research that are particularly suited for database studies include determination of geographic variations in practice, volume outcome analysis, evaluation of how sociodemographic factors affect access to health care, and trend analyses over time. The limitations of database research include data which are limited only to what was captured in the database, high power which can cause clinically insignificant differences to achieve statistical significance, and fishing which can lead to increased type I errors. The National Surgical Quality Improvement Project is an important general surgery database that may be useful for plastic surgeons because it is validated and has a large number of patients after over a decade of collecting data. The Tracking Operations and Outcomes for Plastic Surgeons Program is a newer database specific to plastic surgery. Databases are a powerful tool for plastic surgery outcomes research. It is critically important to understand their benefits and limitations when designing research projects or interpreting studies whose data have been drawn from them. For plastic surgeons, National Surgical Quality Improvement Project has a greater number of publications, but Tracking Operations and Outcomes for Plastic Surgeons Program is the most applicable database for plastic surgery research.
[The Brazilian Hospital Information System and the acute myocardial infarction hospital care].
Escosteguy, Claudia Caminha; Portela, Margareth Crisóstomo; Medronho, Roberto de Andrade; de Vasconcellos, Maurício Teixeira Leite
2002-08-01
To analyze the applicability of the Brazilian Unified Health System's national hospital database for evaluating the quality of acute myocardial infarction (AMI) hospital care. A total of 1,936 hospital admission forms with AMI as the primary diagnosis in the municipal district of Rio de Janeiro, Brazil, in 1997 were evaluated. Data were collected from the national hospital database. A stratified random sample of 391 medical records was also evaluated. AMI diagnosis agreement followed criteria from the literature. Variable accuracy analysis was performed using the kappa agreement index. The quality of the AMI diagnosis registered in hospital admission forms was satisfactory according to the gold standard of the literature. In general, the accuracy of the demographic (sex, age group), process (medical procedures and interventions), and outcome (hospital death) variables was satisfactory. The accuracy of the demographic and outcome variables was higher than that of the process variables. Underregistration of secondary diagnoses in the forms was high and was the main limiting factor. Given the study findings and the widespread availability of the national hospital database, its use as an instrument in the evaluation of the quality of AMI medical care is pertinent.
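The kappa agreement index mentioned is Cohen's kappa; a minimal sketch on illustrative binary data (not the study's records):

```python
from sklearn.metrics import cohen_kappa_score

# Agreement between the admission form and chart review on a binary variable
form   = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]  # e.g. hospital death per form
record = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]  # same variable from medical record
print(cohen_kappa_score(form, record))   # 1.0 = perfect agreement, 0 = chance
```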
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are an irreplaceable national asset of immense importance. Their up-to-dateness, that is, their consistency with respect to the real world, plays a critical role in their value and applications. Continuously updating map databases at the 1:50,000 scale is a massive and difficult task for large countries covering several million square kilometers. This paper presents research and technological development supporting national map updating at the 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, and the design and implementation of a continuous updating workflow. Many data sources had to be used and integrated to form a high-accuracy, quality-checked product, which in turn required up-to-date techniques for image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, covering map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control; a series of technical production specifications; and a network of updating production units in different parts of the country.
Soranno, Patricia A; Bacon, Linda C; Beauchene, Michael; Bednar, Karen E; Bissell, Edward G; Boudreau, Claire K; Boyer, Marvin G; Bremigan, Mary T; Carpenter, Stephen R; Carr, Jamie W; Cheruvelil, Kendra S; Christel, Samuel T; Claucherty, Matt; Collins, Sarah M; Conroy, Joseph D; Downing, John A; Dukett, Jed; Fergus, C Emi; Filstrup, Christopher T; Funk, Clara; Gonzalez, Maria J; Green, Linda T; Gries, Corinna; Halfman, John D; Hamilton, Stephen K; Hanson, Paul C; Henry, Emily N; Herron, Elizabeth M; Hockings, Celeste; Jackson, James R; Jacobson-Hedin, Kari; Janus, Lorraine L; Jones, William W; Jones, John R; Keson, Caroline M; King, Katelyn B S; Kishbaugh, Scott A; Lapierre, Jean-Francois; Lathrop, Barbara; Latimore, Jo A; Lee, Yuehlin; Lottig, Noah R; Lynch, Jason A; Matthews, Leslie J; McDowell, William H; Moore, Karen E B; Neff, Brian P; Nelson, Sarah J; Oliver, Samantha K; Pace, Michael L; Pierson, Donald C; Poisson, Autumn C; Pollard, Amina I; Post, David M; Reyes, Paul O; Rosenberry, Donald O; Roy, Karen M; Rudstam, Lars G; Sarnelle, Orlando; Schuldt, Nancy J; Scott, Caren E; Skaff, Nicholas K; Smith, Nicole J; Spinelli, Nick R; Stachelek, Joseph J; Stanley, Emily H; Stoddard, John L; Stopyak, Scott B; Stow, Craig A; Tallant, Jason M; Tan, Pang-Ning; Thorpe, Anthony P; Vanni, Michael J; Wagner, Tyler; Watkins, Gretchen; Weathers, Kathleen C; Webster, Katherine E; White, Jeffrey D; Wilmes, Marcy K; Yuan, Shuai
2017-12-01
Understanding the factors that affect water quality and the ecological services provided by freshwater ecosystems is an urgent global environmental issue. Predicting how water quality will respond to global changes not only requires water quality data, but also information about the ecological context of individual water bodies across broad spatial extents. Because lake water quality is usually sampled in limited geographic regions, often for limited time periods, assessing the environmental controls of water quality requires compilation of many data sets across broad regions and across time into an integrated database. LAGOS-NE accomplishes this goal for lakes in the northeastern-most 17 US states. LAGOS-NE contains data for 51 101 lakes and reservoirs larger than 4 ha in 17 lake-rich US states. The database includes 3 data modules for: lake location and physical characteristics for all lakes; ecological context (i.e., the land use, geologic, climatic, and hydrologic setting of lakes) for all lakes; and in situ measurements of lake water quality for a subset of the lakes from the past 3 decades for approximately 2600-12 000 lakes depending on the variable. The database contains approximately 150 000 measures of total phosphorus, 200 000 measures of chlorophyll, and 900 000 measures of Secchi depth. The water quality data were compiled from 87 lake water quality data sets from federal, state, tribal, and non-profit agencies, university researchers, and citizen scientists. This database is one of the largest and most comprehensive databases of its type because it includes both in situ measurements and ecological context data. Because ecological context can be used to study a variety of other questions about lakes, streams, and wetlands, this database can also be used as the foundation for other studies of freshwaters at broad spatial and ecological scales. © The Author 2017. Published by Oxford University Press.
National Urban Database and Access Portal Tool
Based on the need for advanced treatments of high-resolution urban morphological features (e.g., buildings, trees) in meteorological, dispersion, air quality and human exposure modeling systems for future urban applications, a new project was launched called the National Urban Database and Access Portal Tool (NUDAPT).
PSD Applicability: TEX-USS High Density Polyethylene Plant
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations, including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although considerable effort has been made to quality-assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Users’ guide to the surgical literature: how to perform a high-quality literature search
Waltho, Daniel; Kaur, Manraj Nirmal; Haynes, R. Brian; Farrokhyar, Forough; Thoma, Achilleas
2015-01-01
The article “Users’ guide to the surgical literature: how to perform a literature search” was published in 2003, but the continuing technological developments in databases and search filters have rendered that guide out of date. The present guide fills an existing gap in this area; it provides the reader with strategies for developing a searchable clinical question, creating an efficient search strategy, accessing appropriate databases, and skillfully retrieving the best evidence to address the research question. PMID:26384150
Danish Palliative Care Database.
Groenvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang
2016-01-01
The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. The study population is all patients in Denmark referred to and/or in contact with SPC after January 1, 2010. The main variables in DPD are data about referral for patients admitted and not admitted to SPC, type of the first SPC contact, clinical and sociodemographic factors, multidisciplinary conference, and the patient-reported European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core-15-Palliative Care questionnaire, assessing health-related quality of life. The data currently support the estimation of five quality-of-care indicators, ie, the proportions of 1) referred and eligible patients who were actually admitted to SPC, 2) patients who waited <10 days before admission to SPC, 3) patients who died from cancer and who obtained contact with SPC, 4) patients who were screened with the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core-15-Palliative Care at admission to SPC, and 5) patients who were discussed at a multidisciplinary conference. In 2014, all 43 SPC units in Denmark reported their data to DPD, and all 9,434 cancer patients (100%) referred to SPC were registered in DPD. In total, 41,104 unique patients were registered in DPD during the 5 years 2010-2014. Of those registered, 96% had cancer. DPD is a national clinical quality database for SPC with clinically relevant variables and high data and patient completeness.
You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong
2017-01-01
It is increasingly necessary to generate medical evidence applicable to Asian people, as most existing evidence comes from Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative that aims to facilitate the generation of high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership Common Data Model (OMOP-CDM). Data from 1.13 million subjects were converted to OMOP-CDM, with an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize data-driven characterizations and assess data quality. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through incorporation into the OHDSI research network.
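As an illustration of the kind of mapping such a conversion involves (a sketch only; the source column names are hypothetical, while the target fields and gender concept IDs follow standard OMOP-CDM conventions), one record of a source cohort table might be transformed into an OMOP person row as follows.

```python
# Minimal sketch of an extract-transform step from a source cohort table into
# the OMOP-CDM person table. Source column names (SEX, BIRTH_YEAR) are
# hypothetical; the target columns are standard OMOP CDM v5 person fields.
GENDER_CONCEPT = {"M": 8507, "F": 8532}  # standard OMOP gender concept IDs

def to_omop_person(row, person_id):
    return {
        "person_id": person_id,
        "gender_concept_id": GENDER_CONCEPT.get(row["SEX"], 0),  # 0 = unmapped
        "year_of_birth": int(row["BIRTH_YEAR"]),
        "race_concept_id": 0,        # not recorded in this source
        "ethnicity_concept_id": 0,
    }

source_row = {"SEX": "F", "BIRTH_YEAR": "1974"}
print(to_omop_person(source_row, person_id=1))
```

In a real conversion this mapping runs per table (person, visit_occurrence, condition_occurrence, drug_exposure), which is where per-table conversion rates such as the 99.1% reported above come from.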
Li, Qing-na; Huang, Xiu-ling; Gao, Rui; Lu, Fang
2012-08-01
Data management has a significant impact on the quality control of clinical studies. Every clinical study should have a data management plan to provide overall work instructions and ensure that all of these tasks are completed according to the Good Clinical Data Management Practice (GCDMP). Moreover, the data management plan (DMP) is an auditable document requested by regulatory inspectors and must be written in a manner that is realistic and of high quality. This paper elaborates the significance of the DMP, the minimum standards and best practices provided by the GCDMP, the main contents of a DMP based on electronic data capture (EDC), and some key factors of the DMP influencing the quality of a clinical study. Specifically, a DMP generally consists of 15 parts, namely, the approval page, the protocol summary, roles and training, timelines, database design, creation, maintenance and security, data entry, data validation, quality control and quality assurance, the management of external data, serious adverse event data reconciliation, coding, database lock, data management reports, the communication plan and the abbreviated terms. Among them, the following three parts are regarded as the key factors: designing a standardized database for the clinical study, entering data in time and cleansing data efficiently. In the last part of this article, the authors also analyze the problems in clinical research of traditional Chinese medicine using the EDC system and put forward some suggestions for improvement.
Xing, Lu; Chen, Ruiqi; Diao, Yongshu; Qian, Jiahui; You, Chao; Jiang, Xiaolian
2016-08-01
Depression is highly prevalent in hemodialysis patients and results in poor patient outcomes. Although psychological interventions are being developed and used for these patients, there is uncertainty regarding their effectiveness. The purpose of this meta-analysis is to evaluate the effects of psychological interventions on the treatment of depression in hemodialysis patients. All randomized controlled trials (RCTs) relevant to the treatment of depression in hemodialysis patients through psychological interventions were retrieved from the following databases: Embase, Pubmed, PsycINFO, the Cochrane Database of Systematic Reviews, and the Cochrane Central Register of Controlled Trials. The reference lists of identified RCTs were also screened. The Cochrane risk of bias tool was used to evaluate the quality of the studies, RevMan (5.3) was used to analyze the data, and the evidence quality of the combined results was evaluated using GRADE (3.6.1). Eight RCTs were included. The combined results showed that psychological interventions significantly reduced Beck Depression Inventory scores (P<0.001) and interdialytic weight gain (P<0.001). However, due to high heterogeneity, effect sizes for sleep quality and quality of life were not combined. Psychological interventions may reduce the degree of depression and improve adherence to fluid intake restrictions. More rigorously designed research is needed.
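The combination step reported above (as implemented in tools such as RevMan) is, at its core, inverse-variance weighting of per-trial effects. A minimal fixed-effect sketch with made-up trial numbers:

```python
# Inverse-variance fixed-effect pooling of mean differences, the basic
# combination step behind meta-analysis tools (illustrative numbers only).
import math

def pool_fixed(effects, variances):
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical trials: mean BDI reduction and its variance.
effect, ci = pool_fixed([-4.2, -6.0, -3.1], [1.2, 2.5, 0.9])
print(f"pooled MD = {effect:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```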
ECG signal quality during arrhythmia and its application to false alarm reduction.
Behar, Joachim; Oster, Julien; Li, Qiao; Clifford, Gari D
2013-06-01
An automated algorithm to assess electrocardiogram (ECG) quality for both normal and abnormal rhythms is presented for false arrhythmia alarm suppression of intensive care unit (ICU) monitors. A particular focus is given to the quality assessment of a wide variety of arrhythmias. Data from three databases were used: the Physionet Challenge 2011 dataset, the MIT-BIH arrhythmia database, and the MIMIC II database. The quality of more than 33 000 single-lead 10 s ECG segments was manually assessed, and another 12 000 bad-quality single-lead ECG segments were generated using the Physionet noise stress test database. Signal quality indices (SQIs) were derived from the ECG segments and used as the inputs to a support vector machine classifier with a Gaussian kernel. This classifier was trained to estimate the quality of an ECG segment. Classification accuracies of up to 99% on the training and test sets were obtained for normal sinus rhythm and up to 95% for arrhythmias, although performance varied greatly depending on the type of rhythm. Additionally, the association between 4050 ICU alarms from the MIMIC II database and the signal quality, as evaluated by the classifier, was studied. Results suggest that the SQIs should be rhythm specific and that the classifier should be trained for each rhythm call independently. This would require a substantially larger set of labeled data in order to train an accurate algorithm.
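A minimal sketch of the classification setup described above, using an RBF-kernel (Gaussian) support vector machine on per-segment signal quality indices; the six SQI features and the labels here are synthetic stand-ins, not the study's data.

```python
# RBF-kernel SVM trained on per-segment signal quality indices (SQIs).
# Synthetic features/labels; a sketch of the setup, not the published model.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                   # 6 hypothetical SQIs per 10 s segment
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = acceptable quality

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The paper's conclusion that SQIs should be rhythm specific would translate, in this setup, to fitting one such classifier per rhythm label.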
"TPSX: Thermal Protection System Expert and Material Property Database"
NASA Technical Reports Server (NTRS)
Squire, Thomas H.; Milos, Frank S.; Rasky, Daniel J. (Technical Monitor)
1997-01-01
The Thermal Protection Branch at NASA Ames Research Center has developed a computer program for storing, organizing, and accessing information about thermal protection materials. The program, called Thermal Protection Systems Expert and Material Property Database, or TPSX, is available for the Microsoft Windows operating system. An "on-line" version is also accessible on the World Wide Web. TPSX is designed to be a high-quality source for TPS material properties presented in a convenient, easily accessible form for use by engineers and researchers in the field of high-speed vehicle design. Data can be displayed and printed in several formats. An information window displays a brief description of the material with properties at standard pressure and temperature. A spreadsheet window displays complete, detailed property information. Properties which are a function of temperature and/or pressure can be displayed as graphs. In any display the data can be converted from English to SI units with the click of a button. Two material databases included with TPSX are: 1) materials used and/or developed by the Thermal Protection Branch at NASA Ames Research Center, and 2) a database compiled by NASA Johnson Space Center (JSC). The Ames database contains over 60 advanced TPS materials including flexible blankets, rigid ceramic tiles, and ultra-high temperature ceramics. The JSC database contains over 130 insulative and structural materials. The Ames database is periodically updated and expanded as required to include newly developed materials and material property refinements.
DamaGIS: a multisource geodatabase for collection of flood-related damage data
NASA Astrophysics Data System (ADS)
Saint-Martin, Clotilde; Javelle, Pierre; Vinet, Freddy
2018-06-01
Every year in France, recurring flood events result in several million euros of damage, and reducing the heavy consequences of floods has become a high priority. However, actions to reduce the impact of floods are often hindered by the lack of damage data on past flood events. The present paper introduces a new database for collection and assessment of flood-related damage. The DamaGIS database offers an innovative bottom-up approach to gather and identify damage data from multiple sources, including new media. The study area has been defined as the south of France considering the high frequency of floods over the past years. This paper presents the structure and contents of the database. It also presents operating instructions in order to keep collecting damage data within the database. This paper also describes an easily reproducible method to assess the severity of flood damage regardless of the location or date of occurrence. A first analysis of the damage contents is also provided in order to assess data quality and the relevance of the database. According to this analysis, despite its lack of comprehensiveness, the DamaGIS database presents many advantages. Indeed, DamaGIS provides high data accuracy as well as simplicity of use. It also has the additional benefit of being accessible in multiple formats and is open access. The DamaGIS database is available at https://doi.org/10.5281/zenodo.1241089.
Powell, Kimberly R; Peterson, Shenita R
Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in journal coverage and quality. This study comparatively analyzed the reported impact of nursing publications. Journal coverage by each database for the field of nursing was compared. Additionally, publications by 2014 nursing faculty were collected in both databases and compared for overall coverage and reported quality, as modeled by SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated in each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus; Web of Science offered 82%. No significant difference was found in the quality of reported journals. Author h-indexes were found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
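The h-index compared above is simple to compute from a citation list: it is the largest h such that the author has h papers with at least h citations each. A short sketch (citation counts are illustrative):

```python
# The h-index: the largest h such that h papers have at least h citations each.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Because the two databases index different journal sets, the same author can have different citation lists, and hence different h-indexes, in each.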
Effects of auditory cues on gait initiation and turning in patients with Parkinson's disease.
Gómez-González, J; Martín-Casas, P; Cano-de-la-Cuerda, R
2016-12-08
To review the available scientific evidence about the effectiveness of auditory cues during gait initiation and turning in patients with Parkinson's disease. We conducted a literature search in the following databases: Brain, PubMed, Medline, CINAHL, Scopus, Science Direct, Web of Science, Cochrane Database of Systematic Reviews, Cochrane Library Plus, CENTRAL, Trip Database, PEDro, DARE, OTseeker, and Google Scholar. We included all studies published between 2007 and 2016 evaluating the influence of auditory cues on independent gait initiation and turning in patients with Parkinson's disease. The methodological quality of the studies was assessed with the Jadad scale. We included 13 studies, all of which had low methodological quality (Jadad scale score ≤ 2). In these studies, high-intensity, high-frequency auditory cues had a positive impact on gait initiation and turning. More specifically, they 1) improved spatiotemporal and kinematic parameters; 2) decreased freezing, turning duration, and falls; and 3) increased gait initiation speed, muscle activation, and gait speed and cadence in patients with Parkinson's disease. We need studies of better methodological quality to establish the Parkinson's disease stage in which auditory cues are most beneficial, as well as to determine the most effective type and frequency of auditory cue during gait initiation and turning in patients with Parkinson's disease. Copyright © 2016 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Stoop, Rahel; Clijsen, Ron; Leoni, Diego; Soldini, Emiliano; Castellini, Greta; Redaelli, Valentina; Barbero, Marco
2017-08-01
The methodological quality of controlled clinical trials (CCTs) of physiotherapeutic treatment modalities for myofascial trigger points (MTrP) has not been investigated yet. To assess the methodological quality of CCTs for physiotherapy treatments of MTrPs and to demonstrate any increase in that quality over time. Systematic review. A systematic search was conducted in two databases, the Physiotherapy Evidence Database (PEDro) and the Medical Literature Analysis and Retrieval System Online (MEDLINE), using the same keywords and selection procedure corresponding to pre-defined inclusion criteria. The methodological quality, assessed by the 11-item PEDro scale, served as the outcome measure. The CCTs had to compare at least two interventions, where one intervention had to lie within the scope of physiotherapy. Participants had to be diagnosed with myofascial pain syndrome or trigger points (active or latent). A total of n = 230 studies were analysed. The cervico-thoracic region was the most frequently treated body part (n = 143). Electrophysical agent application was the most frequent intervention. The average methodological quality reached 5.5 on the PEDro scale. A total of n = 6 studies reached a score of 9. The average PEDro score increased by 0.7 points per decade between 1978 and 2015. The average PEDro score of CCTs for MTrP treatments does not reach the cut-off of 6 proposed for moderate to high methodological quality. Nevertheless, a promising trend towards an increase in the average methodological quality of CCTs for MTrPs was recorded. More high-quality CCT studies with thorough research procedures are recommended to enhance methodological quality. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
The liver tissue bank and clinical database in China.
Yang, Yuan; Liu, Yi-Min; Wei, Ming-Yue; Wu, Yi-Fei; Gao, Jun-Hui; Liu, Lei; Zhou, Wei-Ping; Wang, Hong-Yang; Wu, Meng-Chao
2010-12-01
To develop a standardized and well-rounded resource for hepatology research, the National Liver Tissue Bank (NLTB) Project began in 2008 in China to build well-characterized and optimally preserved liver tumor tissue collections and a clinical database. From Dec 2008 to Jun 2010, over 3000 individuals were enrolled as liver tumor donors to the NLTB, including 2317 cases of newly diagnosed hepatocellular carcinoma (HCC) and about 1000 cases of other diagnosed benign or malignant liver tumors. The clinical database and sample store can be managed easily and correctly with the data management platform used. We believe that these high-quality samples, with a detailed information database, will become the cornerstone of hepatology research, especially in studies exploring the diagnosis and new treatments for HCC and other liver diseases.
Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M
2014-04-01
Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
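The reporting layer described above was built with PHP and MySQL; the sketch below reproduces the core aggregation (complication rate and diagnostic yield by operator) in Python with SQLite instead. The schema, column names, and rows are hypothetical.

```python
# Aggregate complication rate and diagnostic yield per operator from a
# procedure log. Hypothetical schema and data; illustrates the reporting
# query only, not the article's actual system.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE procedure_log (
    operator TEXT, modality TEXT, complication INTEGER, diagnostic INTEGER)""")
con.executemany("INSERT INTO procedure_log VALUES (?, ?, ?, ?)", [
    ("Smith", "US", 0, 1), ("Smith", "CT", 1, 1), ("Jones", "CT", 0, 0),
])
for row in con.execute("""
    SELECT operator,
           AVG(complication) AS complication_rate,
           AVG(diagnostic)   AS diagnostic_yield
    FROM procedure_log GROUP BY operator"""):
    print(row)
```

In the system described, the equivalent table is auto-populated from RIS data and structured report templates rather than manual inserts.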
NASA Astrophysics Data System (ADS)
Boyer, T.; Sun, L.; Locarnini, R. A.; Mishonov, A. V.; Hall, N.; Ouellet, M.
2016-02-01
The World Ocean Database (WOD) contains systematically quality controlled historical and recent ocean profile data (temperature, salinity, oxygen, nutrients, carbon cycle variables, biological variables) ranging from Captain Cook's second voyage (1773) to this year's Argo floats. The US National Centers for Environmental Information (NCEI) also hosts the Global Temperature and Salinity Profile Program (GTSPP) Continuously Managed Database (CMD), which provides quality controlled near-real-time ocean profile data and higher-level quality controlled temperature and salinity profiles from 1990 to present. Both databases are used extensively for ocean and climate studies. Synchronization of these two databases will allow easier access to and use of comprehensive regional and global ocean profile data sets for ocean and climate studies. Synchronizing consists of two distinct phases: 1) a retrospective comparison of data in WOD and GTSPP to ensure that the most comprehensive and highest quality data set is available to researchers without the need to individually combine and contrast the two datasets, and 2) web services to allow the constantly accruing near-real-time data in the GTSPP CMD and the continuous addition and quality control of historical data in WOD to be made available to researchers together, seamlessly.
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
MIPS: curated databases and comprehensive secondary data resources in 2010.
Mewes, H Werner; Ruepp, Andreas; Theis, Fabian; Rattei, Thomas; Walter, Mathias; Frishman, Dmitrij; Suhre, Karsten; Spannagl, Manuel; Mayer, Klaus F X; Stümpflen, Volker; Antonov, Alexey
2011-01-01
The Munich Information Center for Protein Sequences (MIPS at the Helmholtz Center for Environmental Health, Neuherberg, Germany) has many years of experience in providing annotated collections of biological data. Selected data sets of high relevance, such as model genomes, are subjected to careful manual curation, while the bulk of high-throughput data is annotated by automatic means. High-quality reference resources developed in the past and still actively maintained include Saccharomyces cerevisiae, Neurospora crassa and Arabidopsis thaliana genome databases as well as several protein interaction data sets (MPACT, MPPI and CORUM). More recent projects are PhenomiR, the database on microRNA-related phenotypes, and MIPS PlantsDB for integrative and comparative plant genome research. The interlinked resources SIMAP and PEDANT provide homology relationships as well as up-to-date and consistent annotation for 38,000,000 protein sequences. PPLIPS and CCancer are versatile tools for proteomics and functional genomics interfacing to a database of compilations from gene lists extracted from literature. A novel literature-mining tool, EXCERBT, gives access to structured information on classified relations between genes, proteins, phenotypes and diseases extracted from Medline abstracts by semantic analysis. All databases described here, as well as the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.helmholtz-muenchen.de).
Rawal, Hukam C.; Kumar, Shrawan; Mithra S.V., Amitha; Solanke, Amolkumar U.; Saxena, Swati; Tyagi, Anshika; V., Sureshkumar; Yadav, Neelam R.; Kalia, Pritam; Singh, Narendra Pratap; Singh, Nagendra Kumar; Sharma, Tilak Raj; Gaikwad, Kishor
2017-01-01
Clusterbean (Cyamopsis tetragonoloba L. Taub) is an important industrial, vegetable and forage crop. This crop owes its commercial importance to the presence of guar gum (galactomannans) in its endosperm, which is used as a lubricant in a range of industries. Despite its relevance to agriculture and industry, the genomic resources available in this crop are limited. Therefore, the present study was undertaken to generate an RNA-Seq based transcriptome from leaf, shoot, and flower tissues. A total of 145 million high-quality Illumina reads were assembled using Trinity into 127,706 transcripts and 48,007 non-redundant high-quality (HQ) unigenes. We annotated 79% of the unigenes against Plant Genes from the National Center for Biotechnology Information (NCBI), Swiss-Prot, Pfam, gene ontology (GO) and KEGG databases. Among the annotated unigenes, 30,020 were assigned 116,964 GO terms, 9984 were assigned EC numbers and 6111 were assigned to 137 KEGG pathways. At different fragments per kilobase of transcript per million fragments sequenced (FPKM) levels, gene expression was highest in flower tissue, followed by shoot and leaf. Additionally, we identified 8687 potential simple sequence repeats (SSRs) with an average frequency of one SSR per 8.75 kb. Of 28 SSRs amplified in 21 clusterbean genotypes, 13 markers were polymorphic, with an average polymorphic information content (PIC) of 0.21. We also constructed a database named ‘ClustergeneDB’ for easy retrieval of the unigenes and microsatellite markers. The tissue-specific genes identified and the molecular marker resources developed in this study are expected to aid in the genetic improvement of clusterbean for its end use. PMID:29120386
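For reference, FPKM normalizes a transcript's fragment count by transcript length and library size. A one-function sketch with illustrative numbers:

```python
# FPKM: fragments per kilobase of transcript per million fragments sequenced.
def fpkm(fragments, transcript_length_bp, total_fragments):
    return fragments * 1e9 / (transcript_length_bp * total_fragments)

# A 2 kb transcript with 500 mapped fragments in a 145-million-read library.
print(round(fpkm(500, 2000, 145e6), 3))
```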
Upgrades to the TPSX Material Properties Database
NASA Technical Reports Server (NTRS)
Squire, T. H.; Milos, F. S.; Partridge, Harry (Technical Monitor)
2001-01-01
The TPSX Material Properties Database is a web-based tool that serves as a database for properties of advanced thermal protection materials. TPSX provides an easy user interface for retrieving material property information in a variety of forms, both graphical and text. The primary purpose and advantage of TPSX is to maintain a high-quality source of often-used thermal protection material properties in a convenient, easily accessible form, for distribution to government and aerospace industry communities. Last year a major upgrade to the TPSX web site was completed. This year, through the efforts of researchers at several NASA centers, the Office of the Chief Engineer awarded funds to update and expand the databases in TPSX. The FY01 effort focuses on updating and correcting the Ames and Johnson thermal protection materials databases. In this session we will summarize the improvements made to the web site last year, report on the status of the on-going database updates, describe the planned upgrades for FY02 and FY03, and provide a demonstration of TPSX.
Scale effects of STATSGO and SSURGO databases on flow and water quality predictions
USDA-ARS?s Scientific Manuscript database
Soil information is one of the crucial inputs needed to assess the impacts of existing and alternative agricultural management practices on water quality. Therefore, it is important to understand the effects of spatial scale at which soil databases are developed on water quality evaluations. In the ...
Li, Wei; Li, Wei; Wan, Yumei; Ren, Juanjuan; Li, Ting; Li, Chunbo
2014-10-01
Several systematic reviews have been published about the relationship between the use of selective serotonin reuptake inhibitors (SSRIs) and the risk of suicidal ideation or behavior, but there has been no formal assessment of the quality of these reports. This study aimed to assess the methodological quality of systematic reviews about the relationship of SSRI use and suicidal ideation and behavior, and to provide overall conclusions based on this assessment. Systematic reviews of RCTs that compared SSRIs to placebo and used suicidal ideation or behavior as a key outcome variable were identified by searching Pubmed, Embase, The Cochrane Library, EBSCO, PsycINFO, the Chinese National Knowledge Infrastructure, the Chongqing VIP database for Chinese Technical Periodicals, WANFANG DATA, and the Chinese Biological Medical Literature Database. The methodological quality of included reviews was independently assessed by two expert raters using the 11-item Assessment of Multiple Systematic Reviews (AMSTAR) scale. Twelve systematic reviews and meta-analyses were identified. The inter-rater reliability of the overall AMSTAR quality score was excellent (ICC=0.86), but the inter-rater reliability of 5 of the 11 AMSTAR items was poor (Kappa <0.60). Based on the AMSTAR total score, there was one high-quality review, eight moderate-quality reviews, and three low-quality reviews. The high-quality review and three of the moderate-quality reviews reported a significantly increased risk of suicidal ideation or behavior in the SSRI group compared to the placebo group. Three of the four reviews limited to children and adolescents found a significantly increased risk of suicidal ideation or behavior with SSRI use, which was most evident in teenagers taking paroxetine and in teenagers with depressive disorders. The available evidence suggests that adolescents may experience an increase in suicidal ideation and behavior with SSRI use, particularly those who have a depressive disorder and those treated with paroxetine. However, there are few high-quality reviews on this issue, so some doubt about the evidence remains. The AMSTAR scale may be useful in the ongoing efforts to improve the quality of systematic reviews, but further work is needed on tightening the operational criteria for some of the items in the scale.
Expert database system for quality control
NASA Astrophysics Data System (ADS)
Wang, Anne J.; Li, Zhi-Cheng
1993-09-01
There are more competitors today. Markets are not homogeneous; they are fragmented into increasingly focused niches requiring greater flexibility in the product mix, shorter manufacturing production runs and, above all, higher quality. In this paper the authors identify a real-time expert system as a way to improve plantwide quality management. The quality control expert database system (QCEDS), by integrating the knowledge of experts in operations, quality management and computer systems, uses all information relevant to quality management, facts as well as rules, to determine whether a product meets quality standards. Keywords: expert system, quality control, database
CARDS - comprehensive aerological reference data set. Station history, Version 2.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-03-01
The possibility of anthropogenic climate change has reached the attention of Government officials and researchers. However, one cannot study climate change without climate data. The CARDS project will produce high-quality upper-air data for the research community and for policy-makers. The authors intend to produce a dataset which is: easy to use, as complete as possible, as free of random errors as possible. They will also attempt to identify biases and remove them whenever possible. In this report, they relate progress toward their goal. They created a robust new format for archiving upper-air data, and designed a relational database structure to hold them. The authors have converted 13 datasets to the new format and have archived over 10,000,000 individual soundings from 10 separate data sources. They produce and archive a metadata summary of each sounding they load. They have researched station histories, and have built a preliminary upper-air station history database. They have converted station-sorted data from their primary database into synoptic-sorted data in a parallel database. They have tested and will soon implement an advanced quality-control procedure, capable of detecting and often repairing errors in geopotential height, temperature, humidity, and wind. This unique quality-control method uses simultaneous vertical, horizontal, and temporal checks of several meteorological variables. It can detect errors other methods cannot. This report contains the station histories for the CARDS data set.
Protein Information Resource: a community resource for expert annotation of protein data
Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy
2001-01-01
The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041
A BRDF-BPDF database for the analysis of Earth target reflectances
NASA Astrophysics Data System (ADS)
Breon, Francois-Marie; Maignan, Fabienne
2017-01-01
Land surface reflectance is not isotropic. It varies with the observation geometry, which is defined by the sun and view zenith angles and the relative azimuth. In addition, the reflectance is linearly polarized. The reflectance anisotropy is quantified by the bidirectional reflectance distribution function (BRDF), while its polarization properties are defined by the bidirectional polarization distribution function (BPDF). The POLDER radiometer that flew onboard the PARASOL microsatellite remains the only space instrument that measured numerous samples of the BRDF and BPDF of Earth targets. Here, we describe a database of representative BRDFs and BPDFs derived from the POLDER measurements. From the huge number of data acquired by the spaceborne instrument over a period of 7 years, we selected a set of targets with high-quality observations. The selection aimed for a large number of observations, free of significant cloud or aerosol contamination, acquired in diverse observation geometries with a focus on the backscatter direction that shows the specific hot spot signature. The targets are sorted according to the 16-class International Geosphere-Biosphere Programme (IGBP) land cover classification system, and the target selection aims at spatial representativeness within each class. The database thus provides a set of high-quality BRDF and BPDF samples that can be used to assess the typical variability of natural surface reflectances or to evaluate models. It is available freely from the PANGAEA website (doi:10.1594/PANGAEA.864090). In addition to the database, we provide a visualization and analysis tool based on the Interactive Data Language (IDL). It allows an interactive analysis of the measurements and a comparison against various BRDF and BPDF analytical models. The present paper describes the input data, the selection principles, the database format, and the analysis tool.
The CompTox Chemistry Dashboard - A Community Data Resource for Environmental Chemistry
Despite an abundance of online databases providing access to chemical data, there is increasing demand for high-quality, structure-curated, open data to meet the various needs of the environmental sciences and computational toxicology communities. The U.S. Environmental Protectio...
Code of Federal Regulations, 2014 CFR
2014-07-01
... other libraries or on-line databases and the extent to which teachers, students, and faculty from other... program of high quality; and (2) The extent to which the Center provides academic and career advising...
Code of Federal Regulations, 2013 CFR
2013-07-01
... other libraries or on-line databases and the extent to which teachers, students, and faculty from other... program of high quality; and (2) The extent to which the Center provides academic and career advising...
Code of Federal Regulations, 2011 CFR
2011-07-01
... other libraries or on-line databases and the extent to which teachers, students, and faculty from other... program of high quality; and (2) The extent to which the Center provides academic and career advising...
Code of Federal Regulations, 2012 CFR
2012-07-01
... other libraries or on-line databases and the extent to which teachers, students, and faculty from other... program of high quality; and (2) The extent to which the Center provides academic and career advising...
Code of Federal Regulations, 2010 CFR
2010-07-01
... other libraries or on-line databases and the extent to which teachers, students, and faculty from other... program of high quality; and (2) The extent to which the Center provides academic and career advising...
Application of machine vision to pup loaf bread evaluation
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Chung, O. K.
1996-12-01
Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. The experimental baking of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to develop an objective methodology for bread quality evaluation of the 1994 and 1995 crop wheat breeding line samples. Computer-extracted features of bread crumb grain were studied, using 32 by 32 pixel subimages and features computed for slices at different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database of digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine vision extracted features and breadmaking parameters. Crumb grain scores assigned by human experts correlated more highly with some image features than with breadmaking parameters.
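A sketch of the subimage sampling idea described above: tile a grayscale slice image into 32 by 32 pixel blocks on a grid aligned to the slice's vertical symmetry axis (assumed central here for simplicity). Axis detection and the crumb-grain features themselves are omitted.

```python
# Tile a slice image into 32x32 blocks on a grid anchored at the symmetry
# axis, so every slice yields topologically comparable subimages.
import numpy as np

def subimages(img, block=32):
    h, w = img.shape
    axis = w // 2                          # symmetry axis (assumed central)
    x0 = axis - (axis // block) * block    # align grid columns to the axis
    tiles = []
    for y in range(0, h - block + 1, block):
        for x in range(x0, w - block + 1, block):
            tiles.append(img[y:y + block, x:x + block])
    return tiles

print(len(subimages(np.zeros((128, 200)))))  # -> 24 tiles for a 128x200 slice
```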
Reduction of Powerplex(®) Y23 reaction volume for genotyping buccal cell samples on FTA(TM) cards.
Raziel, Aliza; Dell'Ariccia-Carmon, Aviva; Zamir, Ashira
2015-01-01
PowerPlex(®) Y23 is a novel kit for Y-STR typing that includes new, highly discriminating loci. The Israel DNA Database laboratory has recently adopted it for routine Y-STR analysis. This study examined PCR amplification from a 1.2-mm FTA punch in reduced volumes of 5 and 10 μL. Direct amplification and washing of the FTA punches were examined at different PCR cycle numbers. One short, robotically performed wash was found to improve the quality and the percentage of profiles obtained. The optimal PCR cycle number was determined for the 5 and 10 μL reaction volumes. The percentage of obtained profiles, color balance, and reproducibility were examined. High-quality profiles were achieved on the first attempt in 90% and 88% of the samples amplified in 5 and 10 μL, respectively. Volume reduction to 5 μL has a vast economic impact, especially for DNA database laboratories. © 2014 American Academy of Forensic Sciences.
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
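A minimal sketch of the final training step described above: regress human opinion scores on per-image feature vectors with a kernel regressor. The features and scores below are synthetic placeholders for the bag-of-feature-maps statistics and the LIVE Challenge opinion scores.

```python
# Train a kernel regressor to map per-image feature vectors to mean opinion
# scores (MOS). Synthetic data; a sketch of the pipeline, not the paper's code.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))           # 40 hypothetical NSS features per image
mos = 50 + 10 * X[:, 0] + rng.normal(scale=2, size=300)  # synthetic MOS

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, mos)
print("fit quality (R^2):", round(model.score(X, mos), 3))
```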
Image-based diagnostic aid for interstitial lung disease with secondary data integration
NASA Astrophysics Data System (ADS)
Depeursinge, Adrien; Müller, Henning; Hidki, Asmâa; Poletti, Pierre-Alexandre; Platon, Alexandra; Geissbuhler, Antoine
2007-03-01
Interstitial lung diseases (ILDs) are a relatively heterogeneous group of around 150 illnesses with often very unspecific symptoms. The most complete imaging method for the characterisation of ILDs is high-resolution computed tomography (HRCT) of the chest, but a correct interpretation of these images is difficult even for specialists, as many diseases are rare and thus little experience exists. Moreover, interpreting HRCT images requires knowledge of the context defined by the clinical data of the studied case. A computerised diagnostic aid tool based on HRCT images with associated medical data to retrieve similar cases of ILDs from a dedicated database can bring quick and precious information, for example for emergency radiologists. The experience from a pilot project highlighted the need for a detailed database containing high-quality annotations in addition to clinical data. The state of the art is studied to identify requirements for an image-based diagnostic aid for interstitial lung disease with secondary data integration. The data acquisition steps are detailed. The selection of the most relevant clinical parameters is done in collaboration with lung specialists, based on current literature and on the knowledge bases of computer-based diagnostic decision support systems. In order to perform high-quality annotations of the interstitial lung tissue in the HRCT images, annotation software with its own file format for DICOM images was implemented. A multimedia database is implemented to store ILD cases with clinical data and annotated image series. Cases from the University & University Hospitals of Geneva (HUG) are retrospectively and prospectively collected to populate the database. Currently, 59 cases with certified diagnoses and their clinical parameters are stored in the database, as well as 254 image series, of which 26 have their regions of interest annotated. The available data were used to test primary visual features for the classification of lung tissue patterns. These features show good discriminative properties for the separation of five classes of visual observations.
Lenz, Bernard N.
1997-01-01
An important part of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) Program is the analysis of existing data in each of the NAWQA study areas. The Wisconsin Department of Natural Resources (WDNR) has an extensive database of aquatic benthic macroinvertebrate communities in streams (benthic invertebrates), maintained by the University of Wisconsin-Stevens Point. This database contains data dating back to 1984, including data from streams within the Western Lake Michigan Drainages (WMIC) study area (fig. 1). This report examines the feasibility of USGS scientists supplementing the data they collect with data from the WDNR database when assessing water quality in the study area.
Information management systems for pharmacogenomics.
Thallinger, Gerhard G; Trajanoski, Slave; Stocker, Gernot; Trajanoski, Zlatko
2002-09-01
The value of high-throughput genomic research is dramatically enhanced by association with key patient data. These data are generally available but of disparate quality and typically not directly associated. A system that could bring these disparate data sources into a common resource connected with functional genomic data would be tremendously advantageous. However, the integration of clinical data and the accurate interpretation of the generated functional genomic data require the development of information management systems capable of effectively capturing the data, as well as tools to make that data accessible to the laboratory scientist or to the clinician. In this review, these challenges and current information technology solutions associated with the management, storage and analysis of high-throughput data are highlighted. It is suggested that the development of a pharmacogenomic data management system which integrates public and proprietary databases, clinical datasets, and data mining tools embedded in a high-performance computing environment should include the following components: parallel processing systems, storage technologies, network technologies, databases and database management systems (DBMS), and application services.
Human Ageing Genomic Resources: new and updated databases
Tacutu, Robi; Thornton, Daniel; Johnson, Emily; Budovsky, Arie; Barardo, Diogo; Craig, Thomas; Diana, Eugene; Lehmann, Gilad; Toren, Dmitri; Wang, Jingwei; Fraifeld, Vadim E
2018-01-01
In spite of a growing body of research and data, human ageing remains a poorly understood process. Over 10 years ago we developed the Human Ageing Genomic Resources (HAGR), a collection of databases and tools for studying the biology and genetics of ageing. Here, we present HAGR's main functionalities, highlighting new additions and improvements. HAGR consists of six core databases: (i) the GenAge database of ageing-related genes, in turn composed of a dataset of >300 human ageing-related genes and a dataset with >2000 genes associated with ageing or longevity in model organisms; (ii) the AnAge database of animal ageing and longevity, featuring >4000 species; (iii) the GenDR database with >200 genes associated with the life-extending effects of dietary restriction; (iv) the LongevityMap database of human genetic association studies of longevity with >500 entries; (v) the DrugAge database with >400 ageing or longevity-associated drugs or compounds; (vi) the CellAge database with >200 genes associated with cell senescence. All our databases are manually curated by experts and regularly updated to ensure high-quality data. Cross-links across our databases and to external resources help researchers locate and integrate relevant information. HAGR is freely available online (http://genomics.senescence.info/). PMID:29121237
NGSmethDB 2017: enhanced methylomes and differential methylation.
Lebrón, Ricardo; Gómez-Martín, Cristina; Carpena, Pedro; Bernaola-Galván, Pedro; Barturen, Guillermo; Hackenberg, Michael; Oliver, José L
2017-01-04
The 2017 update of NGSmethDB stores whole-genome methylomes generated from short-read data sets obtained by bisulfite sequencing (WGBS) technology. To generate high-quality methylomes, stringent quality controls were integrated with third-party software, adding also a two-step mapping process to exploit the advantages of the new genome assembly models. The samples were all profiled under constant parameter settings, thus enabling comparative downstream analyses. Besides a significant increase in the number of samples, NGSmethDB now includes two additional data types, which are a valuable resource for the discovery of methylation epigenetic biomarkers: (i) differentially methylated single cytosines; and (ii) methylation segments (i.e. genome regions of homogeneous methylation). The NGSmethDB back-end is now based on MongoDB, a NoSQL hierarchical database using JSON-formatted documents and dynamic schemas, thus accelerating sample comparative analyses. Besides conventional database dumps, track hubs were implemented, which improve database access, visualization in genome browsers, and comparative analyses against third-party annotations. In addition, the database can also be accessed through a RESTful API. Lastly, a Python client and a multiplatform virtual machine allow for program-driven access from the user's desktop. This way, private methylation data can be compared to NGSmethDB without the need to upload them to public servers. Database website: http://bioinfo2.ugr.es/NGSmethDB. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
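A hypothetical sketch of program-driven access to a MongoDB store of methylation documents in the style the abstract describes. The collection name and field names ("methylomes", "sample", "chrom", "pos", "meth_ratio") are invented for illustration; the real schema is documented on the NGSmethDB website and through its RESTful API.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["ngsmethdb_demo"]  # hypothetical local mirror

# Find highly methylated cytosines in a genomic window for one sample.
cursor = db["methylomes"].find(
    {"sample": "liver_sample_1", "chrom": "chr1",
     "pos": {"$gte": 1_000_000, "$lt": 1_100_000},
     "meth_ratio": {"$gte": 0.8}},
    {"_id": 0, "pos": 1, "meth_ratio": 1},
).sort("pos", 1)

for doc in cursor:
    print(doc["pos"], doc["meth_ratio"])
```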
Disbiome database: linking the microbiome to disease.
Janssens, Yorick; Nielandt, Joachim; Bronselaer, Antoon; Debunne, Nathan; Verbeke, Frederick; Wynendaele, Evelien; Van Immerseel, Filip; Vandewynckel, Yves-Paul; De Tré, Guy; De Spiegeleer, Bart
2018-06-04
Recent research has provided fascinating indications and evidence that the health of a host is linked to its microbial inhabitants. Due to the development of high-throughput sequencing technologies, more and more data covering microbial composition changes in different disease types are emerging. However, this information is dispersed over a wide variety of medical and biomedical disciplines. Disbiome is a database which collects and presents published microbiota-disease information in a standardized way. The diseases are classified using the MedDRA classification system and the micro-organisms are linked to their NCBI and SILVA taxonomies. Finally, each study included in the Disbiome database is assessed for its reporting quality using a standardized questionnaire. Disbiome is the first database giving a clear, concise and up-to-date overview of microbial composition differences in diseases, together with the relevant information of the studies published. The strength of this database lies in the combination of references to other databases, which enables both specific and diverse search strategies within Disbiome, and the human annotation, which ensures a simple and structured presentation of the available data.
Military Suicide Research Consortium
2014-10-01
increasing and decreasing (or even ceasing entirely) across different periods of time but still building on itself with each progressive episode...community from suicide. One study found that social norms, high levels of support, identification with role models, and high self-esteem help protect...in follow-up. Conducted quality control checks of clinical data. Monitored safety and adverse events for DSMB reporting. Initiated database
NASA Astrophysics Data System (ADS)
Guion, A., Jr.; Hodgkins, H.
2015-12-01
The Center of Excellence in Remote Sensing Education and Research (CERSER) has implemented three research projects during the summer Research Experience for Undergraduates (REU) program gathering water quality data for local waterways. The data had previously been compiled manually with pen and paper and then entered into a spreadsheet. With the spread of electronic devices capable of interacting with databases, this project pursued the development of an electronic method of entering and manipulating the water quality data. The project focused on the development of an interactive database to gather, display, and analyze data collected from local waterways. The database and entry form were built in MySQL on a PHP server, allowing participants to enter data from anywhere with Internet access. The project then investigated feeding these data to Google Maps to provide labeling and information to users. The NIA server at http://nia.ecsu.edu is used to host the application for download and for storage of the databases. Water Quality Database Team members included the authors plus Derek Morris Jr., Kathryne Burton and Mr. Jeff Wood as mentor.
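A minimal sketch of the storage layer such a project might use. The original system used MySQL behind a PHP entry form; sqlite3 stands in here only so the example is self-contained, and the table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE water_quality (
        id            INTEGER PRIMARY KEY,
        site          TEXT NOT NULL,   -- waterway sampling site
        sampled_at    TEXT NOT NULL,   -- ISO 8601 timestamp
        temp_c        REAL,
        ph            REAL,
        turbidity_ntu REAL,
        lat           REAL,            -- coordinates for a map overlay
        lon           REAL
    )
""")
conn.execute(
    "INSERT INTO water_quality (site, sampled_at, temp_c, ph, turbidity_ntu, lat, lon) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Pasquotank River", "2015-07-14T10:30:00", 27.4, 7.1, 12.5, 36.30, -76.22),
)
for row in conn.execute("SELECT site, ph FROM water_quality"):
    print(row)
```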
Determinants of Post-fire Water Quality in the Western United States
NASA Astrophysics Data System (ADS)
Rust, A.; Saxe, S.; Dolan, F.; Hogue, T. S.; McCray, J. E.
2015-12-01
Large wildfires are becoming increasingly common in the Western United States. Wildfires that consume greater than twenty percent of the watershed impact river water quality. The surface waters of the arid West are limited and in demand by the aquatic ecosystems, irrigated agriculture, and the region's growing human population. A range of studies, typically focused on individual fires, have observed mobilization of contaminants, nutrients (including nitrates), and sediments into receiving streams. Post-fire metal concentrations have also been observed to increase when fires were located in streams close to urban centers. The objective of this work was to assemble an extensive historical water quality database through data mining from federal, state and local agencies into a fire-database. Data from previous studies on individual fires by the co-authors was also included. The fire-database includes observations of water quality, discharge, geospatial and land characteristics from over 200 fire-impacted watersheds in the western U.S. since 1985. Water quality data from burn impacted watersheds was examined for trends in water quality response using statistical analysis. Watersheds where there was no change in water quality after fire were also examined to determine characteristics of the watershed that make it more resilient to fire. The ultimate goal is to evaluate trends in post-fire water quality response and identify key drivers of resiliency and post-fire response. The fire-database will eventually be publicly available.
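A sketch of one statistical check implied by the abstract: comparing a constituent's concentrations before and after a fire within one watershed. The column names, example values, and the choice of the Mann-Whitney U test are assumptions; the authors state only that trends were examined using statistical analysis.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

obs = pd.DataFrame({
    "watershed": ["W1"] * 8,
    "date": pd.to_datetime(["2010-05-01", "2010-08-01", "2011-05-01", "2011-08-01",
                            "2013-05-01", "2013-08-01", "2014-05-01", "2014-08-01"]),
    "nitrate_mg_l": [0.4, 0.5, 0.3, 0.6, 1.9, 2.4, 1.1, 0.9],
})
fire_date = pd.Timestamp("2012-06-15")  # hypothetical burn date

pre = obs.loc[obs["date"] < fire_date, "nitrate_mg_l"]
post = obs.loc[obs["date"] >= fire_date, "nitrate_mg_l"]
stat, p = mannwhitneyu(pre, post, alternative="two-sided")
print(f"U={stat:.1f}, p={p:.3f}")  # a small p suggests a post-fire shift
```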
Building structural similarity database for metric learning
NASA Astrophysics Data System (ADS)
Jin, Guoxin; Pappas, Thrasyvoulos N.
2015-03-01
We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions of "highest," "high," and "good" similarity. We design two subjective tests for data collection: the first relies on ViSiProG to build a database of "identical" clusters, and the second builds a database of image pairs with "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests are generated during the MTC encoding process and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.
ClusterMine360: a database of microbial PKS/NRPS biosynthesis
Conway, Kyle R.; Boddy, Christopher N.
2013-01-01
ClusterMine360 (http://www.clustermine360.ca/) is a database of microbial polyketide and non-ribosomal peptide gene clusters. It takes advantage of crowd-sourcing by allowing members of the community to make contributions while automation is used to help achieve high data consistency and quality. The database currently has >200 gene clusters from >185 compound families. It also features a unique sequence repository containing >10 000 polyketide synthase/non-ribosomal peptide synthetase domains. The sequences are filterable and downloadable as individual or multiple sequence FASTA files. We are confident that this database will be a useful resource for members of the polyketide synthases/non-ribosomal peptide synthetases research community, enabling them to keep up with the growing number of sequenced gene clusters and rapidly mine these clusters for functional information. PMID:23104377
World-wide precision airports for SVS
NASA Astrophysics Data System (ADS)
Schiefele, Jens; Lugsch, Bill; Launer, Marc; Baca, Diana
2004-08-01
Future cockpit and aviation applications require high-quality airport databases. Accuracy, resolution, integrity, completeness, traceability, and timeliness [1] are key requirements. For most aviation applications, attributed vector databases are needed, with geometry based on points, lines, and closed polygons. To document the needs of the aviation industry, RTCA and EUROCAE developed the DO-272/ED-99 document in a joint committee. It states industry needs for data features, attributes, coding, and capture rules for Airport Mapping Databases (AMDB). This paper describes the technical approach Jeppesen has taken to generate a world-wide set of three hundred AMDB airports. All AMDB airports are DO-200A/ED-76 [1] and DO-272/ED-99 [2] compliant. Jeppesen airports have 5 m (CE90) accuracy and 10^-3 integrity. All AMDB data are delivered world-wide in WGS84 coordinates. Jeppesen continually updates the databases.
Spectral signature verification using statistical analysis and text mining
NASA Astrophysics Data System (ADS)
DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.
2016-05-01
In the spectral science community, numerous spectral signatures are stored in databases representing many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature, the textual meta-data and the numerical spectral data, to arrive at a final qualitative assessment. Results associated with the spectral data stored in the Signature Database (SigDB) are presented. The numerical data comprising a sample material's spectrum are validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum are qualitatively analyzed using lexical-analysis text mining. This technique analyzes the syntax of the meta-data to find local learning patterns and trends within the spectral data that are indicative of the test spectrum's quality. Text mining applications have been implemented successfully for security (text encryption/decryption), biomedical, and marketing applications. The text-mining lexical-analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum in a database without the need for an expert user. This method has been compared to other validation methods accepted by the spectral science community and has provided promising results when a baseline spectral signature is present for comparison. The proposed spectral validation method is described from both a practical and an analytical perspective.
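The spectral angle mapper (SAM) comparison at the core of the numerical validation can be written in a few lines: the test spectrum is ranked by its angle to the mean spectrum of an ideal population set, with smaller angles indicating closer agreement. The population data below are synthetic placeholders.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

population = np.random.rand(25, 200)         # 25 reference spectra, 200 bands
mean_spectrum = population.mean(axis=0)
test_spectrum = population[0] * 1.02 + 0.01  # a slightly perturbed member
print(spectral_angle(test_spectrum, mean_spectrum))
```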
Machine learning approaches to analysing textual injury surveillance data: a systematic review.
Vallmuur, Kirsten
2015-06-01
To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Systematic review. The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliography of all relevant articles was examined and associated articles were identified using a snowballing technique. For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content-expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalizability, source data quality, complexity of models, and integration of content and technical knowledge were discussed. The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for analysis of large databases, and involvement of computer scientists in the injury prevention field, along with more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see continued growth and advancement in knowledge of text mining in the injury field. Copyright © 2015 Elsevier Ltd. All rights reserved.
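A sketch of the most commonly reported approach, a Bayesian classifier predicting injury categories from narrative text. The narratives and labels below are invented; the reviewed studies trained on coded surveillance records.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

narratives = [
    "worker fell from ladder while painting ceiling",
    "slipped on wet floor in kitchen and twisted ankle",
    "hand caught in conveyor belt during maintenance",
    "fell down stairs carrying boxes",
]
labels = ["fall", "fall", "machinery", "fall"]

# TF-IDF features feeding a naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(narratives, labels)
print(clf.predict(["finger crushed by press machine"]))
```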
Bansal, Anu; Binkert, Christoph A; Robinson, Malcolm K; Shulman, Lawrence N; Pellerin, Linda; Davison, Brian
2008-08-01
To assess the utility of maintaining and analyzing a quality-management database while investigating a subjectively perceived increase in the incidence of tunneled catheter and port dysfunction in a cohort of oncology outpatients. All 152 patients undergoing lytic therapy (2-4 mg alteplase) of a malfunctioning indwelling central venous catheter (CVC) from January through June 2004 at a single cancer center in the United States were included in a quality-management database. Patients were categorized by time to device failure and the initial method of catheter placement (surgery vs interventional radiology). Data were analyzed after 3 months, and areas of possible improvement were identified and acted upon. Three months of follow-up data were then collected and similarly analyzed. In a 6-month period, 152 patients treated for catheter malfunction received a total of 276 doses of lytic therapy. A 3-month interim analysis revealed a disproportionately high rate (34%) of early catheter malfunction (ECM; <30 days from placement). Postplacement radiographs demonstrated suboptimal catheter positioning in 67% of these patients, all of whom had surgical catheter placement. There was a 50% absolute decrease in the number of patients presenting with catheter malfunction in the period from April through June (P < .001). Evaluation of postplacement radiographs in these patients demonstrated a 50% decrease in the incidence of suboptimal positioning (P < .05). Suboptimal positioning was likely responsible for some, but not all, cases of ECM. Maintenance of a quality-management database is a relatively simple intervention that can have a clear and important impact on the quality and cost of patient care.
Aygin, Dilek; Cengiz, Hande
2018-05-02
Prophylactic mastectomy is used to reduce the incidence of breast cancer in women with a genetic predisposition and a family history of breast cancer, and the rate of application has increased in recent years. Chronic pain, body image, and sexuality may negatively affect quality of life, while patients generally report increased quality of life and satisfaction after prophylactic mastectomy. The aim of this study is to evaluate the results of studies on the quality of life of patients who underwent breast reconstruction after prophylactic mastectomy. For the 1996-2016 literature, we searched the Scopus, Science Direct, PubMed, EBSCO, Cochrane, Medline Complete, Ovid, Springer Link, Google Academic, Taylor & Francis and PsychINFO databases. For the gray literature, the National Thesis Center and ULAKBIM databases were searched. Seven studies complying with the criteria were included in the review; they investigated the effect of prophylactic mastectomy on breast pain, numbness, sexuality and quality of life. The review found that the majority of patients were satisfied with the results of the procedure, although problems with body image perception, pain, movement and sexuality were experienced after the breast surgery. While overall satisfaction with cosmetic results was high, most women were not satisfied with the softness of the reconstructed breasts and had problems with breast hardness, numbness and sex. Therefore, it is very important to inform patients about the complications that may develop after the operation, although there are not enough data about the importance of informing patients before the operation.
Importance of Data Management in a Long-term Biological Monitoring Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Sigurd W; Brandt, Craig C; McCracken, Kitty
2011-01-01
The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program. An existing relational database was adapted and extended to handle biological data. Data modeling enabled the program's database to process, store, and retrieve its data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishment of standards for sampling site names, taxonomic identification, flagging, and other components. There are limitations. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
Importance of Data Management in a Long-Term Biological Monitoring Program
NASA Astrophysics Data System (ADS)
Christensen, Sigurd W.; Brandt, Craig C.; McCracken, Mary K.
2011-06-01
The long-term Biological Monitoring and Abatement Program (BMAP) has always needed to collect and retain high-quality data on which to base its assessments of ecological status of streams and their recovery after remediation. Its formal quality assurance, data processing, and data management components all contribute to meeting this need. The Quality Assurance Program comprehensively addresses requirements from various institutions, funders, and regulators, and includes a data management component. Centralized data management began a few years into the program when an existing relational database was adapted and extended to handle biological data. The database's main data tables and several key reference tables are described. One of the most important related activities supporting long-term analyses was the establishing of standards for sampling site names, taxonomic identification, flagging, and other components. The implemented relational database supports the transmittal of data to the Oak Ridge Environmental Information System (OREIS) as the permanent repository. We also discuss some limitations to our implementation. Some types of program data were not easily accommodated in the central systems, and many possible data-sharing and integration options are not easily accessible to investigators. From our experience we offer data management advice to other biologically oriented long-term environmental sampling and analysis programs.
Grover, Frederick L.; Shroyer, A. Laurie W.; Hammermeister, Karl; Edwards, Fred H.; Ferguson, T. Bruce; Dziuban, Stanley W.; Cleveland, Joseph C.; Clark, Richard E.; McDonald, Gerald
2001-01-01
Objective To review the Department of Veteran Affairs (VA) and the Society of Thoracic Surgeons (STS) national databases over the past 10 years to evaluate their relative similarities and differences, to appraise their use as quality improvement tools, and to assess their potential to facilitate improvements in quality of cardiac surgical care. Summary Background Data The VA developed a mandatory risk-adjusted database in 1987 to monitor outcomes of cardiac surgery at all VA medical centers. In 1989 the STS developed a voluntary risk-adjusted database to help members assess quality and outcomes in their individual programs and to facilitate improvements in quality of care. Methods A short data form on every veteran operated on at each VA medical center is completed and transmitted electronically for analysis of unadjusted and risk-adjusted death and complications, as well as length of stay. Masked, confidential semiannual reports are then distributed to each program’s clinical team and the associated administrator. These reports are also reviewed by a national quality oversight committee. Thus, VA data are used both locally for quality improvement and at the national level with quality surveillance. The STS dataset (217 core fields and 255 extended fields) is transmitted for each patient semiannually to the Duke Clinical Research Institute (DCRI) for warehousing, analysis, and distribution. Site-specific reports are produced with regional and national aggregate comparisons for unadjusted and adjusted surgical deaths and complications, as well as length of stay for coronary artery bypass grafting (CABG), valvular procedures, and valvular/CABG procedures. Both databases use the logistic regression modeling approach. Data for key processes of care are also captured in both databases. Research projects are frequently carried out using each database. Results More than 74,000 and 1.6 million cardiac surgical patients have been entered into the VA and STS databases, respectively. Risk factors that predict surgical death for CABG are very similar in the two databases, as are the odds ratios for most of the risk factors. One major difference is that the VA is 99% male, the STS 71% male. Both databases have shown a significant reduction in the risk-adjusted surgical death rate during the past decade despite the fact that patients have presented with an increased risk factor profile. The ratio of observed to expected deaths decreased from 1.05 to 0.9 for the VA and from 1.5 to 0.9 for the STS. Conclusion It appears that the routine feedback of risk-adjusted data on local performance provided by these programs heightens awareness and leads to self-examination and self-assessment, which in turn improves quality and outcomes. This general quality improvement template should be considered for application in other settings beyond cardiac surgery. PMID:11573040
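A sketch of the risk-adjusted observed-to-expected (O/E) computation both databases rely on: a logistic regression supplies each patient's expected probability of death, and O/E is observed deaths divided by the sum of those probabilities. The predictors here are random placeholders, not the actual VA or STS risk factors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                         # stand-in risk factors
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 2))))  # stand-in deaths

model = LogisticRegression().fit(X, y)         # fitted on reference data
expected = model.predict_proba(X)[:, 1].sum()  # sum of predicted risks
observed = y.sum()
print(f"O/E = {observed / expected:.2f}")      # ~1.0 means performing as expected
```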
NASA Astrophysics Data System (ADS)
Opálková, Marie; Navrátil, Martin; Špunda, Vladimír; Blanc, Philippe; Wald, Lucien
2018-04-01
A database containing 10 min means of solar irradiance measured on a horizontal plane in several ultraviolet and visible bands from July 2014 to December 2016 at three stations in the area of the city of Ostrava (Czech Republic) is presented. The database contains time series of 10 min average irradiances or photosynthetic photon flux densities measured in the following spectral bands: 280-315 nm (UVB); 315-380 nm (UVA); 400-700 nm (photosynthetically active radiation, PAR); 510-700 nm; 600-700 nm; 610-680 nm; 690-780 nm; and 400-1100 nm. A series of meteorological variables, including relative air humidity and air temperature at the surface, is also provided at the same 10 min time step at all three stations, and precipitation is provided for two stations. Air pressure, wind speed, wind direction, and concentrations of the air pollutants PM10, SO2, NOx, NO and NO2 were measured at the 1 h time step at a fourth station owned by the Public Health Institute of Ostrava. The details of the experimental sites and instruments used for the measurements are given. Special attention is given to data quality, and the original approach to data quality that was established is described in detail. About 130 000 records for each of the three stations are available in the database. This database offers a unique ensemble of variables at high temporal resolution and is a reliable source for radiation data in relation to environment and vegetation in highly polluted industrial cities of the northern mid-latitudes. The database has been placed in the PANGAEA repository (https://doi.org/10.1594/PANGAEA.879722) and contains individual data files for each station.
Methods for the guideline-based development of quality indicators--a systematic review
2012-01-01
Background Quality indicators (QIs) are used in many healthcare settings to measure, compare, and improve quality of care. For the efficient development of high-quality QIs, rigorous, approved, and evidence-based development methods are needed. Clinical practice guidelines are a suitable source to derive QIs from, but no gold standard for guideline-based QI development exists. This review aims to identify, describe, and compare methodological approaches to guideline-based QI development. Methods We systematically searched medical literature databases (Medline, EMBASE, and CINAHL) and grey literature. Two researchers selected publications reporting methodological approaches to guideline-based QI development. In order to describe and compare methodological approaches used in these publications, we extracted detailed information on common steps of guideline-based QI development (topic selection, guideline selection, extraction of recommendations, QI selection, practice test, and implementation) to predesigned extraction tables. Results From 8,697 hits in the database search and several grey literature documents, we selected 48 relevant references. The studies were of heterogeneous type and quality. We found no randomized controlled trial or other studies comparing the ability of different methodological approaches to guideline-based development to generate high-quality QIs. The relevant publications featured a wide variety of methodological approaches to guideline-based QI development, especially regarding guideline selection and extraction of recommendations. Only a few studies reported patient involvement. Conclusions Further research is needed to determine which elements of the methodological approaches identified, described, and compared in this review are best suited to constitute a gold standard for guideline-based QI development. For this research, we provide a comprehensive groundwork. PMID:22436067
Layani, Géraldine; Fleet, Richard; Dallaire, Renée; Tounkara, Fatoumata K; Poitras, Julien; Archambault, Patrick; Chauny, Jean-Marc; Ouimet, Mathieu; Gauthier, Josée; Dupuis, Gilles; Tanguay, Alain; Lévesque, Jean-Frédéric; Simard-Racine, Geneviève; Haggerty, Jeannie; Légaré, France
2016-01-01
Evidence-based indicators of quality of care have been developed to improve care and performance in Canadian emergency departments. The feasibility of measuring these indicators has been assessed mainly in urban and academic emergency departments. We sought to assess the feasibility of measuring quality-of-care indicators in rural emergency departments in Quebec. We previously identified rural emergency departments in Quebec that offered medical coverage with hospital beds 24 hours a day, 7 days a week and were located in rural areas or small towns as defined by Statistics Canada. A standardized protocol was sent to each emergency department to collect data on 27 validated quality-of-care indicators in 8 categories: duration of stay, patient safety, pain management, pediatrics, cardiology, respiratory care, stroke and sepsis/infection. Data were collected by local professional medical archivists between June and December 2013. Fifteen (58%) of the 26 emergency departments invited to participate completed data collection. The ability to measure the 27 quality-of-care indicators with the use of databases varied across departments. Centres 2, 5, 6 and 13 used databases for at least 21 of the indicators (78%-92%), whereas centres 3, 8, 9, 11, 12 and 15 used databases for 5 (18%) or fewer of the indicators. On average, the centres were able to measure only 41% of the indicators using heterogeneous databases and manual extraction. The 15 centres collected data from 15 different databases or combinations of databases. The average data collection time for each quality-of-care indicator varied from 5 to 88.5 minutes. The median data collection time was 15 minutes or less for most indicators. Quality-of-care indicators were not easily captured with the use of existing databases in rural emergency departments in Quebec. Further work is warranted to improve standardized measurement of these indicators in rural emergency departments in the province and to generalize the information gathered in this study to other health care environments.
Ehrhart, Karen Holcombe; Witt, L A; Schneider, Benjamin; Perry, Sara Jansen
2011-03-01
We lend theoretical insight to the service climate literature by exploring the joint effects of branch service climate and the internal service provided to the branch (the service received from corporate units to support external service delivery) on customer-rated service quality. We hypothesized that service climate is related to service quality most strongly when the internal service quality received is high, providing front-line employees with the capability to deliver what the service climate motivates them to do. We studied 619 employees and 1,973 customers in 36 retail branches of a bank. We aggregated employee perceptions of the internal service quality received from corporate units and the local service climate and external customer perceptions of service quality to the branch level of analysis. Findings were consistent with the hypothesis that high-quality internal service is necessary for branch service climate to yield superior external customer service quality. PsycINFO Database Record (c) 2011 APA, all rights reserved.
78 FR 28848 - Information Collection Activities; Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Quality's (AHRQ) Hospital Survey on Patient Safety Culture Comparative Database.'' In accordance with the... for Healthcare Research and Quality's (AHRQ) Hospital Survey on Patient Safety Culture Comparative... SOPS) Comparative Database; OMB NO. 0935-
Guidelines for establishing and maintaining construction quality databases.
DOT National Transportation Integrated Search
2006-11-01
The main objective of this study was to develop and present guidelines for State highway agencies (SHAs) in establishing and maintaining database systems geared towards construction quality issues for asphalt and concrete paving projects. To accompli...
Hirabayashi, Satoshi; Nowak, David J
2016-08-01
Trees remove air pollutants through dry deposition processes that depend upon forest structure, meteorology, and air quality, all of which vary across space and time. Employing nationally available forest, weather, air pollution and human population data for 2010, computer simulations were performed for deciduous and evergreen trees with varying leaf area index for rural and urban areas in every county in the conterminous United States. The results populated a national database of annual air pollutant removal, concentration changes, and reductions in adverse health incidences and costs for NO2, O3, PM2.5 and SO2. The developed database enables a first-order approximation of air quality and associated human health benefits provided by trees with any forest configuration anywhere in the conterminous United States over time. A comprehensive national database of tree effects on air quality and human health in the United States was thus developed. Copyright © 2016 Elsevier Ltd. All rights reserved.
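A heavily simplified sketch in the spirit of the dry-deposition calculation (flux = deposition velocity x concentration, accumulated over the in-leaf period). The real simulations resolve hourly meteorology, leaf area index, and pollutant-specific deposition velocities; every number below is an illustrative assumption.

```python
SECONDS_PER_YEAR = 3600 * 24 * 365

def annual_removal_g_per_m2(vd_m_s, conc_ug_m3, in_leaf_fraction=0.6):
    """Pollutant removed per m2 of canopy per year, in grams (simplified)."""
    flux_ug_m2_s = vd_m_s * conc_ug_m3  # dry deposition flux while in leaf
    return flux_ug_m2_s * SECONDS_PER_YEAR * in_leaf_fraction / 1e6

# Example: O3 at 60 ug/m3 with an assumed 0.005 m/s deposition velocity.
print(f"{annual_removal_g_per_m2(0.005, 60.0):.1f} g m^-2 yr^-1")
```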
Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database
NASA Technical Reports Server (NTRS)
Hanke, Jeremy L.
2011-01-01
The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.
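The abstract names the three uncertainty sources but not how they are combined. A common convention for independent sources is a root-sum-square (RSS) combination, sketched below as an assumption rather than the panel's documented method.

```python
import math

def combined_uncertainty(experimental, modeling, interpolation):
    """RSS combination of three independent 1-sigma uncertainty estimates."""
    return math.sqrt(experimental**2 + modeling**2 + interpolation**2)

# Illustrative values for a force coefficient (not from the A106 database):
print(combined_uncertainty(0.02, 0.015, 0.005))  # -> ~0.0255
```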
Process mapping as a framework for performance improvement in emergency general surgery.
DeGirolamo, Kristin; D'Souza, Karan; Hall, William; Joos, Emilie; Garraway, Naisan; Sing, Chad Kim; McLaughlin, Patrick; Hameed, Morad
2017-12-01
Emergency general surgery conditions are often thought of as being too acute for the development of standardized approaches to quality improvement. However, process mapping, a concept that has been applied extensively in manufacturing quality improvement, is now being used in health care. The objective of this study was to create process maps for small bowel obstruction in an effort to identify potential areas for quality improvement. We used the American College of Surgeons Emergency General Surgery Quality Improvement Program pilot database to identify patients who received nonoperative or operative management of small bowel obstruction between March 2015 and March 2016. This database, patient charts and electronic health records were used to create process maps from the time of presentation to discharge. Eighty-eight patients with small bowel obstruction (33 operative; 55 nonoperative) were identified. Patients who received surgery had a complication rate of 32%. The processes of care from the time of presentation to the time of follow-up were highly elaborate and variable in terms of duration; however, the sequences of care were found to be consistent. We used data visualization strategies to identify bottlenecks in care, and they showed substantial variability in terms of operating room access. Variability in the operative care of small bowel obstruction is high and represents an important improvement opportunity in general surgery. Process mapping can identify common themes, even in acute care, and suggest specific performance improvement measures.
ERIC Educational Resources Information Center
Nworji, Alexander O.
2013-01-01
Most organizations spend millions of dollars due to the impact of improperly implemented database application systems as evidenced by poor data quality problems. The purpose of this quantitative study was to use, and extend, the technology acceptance model (TAM) to assess the impact of information quality and technical quality factors on database…
Databases as policy instruments. About extending networks as evidence-based policy.
de Bont, Antoinette; Stoevelaar, Herman; Bal, Roland
2007-12-07
This article seeks to identify the role of databases in health policy. Access to information and communication technologies has changed traditional relationships between the state and professionals, creating new systems of surveillance and control. As a result, databases may have a profound effect on controlling clinical practice. We conducted three case studies to reconstruct the development and use of databases as policy instruments. Each database was intended to be employed to control the use of one particular pharmaceutical in the Netherlands (growth hormone, antiretroviral drugs for HIV and Taxol, respectively). We studied the archives of the Dutch Health Insurance Board, conducted in-depth interviews with key informants and organized two focus groups, all focused on the use of databases both in policy circles and in clinical practice. Our results demonstrate that policy makers hardly used the databases, either for cost control or for quality assurance. Further analysis revealed that these databases facilitated self-regulation and quality assurance by (national) bodies of professionals, resulting in restrictive prescription behavior amongst physicians. The databases fulfill control functions that were formerly located within the policy realm. They facilitate collaboration between policy makers and physicians, since they enable quality assurance by professionals. Delegating regulatory authority downwards into a network of physicians who control the use of pharmaceuticals seems to be a good alternative to centralized control on the basis of monitoring data.
National Databases for Neurosurgical Outcomes Research: Options, Strengths, and Limitations.
Karhade, Aditya V; Larsen, Alexandra M G; Cote, David J; Dubois, Heloise M; Smith, Timothy R
2017-08-05
Quality improvement, value-based care delivery, and personalized patient care depend on robust clinical, financial, and demographic data streams of neurosurgical outcomes. The neurosurgical literature lacks a comprehensive review of large national databases. To assess the strengths and limitations of various resources for outcomes research in neurosurgery. A review of the literature was conducted to identify surgical outcomes studies using national data sets. The databases were assessed for the availability of patient demographics and clinical variables, longitudinal follow-up of patients, strengths, and limitations. The number of unique patients contained within each data set ranged from thousands (Quality Outcomes Database [QOD]) to hundreds of millions (MarketScan). Databases with both clinical and financial data included PearlDiver, Premier Healthcare Database, Vizient Clinical Data Base and Resource Manager, and the National Inpatient Sample. Outcomes collected by databases included patient-reported outcomes (QOD); 30-day morbidity, readmissions, and reoperations (National Surgical Quality Improvement Program); and disease incidence and disease-specific survival (Surveillance, Epidemiology, and End Results-Medicare). The strengths of large databases included large numbers of rare pathologies and multi-institutional nationally representative sampling; the limitations of these databases included variable data veracity, variable data completeness, and missing disease-specific variables. The improvement of existing large national databases and the establishment of new registries will be crucial to the future of neurosurgical outcomes research. Copyright © 2017 by the Congress of Neurological Surgeons
NASA Astrophysics Data System (ADS)
Ray, E.; McCabe, D.; Sheldon, S.; Jankowski, K.; Haselton, L.; Luck, M.; van Houten, J.
2009-12-01
The Vermont EPSCoR Streams Project engages a diverse group of undergraduates, high school students, and their teachers in hands-on water quality research and exposes them to the process of science. The project aims to (1) recruit students to science careers and (2) create a water quality database comprised of high-quality data collected by undergraduates and high school groups. The project is the training and outreach mechanism of the Complex Systems Modeling for Environmental Problem Solving research program, an NSF-funded program at the University of Vermont (UVM) that provides computational strategies and fresh approaches for understanding how natural and built environments interact. The Streams Project trains participants to collect and analyze data from streams throughout Vermont and at limited sites in Connecticut, New York, and Puerto Rico. Participants contribute their data to an online database and use it to complete individual research projects that focus on the effect of land use and precipitation patterns on selected measures of stream water quality. All undergraduates and some high school groups are paired with a mentor, who is either a graduate student or a faculty member at UVM or other college. Each year, undergraduate students and high school groups are trained to (1) collect water and macroinvertebrate samples from streams, (2) analyze water samples for total phosphorus, bacteria, and total suspended solids in an analytical laboratory, and/or (3) use geographic information systems (GIS) to assess landscape-level data for their watersheds. After training, high school groups collect samples from stream sites on a twice-monthly basis while undergraduates conduct semi-autonomous field and laboratory research. High school groups monitor sites in two watersheds with contrasting land uses. Undergraduate projects are shaped by the interests of students and their mentors. Contribution to a common database provides students with the option to expand the scope of their analyses and produce more powerful results than any one team could have produced alone. The year of research culminates in a final project that is presented at a symposium. The project is in its second year and has received positive feedback from outside reviewers. Participants leave the project with a greater understanding of watershed research. Immediate outcomes include nearly 60 participant projects, an online publicly-accessible shared dataset, and Web-based macroinvertebrate identification keys. We found that the best training strategies make the material and concepts explicit. To this end, the project is enhancing its Web interface, which will soon include tutorials on water quality and an interactive map through which participants will have access to watershed-level spatial information such as land use, bedrock, soils, and transportation infrastructure. Ultimately, the data from the project can inform public debate and aid resource managers in implementing watershed restoration and protection projects.
Kim, Ki Hwan; Do, Won-Joon; Park, Sung-Hong
2018-05-04
The routine MRI scan protocol consists of multiple pulse sequences that acquire images of varying contrast. Since high frequency contents such as edges are not significantly affected by image contrast, down-sampled images in one contrast may be improved by high resolution (HR) images acquired in another contrast, reducing the total scan time. In this study, we propose a new deep learning framework that uses HR MR images in one contrast to generate HR MR images from highly down-sampled MR images in another contrast. The proposed convolutional neural network (CNN) framework consists of two CNNs: (a) a reconstruction CNN for generating HR images from the down-sampled images using HR images acquired with a different MRI sequence and (b) a discriminator CNN for improving the perceptual quality of the generated HR images. The proposed method was evaluated using a public brain tumor database and in vivo datasets. The performance of the proposed method was assessed in tumor and no-tumor cases separately, with perceptual image quality being judged by a radiologist. To overcome the challenge of training the network with a small number of available in vivo datasets, the network was pretrained using the public database and then fine-tuned using the small number of in vivo datasets. The performance of the proposed method was also compared to that of several compressed sensing (CS) algorithms. Incorporating HR images of another contrast improved the quantitative assessments of the generated HR image in reference to ground truth. Also, incorporating a discriminator CNN yielded perceptually higher image quality. These results were verified in regions of normal tissue as well as tumors for various MRI sequences from pseudo k-space data generated from the public database. The combination of pretraining with the public database and fine-tuning with the small number of real k-space datasets enhanced the performance of CNNs in in vivo application compared to training CNNs from scratch. The proposed method outperformed the compressed sensing methods. The proposed method can be a good strategy for accelerating routine MRI scanning. © 2018 American Association of Physicists in Medicine.
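A minimal PyTorch sketch of the two-CNN idea: a reconstruction network that takes the down-sampled target-contrast image together with the HR reference-contrast image, and a discriminator that scores perceptual realism. The layer sizes and architecture are illustrative assumptions, not the authors' exact networks.

```python
import torch
import torch.nn as nn

class ReconstructionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # 2 input channels: LR target contrast + HR reference contrast
            nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),  # residual HR estimate
        )

    def forward(self, lr_target, hr_reference):
        x = torch.cat([lr_target, hr_reference], dim=1)
        return lr_target + self.net(x)  # learn the missing high-frequency detail

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, img):
        return self.net(img)  # real/fake logit for adversarial training

G, D = ReconstructionCNN(), Discriminator()
lr = torch.randn(1, 1, 128, 128)   # down-sampled target-contrast image (on the HR grid)
ref = torch.randn(1, 1, 128, 128)  # HR image from another contrast
hr_est = G(lr, ref)
print(hr_est.shape, D(hr_est).shape)
```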
Ang, Darwin N; Behrns, Kevin E
2013-07-01
The emphasis on high-quality care has spawned the development of quality programs, most of which focus on broad outcome measures across a diverse group of providers. Our aim was to investigate the clinical outcomes for a department of surgery with multiple service lines of patient care using a relational database. Mortality, length of stay (LOS), patient safety indicators (PSIs), and hospital-acquired conditions were examined for each service line. Expected values for mortality and LOS were derived from University HealthSystem Consortium regression models, whereas expected values for PSIs were derived from Agency for Healthcare Research and Quality regression models. Overall, 5200 patients were evaluated from the months of January through May of both 2011 (n = 2550) and 2012 (n = 2650). The overall observed-to-expected (O/E) ratio of mortality improved from 1.03 to 0.92. The overall O/E ratio for LOS improved from 0.92 to 0.89. PSIs that predicted mortality included postoperative sepsis (O/E:1.89), postoperative respiratory failure (O/E:1.83), postoperative metabolic derangement (O/E:1.81), and postoperative deep vein thrombosis or pulmonary embolus (O/E:1.8). Mortality and LOS can be improved by using a relational database with outcomes reported to specific service lines. Service line quality can be influenced by distribution of frequent reports, group meetings, and service line-directed interventions.
Long, Linda; Briscoe, Simon; Cooper, Chris; Hyde, Chris; Crathorne, Louise
2015-01-01
Lateral elbow tendinopathy (LET) is a common complaint causing characteristic pain in the lateral elbow and upper forearm, and tenderness of the forearm extensor muscles. It is thought to be an overuse injury and can have a major impact on the patient's social and professional life. The condition is challenging to treat and prone to recurrent episodes. The average duration of a typical episode ranges from 6 to 24 months, with most (89%) reporting recovery by 1 year. This systematic review aims to summarise the evidence concerning the clinical effectiveness and cost-effectiveness of conservative interventions for LET. A comprehensive search was conducted from database inception to 2012 in a range of databases including MEDLINE, EMBASE and Cochrane Databases. We conducted an overview of systematic reviews to summarise the current evidence concerning the clinical effectiveness and a systematic review for the cost-effectiveness of conservative interventions for LET. We identified additional randomised controlled trials (RCTs) that could contribute further evidence to existing systematic reviews. We searched MEDLINE, EMBASE, Allied and Complementary Medicine Database, Cumulative Index to Nursing and Allied Health Literature, Web of Science, The Cochrane Library and other important databases from inception to January 2013. A total of 29 systematic reviews published since 2003 matched our inclusion criteria. These were quality appraised using the Assessment of Multiple Systematic Reviews (AMSTAR) checklist; five were considered high quality and evaluated using a Grading of Recommendations, Assessment, Development and Evaluation approach. A total of 36 RCTs were identified that were not included in a systematic review and 29 RCTs were identified that had only been evaluated in an included systematic review of intermediate/low quality. These were then mapped to existing systematic reviews where further evidence could provide updates. Two economic evaluations were identified. The summary of findings from the review was based only on high-quality evidence (scoring of > 5 AMSTAR). Other limitations were that identified RCTs were not quality appraised and dichotomous outcomes were also not considered. Economic evaluations took effectiveness estimates from trials that had small sample sizes leading to uncertainty surrounding the effect sizes reported. This, in turn, led to uncertainty of the reported cost-effectiveness and, as such, no robust recommendations could be made in this respect. Clinical effectiveness evidence from the high-quality systematic reviews identified in this overview continues to suggest uncertainty as to the effectiveness of many conservative interventions for the treatment of LET. Although new RCT evidence has been identified with either placebo or active controls, there is uncertainty as to the size of effects reported within them because of the small sample size. Conclusions regarding cost-effectiveness are also unclear. We consider that, although updated or new systematic reviews may also be of value, the primary focus of future work should be on conducting large-scale, good-quality clinical trials using a core set of outcome measures (for defined time points) and appropriate follow-up. Subgroup analysis of existing RCT data may be beneficial to ascertain whether or not certain patient groups are more likely to respond to treatments. This study is registered as PROSPERO CRD42013003593. The National Institute for Health Research Health Technology Assessment programme.
Liu, Ken H.; Walker, Douglas I.; Uppal, Karan; Tran, ViLinh; Rohrbeck, Patricia; Mallon, Timothy M.; Jones, Dean P.
2016-01-01
Objective To maximize detection of serum metabolites with high-resolution metabolomics (HRM). Methods Department of Defense Serum Repository (DoDSR) samples were analyzed using ultra-high resolution mass spectrometry with three complementary chromatographic phases and four ionization modes. Chemical coverage was evaluated by number of ions detected and accurate mass matches to a human metabolomics database. Results Individual HRM platforms provided accurate mass matches for up to 58% of the KEGG metabolite database. Combining two analytical methods increased matches to 72%, and included metabolites in most major human metabolic pathways and chemical classes. Detection and feature quality varied by analytical configuration. Conclusions Dual chromatography HRM with positive and negative electrospray ionization provides an effective generalized method for metabolic assessment of military personnel. PMID:27501105
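The accurate-mass matching step at the core of this evaluation is easy to illustrate. Below is a minimal Python sketch, assuming a tiny hypothetical table of reference [M+H]+ masses in place of the real KEGG metabolite database and a ±5 ppm tolerance; it shows the matching idea only, not the authors' pipeline.

```python
# Minimal sketch of accurate-mass matching: each detected m/z value is
# compared against database metabolite masses within a +/- 5 ppm
# tolerance. KEGG_MASSES is a tiny hypothetical stand-in for a real
# metabolite database (monoisotopic [M+H]+ masses).

KEGG_MASSES = {
    "glucose [M+H]+": 181.0707,
    "citrate [M+H]+": 193.0343,
    "alanine [M+H]+": 90.0550,
}

def match_features(mz_values, database, ppm_tol=5.0):
    """Return (m/z, metabolite, ppm error) matches within tolerance."""
    matches = []
    for mz in mz_values:
        for name, ref_mass in database.items():
            ppm_error = (mz - ref_mass) / ref_mass * 1e6
            if abs(ppm_error) <= ppm_tol:
                matches.append((mz, name, round(ppm_error, 2)))
    return matches

print(match_features([181.0711, 90.0548], KEGG_MASSES))
```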
Fazio, Simone; Garraín, Daniel; Mathieux, Fabrice; De la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda
2015-01-01
Under the framework of the European Platform on Life Cycle Assessment, the European Reference Life-Cycle Database (ELCD - developed by the Joint Research Centre of the European Commission), provides core Life Cycle Inventory (LCI) data from front-running EU-level business associations and other sources. The ELCD contains energy-related data on power and fuels. This study describes the methods to be used for the quality analysis of energy data for European markets (available in third-party LC databases and from authoritative sources) that are, or could be, used in the context of the ELCD. The methodology was developed and tested on the energy datasets most relevant for the EU context, derived from GaBi (the reference database used to derive datasets for the ELCD), Ecoinvent, E3 and Gemis. The criteria for the database selection were based on the availability of EU-related data, the inclusion of comprehensive datasets on energy products and services, and the general approval of the LCA community. The proposed approach was based on the quality indicators developed within the International Reference Life Cycle Data System (ILCD) Handbook, further refined to facilitate their use in the analysis of energy systems. The overall Data Quality Rating (DQR) of the energy datasets can be calculated by summing up the quality rating (ranging from 1 to 5, where 1 represents very good, and 5 very poor quality) of each of the quality criteria indicators, divided by the total number of indicators considered. The quality of each dataset can be estimated for each indicator, and then compared with the different databases/sources. The results can be used to highlight the weaknesses of each dataset and can be used to guide further improvements to enhance the data quality with regard to the established criteria. This paper describes the application of the methodology to two exemplary datasets, in order to show the potential of the methodological approach. The analysis helps LCA practitioners to evaluate the usefulness of the ELCD datasets for their purposes, and dataset developers and reviewers to derive information that will help improve the overall DQR of databases.
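The DQR calculation described above reduces to a few lines of code. A minimal sketch, assuming six hypothetical indicator ratings for one energy dataset; function and variable names are illustrative:

```python
def data_quality_rating(indicator_scores):
    """Overall DQR as described above: the sum of per-indicator quality
    ratings (1 = very good ... 5 = very poor) divided by the number of
    indicators considered."""
    if not indicator_scores:
        raise ValueError("at least one quality indicator is required")
    if any(not 1 <= s <= 5 for s in indicator_scores):
        raise ValueError("each rating must lie between 1 and 5")
    return sum(indicator_scores) / len(indicator_scores)

# Hypothetical ratings of one energy dataset against six ILCD-style
# indicators (e.g., technological/geographical/time representativeness).
print(data_quality_rating([1, 2, 2, 3, 1, 2]))  # -> 1.83
```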
de Bruin, Marijn; Viechtbauer, Wolfgang; Hospers, Harm J; Schaalma, Herman P; Kok, Gerjo
2009-11-01
Clinical trials of behavioral interventions seek to enhance evidence-based health care. However, if the quality of standard care provided to control conditions varies between studies and affects outcomes, intervention effects cannot be directly interpreted or compared. The objective of the present study was to examine whether standard care quality (SCQ) could be reliably assessed, whether it varies between studies of highly active antiretroviral HIV-adherence interventions, and whether it is related to the proportion of patients achieving an undetectable viral load ("success rate"). Databases were searched for relevant articles. Authors of selected studies retrospectively completed a checklist with standard care activities, which were coded to compute SCQ scores. The relationship between SCQ and the success rates was examined using meta-regression. The main outcome measures were Cronbach's alpha, variability in SCQ, and the relation between SCQ and success rate. Reliability of the SCQ instrument was high (Cronbach's alpha = .91). SCQ scores ranged from 3.7 to 27.8 (total range = 0-30) and were highly predictive of success rate (p = .002). Variation in SCQ provided to control groups may substantially influence effect sizes of behavior change interventions. Future trials should therefore assess and report SCQ, and meta-analyses should control for variability in SCQ, thereby producing more accurate estimates of the effectiveness of behavior change interventions. PsycINFO Database Record (c) 2009 APA, all rights reserved.
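The meta-regression linking SCQ to success rates can be illustrated generically. The sketch below uses statsmodels with fabricated study-level data and inverse-variance weights; it is a simplified stand-in for the authors' actual model, not a reproduction of it.

```python
import numpy as np
import statsmodels.api as sm

# Fabricated study-level data: SCQ scores, control-group success rates
# (proportion with undetectable viral load), and control-group sizes.
scq = np.array([5.0, 12.0, 18.5, 24.0, 27.5])
success = np.array([0.35, 0.42, 0.55, 0.61, 0.70])
n = np.array([40, 55, 38, 62, 50])

# Weight each study by the inverse variance of its observed proportion.
weights = n / (success * (1 - success))

X = sm.add_constant(scq)              # intercept + SCQ predictor
fit = sm.WLS(success, X, weights=weights).fit()
print(fit.params)                     # intercept and SCQ slope
print(fit.pvalues)                    # significance of the SCQ effect
```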
The collection of chemical structures and associated experimental data for QSAR modeling is facilitated by the increasing number and size of public databases. However, the performance of QSAR models highly depends on the quality of the data used and the modeling methodology. The ...
Design and Development of Web-Based Information Literacy Tutorials
ERIC Educational Resources Information Center
Su, Shiao-Feng; Kuo, Jane
2010-01-01
The current study conducts a thorough content analysis of recently built or up-to-date high-quality web-based information literacy tutorials contributed by academic libraries in a peer-reviewed database, PRIMO. This research analyzes the topics/skills PRIMO tutorials consider essential and the teaching strategies they consider effective. The…
SPATIALLY-BALANCED SURVEY DESIGN FOR GROUNDWATER USING EXISTING WELLS
Many states have a monitoring program to evaluate the water quality of groundwater across the state. These programs rely on existing wells for access to the groundwater, due to the high cost of drilling new wells. Typically, a state maintains a database of all well locations, in...
Intubation Success in Critical Care Transport: A Multicenter Study.
Reichert, Ryan J; Gothard, Megan; Gothard, M David; Schwartz, Hamilton P; Bigham, Michael T
2018-02-21
Tracheal intubation (TI) is a lifesaving critical care skill. Failed TI attempts, however, can harm patients. Critical care transport (CCT) teams function as the first point of critical care contact for patients being transported to tertiary medical centers for specialized surgical, medical, and trauma care. The Ground and Air Medical qUality in Transport (GAMUT) Quality Improvement Collaborative uses a quality metric database to track CCT quality metric performance, including TI. We sought to describe TI among GAMUT participants, with the hypothesis that CCT teams would perform better than reported for other prehospital settings and similarly to in-hospital TI success rates. The GAMUT Database is a global, voluntary database for tracking consensus quality metric performance among CCT programs performing neonatal, pediatric, and adult transports. The TI-specific quality metrics are "first attempt TI success" and "definitive airway sans hypoxia/hypotension on first attempt (DASH-1A)." The 2015 GAMUT Database was queried and analysis included patient age, program type, and intubation success rate. Analysis included simple statistics and Pearson chi-square with Bonferroni-adjusted post hoc z tests (significance = p < 0.05 via two-sided testing). Overall, 85,704 patient contacts (neonatal n [%] = 12,664 [14.8%], pediatric n [%] = 28,992 [33.8%], adult n [%] = 44,048 [51.4%]) were included, with 4,036 (4.7%) TI attempts. First attempt TI success was lowest in neonates (59.3%, 617 attempts), better in pediatrics (81.7%, 519 attempts), and best in adults (87%, 2900 attempts), p < 0.001. Adult-focused CCT teams had higher overall first attempt TI success versus pediatric- and neonatal-focused teams (86.9% vs. 63.5%, p < 0.001) and also higher pediatric first attempt TI success (86.5% vs. 75.3%, p < 0.001). DASH-1A rates were lower across all patient types (neonatal = 51.9%, pediatric = 74.3%, adult = 79.8%). CCT TI is not uncommon, and rates of TI and DASH-1A success are higher in adult patients and adult-focused CCT teams. TI success rates are higher in CCT than in other prehospital settings, but lower than in-hospital TI success rates. Identifying factors influencing TI success among high performers should inform best practice strategies for TI.
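The reported analysis, a Pearson chi-square across age groups with Bonferroni-adjusted pairwise comparisons, can be approximated as follows. The counts are reconstructed from the percentages quoted above, and two-sample proportion z tests stand in for the post hoc tests; treat this as a sketch, not the study's code.

```python
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

# First-attempt TI successes and attempts by age group, reconstructed
# from the figures quoted above (59.3% of 617, 81.7% of 519, 87% of 2900).
successes = np.array([int(0.593 * 617), int(0.817 * 519), int(0.87 * 2900)])
attempts = np.array([617, 519, 2900])

# Overall 3x2 chi-square test of success versus failure across groups.
table = np.vstack([successes, attempts - successes]).T
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")

# Bonferroni-adjusted pairwise comparisons: alpha = 0.05 / 3 pairs.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    _, p_pair = proportions_ztest(successes[[i, j]], attempts[[i, j]])
    print(f"groups {i} vs {j}: significant = {p_pair < 0.05 / 3}")
```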
[Data validation methods and discussion on Chinese materia medica resource survey].
Zhang, Yue; Ma, Wei-Feng; Zhang, Xiao-Bo; Zhu, Shou-Dong; Guo, Lan-Ping; Wang, Xing-Xing
2013-07-01
Since the beginning of the fourth national survey of Chinese materia medica resources, 22 provinces have conducted pilot surveys. The survey teams have reported an immense volume of data, which places very high demands on the construction of the database system. To ensure data quality, it is necessary to check and validate the data in the database system. Data validation is an important method for ensuring the validity, integrity and accuracy of census data. This paper comprehensively introduces the data validation system of the database for the fourth national survey of Chinese materia medica resources, and further refines the design ideas and procedures for data validation. The purpose of this study is to help the survey work proceed smoothly.
USDA-ARS?s Scientific Manuscript database
In this study, we generated a linkage map containing 1,151,856 high quality SNPs between Mo17 and B73, which were verified in the maize intermated B73'×'Mo17 (IBM) Syn10 population. This resource is an excellent complement to existing maize genetic maps available in an online database (iPlant, http:...
Validation of a MALDI-TOF MS Biotyper database optimized for anaerobic bacteria: the ENRIA project.
Veloo, A C M; Jean-Pierre, H; Justesen, U S; Morris, T; Urban, E; Wybo, I; Kostrzewa, M; Friedrich, A W
2018-03-12
Within the ENRIA project, several 'expertise laboratories' collaborated in order to optimize the identification of clinical anaerobic isolates by using a widely available platform, the Biotyper Matrix Assisted Laser Desorption Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF MS). Main Spectral Profiles (MSPs) of well-characterized anaerobic strains were added to one of the latest updates of the Biotyper database, db6903 (the V6 database), for common use. MSPs of anaerobic strains nominated for addition to the Biotyper database are included in this validation. In this study, we validated the optimized database (db5989 [V5 database] + ENRIA MSPs) using 6309 anaerobic isolates. Using the V5 database, 71.1% of the isolates could be identified with high confidence, 16.9% with low confidence and 12.0% could not be identified. Including the MSPs added to the V6 database and all MSPs created within the ENRIA project, the proportion of strains identified with high confidence increased to 74.8% and 79.2%, respectively. Strains that could not be identified using MALDI-TOF MS decreased to 10.4% and 7.3%, respectively. The observed increase in high confidence identifications differed per genus. For Bilophila wadsworthia, Prevotella spp., gram-positive anaerobic cocci and other less commonly encountered species, more strains were identified with higher confidence. A subset of the non-identified strains (42.1%) was identified using 16S rDNA gene sequencing. The obtained identities demonstrated that strains could not be identified either because the spectra generated were of insufficient quality or because no MSP of the encountered species was present in the database. Undoubtedly, the ENRIA project has successfully increased the number of anaerobic isolates that can be identified with high confidence. We therefore recommend further expansion of the database to include less frequently isolated species, as this would also allow us to gain valuable insight into the clinical relevance of these less common anaerobic bacteria. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
The UniProtKB guide to the human proteome
Breuza, Lionel; Poux, Sylvain; Estreicher, Anne; Famiglietti, Maria Livia; Magrane, Michele; Tognolli, Michael; Bridge, Alan; Baratin, Delphine; Redaschi, Nicole
2016-01-01
Advances in high-throughput technologies allow researchers to routinely perform whole genome and proteome analysis. For this purpose, they need high-quality resources providing comprehensive gene and protein sets for their organisms of interest. Using the example of the human proteome, we will describe the content of a complete proteome in the UniProt Knowledgebase (UniProtKB). We will show how manual expert curation of UniProtKB/Swiss-Prot is complemented by expert-driven automatic annotation to build a comprehensive, high-quality and traceable resource. We will also illustrate how the complexity of the human proteome is captured and structured in UniProtKB. Database URL: www.uniprot.org PMID:26896845
Bitsch, A; Jacobi, S; Melber, C; Wahnschaffe, U; Simetska, N; Mangelsdorf, I
2006-12-01
A database for repeated dose toxicity data has been developed. Studies were selected by data quality. Review documents or risk assessments were used to obtain a pre-screened selection of available valid data. Chemicals were restricted to rather simple structures so that well-defined chemical categories could be formed. The database consists of three core data sets for each chemical: (1) structural features and physico-chemical data, (2) data on study design, (3) study results. To allow consistent queries, a high degree of standardization was required: categories and glossaries were developed for relevant parameters. At present, the database consists of 364 chemicals investigated in 1018 studies, which resulted in a total of 6002 specific effects. Standard queries have been developed, which allow analysis of the influence of structural features or physico-chemical data on LOELs, target organs and effects. Furthermore, the database can be used as an expert system. First queries have shown that the database is a very valuable tool.
Development of an electronic database for Acute Pain Service outcomes
Love, Brandy L; Jensen, Louise A; Schopflocher, Donald; Tsui, Ban CH
2012-01-01
BACKGROUND: Quality assurance is increasingly important in the current health care climate. An electronic database can be used for tracking patient information and as a research tool to provide quality assurance for patient care. OBJECTIVE: An electronic database was developed for the Acute Pain Service, University of Alberta Hospital (Edmonton, Alberta) to record patient characteristics, identify at-risk populations, compare treatment efficacies and guide practice decisions. METHOD: Steps in the database development involved identifying the goals for use, relevant variables to include, and a plan for data collection, entry and analysis. Protocols were also created for data cleaning and quality control. The database was evaluated with a pilot test using existing data to assess data collection burden, accuracy and functionality of the database. RESULTS: A literature review resulted in an evidence-based list of demographic, clinical and pain management outcome variables to include. Time to assess patients and collect the data was 20 min to 30 min per patient. Limitations were primarily software-related, although initial data collection completion was only 65% and accuracy of data entry was 96%. CONCLUSIONS: The electronic database was found to be relevant and functional for the identified goals of data storage and research. PMID:22518364
NASA Astrophysics Data System (ADS)
García-Mayordomo, Julián; Martín-Banda, Raquel; Insua-Arévalo, Juan M.; Álvarez-Gómez, José A.; Martínez-Díaz, José J.; Cabral, João
2017-08-01
Active fault databases are a very powerful and useful tool in seismic hazard assessment, particularly when singular faults are considered seismogenic sources. Active fault databases are also a very relevant source of information for earth scientists, earthquake engineers and even teachers or journalists. Hence, active fault databases should be updated and thoroughly reviewed on a regular basis in order to keep a standard quality and uniformed criteria. Desirably, active fault databases should somehow indicate the quality of the geological data and, particularly, the reliability attributed to crucial fault-seismic parameters, such as maximum magnitude and recurrence interval. In this paper we explain how we tackled these issues during the process of updating and reviewing the Quaternary Active Fault Database of Iberia (QAFI) to its current version 3. We devote particular attention to describing the scheme devised for classifying the quality and representativeness of the geological evidence of Quaternary activity and the accuracy of the slip rate estimation in the database. Subsequently, we use this information as input for a straightforward rating of the level of reliability of maximum magnitude and recurrence interval fault seismic parameters. We conclude that QAFI v.3 is a much better database than version 2 either for proper use in seismic hazard applications or as an informative source for non-specialized users. However, we already envision new improvements for a future update.
Low dose CT image restoration using a database of image patches
NASA Astrophysics Data System (ADS)
Ha, Sungsoo; Mueller, Klaus
2015-01-01
Reducing the radiation dose in CT imaging has become an active research topic and many solutions have been proposed to remove the significant noise and streak artifacts in the reconstructed images. Most of these methods operate within the domain of the image that is subject to restoration. This, however, poses limitations on the extent of filtering possible. We advocate taking into consideration the vast body of external knowledge that exists in the domain of already acquired medical CT images, since after all, this is what radiologists do when they examine these low quality images. We can incorporate this knowledge by creating a database of prior scans, either of the same patient or a diverse corpus of different patients, to assist in the restoration process. Our paper follows up on our previous work that used a database of images. Using images, however, is challenging since it requires tedious and error-prone registration and alignment. Our new method eliminates these problems by storing a diverse set of small image patches in conjunction with a localized similarity matching scheme. We also empirically show that it is sufficient to store these patches without anatomical tags since their statistics are sufficiently strong to yield good similarity matches from the database and, as a direct effect, produce image restorations of high quality. A final experiment demonstrates that our global database approach can recover image features that are difficult to preserve with conventional denoising approaches.
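The heart of the method, extracting small patches and matching them against a patch database, can be sketched briefly. The code below runs a scikit-learn nearest-neighbour search over patches from random stand-in images; the paper's localized matching scheme and the actual restoration step are simplified away.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Stand-ins for real scans: a prior image supplying the patch database
# and a noisy low-dose image to be restored.
prior_image = rng.random((64, 64))
noisy_image = prior_image + rng.normal(0.0, 0.1, (64, 64))

patch_size = (8, 8)
db_patches = extract_patches_2d(prior_image, patch_size,
                                max_patches=2000, random_state=0)
nn = NearestNeighbors(n_neighbors=1).fit(
    db_patches.reshape(len(db_patches), -1))

# Each noisy patch is replaced by its most similar database patch
# (a crude version of the paper's localized similarity matching).
noisy_patches = extract_patches_2d(noisy_image, patch_size,
                                   max_patches=500, random_state=1)
_, idx = nn.kneighbors(noisy_patches.reshape(len(noisy_patches), -1))
restored_patches = db_patches[idx.ravel()]
print(restored_patches.shape)  # one matched replacement per noisy patch
```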
Evaluating Computational Gene Ontology Annotations.
Škunca, Nives; Roberts, Richard J; Steffen, Martin
2017-01-01
Two avenues to understanding gene function are complementary and often overlapping: experimental work and computational prediction. While experimental annotation generally produces high-quality annotations, it is low throughput. Conversely, computational annotations have broad coverage, but the quality of annotations may be variable, and therefore evaluating the quality of computational annotations is a critical concern. In this chapter, we provide an overview of strategies to evaluate the quality of computational annotations. First, we discuss why evaluating quality in this setting is not trivial. We highlight the various issues that threaten to bias the evaluation of computational annotations, most of which stem from the incompleteness of biological databases. Second, we discuss solutions that address these issues, for example, targeted selection of new experimental annotations and leveraging the existing experimental annotations.
Névéol, Aurélie; Wilbur, W. John; Lu, Zhiyong
2012-01-01
High-throughput experiments and bioinformatics techniques are creating an exploding volume of data that are becoming overwhelming to keep track of for biologists and researchers who need to access, analyze and process existing data. Much of the available data are being deposited in specialized databases, such as the Gene Expression Omnibus (GEO) for microarrays or the Protein Data Bank (PDB) for protein structures and coordinates. Data sets are also being described by their authors in publications archived in literature databases such as MEDLINE and PubMed Central. Currently, the curation of links between biological databases and the literature mainly relies on manual labour, which makes it a time-consuming and daunting task. Herein, we analysed the current state of link curation between GEO, PDB and MEDLINE. We found that the link curation is heterogeneous depending on the sources and databases involved, and that overlap between sources is low, <50% for PDB and GEO. Furthermore, we showed that text-mining tools can automatically provide valuable evidence to help curators broaden the scope of articles and database entries that they review. As a result, we made recommendations to improve the coverage of curated links, as well as the consistency of information available from different databases while maintaining high-quality curation. Database URLs: http://www.ncbi.nlm.nih.gov/PubMed, http://www.ncbi.nlm.nih.gov/geo/, http://www.rcsb.org/pdb/ PMID:22685160
Learning to rank for blind image quality assessment.
Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
2015-10-01
Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs), such as "the quality of image Ia is better than that of image Ib," for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at a very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from the image features to the preference label as one of classification. In particular, we investigate the utilization of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves a performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can be easily extended to new distortion categories.
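The pairwise reduction at the core of this approach, turning preference labels into a classification problem on feature differences, is compact enough to sketch. The example below substitutes plain logistic regression for the paper's group-lasso multiple kernel learning and uses synthetic features throughout.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic quality-aware image features and latent quality scores.
n_images, n_feats = 200, 10
feats = rng.normal(size=(n_images, n_feats))
true_quality = feats @ rng.normal(size=n_feats)

# Build preference image pairs (PIPs): label 1 when image a is better
# than image b; the classifier input is the feature difference.
a = rng.integers(0, n_images, 1000)
b = rng.integers(0, n_images, 1000)
X = feats[a] - feats[b]
y = (true_quality[a] > true_quality[b]).astype(int)

# Plain logistic regression stands in for the paper's group-lasso
# multiple kernel learning; the pairwise reduction is the same idea.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# The learned weight vector induces a quality score for unseen images.
scores = feats @ clf.coef_.ravel()
print(np.corrcoef(scores, true_quality)[0, 1])  # rank consistency check
```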
Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C
2010-12-01
The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
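The second annotation approach, linking protein family domains to CAZy families through learned association rules, amounts to a lookup-and-vote scheme at prediction time. A minimal sketch, with hypothetical domain-to-family rules standing in for the correspondences actually mined by the authors:

```python
# Lookup-and-vote sketch of the domain-based annotation approach: each
# protein family domain found on a query protein votes for CAZy
# families via learned association rules. The rules below are
# hypothetical placeholders, not real CAZy correspondences.

ASSOCIATION_RULES = {
    # domain accession: [(CAZy family, rule confidence), ...]
    "PF00150": [("GH5", 0.92)],
    "PF01915": [("GH3", 0.88)],
    "PF00553": [("CBM2", 0.75), ("GH6", 0.40)],
}

def annotate(domains, rules, min_confidence=0.5):
    """Assign CAZy families whose association confidence clears a cutoff."""
    hits = {}
    for dom in domains:
        for family, conf in rules.get(dom, []):
            if conf >= min_confidence:
                hits[family] = max(conf, hits.get(family, 0.0))
    return sorted(hits.items(), key=lambda kv: -kv[1])

# Domains detected on a hypothetical query protein:
print(annotate(["PF00150", "PF00553"], ASSOCIATION_RULES))
```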
A systematic review of non-pharmacological interventions for primary Sjögren's syndrome.
Hackett, Katie L; Deane, Katherine H O; Strassheim, Victoria; Deary, Vincent; Rapley, Tim; Newton, Julia L; Ng, Wan-Fai
2015-11-01
To evaluate the effects of non-pharmacological interventions for primary SS (pSS) on outcomes falling within the World Health Organization International Classification of Functioning Disability and Health domains. We searched the following databases from inception to September 2014: Cochrane Database of Systematic Reviews; Medline; Embase; PsycINFO; CINAHL; and clinical trials registers. We included randomized controlled trials of any non-pharmacological intervention. Two authors independently reviewed titles and abstracts against the inclusion/exclusion criteria and independently assessed trial quality and extracted data. A total of 1463 studies were identified, from which 17 full text articles were screened and 5 studies were included in the review; a total of 130 participants were randomized. The included studies investigated the effectiveness of an oral lubricating device for dry mouth, acupuncture for dry mouth, lacrimal punctum plugs for dry eyes and psychodynamic group therapy for coping with symptoms. Overall, the studies were of low quality and at high risk of bias. Although one study showed punctum plugs to improve dry eyes, the sample size was relatively small. Further high-quality studies to evaluate non-pharmacological interventions for pSS are needed. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Rheumatology.
Spirituality and Mental Well-Being in Combat Veterans: A Systematic Review.
Smith-MacDonald, Lorraine; Norris, Jill M; Raffin-Bouchal, Shelley; Sinclair, Shane
2017-11-01
Many veterans experience significantly compromised spiritual and mental well-being. Despite effective and evidence-based treatments, veterans continue to experience poor completion rates and suboptimal therapeutic effects. Spirituality, whether expressed through religious or secular means, is a part of adjunctive or supplemental treatment modalities to treat post-traumatic stress disorder (PTSD) and is particularly relevant to combat trauma. The aim of this systematic review was to examine the relationship between spirituality and mental well-being in postdeployment veterans. Electronic databases (MEDLINE, PsycINFO, CINAHL, Web of Science, JSTOR) were searched from database inception to March 2016. Gray literature was identified in databases, websites, and reference lists of included studies. Study quality was assessed using the Effective Public Health Practice Project Quality Assessment Tool and the Critical Appraisal Skills Programme Qualitative Checklist. From 6,555 abstracts, 43 studies were included. Study quality was low to moderate. Spirituality had an effect on PTSD, suicide, depression, anger and aggression, anxiety, quality of life, and other mental well-being outcomes for veterans. "Negative spiritual coping" was often associated with an increase in mental health diagnoses and symptom severity; "positive spiritual coping" had an ameliorating effect. Addressing veterans' spiritual well-being should be a routine and integrated component of veterans' health, with regular assessment and treatment. This requires an interdisciplinary approach, including integrating chaplains postcombat, to help address these issues and enhance the continuity of care. Further high-quality research is needed to isolate the salient components of spirituality that are most harmful and helpful to veterans' mental well-being, including incorporating veterans' perspectives directly. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Wasson, Lauren T.; Cusmano, Amberle; Meli, Laura; Louh, Irene; Falzon, Louise; Hampsey, Meghan; Young, Geoffrey; Shaffer, Jonathan; Davidson, Karina W.
2016-01-01
Importance There are concerns about the current quality of undergraduate medical education (UME) and its effect on students' well-being. Objective This systematic review was designed to identify best practices for UME learning environment interventions that are associated with improved emotional well-being of students. Data Sources Learning environment interventions were identified by searching the biomedical electronic databases Ovid MEDLINE, EMBASE, the Cochrane Library, and the ERIC database from the database inception dates to October 2016. Studies examined any intervention designed to promote medical students' emotional well-being in the setting of a US academic medical school, with an outcome defined as students' reports of well-being as assessed by surveys, semistructured interviews, or other quantitative methods. Data Extraction and Synthesis Two investigators independently reviewed abstracts and full-text articles. Data were extracted into tables to summarize results. Study quality was assessed by the Medical Education Research Study Quality Instrument (MERSQI), which has a possible range of 5–18; higher scores indicate higher design and methods quality, and a score of ≥ 14 indicates a high-quality study. Findings Twenty-eight articles including at least 8224 participants met eligibility criteria. Study designs included single-group cross-sectional or post-test only (n=10), single-group pre-/post-test (n=2), nonrandomized two-group (n=13), and randomized clinical trial (n=3); 93% were conducted at a single site, and the mean MERSQI score for all studies was 10.3 (range 5–13, SD=2.11). Studies encompassed a variety of types of interventions, including those focused on pass/fail grading systems (n=3, mean MERSQI=12.0), mental health programs (n=4, MERSQI=11.9), mind-body skills programs (n=7, MERSQI=11.2), curriculum structure (n=3, MERSQI=9.5), multicomponent program reform (n=5, MERSQI=9.4), wellness programs (n=4, MERSQI=9.0), and advising/mentoring programs (n=3, MERSQI=8.2). Conclusions and Relevance In this systematic review, limited evidence suggested that some specific learning environment interventions were associated with improved emotional well-being among medical students. However, the overall quality of the evidence was low, highlighting the need for high-quality medical education research. PMID:27923091
Malpractice litigation and nursing home quality of care.
Konetzka, R Tamara; Park, Jeongyoung; Ellis, Robert; Abbo, Elmer
2013-12-01
To assess the potential deterrent effect of nursing home litigation threat on nursing home quality. We use a panel dataset of litigation claims and Nursing Home Online Survey Certification and Reporting (OSCAR) data from 1995 to 2005 in six states: Florida, Illinois, Wisconsin, New Jersey, Missouri, and Delaware, for a total of 2,245 facilities. Claims data are from Westlaw's Adverse Filings database, a proprietary legal database, on all malpractice, negligence, and personal injury/wrongful death claims filed against nursing facilities. A lagged 2-year moving average of the county-level number of malpractice claims is used to represent the threat of litigation. We use facility fixed-effects models to examine the relationship between the threat of litigation and nursing home quality. We find significant increases in registered nurse-to-total staffing ratios in response to rising malpractice threat, and a reduction in pressure sores among highly staffed facilities. However, the magnitude of the deterrence effect is small. Deterrence in response to the threat of malpractice litigation is unlikely to lead to widespread improvements in nursing home quality. This should be weighed against other benefits and costs of litigation to assess the net benefit of tort reform. © Health Research and Educational Trust.
Implementing Pay-for-Performance in the Neonatal Intensive Care Unit
Profit, Jochen; Zupancic, John A. F.; Gould, Jeffrey B.; Petersen, Laura A.
2011-01-01
Pay-for-performance initiatives in medicine are proliferating rapidly. Neonatal intensive care is a likely target for these efforts because of the high cost, available databases, and relative strength of evidence for at least some measures of quality. Pay-for-performance may improve patient care but requires valid measurements of quality to ensure that financial incentives truly support superior performance. Given the existing uncertainty with respect to both the effectiveness of pay-for-performance and the state of quality measurement science, experimentation with pay-for-performance initiatives should proceed with caution and in controlled settings. In this article, we describe approaches to measuring quality and implementing pay-for-performance in the NICU setting. PMID:17473099
The implementation of non-Voigt line profiles in the HITRAN database: H2 case study
NASA Astrophysics Data System (ADS)
Wcisło, P.; Gordon, I. E.; Tran, H.; Tan, Y.; Hu, S.-M.; Campargue, A.; Kassi, S.; Romanini, D.; Hill, C.; Kochanov, R. V.; Rothman, L. S.
2016-07-01
Experimental capabilities of molecular spectroscopy and its applications nowadays require a sub-percent or even sub-per mille accuracy of the representation of the shapes of molecular transitions. This implies the necessity of using more advanced line-shape models which are characterized by many more parameters than a simple Voigt profile. It is a great challenge for modern molecular spectral databases to store and maintain the extended set of line-shape parameters as well as their temperature dependences. It is even more challenging to reliably retrieve these parameters from experimental spectra over a large range of pressures and temperatures. In this paper we address this problem starting from the case of the H2 molecule for which the non-Voigt line-shape effects are exceptionally pronounced. For this purpose we reanalyzed the experimental data reported in the literature. In particular, we performed detailed line-shape analysis of high-quality spectra obtained with cavity-enhanced techniques. We also report the first high-quality cavity-enhanced measurement of the H2 fundamental vibrational mode. We develop a correction to the Hartmann-Tran profile (HTP) which adjusts the HTP to the particular model of the velocity-changing collisions. This allows the measured spectra to be better represented over a wide range of pressures. The problem of storing the HTP parameters in the HITRAN database together with their temperature dependences is also discussed.
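For context, the baseline Voigt profile that these advanced models extend is itself commonly evaluated through the Faddeeva function; the HTP adds speed-dependent widths and velocity-changing collision parameters on top of this. A minimal sketch with hypothetical line parameters (not HITRAN values):

```python
import numpy as np
from scipy.special import wofz

def voigt_profile(nu, nu0, gamma_d, gamma_l):
    """Area-normalized Voigt profile via the Faddeeva function wofz.

    nu      : wavenumber grid (cm^-1)
    nu0     : line-center position
    gamma_d : Doppler (Gaussian) half-width at 1/e intensity
    gamma_l : Lorentzian (pressure-broadened) half-width
    """
    x = (nu - nu0) / gamma_d
    y = gamma_l / gamma_d
    return np.real(wofz(x + 1j * y)) / (gamma_d * np.sqrt(np.pi))

nu = np.linspace(4154.0, 4156.0, 1001)   # hypothetical grid
profile = voigt_profile(nu, nu0=4155.25, gamma_d=0.02, gamma_l=0.01)
print(profile.max())
```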
ERIC Educational Resources Information Center
Grooms, David W.
1988-01-01
Discusses the quality controls imposed on text and image data that is currently being converted from paper to digital images by the Patent and Trademark Office. The methods of inspection used on text and on images are described, and the quality of the data delivered thus far is discussed. (CLB)
Rural Water Quality Database: Educational Program to Collect Information.
ERIC Educational Resources Information Center
Lemley, Ann; Wagenet, Linda
1993-01-01
A New York State project created a water quality database for private drinking water supplies, using the statewide educational program to collect the data. Another goal was to develop this program so rural residents could increase their knowledge of water supply management. (Author)
EPA U.S. Nine-region MARKAL DATABASE, DATABASE DOCUMENTATION
The evolution of the energy system in the United States is an important factor in future environmental outcomes including air quality and climate change. Given this, decision makers need to understand how a changing energy landscape will impact future air quality and contribute ...
Shoberg, Thomas G.; Stoddard, Paul R.
2013-01-01
The ability to augment local gravity surveys with additional gravity stations from easily accessible national databases can greatly increase the areal coverage and spatial resolution of a survey. It is, however, necessary to integrate such data seamlessly with the local survey. One challenge to overcome in integrating data from national databases is that these data are typically of unknown quality. This study presents a procedure for the evaluation and seamless integration of gravity data of unknown quality from a national database with data from a local Global Positioning System (GPS)-based survey. The starting components include the latitude, longitude, elevation and observed gravity at each station location. Interpolated surfaces of the complete Bouguer anomaly are used as a means of quality control and comparison. The result is an integrated dataset of varying quality with many stations having GPS accuracy and other reliable stations of unknown origin, yielding a wider coverage and greater spatial resolution than either survey alone.
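The complete Bouguer anomaly used here for quality control combines standard corrections to observed gravity. The sketch below computes the simple Bouguer anomaly (terrain correction omitted) for a hypothetical station, using the conventional free-air and Bouguer slab gradients:

```python
def simple_bouguer_anomaly(g_obs_mgal, g_normal_mgal, elev_m, density=2670.0):
    """Simple Bouguer anomaly (mGal) from standard corrections.

    free-air correction:     +0.3086 mGal per metre of elevation
    Bouguer slab correction: -2*pi*G*rho*h (~0.1119 mGal/m at 2670 kg/m^3)

    The complete Bouguer anomaly used in the study additionally applies
    a terrain correction, omitted here for brevity.
    """
    free_air = 0.3086 * elev_m
    slab = 0.00004193 * density * elev_m   # 2*pi*G*rho*h in mGal
    return g_obs_mgal - g_normal_mgal + free_air - slab

# Hypothetical station: observed/normal gravity in mGal, elevation in m.
print(simple_bouguer_anomaly(979715.2, 979780.0, 350.0))
```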
Giffen, Sarah E.
2002-01-01
An environmental database was developed to store water-quality data collected during the 1999 U.S. Geological Survey investigation of the occurrence and distribution of dioxins, furans, and PCBs in the riverbed sediment and fish tissue in the Penobscot River in Maine. The database can be used to store a wide range of detailed information and to perform complex queries on the data it contains. The database also could be used to store data from other historical and any future environmental studies conducted on the Penobscot River and surrounding regions.
Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian
2016-01-01
The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality ice accretion database upon which to evaluate the performance of icing simulation tools for large-scale swept wings that are representative of modern commercial transport airplanes. The purpose of this presentation is to present the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice accretion database for large-scale, swept wings.
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Evolution of the architecture of the ATLAS Metadata Interface (AMI)
NASA Astrophysics Data System (ADS)
Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.
2015-12-01
The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginnings, when it was served by a single MySQL backend database server, to its current state: a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle database at CERN, and an AMI backup server.
Pow, Jessie; King, David B; Stephenson, Ellen; DeLongis, Anita
2017-01-01
Given evidence suggesting a detrimental effect of occupational stress on sleep, it is important to identify protective factors that may ameliorate this effect. We followed 87 paramedics upon waking and after work over 1 week using a daily diary methodology. Multilevel modeling was used to examine whether the detrimental effects of daily occupational stress on sleep quality were buffered by perceived social support availability. Paramedics who reported more support availability tended to report better quality sleep over the week. Additionally, perceived support availability buffered postworkday sleep from average occupational stress and days of especially high occupational stress. Perceived support availability also buffered off-workday sleep from the cumulative amount of occupational stress experienced over the previous workweek. Those with low levels of support displayed poor sleep quality in the face of high occupational stress; those high in support did not show significant effects of occupational stress on sleep. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Layani, Géraldine; Fleet, Richard; Dallaire, Renée; Tounkara, Fatoumata K.; Poitras, Julien; Archambault, Patrick; Chauny, Jean-Marc; Ouimet, Mathieu; Gauthier, Josée; Dupuis, Gilles; Tanguay, Alain; Lévesque, Jean-Frédéric; Simard-Racine, Geneviève; Haggerty, Jeannie; Légaré, France
2016-01-01
Background: Evidence-based indicators of quality of care have been developed to improve care and performance in Canadian emergency departments. The feasibility of measuring these indicators has been assessed mainly in urban and academic emergency departments. We sought to assess the feasibility of measuring quality-of-care indicators in rural emergency departments in Quebec. Methods: We previously identified rural emergency departments in Quebec that offered medical coverage with hospital beds 24 hours a day, 7 days a week and were located in rural areas or small towns as defined by Statistics Canada. A standardized protocol was sent to each emergency department to collect data on 27 validated quality-of-care indicators in 8 categories: duration of stay, patient safety, pain management, pediatrics, cardiology, respiratory care, stroke and sepsis/infection. Data were collected by local professional medical archivists between June and December 2013. Results: Fifteen (58%) of the 26 emergency departments invited to participate completed data collection. The ability to measure the 27 quality-of-care indicators with the use of databases varied across departments. Centres 2, 5, 6 and 13 used databases for at least 21 of the indicators (78%-92%), whereas centres 3, 8, 9, 11, 12 and 15 used databases for 5 (18%) or fewer of the indicators. On average, the centres were able to measure only 41% of the indicators using heterogeneous databases and manual extraction. The 15 centres collected data from 15 different databases or combinations of databases. The average data collection time for each quality-of-care indicator varied from 5 to 88.5 minutes. The median data collection time was 15 minutes or less for most indicators. Interpretation: Quality-of-care indicators were not easily captured with the use of existing databases in rural emergency departments in Quebec. Further work is warranted to improve standardized measurement of these indicators in rural emergency departments in the province and to generalize the information gathered in this study to other health care environments. PMID:27730103
A privacy-preserved analytical method for ehealth database with minimized information loss.
Chen, Ya-Ling; Cheng, Bo-Chao; Chen, Hsueh-Lin; Lin, Chia-I; Liao, Guo-Tan; Hou, Bo-Yu; Hsu, Shih-Chun
2012-01-01
Digitizing medical information is an emerging trend that employs information and communication technology (ICT) to manage health records, diagnostic reports, and other medical data more effectively, in order to improve the overall quality of medical services. However, medical information is highly confidential and involves private information; even legitimate access to data raises privacy concerns. Medical records provide health information on an as-needed basis for diagnosis and treatment, and the information is also important for medical research and other health management applications. Traditional privacy risk management systems have focused on reducing re-identification risk, and they do not consider information loss. In addition, such systems cannot identify and isolate data that carries high risk of privacy violations. This paper proposes the Hiatus Tailor (HT) system, which ensures low re-identification risk for medical records, while providing more authenticated information to database users and identifying high-risk data in the database for better system management. The experimental results demonstrate that the HT system achieves much lower information loss than traditional risk management methods, with the same risk of re-identification.
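The abstract does not spell out HT's internals, but the underlying notion of re-identification risk can be illustrated with a generic equivalence-class check: records whose quasi-identifier combination is shared by fewer than k individuals carry risk above 1/k. A minimal sketch with fabricated rows:

```python
import pandas as pd

# Fabricated eHealth rows with quasi-identifiers. Records whose
# quasi-identifier combination is shared by fewer than k individuals
# carry high re-identification risk and would be flagged for further
# generalization or isolation.
records = pd.DataFrame({
    "zip": ["10010", "10010", "10010", "10011", "10011"],
    "age": [34, 34, 34, 71, 52],
    "sex": ["F", "F", "F", "M", "M"],
    "dx":  ["J45", "E11", "I10", "C34", "K21"],
})

def flag_high_risk(df, quasi_ids, k=3):
    """Return rows in equivalence classes smaller than k (risk > 1/k)."""
    class_size = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    return df[class_size < k]

print(flag_high_risk(records, ["zip", "age", "sex"]))
```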
Clinical Databases for Chest Physicians.
Courtwright, Andrew M; Gabriel, Peter E
2018-04-01
A clinical database is a repository of patient medical and sociodemographic information focused on one or more specific health condition or exposure. Although clinical databases may be used for research purposes, their primary goal is to collect and track patient data for quality improvement, quality assurance, and/or actual clinical management. This article aims to provide an introduction and practical advice on the development of small-scale clinical databases for chest physicians and practice groups. Through example projects, we discuss the pros and cons of available technical platforms, including Microsoft Excel and Access, relational database management systems such as Oracle and PostgreSQL, and Research Electronic Data Capture. We consider approaches to deciding the base unit of data collection, creating consensus around variable definitions, and structuring routine clinical care to complement database aims. We conclude with an overview of regulatory and security considerations for clinical databases. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
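As a concrete illustration of the kind of small-scale clinical database discussed here, the sketch below uses Python's built-in sqlite3 module, with one row per patient encounter as the base unit of data collection. The schema and field names are purely illustrative, not a recommended standard.

```python
import sqlite3

# One row per patient encounter, with variable definitions fixed in the
# schema; table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE encounter (
        encounter_id INTEGER PRIMARY KEY,
        mrn          TEXT NOT NULL,   -- medical record number
        visit_date   TEXT NOT NULL,   -- ISO 8601 date
        fev1_pct     REAL,            -- percent-predicted FEV1
        smoker       INTEGER CHECK (smoker IN (0, 1))
    )
""")
conn.execute(
    "INSERT INTO encounter (mrn, visit_date, fev1_pct, smoker) "
    "VALUES (?, ?, ?, ?)",
    ("A1234", "2018-01-15", 62.5, 1),
)

# A quality-improvement style query: current smokers with reduced FEV1.
for row in conn.execute("SELECT mrn, fev1_pct FROM encounter "
                        "WHERE smoker = 1 AND fev1_pct < 70"):
    print(row)
```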
The increasing number and size of public databases is facilitating the collection of chemical structures and associated experimental data for QSAR modeling. However, the performance of QSAR models is highly dependent not only on the modeling methodology, but also on the quality o...
Quality control of EUVE databases
NASA Technical Reports Server (NTRS)
John, L. M.; Drake, J.
1992-01-01
The publicly accessible databases for the Extreme Ultraviolet Explorer include: the EUVE Archive mailserver; the CEA ftp site; the EUVE Guest Observer Mailserver; and the Astronomical Data System node. The EUVE Performance Assurance team is responsible for verifying that these public EUVE databases are working properly, and that the public availability of EUVE data contained therein does not infringe any data rights which may have been assigned. In this poster, we describe the Quality Assurance (QA) procedures we have developed from the approach of QA as a service organization, thus reflecting the overall EUVE philosophy of Quality Assurance integrated into normal operating procedures, rather than imposed as an external, post facto, control mechanism.
ScanRanker: Quality Assessment of Tandem Mass Spectra via Sequence Tagging
Ma, Ze-Qiang; Chambers, Matthew C.; Ham, Amy-Joan L.; Cheek, Kristin L.; Whitwell, Corbin W.; Aerni, Hans-Rudolf; Schilling, Birgit; Miller, Aaron W.; Caprioli, Richard M.; Tabb, David L.
2011-01-01
In shotgun proteomics, protein identification by tandem mass spectrometry relies on bioinformatics tools. Despite recent improvements in identification algorithms, a significant number of high quality spectra remain unidentified for various reasons. Here we present ScanRanker, an open-source tool that evaluates the quality of tandem mass spectra via sequence tagging with reliable performance in data from different instruments. The superior performance of ScanRanker enables it not only to find unassigned high quality spectra that evade identification through database search, but also to select spectra for de novo sequencing and cross-linking analysis. In addition, we demonstrate that the distribution of ScanRanker scores predicts the richness of identifiable spectra among multiple LC-MS/MS runs in an experiment, and ScanRanker scores assist the process of peptide assignment validation to increase confident spectrum identifications. The source code and executable versions of ScanRanker are available from http://fenchurch.mc.vanderbilt.edu. PMID:21520941
Hoderlein, Xenia; Moseley, Anne M; Elkins, Mark R
2017-08-01
Many clinical trials are reported without reference to the existing relevant high-quality research. This study aimed to investigate the extent to which authors of reports of clinical trials of physiotherapy interventions try to use high-quality clinical research to (1) help justify the need for the trial in the introduction and (2) help interpret the trial's results in the discussion. Data were extracted from 221 clinical trials that were randomly selected from the Physiotherapy Evidence Database: 70 published in 2001 (10% sample) and 151 published in 2015 (10% sample). The Physiotherapy Evidence Database score (which rates methodological quality and completeness of reporting) for each trial was also downloaded. Overall 41% of trial reports cited a systematic review or the results of a search for other evidence in the introduction section: 20% for 2001 and 50% for 2015 (relative risk = 2.3, 95% confidence interval = 1.5-3.8). For the discussion section, only 1 of 221 trials integrated the results of the trial into an existing meta-analysis, but citation of a relevant systematic review did increase from 17% in 2001 to 34% in 2015. There was no relationship between citation of existing research and the total Physiotherapy Evidence Database score. Published reports of clinical trials of physiotherapy interventions increasingly cite a systematic review or the results of a search for other evidence in the introduction, but integration with existing research in the discussion section is very rare. To encourage the use of existing research, stronger recommendations to refer to existing systematic reviews (where available) could be incorporated into reporting checklists and journal editorial guidelines.
2008 Niday Perinatal Database quality audit: report of a quality assurance project.
Dunn, S; Bottomley, J; Ali, A; Walker, M
2011-12-01
This quality assurance project was designed to determine the reliability, completeness and comprehensiveness of the data entered into the Niday Perinatal Database. Quality of the data was measured by comparing data re-abstracted from the patient record to the original data entered into the Niday Perinatal Database. A representative sample of hospitals in Ontario was selected and a random sample of 100 linked mother and newborn charts was audited for each site. A subset of 33 variables (representing 96 data fields) from the Niday dataset was chosen for re-abstraction. Of the data fields for which Cohen's kappa statistic or intraclass correlation coefficient (ICC) was calculated, 44% showed substantial or almost perfect agreement (beyond chance). However, about 17% showed less than 95% agreement and a kappa or ICC value of less than 60%, indicating only slight, fair or moderate agreement (beyond chance). Recommendations to improve the quality of these data fields are presented.
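Agreement statistics of the kind reported here are straightforward to compute. The sketch below contrasts raw percentage agreement with Cohen's kappa on a fabricated re-abstraction audit of a single categorical field; kappa is lower because it discounts agreement expected by chance.

```python
from sklearn.metrics import cohen_kappa_score

# Fabricated audit of one categorical field: values re-abstracted from
# 12 charts versus the values originally entered in the database.
reabstracted = ["vaginal", "caesarean", "vaginal", "vaginal", "caesarean",
                "vaginal", "caesarean", "vaginal", "vaginal", "vaginal",
                "caesarean", "vaginal"]
database = ["vaginal", "caesarean", "vaginal", "caesarean", "caesarean",
            "vaginal", "caesarean", "vaginal", "vaginal", "vaginal",
            "vaginal", "vaginal"]

raw = sum(a == b for a, b in zip(reabstracted, database)) / len(database)
kappa = cohen_kappa_score(reabstracted, database)
# kappa (about 0.62) is lower than raw agreement (0.83) because it
# discounts the agreement expected by chance alone.
print(f"raw agreement = {raw:.2f}, kappa = {kappa:.2f}")
```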
St Louis, James D; Kurosawa, Hiromi; Jonas, Richard A; Sandoval, Nestor; Cervantes, Jorge; Tchervenkov, Christo I; Jacobs, Jeffery P; Sakamoto, Kisaburo; Stellin, Giovanni; Kirklin, James K
2017-09-01
The World Society for Pediatric and Congenital Heart Surgery was founded with the mission to "promote the highest quality comprehensive cardiac care to all patients with congenital heart disease, from the fetus to the adult, regardless of the patient's economic means, with an emphasis on excellence in teaching, research, and community service." Early on, the Society's members realized that a crucial step in meeting this goal was to establish a global database that would collect vital information, allowing cardiac surgical centers worldwide to benchmark their outcomes and improve the quality of congenital heart disease care. With tireless efforts from all corners of the globe and utilizing the vast experience and invaluable input of multiple international experts, such a platform of global information exchange was created: the World Database for Pediatric and Congenital Heart Disease went live on January 1, 2017. This database has been thoughtfully designed to produce meaningful performance and quality analyses of surgical outcomes extending beyond immediate hospital survival, allowing capture of important morbidities and mortalities for up to 1 year postoperatively. In order to advance the Society's mission, this quality improvement program is available free of charge to WSPCHS members. In establishing the World Database, the Society has taken an essential step to further the process of global improvement in care for children with congenital heart disease.
Silva-Lopes, Victor W; Monteiro-Leal, Luiz H
2003-07-01
The development of new technology and the possibility of fast information delivery over Internet or Intranet connections are changing education. Microanatomy education depends fundamentally on the correct interpretation of microscopy images by students. Modern microscopes coupled to computers enable these images to be presented in digital form through image databases. However, access to this new technology is restricted to those living in cities and towns with an Information Technology (IT) infrastructure. This study describes the creation of a free Internet histology database composed of high-quality images and presents an inexpensive way to deliver it to a greater number of students through Internet/Intranet connections. Using state-of-the-art scientific instruments, we developed a Web page (http://www2.uerj.br/~micron/atlas/atlasenglish/index.htm) that, in association with a multimedia microscopy laboratory, aims to help reduce the IT educational gap between developed and underdeveloped regions. Copyright 2003 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Garcia Menendez, F.; Afrin, S.
2017-12-01
Prescribed fires are used extensively across the Southeastern United States and are a major source of air pollutant emissions in the region. These land management projects can adversely impact local and regional air quality. However, the emissions and air pollution impacts of prescribed fires remain largely uncertain. Satellite data, commonly used to estimate fire emissions, is often unable to detect the low-intensity, short-lived prescribed fires characteristic of the region. Additionally, existing ground-based prescribed burn records are incomplete, inconsistent and scattered. Here we present a new unified database of prescribed fire occurrence and characteristics developed from systemized digital burn permit records collected from public and private land management organizations in the Southeast. This bottom-up fire database is used to analyze the correlation between high PM2.5 concentrations measured by monitoring networks in southern states and prescribed fire occurrence at varying spatial and temporal scales. We show significant associations between ground-based records of prescribed fire activity and the observational air quality record at numerous sites by applying regression analysis and controlling confounding effects of meteorology. Furthermore, we demonstrate that the response of measured PM2.5 concentrations to prescribed fire estimates based on burning permits is significantly stronger than their response to satellite fire observations from MODIS (moderate-resolution imaging spectroradiometer) and geostationary satellites or prescribed fire emissions data in the National Emissions Inventory. These results show the importance of bottom-up smoke emissions estimates and reflect the need for improved ground-based fire data to advance air quality impacts assessments focused on prescribed burning.
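The analysis described, regressing monitored PM2.5 on ground-based fire activity while controlling for meteorology, can be sketched as follows with synthetic data; the variable names and coefficients are illustrative assumptions, not the authors' model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only: daily PM2.5 regressed on permitted burn
# area while adjusting for meteorological confounders.
rng = np.random.default_rng(0)
n = 365
df = pd.DataFrame({
    "burn_acres": rng.gamma(2.0, 50.0, n),           # permitted acres burned
    "temp_c": 18 + 8 * np.sin(np.arange(n) / 58.0),  # seasonal temperature
    "wind_ms": rng.gamma(3.0, 1.0, n),               # wind speed
})
df["pm25"] = (8 + 0.02 * df.burn_acres - 0.6 * df.wind_ms
              + 0.1 * df.temp_c + rng.normal(0, 2, n))

model = smf.ols("pm25 ~ burn_acres + temp_c + wind_ms", data=df).fit()
print(model.params["burn_acres"])  # fire signal after met adjustment
```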
Déjà vu: a database of highly similar citations in the scientific literature
Errami, Mounir; Sun, Zhaohui; Long, Tara C.; George, Angela C.; Garner, Harold R.
2009-01-01
In the scientific research community, plagiarism and covert multiple publication of the same data are considered unacceptable because they undermine public confidence in scientific integrity. Yet, little has been done to help authors and editors identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made Déjà vu publicly available: a database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as ‘the’ biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org. PMID:18757888
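As a crude stand-in for the screening step described (eTBLAST's engine is considerably more sophisticated), TF-IDF cosine similarity over abstracts illustrates how candidate duplicate pairs can be flagged for manual verification:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Minimal sketch: flag abstract pairs whose similarity exceeds a threshold.
abstracts = [
    "Plagiarism and duplicate publication undermine scientific integrity.",
    "Duplicate publication and plagiarism undermine the integrity of science.",
    "We report a new protein structure classification database.",
]
tfidf = TfidfVectorizer().fit_transform(abstracts)
sim = cosine_similarity(tfidf)
pairs = [(i, j, round(float(sim[i, j]), 2))
         for i in range(len(abstracts))
         for j in range(i + 1, len(abstracts)) if sim[i, j] > 0.5]
print(pairs)  # candidate duplicate pairs for manual verification
```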
Danish Colorectal Cancer Group Database.
Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H
2016-01-01
The aim of the database, which has registered all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. The study population comprises all Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The stage distribution was more or less constant until 2014, with a tendency toward a lower rate of stage IV and a higher rate of stage I after introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been reduced from >7% in 2001-2003 to <2% since 2013. The database is a national population-based clinical database with high patient and data completeness for the perioperative period. The resolution of data is high for description of the patient at the time of diagnosis, including comorbidities, and for characterizing diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery. The Danish Colorectal Cancer Group provides high-quality data and has documented an increase in short- and long-term survival since 2001 for patients with both colon and rectal cancers.
Mobile Source Observation Database (MSOD)
The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).
PeTMbase: A Database of Plant Endogenous Target Mimics (eTMs).
Karakülah, Gökhan; Yücebilgili Kurtoğlu, Kuaybe; Unver, Turgay
2016-01-01
MicroRNAs (miRNAs) are small endogenous RNA molecules that regulate target gene expression at the post-transcriptional level. miRNA activity can itself be controlled by a regulatory mechanism called endogenous target mimicry (eTM): eTMs bind to their corresponding miRNAs, blocking the binding of the miRNA to its target transcripts and thereby increasing target mRNA expression. Because miRNA-eTM-target-mRNA regulatory modules are involved in a wide range of biological processes, the need for a comprehensive eTM database has grown. Apart from miRSponge, which holds a limited amount of Arabidopsis eTM data, no database and/or repository for plant eTMs had been developed and released. Here, we present an online plant eTM database, called PeTMbase (http://petmbase.org), with a highly efficient search tool. To establish the repository, eTMs were identified from high-throughput RNA-sequencing data of 11 plant species. Each transcriptome library is first mapped to the corresponding plant genome, and long non-coding RNA (lncRNA) transcripts are characterized. Additional lncRNAs retrieved from GREENC and PNRD were incorporated into the lncRNA catalog. Then, utilizing the lncRNA and miRNA sources, a total of 2,728 eTMs were successfully predicted. Our regularly updated database, PeTMbase, provides high-quality information on miRNA:eTM modules and will aid functional genomics studies, particularly of miRNA regulatory networks.
Characterising droughts in Central America with uncertain hydro-meteorological data
NASA Astrophysics Data System (ADS)
Quesada Montano, B.; Westerberg, I.; Wetterhall, F.; Hidalgo, H. G.; Halldin, S.
2015-12-01
Drought studies are scarce in Central America, a region frequently affected by droughts that cause significant socio-economic and environmental problems. Drought characterisation is important for water management and planning and can be done with the help of drought indices. Many indices have been developed in recent decades, but their ability to suitably characterise droughts depends on the region of application. In Central America, comprehensive and high-quality observational networks of meteorological and hydrological data are not available. This limits the choice of drought indices and makes it necessary to evaluate the quality of the data used in their calculation. This paper aimed to find which combination(s) of drought index and meteorological database are most suitable for characterising droughts in Central America. The drought indices evaluated were the standardised precipitation index (SPI), deciles (DI), the standardised precipitation evapotranspiration index (SPEI) and the effective drought index (EDI). These were calculated using precipitation data from the Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), CRN073, the Climatic Research Unit (CRU), ERA-Interim and station databases, and temperature data from the CRU database. All the indices were calculated at 1-, 3-, 6-, 9- and 12-month accumulation times. As a first step, the large-scale precipitation datasets were compared to gain an overview of the level of agreement between them and to find possible quality problems. Then, the performance of all combinations of drought indices and meteorological datasets was evaluated against independent river discharge data, in the form of the standardised streamflow index (SSI). Results revealed large disagreement between the precipitation datasets; we found the selection of database to be more important than the selection of drought index. The best combinations of meteorological drought index and database were obtained using the SPI and DI, calculated with CHIRPS and station data.
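Of the indices compared, the SPI is the most widely used; it fits a distribution (commonly gamma) to accumulated precipitation and transforms the fitted CDF onto a standard normal. A minimal sketch follows; operational SPI implementations also handle zero-precipitation totals with a mixed distribution:

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardised Precipitation Index: fit a gamma distribution to
    accumulated precipitation and map its CDF to a standard normal."""
    precip = np.asarray(precip, dtype=float)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(1)
monthly_3mo = rng.gamma(2.0, 60.0, 360)  # 30 years of 3-month totals
print(spi(monthly_3mo)[:5])              # negative values indicate drought
```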
R2 Water Quality Portal Monitoring Stations
The Water Quality Portal (WQP) provides an easy way to access data stored in various large water quality databases. The WQP provides various input parameters on the form, including location, site, sampling, and date parameters, to filter and customize the returned results. The WQP is a cooperative service sponsored by the United States Geological Survey (USGS), the Environmental Protection Agency (EPA) and the National Water Quality Monitoring Council (NWQMC) that integrates publicly available water quality data from the USGS National Water Information System (NWIS), the EPA STOrage and RETrieval (STORET) Data Warehouse, and the USDA ARS Sustaining The Earth's Watersheds - Agricultural Research Database System (STEWARDS).
Selby, Luke V; Sjoberg, Daniel D; Cassella, Danielle; Sovel, Mindy; Weiser, Martin R; Sepkowitz, Kent; Jones, David R; Strong, Vivian E
2015-06-15
Surgical quality improvement requires accurate tracking and benchmarking of postoperative adverse events. We track surgical site infections (SSIs) with two systems: our in-house surgical secondary events (SSE) database and the National Surgical Quality Improvement Project (NSQIP). The SSE database, a modification of the Clavien-Dindo classification, categorizes SSIs by their anatomic site, whereas NSQIP categorizes them by their level. Our aim was to directly compare these different definitions. NSQIP and SSE database entries for all surgeries performed in 2011 and 2012 were compared. To match NSQIP definitions, and while blinded to NSQIP results, entries in the SSE database were categorized as either incisional (superficial or deep) or organ space infections. These categorizations were compared with NSQIP records; agreement was assessed with Cohen kappa. The 5028 patients in our cohort had a 6.5% SSI rate in the SSE database and a 4% rate in NSQIP, with an overall agreement of 95% (kappa = 0.48, P < 0.0001). The rates of categorized infections were similarly well matched: incisional rates of 4.1% and 2.7% for the SSE database and NSQIP, and organ space rates of 2.6% and 1.5%. Overall agreements were 96% (kappa = 0.36, P < 0.0001) and 98% (kappa = 0.55, P < 0.0001), respectively. Over 80% of cases recorded by the SSE database but not NSQIP did not meet NSQIP criteria. The SSE database is an accurate, real-time record of postoperative SSIs. Institutional databases that capture all surgical cases can be used in conjunction with NSQIP with excellent concordance. Copyright © 2015 Elsevier Inc. All rights reserved.
Corpus-based Statistical Screening for Phrase Identification
Kim, Won; Wilbur, W. John
2000-01-01
Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six scoring methods proves effective in identifying UMLS-quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469
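The paper's six scores are not reproduced in this abstract, but pointwise mutual information (PMI) is one plausible statistic of the kind described: it ranks word pairs by how much more often they co-occur than chance predicts:

```python
import math
from collections import Counter

def pmi_scores(tokens):
    """PMI for adjacent word pairs: log of observed bigram probability
    over the product of the unigram probabilities."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    return {
        (a, b): math.log((c / (n - 1)) /
                         ((unigrams[a] / n) * (unigrams[b] / n)))
        for (a, b), c in bigrams.items()
    }

text = ("blood pressure was measured and blood pressure medication "
        "was adjusted while the pressure of work increased").split()
for pair, score in sorted(pmi_scores(text).items(), key=lambda kv: -kv[1])[:3]:
    print(pair, round(score, 2))  # "blood pressure" ranks as a phrase
```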
Chesapeake Bay Program Water Quality Database
The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay Region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS, a large number of Internet resources have been established. These resources include multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic searching tools for environmental information. Also available are baseline monitoring data, summarized data, and environmental indicators that document ecosystem status and trends, confirm linkages between water quality, habitat quality and abundance, and describe the distribution and integrity of biological populations. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, providing users access to a suite of long-term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrates, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.
Aesthetic quality inference for online fashion shopping
NASA Astrophysics Data System (ADS)
Chen, Ming; Allebach, Jan
2014-03-01
On-line fashion communities in which participants post photos of personal fashion items for viewing and possible purchase by others are becoming increasingly popular. Generally, these photos are taken by individuals who have no training in photography with low-cost mobile phone cameras. It is desired that photos of the products have high aesthetic quality to improve the users' online shopping experience. In this work, we design features for aesthetic quality inference in the context of online fashion shopping. Psychophysical experiments are conducted to construct a database of the photos' aesthetic evaluation, specifically for photos from an online fashion shopping website. We then extract both generic low-level features and high-level image attributes to represent the aesthetic quality. Using a support vector machine framework, we train a predictor of the aesthetic quality rating based on the feature vector. Experimental results validate the efficacy of our approach. Metadata such as the product type are also used to further improve the result.
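The training stage described can be sketched as follows; the random feature matrix stands in for the paper's extracted low-level features and high-level attributes, and SVR stands in for whichever SVM variant the authors used:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch: predict an aesthetic rating from an image feature vector with
# a support vector machine. Features here are random stand-ins for
# sharpness, colourfulness, composition, and similar descriptors.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))  # 200 photos x 6 features
y = X @ [0.8, 0.5, 0.3, 0.0, -0.2, 0.1] + rng.normal(0, 0.3, 200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:150], y[:150])
print(model.score(X[150:], y[150:]))  # R^2 on held-out photos
```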
Reiner, Bruce
2015-06-01
One of the greatest challenges facing healthcare professionals is the ability to directly and efficiently access relevant data from the patient's healthcare record at the point of care, tailored to both the context of the task being performed and the needs and preferences of the individual end-user. In radiology practice, the relative inefficiency of imaging data organization and manual workflow requirements impedes historical imaging data review. At the same time, clinical data retrieval is even more problematic because of the quality and quantity of data recorded at the time of order entry, along with the relative lack of information system integration. One approach to addressing these data deficiencies is to create a multi-disciplinary patient referenceable database consisting of high-priority, actionable data within the cumulative patient healthcare record, in which predefined criteria are used to categorize and classify imaging and clinical data according to anatomy, technology, pathology, and time. The population of this referenceable database can be performed through a combination of manual and automated methods, with an additional data verification step introduced for quality control. Once created, these referenceable databases can be filtered at the point of care to provide context- and user-specific data tailored to the task being performed and individual end-user requirements.
Chen, Josephine; Zhao, Po; Massaro, Donald; Clerch, Linda B; Almon, Richard R; DuBois, Debra C; Jusko, William J; Hoffman, Eric P
2004-01-01
Publicly accessible DNA databases (genome browsers) are rapidly accelerating post-genomic research (see http://www.genome.ucsc.edu/), with integrated genomic DNA, gene structure, EST/ splicing and cross-species ortholog data. DNA databases have relatively low dimensionality; the genome is a linear code that anchors all associated data. In contrast, RNA expression and protein databases need to be able to handle very high dimensional data, with time, tissue, cell type and genes, as interrelated variables. The high dimensionality of microarray expression profile data, and the lack of a standard experimental platform have complicated the development of web-accessible databases and analytical tools. We have designed and implemented a public resource of expression profile data containing 1024 human, mouse and rat Affymetrix GeneChip expression profiles, generated in the same laboratory, and subject to the same quality and procedural controls (Public Expression Profiling Resource; PEPR). Our Oracle-based PEPR data warehouse includes a novel time series query analysis tool (SGQT), enabling dynamic generation of graphs and spreadsheets showing the action of any transcript of interest over time. In this report, we demonstrate the utility of this tool using a 27 time point, in vivo muscle regeneration series. This data warehouse and associated analysis tools provides access to multidimensional microarray data through web-based interfaces, both for download of all types of raw data for independent analysis, and also for straightforward gene-based queries. Planned implementations of PEPR will include web-based remote entry of projects adhering to quality control and standard operating procedure (QC/SOP) criteria, and automated output of alternative probe set algorithms for each project (see http://microarray.cnmcresearch.org/pgadatatable.asp).
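A hypothetical, minimal analogue of an SGQT-style query: given expression values over a time course, extract one transcript's trajectory and export it for graphing or spreadsheet use (gene names and values below are illustrative, not PEPR data):

```python
import pandas as pd

# Toy expression matrix: transcripts x time points from a regeneration
# series. Real PEPR data would come from the Oracle warehouse.
profiles = pd.DataFrame(
    {"day0": [5.1, 2.0], "day3": [7.4, 2.1], "day7": [9.8, 2.2]},
    index=["Myog", "Gapdh"],  # probe sets / transcripts
)

def transcript_over_time(gene):
    """Pull one transcript's trajectory and write it to a spreadsheet."""
    series = profiles.loc[gene]
    series.to_csv(f"{gene}_timecourse.csv")
    return series

print(transcript_over_time("Myog"))  # rises through muscle regeneration
```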
Chandonia, John-Marc; Fox, Naomi K; Brenner, Steven E
2017-02-03
SCOPe (Structural Classification of Proteins-extended, http://scop.berkeley.edu) is a database of relationships between protein structures that extends the Structural Classification of Proteins (SCOP) database. SCOP is an expert-curated ordering of domains from the majority of proteins of known structure in a hierarchy according to structural and evolutionary relationships. SCOPe classifies the majority of protein structures released since SCOP development concluded in 2009, using a combination of manual curation and highly precise automated tools, aiming to have the same accuracy as fully hand-curated SCOP releases. SCOPe also incorporates and updates the ASTRAL compendium, which provides several databases and tools to aid in the analysis of the sequences and structures of proteins classified in SCOPe. SCOPe continues high-quality manual classification of new superfamilies, a key feature of SCOP. Artifacts such as expression tags are now separated into their own class, in order to distinguish them from the homology-based annotations in the remainder of the SCOPe hierarchy. SCOPe 2.06 contains 77,439 Protein Data Bank entries, double the 38,221 structures classified in SCOP. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
A systematic review of administrative and clinical databases of infants admitted to neonatal units.
Statnikov, Yevgeniy; Ibrahim, Buthaina; Modi, Neena
2017-05-01
High-quality information, increasingly captured in clinical databases, is a useful resource for evaluating and improving newborn care. We conducted a systematic review to identify neonatal databases and define their characteristics. We followed a preregistered protocol using MeSH terms to search MEDLINE, EMBASE, CINAHL, Web of Science and the OVID Maternity and Infant Care Databases for articles identifying patient-level databases covering more than one neonatal unit. Full-text articles were reviewed and information extracted on geographical coverage, criteria for inclusion, data source, and maternal and infant characteristics. We identified 82 databases from 2037 publications. Of the country-specific databases, 39 were regional and 39 national. Sixty databases restricted entries to neonatal unit admissions by birth characteristic or insurance cover; 22 had no restrictions. Data were captured specifically for 53 databases, from administrative sources for 21, and from clinical sources for 8. Two clinical databases hold the largest range of data on patient characteristics: the USA's Pediatrix BabySteps Clinical Data Warehouse and the UK's National Neonatal Research Database. A number of neonatal databases exist that have the potential to contribute to evaluating neonatal care. The majority are created by entering data specifically for the database, duplicating information likely already captured in other administrative and clinical patient records. This repetitive data entry represents an unnecessary burden in an environment where electronic patient records are increasingly used. Standardisation of data items is necessary to facilitate linkage within and between countries. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Effects of pilates on patients with chronic non-specific low back pain: a systematic review
Lin, Hui-Ting; Hung, Wei-Ching; Hung, Jia-Ling; Wu, Pei-Shan; Liaw, Li-Jin; Chang, Jia-Hao
2016-01-01
[Purpose] To evaluate the effects of Pilates on patients with chronic low back pain through a systematic review of high-quality articles on randomized controlled trials. [Subjects and Methods] Keywords and synonyms for “Pilates” and “Chronic low back pain” were used in database searches. The databases included PubMed, Physiotherapy Evidence Database (PEDro), Medline, and the Cochrane Library. Articles involving randomized controlled trials with higher than 5 points on the PEDro scale were reviewed for suitability and inclusion. The methodological quality of the included randomized controlled trials was evaluated using the PEDro scale. Relevant information was extracted by 3 reviewers. [Results] Eight randomized controlled trial articles were included. Patients with chronic low back pain showed statistically significant improvement in pain relief and functional ability compared to patients who only performed usual or routine health care. However, other forms of exercise were similar to Pilates in the improvement of pain relief and functional capacity. [Conclusion] In patients with chronic low back pain, Pilates showed significant improvement in pain relief and functional enhancement. Other exercises showed effects similar to those of Pilates, if waist or torso movement was included and the exercises were performed for 20 cumulative hours. PMID:27821970
Nørrelund, Helene; Mazin, Wiktor; Pedersen, Lars
2014-01-01
Denmark is facing a reduction in clinical trial activity as the pharmaceutical industry has moved trials to low-cost emerging economies. Competitiveness in industry-sponsored clinical research depends on speed, quality, and cost. Because Denmark is widely recognized as a region that generates high quality data, an enhanced ability to attract future trials could be achieved if speed can be improved by taking advantage of the comprehensive national and regional registries. A "single point-of-entry" system has been established to support collaboration between hospitals and industry. When assisting industry in early-stage feasibility assessments, potential trial participants are identified by use of registries to shorten the clinical trial startup times. The Aarhus University Clinical Trial Candidate Database consists of encrypted data from the Danish National Registry of Patients allowing an immediate estimation of the number of patients with a specific discharge diagnosis in each hospital department or outpatient specialist clinic in the Central Denmark Region. The free access to health care, thorough monitoring of patients who are in contact with the health service, completeness of registration at the hospital level, and ability to link all databases are competitive advantages in an increasingly complex clinical trial environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yung, J; Stefan, W; Reeve, D
2015-06-15
Purpose: Phantom measurements allow the performance of magnetic resonance (MR) systems to be evaluated. American Association of Physicists in Medicine (AAPM) Report No. 100, Acceptance Testing and Quality Assurance Procedures for MR Imaging Facilities, the American College of Radiology (ACR) MR Accreditation Program phantom testing, and ACR MRI quality control (QC) program documents help outline specific tests for establishing system performance baselines as well as system stability over time. Analyzing and processing tests from multiple systems can be time-consuming for medical physicists. Besides determining whether tests are within predetermined limits or criteria, monitoring longitudinal trends can also help prevent costly downtime of systems during clinical operation. In this work, a semi-automated QC program was developed to analyze and record measurements in a database that allows easy access to historical data. Methods: Image analysis was performed on 27 different MR systems of 1.5T and 3.0T field strengths from GE and Siemens manufacturers. Recommended measurements involved the ACR MRI Accreditation Phantom, spherical homogeneous phantoms, and a phantom with a uniform hole pattern. Measurements assessed geometric accuracy and linearity, position accuracy, image uniformity, signal, noise, ghosting, transmit gain, center frequency, and magnetic field drift. The program was designed with open source tools, employing Linux, Apache, a MySQL database and the Python programming language for the front and back ends. Results: Processing time for each image is <2 seconds. Figures are produced to show regions of interest (ROIs) for analysis. Historical data can be reviewed to compare previous years' data and to inspect for trends. Conclusion: An MRI quality assurance and QC program is necessary for maintaining high-quality, ACR-accredited MR programs. A reviewable database of phantom measurements assists medical physicists with processing and monitoring of large datasets. Longitudinal data can reveal trends that, although within passing criteria, indicate underlying system issues.
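One step of such a pipeline might look like the following sketch: compute signal, noise, and a simplified percent integral uniformity from phantom ROIs and store them for longitudinal trending. SQLite stands in for the MySQL backend so the example is self-contained; the ROI position and the uniformity formula are simplified assumptions:

```python
import sqlite3
import numpy as np

# Simulated phantom image; a real pipeline would load DICOM pixel data.
image = np.random.default_rng(7).normal(1000, 15, (256, 256))
roi = image[96:160, 96:160]              # central signal ROI
signal, noise = roi.mean(), roi.std()
# Simplified percent integral uniformity from ROI extremes.
piu = 100 * (1 - (roi.max() - roi.min()) / (roi.max() + roi.min()))

db = sqlite3.connect("mr_qc.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS qc
              (scanner TEXT, scan_date TEXT, signal REAL,
               noise REAL, piu REAL)""")
db.execute("INSERT INTO qc VALUES (?, ?, ?, ?, ?)",
           ("MR1", "2015-06-15", signal, noise, piu))
db.commit()  # accumulated rows support longitudinal trend review
```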
Alper, Brian S; Tristan, Mario; Ramirez-Morera, Anggie; Vreugdenhil, Maria M T; Van Zuuren, Esther J; Fedorowicz, Zbys
2016-06-01
Guideline development is challenging, expensive and labor-intensive. A high-quality guideline with 90 recommendations for breast cancer treatment was developed within 6 months with limited resources in Costa Rica. We describe the experience and propose a process others can use and adapt. The ADAPTE method (using existing guidelines to minimize repeating work that has been done) was used, but existing guidelines were not current. The method was extended to use databases that systematically identify, appraise and synthesize evidence for clinical application (DynaMed, EBM Guidelines) to provide current evidence searches and critical appraisal of evidence. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was used to rate the quality of evidence and the strength of recommendations. Draft recommendations with supporting evidence were provided to panel members for facilitated voting to target panel discussion to areas necessary for reaching consensus. Training panelists in guideline development methodology facilitated rapid consensus development. Extending 'guideline adaptation' to 'evidence database adaptation' was highly effective and efficient. Methods were created to simplify mapping DynaMed evidence ratings to GRADE ratings. Twelve steps are presented to facilitate rapid guideline development and enable further adaptation by others. This is a case report, and the RAPADAPTE method was retrospectively derived. Prospective replication and validation will support advances for the guideline development community. If guideline development can be accelerated without compromising the validity and relevance of the resulting recommendations, this would greatly improve our ability to impact clinical care. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
Data Auditor: Analyzing Data Quality Using Pattern Tableaux
NASA Astrophysics Data System (ADS)
Srivastava, Divesh
Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
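The constraint-checking idea is easy to illustrate for a functional dependency: group tuples by the determinant attribute and report values that map to more than one dependent value. This shows only the detection step; Data Auditor's tableau generation, which generalizes failing subsets into concise patterns, is not reproduced here:

```python
from collections import defaultdict

# Toy configuration table; checking the functional dependency
# router -> location.
rows = [
    {"router": "r1", "location": "NYC"},
    {"router": "r1", "location": "NYC"},
    {"router": "r2", "location": "LA"},
    {"router": "r2", "location": "SF"},  # violation: two locations for r2
]

groups = defaultdict(set)
for row in rows:
    groups[row["router"]].add(row["location"])

violations = {k: v for k, v in groups.items() if len(v) > 1}
print(violations)  # {'r2': {'LA', 'SF'}}
```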
A high-quality fuels database of photos and information
Clinton S. Wright; Paige C. Eagle; Diana L. Olson
2010-01-01
Photo series and their associated data provide a quick and easy way for managers to quantify and describe fuel and vegetation properties, such as loading of dead and down woody material, tree density, or height of understory vegetation. This information is critical for making fuel management decisions and for predicting fire behavior and fire effects. The Digital Photo...
One primary biological indicator of condition used in the National Rivers and Streams Assessment (NRSA) is the fish assemblage. Data for the 2008-2009 assessment were collected on field forms from over 2100 sites. After field forms were scanned into the NRSA database, we develope...
Learning From Small-Scale Experimental Evaluations of After School Programs. Snapshot Number 8
ERIC Educational Resources Information Center
Harvard Family Research Project, Harvard University, 2006
2006-01-01
The Harvard Family Research Project (HFRP) Out-of-School Time Program Evaluation Database contains profiles of out-of-school time (OST) program evaluations. Its purpose is to provide accessible information about previous and current evaluations to support the development of high quality evaluations and programs in the OST field. Types of Programs…
NASA Astrophysics Data System (ADS)
Sprintall, J.; Cowley, R.; Palmer, M. D.; Domingues, C. M.; Suzuki, T.; Ishii, M.; Boyer, T.; Goni, G. J.; Gouretski, V. V.; Macdonald, A. M.; Thresher, A.; Good, S. A.; Diggs, S. C.
2016-02-01
Historical ocean temperature profile observations provide a critical element for a host of ocean and climate research activities. These include providing initial conditions for seasonal-to-decadal prediction systems, evaluating past variations in sea level and Earth's energy imbalance, ocean state estimation for studying variability and change, and climate model evaluation and development. The International Quality controlled Ocean Database (IQuOD) initiative represents a community effort to create the most globally complete temperature profile dataset, with (intelligent) metadata and assigned uncertainties. With an internationally coordinated effort organized by oceanographers, with data and ocean instrumentation expertise, and in close consultation with end users (e.g., climate modelers), the IQuOD initiative will assess and maximize the potential of an irreplaceable collection of ocean temperature observations (tens of millions of profiles collected at a cost of tens of billions of dollars, since 1772) to fulfil the demand for a climate-quality global database that can be used with greater confidence in a vast range of climate change related research and services of societal benefit. Progress towards version 1 of the IQuOD database, ongoing and future work will be presented. More information on IQuOD is available at www.iquod.org.
Zhang, Xinghe; Guo, Taipin; Zhu, Bowen; Gao, Qing; Wang, Hourong; Tai, Xiantao; Jing, Fujie
2018-05-01
Preterm infants are babies born alive before 37 weeks of gestation. Many surviving infants have growth and developmental deficits, and lifelong disability often follows when intervention is insufficient. Pediatric Tuina has shown good effects as an early intervention for preterm infants in many Chinese and some English-language clinical trials. This systematic review aims to evaluate the efficacy and safety of pediatric Tuina for promoting growth and development of preterm infants. The electronic databases of the Cochrane Library, MEDLINE, EMBASE, Web of Science, Springer, the World Health Organization International Clinical Trials Registry Platform, China National Knowledge Infrastructure, the Chinese Biomedical Literature Database, the Wanfang database, the Chinese Scientific Journal Database, and other databases will be searched from their establishment to April 1, 2018. All published randomized controlled trials (RCTs) on this topic will be included. Two independent researchers will perform article retrieval, screening, quality evaluation, and data analysis using Review Manager (V.5.3.5). Meta-analyses, subgroup analyses, and/or descriptive analyses will be performed depending on the included data. A high-quality synthesis and/or descriptive analysis of current evidence will be provided for outcomes including weight gain, motor development, neuropsychological development, length of stay, days to recovery of birthweight, days on supplemental oxygen, daily sleep duration, and side effects. This study will provide evidence on whether pediatric Tuina is an effective early intervention for preterm infants. Ethical approval and informed consent are not required, and the results will be disseminated in print or electronically. This systematic review protocol has been registered in the PROSPERO network (No. CRD42018090563).
Expert searching in public health
Alpi, Kristine M.
2005-01-01
Objective: The article explores the characteristics of public health information needs and the resources available to address those needs that distinguish it as an area of searching requiring particular expertise. Methods: Public health searching activities from reference questions and literature search requests at a large, urban health department library were reviewed to identify the challenges in finding relevant public health information. Results: The terminology of the information request frequently differed from the vocabularies available in the databases. Searches required the use of multiple databases and/or Web resources with diverse interfaces. Issues of the scope and features of the databases relevant to the search questions were considered. Conclusion: Expert searching in public health differs from other types of expert searching in the subject breadth and technical demands of the databases to be searched, the fluidity and lack of standardization of the vocabulary, and the relative scarcity of high-quality investigations at the appropriate level of geographic specificity. Health sciences librarians require a broad exposure to databases, gray literature, and public health terminology to perform as expert searchers in public health. PMID:15685281
Liu, Ken H; Walker, Douglas I; Uppal, Karan; Tran, ViLinh; Rohrbeck, Patricia; Mallon, Timothy M; Jones, Dean P
2016-08-01
The aim of this study was to maximize detection of serum metabolites with high-resolution metabolomics (HRM). Department of Defense Serum Repository (DoDSR) samples were analyzed using ultrahigh resolution mass spectrometry with three complementary chromatographic phases and four ionization modes. Chemical coverage was evaluated by number of ions detected and accurate mass matches to a human metabolomics database. Individual HRM platforms provided accurate mass matches for up to 58% of the KEGG metabolite database. Combining two analytical methods increased matches to 72% and included metabolites in most major human metabolic pathways and chemical classes. Detection and feature quality varied by analytical configuration. Dual chromatography HRM with positive and negative electrospray ionization provides an effective generalized method for metabolic assessment of military personnel.
Air pollution in Latin America: Bottom-up Vehicular Emissions Inventory and Atmospheric Modeling
NASA Astrophysics Data System (ADS)
Ibarra Espinosa, S.; Vela, A. V.; Calderon, M. G.; Carlos, G.; Ynoue, R.
2016-12-01
Air pollution is a global environmental and health problem, and populations across Latin America face air quality risks due to high levels of air pollution. According to the World Health Organization (WHO, 2016), several Latin American cities have high levels of pollution. Emissions inventories are a key tool for air quality management; however, in developing countries they often lack quality and adequate documentation. This work aims to support air quality assessments in Latin American countries by (1) developing a high-resolution vehicular emissions inventory and (2) simulating air pollutant concentrations. The bottom-up vehicular emissions inventory was obtained with the REMI model (Ibarra et al., 2016), which interpolates traffic over the Open Street Map road network to estimate vehicular emissions over 24 hours for each day of the week. REMI considers several parameters, among them the average fleet age, which was associated with gross domestic product (GDP) per capita. The estimated pollutants are CO, NOx, HC, PM2.5, NO, NO2, CO2, N2O, VOC and NH3, plus fuel consumption. The emissions inventory was produced for the biggest cities, including every Latin American capital. Initial results show that the cities with the highest CO emissions are Buenos Aires, 162,800 t/year; São Paulo, 152,061 t/year; Campinas, 151,567 t/year; and Brasilia, 144,332 t/year. Per capita, the city with the highest CO emissions is Campinas, with 130 kg CO/inhabitant/year (Figure 1). This study also covers high-resolution air quality simulations with WRF-Chem for the main cities in Latin America. Results will be assessed by comparing fuel estimates with local fuel sales, traffic count interpolation with the available traffic data sets at each city, and air pollutant simulations with air monitoring observation data. Ibarra, S., R. Ynoue, and S. Mhartain. 2016: "High Resolution Vehicular Emissions Inventory for the Megacity of São Paulo." Manuscript submitted to Journal of Atmospheric Environment. (1-15) WHO. 2016: WHO Global Urban Ambient Air Pollution Database (update 2016). http://www.who.int/phe/health_topics/outdoorair/databases/cities/en/
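The bottom-up arithmetic underlying such an inventory is, at its core, activity times emission factor per road link; the sketch below uses invented traffic volumes and factors, not REMI's:

```python
# Bottom-up link emissions: vehicle flow x link length x an
# age-dependent emission factor. All numbers are illustrative.
links = [
    {"veh_per_h": 1200, "km": 2.5, "ef_g_per_km": 4.0},  # arterial road
    {"veh_per_h": 300,  "km": 1.0, "ef_g_per_km": 6.5},  # older local fleet
]

hourly_co_g = sum(l["veh_per_h"] * l["km"] * l["ef_g_per_km"] for l in links)
print(f"CO: {hourly_co_g / 1000:.1f} kg/h")  # scale over 24 h x 365 d for t/yr
```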
WLN's Database: New Directions.
ERIC Educational Resources Information Center
Ziegman, Bruce N.
1988-01-01
Describes features of the Western Library Network's database, including the database structure, authority control, contents, quality control, and distribution methods. The discussion covers changes in distribution necessitated by increasing telecommunications costs and the development of optical data disk products. (CLB)
Development of an Integrated Biospecimen Database among the Regional Biobanks in Korea.
Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun
2016-04-01
This study developed an integrated database for 15 regional biobanks that provides large quantities of high-quality bio-data to researchers, to be used for disease prevention, the development of personalized medicines, and genetics studies. We collected the raw data managed independently by the 15 regional biobanks for database modeling and analyzed and defined the metadata of the items. We also built a three-step (high, middle, and low) classification system for classifying the item concepts based on the metadata. To give the items clear meanings, clinical items were defined using the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and specimen items were defined using the Logical Observation Identifiers Names and Codes (LOINC). To optimize database performance, we set up a multi-column index based on the classification system and the international standard codes. As a result of subdividing the 7,197,252 raw data items collected, we refined the metadata into 1,796 clinical items and 1,792 specimen items. The classification system consists of 15 high-, 163 middle-, and 3,588 low-level class items. International standard codes were linked to 69.9% of the clinical items and 71.7% of the specimen items. The database consists of 18 tables implemented in MySQL Server 5.6. In the performance evaluation, the multi-column index reduced query time by as much as a factor of nine. The database developed was based on international standard terminology systems, providing an infrastructure that can integrate the 7,197,252 raw data items managed by the 15 regional biobanks. In particular, it resolved the interoperability issues that inevitably arise in the exchange of information among the biobanks, and provided a solution to the synonym problem, which arises when the same concept is expressed in a variety of ways.
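The multi-column index idea can be sketched as follows; SQLite stands in for MySQL 5.6, and the table and column names are assumptions for illustration:

```python
import sqlite3

# A composite index over (classification levels, standard code) lets
# queries that filter on all of them avoid a full table scan.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE specimen_item
              (high_class TEXT, mid_class TEXT, loinc_code TEXT,
               value TEXT)""")
db.execute("""CREATE INDEX idx_class_code
              ON specimen_item (high_class, mid_class, loinc_code)""")

plan = db.execute(
    """EXPLAIN QUERY PLAN
       SELECT value FROM specimen_item
       WHERE high_class = ? AND mid_class = ? AND loinc_code = ?""",
    ("blood", "serum", "2345-7")).fetchall()
print(plan)  # query plan reports a search using idx_class_code
```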
The Danish Cardiac Rehabilitation Database.
Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole
2016-01-01
The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). The study population comprises hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and has been fully running since August 14, 2015, thus comprising patient-level data from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected relate to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-reported outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD. Data registration is mandatory at both the patient and program levels. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients.
Foot and Ankle Fellowship Websites: An Assessment of Accessibility and Quality.
Hinds, Richard M; Danna, Natalie R; Capo, John T; Mroczek, Kenneth J
2017-08-01
The Internet has been reported to be the first informational resource for many fellowship applicants. The objective of this study was to assess the accessibility of orthopaedic foot and ankle fellowship websites and to evaluate the quality of information provided via program websites. The American Orthopaedic Foot and Ankle Society (AOFAS) and the Fellowship and Residency Electronic Interactive Database (FREIDA) fellowship databases were accessed to generate a comprehensive list of orthopaedic foot and ankle fellowship programs. The databases were reviewed for links to fellowship program websites and compared with program websites accessed from a Google search. Accessible fellowship websites were then analyzed for the quality of recruitment and educational content pertinent to fellowship applicants. Forty-seven orthopaedic foot and ankle fellowship programs were identified. The AOFAS database featured direct links to 7 (15%) fellowship websites with the independent Google search yielding direct links to 29 (62%) websites. No direct website links were provided in the FREIDA database. Thirty-six accessible websites were analyzed for content. Program websites featured a mean 44% (range = 5% to 75%) of the total assessed content. The most commonly presented recruitment and educational content was a program description (94%) and description of fellow operative experience (83%), respectively. There is substantial variability in the accessibility and quality of orthopaedic foot and ankle fellowship websites. Recognition of deficits in accessibility and content quality may assist foot and ankle fellowships in improving program information online. Level IV.
Roshania, Reshma; Mallow, Michaela; Dunbar, Nelson; Mansary, David; Shetty, Pranav; Lyon, Taralyn; Pham, Kacey; Abad, Matthew; Shedd, Erin; Tran, Anh-Minh A; Cundy, Sarah; Levine, Adam C
2016-09-28
The 2014 outbreak of Ebola virus disease (EVD) in West Africa was the largest ever recorded. Starting in September 2014, International Medical Corps (IMC) managed 5 Ebola treatment units (ETUs) in Liberia and Sierra Leone, which cumulatively cared for about 2,500 patients. We conducted a retrospective cohort study of patient data collected at the 5 ETUs over 1 year of operations. To collect clinical and epidemiological data from the patient care areas, each chart was either manually copied across the fence between the high-risk zone and low-risk zone, imaged across the fence, or imaged in the high-risk zone. Each ETU's data were entered into a separate electronic database, and these were later combined into a single relational database. Lot quality assurance sampling was used to ensure data quality, with reentry of data with high error rates from imaged records. The IMC database contains records on 2,768 patient presentations, including 2,351 patient admissions with full follow-up data. Of the patients admitted, 470 (20.0%) tested positive for EVD, with an overall case fatality ratio (CFR) of 57.0% for EVD-positive patients and 8.1% for EVD-negative patients. Although more men were admitted than women (53.4% vs. 46.6%), a larger proportion of women were diagnosed EVD positive (25.6% vs. 15.2%). Diarrhea, red eyes, contact with an ill person, and funeral attendance were significantly more common in patients with EVD than in those with other diagnoses. Among EVD-positive patients, age was a significant predictor of mortality: the highest CFRs were among children under 5 (89.1%) and adults over 55 (71.4%). While several prior reports have documented the experiences of individual ETUs, this study is the first to present data from multiple ETUs across 2 countries run by the same organization with similar clinical protocols. Our experience demonstrates that even in austere settings under difficult conditions, it is possible for humanitarian organizations to collect high-quality clinical and epidemiologic data during a major infectious disease outbreak. © Roshania et al.
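Lot quality assurance sampling, as used here for data quality, reduces to a simple acceptance rule: re-check a sample of n records from a lot and re-enter the whole lot if errors exceed a threshold d. The n and d below are illustrative, not IMC's actual sampling plan:

```python
# Minimal LQAS acceptance rule for a lot of re-checked chart records.
def lot_passes(sampled_error_flags, d=2):
    """Accept the lot if the number of flagged errors is at most d."""
    return sum(sampled_error_flags) <= d

lot_sample = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # 3 errors in n=12
print("accept lot" if lot_passes(lot_sample) else "re-enter lot data")
```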
Use of amphetamine-type stimulants in the Islamic Republic of Iran, 2004-2015: a review.
Shadloo, Behrang; Amin-Esmaeili, Masoumeh; Haft-Baradaran, Minoo; Noroozi, Alireza; Ghorban-Jahromi, Reza; Rahimi-Movaghar, Afarin
2017-05-01
Amphetamine-type stimulants (ATS) are the second most commonly used illicit drugs in the world, after cannabis. The production of ATS has increased worldwide, including in the Middle East. This review aims to assess ATS use in the Islamic Republic of Iran. PubMed, the Scientific Information Database (a national database) and the Iranian Center for Addiction Studies were searched. The review included studies on the general population, university and high school students, other specific populations, and drug users. The results show that self-reported methamphetamine and ecstasy use in 2016 was < 1% in the general population and among university and high-school students, but the prevalence was higher in certain groups. There has also been an increase in the proportion of ATS users among clients of drug treatment centres. The findings highlight the need for high-quality epidemiological studies and closer monitoring of stimulant use in different populations.
Using Large Diabetes Databases for Research.
Wild, Sarah; Fischbacher, Colin; McKnight, John
2016-09-01
There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population limiting comparisons with the population of people with diabetes. However comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately. © 2016 Diabetes Technology Society.
Maetens, Arno; De Schreye, Robrecht; Faes, Kristof; Houttekier, Dirk; Deliens, Luc; Gielen, Birgit; De Gendt, Cindy; Lusyne, Patrick; Annemans, Lieven; Cohen, Joachim
2016-10-18
The use of full-population databases is under-explored for studying the use, quality and costs of end-of-life care. Using the case of Belgium, we explored: (1) which full-population databases provide valid information about end-of-life care, (2) what procedures exist for using these databases, and (3) what is needed to integrate separate databases. Technical and privacy-related aspects of linking and accessing Belgian administrative databases and disease registries were assessed in cooperation with the database administrators and privacy commission bodies. For all relevant databases, we followed procedures in cooperation with database administrators to link the databases and to access the data. We identified several databases as suitable for end-of-life care research in Belgium: the InterMutualistic Agency's national registry of health care claims data, the Belgian Cancer Registry including data on incidence of cancer, and databases administered by Statistics Belgium including data from the death certificate database, the socio-economic survey and fiscal data. To obtain access to the data, approval was required from all database administrators, supervisory bodies and two separate national privacy bodies. Two Trusted Third Parties linked the databases via a deterministic matching procedure using multiple encrypted social security numbers. In this article we describe how various routinely collected population-level databases and disease registries can be accessed and linked to study patterns in the use, quality and costs of end-of-life care in the full population and in specific diagnostic groups.
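The deterministic matching step can be illustrated with a minimal Python sketch: each data holder applies the same keyed hash to the social security number, and records are joined on the resulting pseudonyms so the raw identifier never crosses organizational boundaries. The key, identifiers, and field names below are hypothetical, and a real Trusted Third Party pipeline involves considerably more safeguards.

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-trusted-third-party"  # hypothetical linkage key

def pseudonym(ssn: str) -> str:
    # Keyed hash so the raw social security number never leaves the TTP.
    return hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()

# Two hypothetical source databases keyed on the same national identifier.
claims = {"790101-123-45": {"care_days": 14}}
registry = {"790101-123-45": {"cancer_site": "lung"}}

# Each source pseudonymizes independently; equal digests link the records.
claims_p = {pseudonym(k): v for k, v in claims.items()}
registry_p = {pseudonym(k): v for k, v in registry.items()}

linked = {k: {**claims_p[k], **registry_p[k]} for k in claims_p if k in registry_p}
print(linked)
```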
Huang, Weixin; Li, Xiaohui; Wang, Yuanping; Yan, Xia; Wu, Siping
2017-12-01
Stress urinary incontinence (SUI) is a widespread complaint among adult women. Electroacupuncture has been widely applied in the treatment of SUI, but its efficacy has not been evaluated scientifically and systematically. We therefore provide a protocol for a systematic evaluation to assess the effectiveness and safety of electroacupuncture treatment in women with SUI. The retrieved databases include 3 English literature databases, namely PubMed, Embase, and the Cochrane Library, and 3 Chinese literature databases, namely the Chinese Biomedical Literature Database (CBM), China National Knowledge Infrastructure (CNKI), and the Wanfang Database. Randomized controlled trials (RCTs) of electroacupuncture treatment in women with SUI will be searched in the above-mentioned databases from the time when the respective databases were established to December 2017. The change from baseline in the amount of urine leakage measured by the 1-hour pad test will be accepted as the primary outcome. We will also use RevMan V.5.3 software for data synthesis when a meta-analysis is feasible. This study will provide a high-quality synthesis to assess the effectiveness and safety of electroacupuncture treatment in women with SUI. The conclusion of our systematic review will provide evidence to judge whether electroacupuncture is an effective intervention for women with SUI. PROSPERO CRD42017070947.
Famulari, Stevie; Witz, Kyla
2015-01-01
Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database designed for ease of use by non-scientific users, as well as by students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized by common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to aid those designing with phytoremediation in searching for potential plants that may be used to address their site's needs. The objective of the terminology section is to remove uncertainty for more inexperienced users, and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.
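A relational layout matching that description, with interconnected indexes over plants, contaminants, and the research linking them, can be sketched in a few lines of Python with SQLite. The table and column names are our own illustration, not the site's actual schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE plant (id INTEGER PRIMARY KEY, common_name TEXT,
                    scientific_name TEXT, hardiness_zone TEXT);
CREATE TABLE contaminant (id INTEGER PRIMARY KEY, name TEXT,
                          type TEXT);  -- e.g. metal, organic, oil, radionuclide
CREATE TABLE uptake (plant_id INTEGER REFERENCES plant(id),
                     contaminant_id INTEGER REFERENCES contaminant(id),
                     citation TEXT);
""")
con.execute("INSERT INTO plant VALUES (1, 'Sunflower', 'Helianthus annuus', '2-11')")
con.execute("INSERT INTO contaminant VALUES (1, 'Lead', 'metal')")
con.execute("INSERT INTO uptake VALUES (1, 1, 'hypothetical citation')")

# Search, as a non-scientific user might, by contaminant name:
rows = con.execute("""
    SELECT p.common_name, p.scientific_name, c.name, u.citation
    FROM uptake u
    JOIN plant p ON p.id = u.plant_id
    JOIN contaminant c ON c.id = u.contaminant_id
    WHERE c.name = ?""", ("Lead",)).fetchall()
print(rows)
```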
A Systematic Review Evaluating the Effect of Vitamin B6 on Semen Quality.
Banihani, Saleem Ali
2017-12-30
This review systematically discusses and summarizes the effect of vitamin B6 on semen quality. To achieve this contribution, we searched the PubMed, Scopus, and Web of Science databases for English-language papers from 1984 through 2017 using the key words "sperm" versus "vitamin B6", "pyridoxine", and "pyridoxal". References from the selected papers were also included, if relevant. To date, as revealed by rodent studies, high doses of vitamin B6 impair semen quality and sperm parameters. In humans, it is suggested, but not yet directly confirmed, that seminal vitamin B6 levels may alter sperm quality (i.e., sperm quantity and quality), and that vitamin B6 deficiency may trigger chemical toxicity to sperm (i.e., hyperhomocysteinemia, oxidative injury). The adverse effect of vitamin B6 at high doses has been demonstrated in experimental animals, but not yet directly confirmed in humans. Consequently, in vitro studies on human ejaculate as well as clinical studies that investigate the direct effect of vitamin B6 on semen quality appear highly warranted.
Ekhtiari, Seper; Kay, Jeffrey; de Sa, Darren; Simunovic, Nicole; Musahl, Volker; Peterson, Devin C; Ayeni, Olufemi R
2017-05-01
To characterize and assess the methodological quality of patient and physician surveys related to anterior cruciate ligament reconstruction, and to analyze the factors influencing response rate. The databases MEDLINE, Embase, and PubMed were searched from database inception to search date and screened in duplicate for relevant studies. Data regarding survey characteristics, response rates, and distribution methods were extracted. A previously published list of recommendations for high-quality surveys in orthopaedics was used as a scale to assess survey quality (12 items scored 0, 1, or 2; maximum score = 24). Of the initial 1,276 studies, 53 studies published between 1986 and 2016 met the inclusion criteria. Sixty-four percent of studies were distributed to physicians, compared with 32% distributed to patients and less than 4% to coaches. The median number of items in each survey was 10.5, and the average response rate was 73% (range: 18% to 100%). In-person distribution was the most common method (40%), followed by web-based methods (28%) and mail (25%). Response rates were highest for surveys targeted at patients (77%, P < .0001) and those delivered in-person (94%, P < .0001). The median quality score was 12/24 (range = 8.5/24 to 21/24). There was high inter-rater agreement using the quality scale (intraclass correlation coefficient = 0.92), but there was no correlation with the response rate (Rho = -0.01, P = .97). Response rates vary based on target audience and distribution methods, with patients responding at a significantly higher rate than physicians and in-person distribution yielding significantly higher response rates than web or mail surveys. Level IV, systematic review of Level IV studies. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Alali, Aziz S; Burton, Kirsteen; Fowler, Robert A; Naimark, David M J; Scales, Damon C; Mainprize, Todd G; Nathens, Avery B
2015-07-01
Economic evaluations provide a unique opportunity to identify the optimal strategies for the diagnosis and management of traumatic brain injury (TBI), for which uncertainty is common and the economic burden is substantial. The objective of this study was to systematically review and examine the quality of contemporary economic evaluations in the diagnosis and management of TBI. Two reviewers independently searched MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, NHS Economic Evaluation Database, Health Technology Assessment Database, EconLit, and the Tufts CEA Registry for comparative economic evaluations published from 2000 onward (last updated on August 30, 2013). Data on methods, results, and quality were abstracted in duplicate. The results were summarized quantitatively and qualitatively. Of 3539 citations, 24 economic evaluations met our inclusion criteria. Nine were cost-utility, five were cost-effectiveness, three were cost-minimization, and seven were cost-consequences analyses. Only six studies were of high quality. Current evidence from high-quality studies suggests the economic attractiveness of the following strategies: a low medical threshold for computed tomography (CT) scanning of asymptomatic infants with possible inflicted TBI, selective CT scanning of adults with mild TBI as per the Canadian CT Head Rule, management of severe TBI according to the Brain Trauma Foundation guidelines, management of TBI in dedicated neurocritical care units, and early transfer of patients with TBI with nonsurgical lesions to neuroscience centers. Threshold-guided CT scanning, adherence to Brain Trauma Foundation guidelines, and care for patients with TBI, including those with nonsurgical lesions, in specialized settings appear to be economically attractive strategies. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases composed of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real-world images often contain complex mixtures of multiple distortions. Rather than (a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or (b) using features that are only proven to be efficient for singly distorted images, we study in depth the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real-world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database, and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state of the art.
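To make the natural-scene-statistics front end of such a model concrete, here is a minimal sketch: mean-subtracted contrast-normalized (MSCN) coefficients, a common NSS representation in blind quality models, are summarized into a small feature vector and fed to a generic regressor. The gradient-boosting regressor merely stands in for the paper's deep belief network, and the random images and quality scores are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import GradientBoostingRegressor

def mscn(image, sigma=7 / 6):
    """Mean-subtracted contrast-normalized coefficients (a common NSS front end)."""
    mu = gaussian_filter(image, sigma)
    var = gaussian_filter(image * image, sigma) - mu * mu
    return (image - mu) / (np.sqrt(np.clip(var, 0, None)) + 1.0)

def nss_features(image):
    c = mscn(image.astype(np.float64))
    # Summary statistics of the MSCN distribution; distorted images deviate
    # from the statistics of pristine natural scenes.
    kurt = ((c - c.mean()) ** 4).mean() / (c.var() ** 2 + 1e-12)
    return np.array([c.var(), np.abs(c).mean(), kurt])

# Hypothetical training set: images paired with crowd-sourced quality scores.
rng = np.random.default_rng(0)
X = np.stack([nss_features(rng.random((64, 64))) for _ in range(50)])
y = rng.uniform(0, 100, size=50)  # placeholder mean opinion scores
model = GradientBoostingRegressor().fit(X, y)
print(model.predict(X[:3]))
```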
Pharmacoepidemiology resources in Ireland-an introduction to pharmacy claims data.
Sinnott, Sarah-Jo; Bennett, Kathleen; Cahir, Caitriona
2017-11-01
Administrative health data, such as pharmacy claims data, present a valuable resource for conducting pharmacoepidemiological and health services research. Often, data are available for whole populations, allowing population-level analyses. Moreover, their routine collection ensures that the data reflect health care utilisation in the real-world setting compared to data collected in clinical trials. The Irish Health Service Executive-Primary Care Reimbursement Service (HSE-PCRS) community pharmacy claims database is described. The availability of demographic variables and drug-related information is discussed. The strengths and limitations associated with using this database for conducting research are presented, in particular, internal and external validity. Examples of recently conducted research using the HSE-PCRS pharmacy claims database are used to illustrate the breadth of its use. The HSE-PCRS national pharmacy claims database is a large, high-quality, valid and accurate data source for measuring drug exposure in specific populations in Ireland. The main limitations are the lack of generalisability for those aged <70 years and the lack of information on indication or outcome.
5SRNAdb: an information resource for 5S ribosomal RNAs.
Szymanski, Maciej; Zielezinski, Andrzej; Barciszewski, Jan; Erdmann, Volker A; Karlowski, Wojciech M
2016-01-04
Ribosomal 5S RNA (5S rRNA) is the ubiquitous RNA component found in the large subunit of ribosomes in all known organisms. Due to its small size, abundance and evolutionary conservation, 5S rRNA has for many years been used as a model molecule in studies on RNA structure, RNA-protein interactions and molecular phylogeny. 5SRNAdb (http://combio.pl/5srnadb/) is the first database that provides a high-quality reference set of ribosomal 5S RNAs (5S rRNA) across the three domains of life. Here, we give an overview of new developments in the database and associated web tools since 2002, including updates to database content, curation processes and user web interfaces. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Hjerpe, Per; Merlo, Juan; Ohlsson, Henrik; Bengtsson Boström, Kristina; Lindblad, Ulf
2010-04-23
In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) the sources of variation in this registration. SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCCs) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as the gold standard. The variance of ICD code registration was studied with multilevel logistic regression analysis and expressed as the median odds ratio (MOR). For diabetes mellitus and hypertension, ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered more seldom (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD-coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR(physician) = 4.2 and MOR(HCC) = 2.3). Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians, where the greatest amount of variation was found.
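The median odds ratio used here has a closed form, MOR = exp(sqrt(2*sigma^2) * Phi^-1(0.75)), where sigma^2 is the cluster-level random-intercept variance (Merlo et al.'s formulation). A short sketch, with variances back-solved so the outputs land near the reported 4.2 and 2.3 purely for illustration:

```python
from math import exp, sqrt
from scipy.stats import norm

def median_odds_ratio(cluster_variance: float) -> float:
    """MOR = exp( sqrt(2 * sigma^2) * Phi^{-1}(0.75) ): translates a
    random-intercept variance from a multilevel logistic model onto
    the odds-ratio scale."""
    return exp(sqrt(2.0 * cluster_variance) * norm.ppf(0.75))

# Hypothetical variances chosen to reproduce the reported MORs:
for label, var in [("physician level", 2.26), ("HCC level", 0.76)]:
    print(label, round(median_odds_ratio(var), 1))
```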
A User's Applications of Imaging Techniques: The University of Maryland Historic Textile Database.
ERIC Educational Resources Information Center
Anderson, Clarita S.
1991-01-01
Describes the incorporation of textile images into the University of Maryland Historic Textile Database by a computer user rather than a computer expert. Selection of a database management system is discussed, and PICTUREPOWER, a system that integrates photographic quality images with text and numeric information in databases, is described. (three…
Nobrega, R Paul; Brown, Michael; Williams, Cody; Sumner, Chris; Estep, Patricia; Caffry, Isabelle; Yu, Yao; Lynaugh, Heather; Burnina, Irina; Lilov, Asparouh; Desroches, Jordan; Bukowski, John; Sun, Tingwan; Belk, Jonathan P; Johnson, Kirt; Xu, Yingda
2017-10-01
The state-of-the-art industrial drug discovery approach is the empirical interrogation of a library of drug candidates against a target molecule. The advantage of high-throughput kinetic measurements over equilibrium assessments is the ability to measure each of the kinetic components of binding affinity. Although high-throughput capabilities have improved with advances in instrument hardware, three bottlenecks in data processing remain: (1) intrinsic molecular properties that lead to poor biophysical quality in vitro are not accounted for in commercially available analysis models, (2) processing data through a user interface is time-consuming and not amenable to parallelized data collection, and (3) a commercial solution that includes historical kinetic data in the analysis of kinetic competition data does not exist. Herein, we describe a generally applicable method for the automated analysis, storage, and retrieval of kinetic binding data. This analysis can deconvolve poor quality data on-the-fly and store and organize historical data in a queryable format for use in future analyses. Such database-centric strategies afford greater insight into the molecular mechanisms of kinetic competition, allowing for the rapid identification of allosteric effectors and the presentation of kinetic competition data in absolute terms of percent bound to antigen on the biosensor.
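The kinetic components mentioned here (the association rate kon, dissociation rate koff, and the affinity KD = koff/kon) are typically recovered by fitting a 1:1 binding model to sensorgram data. The following is a minimal sketch on simulated data, the kind of per-curve fit a database-centric pipeline would run automatically and store in queryable form; all names, concentrations, and rate values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def association(t, kon, koff, rmax, conc):
    """1:1 Langmuir association phase:
    R(t) = Rmax * kon*C/(kon*C + koff) * (1 - exp(-(kon*C + koff)*t))."""
    kobs = kon * conc + koff
    return rmax * (kon * conc / kobs) * (1.0 - np.exp(-kobs * t))

# Hypothetical sensorgram: simulate a curve, add noise, then refit it.
t = np.linspace(0, 300, 200)  # seconds
true = dict(kon=1e5, koff=1e-3, rmax=120.0, conc=50e-9)
rng = np.random.default_rng(1)
signal = association(t, true["kon"], true["koff"], true["rmax"],
                     true["conc"]) + rng.normal(0, 0.5, t.size)

popt, _ = curve_fit(
    lambda t, kon, koff, rmax: association(t, kon, koff, rmax, 50e-9),
    t, signal, p0=[1e4, 1e-2, 100.0], maxfev=5000)
kon, koff, rmax = popt
# These fitted constants are what would be inserted into the historical
# kinetics database for later competition analyses.
print(f"kon={kon:.3g} 1/(M*s), koff={koff:.3g} 1/s, KD={koff/kon:.3g} M")
```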
BiGG Models: A platform for integrating, standardizing and sharing genome-scale models
King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.
2016-01-01
Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456
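The application programming interface mentioned above can be queried over HTTP. A minimal sketch with the requests library follows; the endpoint paths and response field names are taken from our reading of the public BiGG documentation and should be treated as assumptions to verify against http://bigg.ucsd.edu/data_access.

```python
import requests

BASE = "http://bigg.ucsd.edu/api/v2"  # v2 REST endpoints advertised on the site

# List the available genome-scale models.
models = requests.get(f"{BASE}/models", timeout=30).json()
print(models.get("results_count"), "models available")

# Fetch one model's summary (model ID and field names assumed from the docs).
core = requests.get(f"{BASE}/models/e_coli_core", timeout=30).json()
print(core.get("organism"), "-", core.get("reaction_count"), "reactions")
```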
Benford's Law for Quality Assurance of Manner of Death Counts in Small and Large Databases.
Daniels, Jeremy; Caetano, Samantha-Jo; Huyer, Dirk; Stephen, Andrew; Fernandes, John; Lytwyn, Alice; Hoppe, Fred M
2017-09-01
To assess whether Benford's law, a mathematical law used for quality assurance in accounting, can be applied as a quality assurance measure for the manner of death determination. We examined a regional forensic pathology service's monthly manner of death counts (N = 2352) from 2011 to 2013, and provincial monthly and weekly death counts from 2009 to 2013 (N = 81,831). We tested whether each dataset's leading digits followed Benford's law via the chi-square test. For each database, we assessed whether the number 1 was the most common leading digit. The first digits of the manner of death counts followed Benford's law in all three datasets. Two of the three datasets had 1 as the most frequent leading digit. The manner of death data in this study showed qualities consistent with Benford's law. The law has potential as a quality assurance metric in the manner of death determination for both small and large databases. © 2017 American Academy of Forensic Sciences.
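The test itself is simple to reproduce: under Benford's law the leading digit d occurs with probability log10(1 + 1/d), and a chi-square goodness-of-fit test compares observed leading-digit counts against that expectation. A self-contained sketch on synthetic counts (the real study used actual death registries):

```python
import numpy as np
from scipy.stats import chisquare

def benford_test(counts):
    """Chi-square goodness of fit of leading digits against Benford's law."""
    digits = np.array([int(str(c)[0]) for c in counts if c > 0])
    observed = np.bincount(digits, minlength=10)[1:10]
    expected = np.log10(1 + 1 / np.arange(1, 10)) * digits.size
    stat, p = chisquare(observed, f_exp=expected)
    return observed, stat, p

# Hypothetical monthly manner-of-death counts (lognormal, as count data often are):
rng = np.random.default_rng(2)
counts = rng.lognormal(mean=3, sigma=1, size=120).astype(int) + 1
obs, stat, p = benford_test(counts)
print("digit 1 most common:", obs.argmax() == 0, "| chi-square p =", round(p, 3))
```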
Malpractice Litigation and Nursing Home Quality of Care
Konetzka, R Tamara; Park, Jeongyoung; Ellis, Robert; Abbo, Elmer
2013-01-01
Objective. To assess the potential deterrent effect of nursing home litigation threat on nursing home quality. Data Sources/Study Setting. We use a panel dataset of litigation claims and Nursing Home Online Survey Certification and Reporting (OSCAR) data from 1995 to 2005 in six states: Florida, Illinois, Wisconsin, New Jersey, Missouri, and Delaware, for a total of 2,245 facilities. Claims data are from Westlaw's Adverse Filings database, a proprietary legal database, on all malpractice, negligence, and personal injury/wrongful death claims filed against nursing facilities. Study Design. A lagged 2-year moving average of the county-level number of malpractice claims is used to represent the threat of litigation. We use facility fixed-effects models to examine the relationship between the threat of litigation and nursing home quality. Principal Findings. We find significant increases in registered nurse-to-total staffing ratios in response to rising malpractice threat, and a reduction in pressure sores among highly staffed facilities. However, the magnitude of the deterrence effect is small. Conclusions. Deterrence in response to the threat of malpractice litigation is unlikely to lead to widespread improvements in nursing home quality. This should be weighed against other benefits and costs of litigation to assess the net benefit of tort reform. PMID:23741985
Data management in clinical research: An overview
Krishnankutty, Binny; Bellary, Shantala; Kumar, Naveen B.R.; Moodahadu, Latha S.
2012-01-01
Clinical Data Management (CDM) is a critical phase in clinical research, which leads to generation of high-quality, reliable, and statistically sound data from clinical trials. This helps to produce a drastic reduction in time from drug development to marketing. Team members of CDM are actively involved in all stages of clinical trial right from inception to completion. They should have adequate process knowledge that helps maintain the quality standards of CDM processes. Various procedures in CDM including Case Report Form (CRF) designing, CRF annotation, database designing, data-entry, data validation, discrepancy management, medical coding, data extraction, and database locking are assessed for quality at regular intervals during a trial. In the present scenario, there is an increased demand to improve the CDM standards to meet the regulatory requirements and stay ahead of the competition by means of faster commercialization of product. With the implementation of regulatory compliant data management tools, CDM team can meet these demands. Additionally, it is becoming mandatory for companies to submit the data electronically. CDM professionals should meet appropriate expectations and set standards for data quality and also have a drive to adapt to the rapidly changing technology. This article highlights the processes involved and provides the reader an overview of the tools and standards adopted as well as the roles and responsibilities in CDM. PMID:22529469
GETPrime: a gene- or transcript-specific primer database for quantitative real-time PCR.
Gubelmann, Carine; Gattiker, Alexandre; Massouras, Andreas; Hens, Korneel; David, Fabrice; Decouttere, Frederik; Rougemont, Jacques; Deplancke, Bart
2011-01-01
The vast majority of genes in humans and other organisms undergo alternative splicing, yet the biological function of splice variants is still very poorly understood in large part because of the lack of simple tools that can map the expression profiles and patterns of these variants with high sensitivity. High-throughput quantitative real-time polymerase chain reaction (qPCR) is an ideal technique to accurately quantify nucleic acid sequences including splice variants. However, currently available primer design programs do not distinguish between splice variants and also differ substantially in overall quality, functionality or throughput mode. Here, we present GETPrime, a primer database supported by a novel platform that uniquely combines and automates several features critical for optimal qPCR primer design. These include the consideration of all gene splice variants to enable either gene-specific (covering the majority of splice variants) or transcript-specific (covering one splice variant) expression profiling, primer specificity validation, automated best primer pair selection according to strict criteria and graphical visualization of the latter primer pairs within their genomic context. GETPrime primers have been extensively validated experimentally, demonstrating high transcript specificity in complex samples. Thus, the free-access, user-friendly GETPrime database allows fast primer retrieval and visualization for genes or groups of genes of most common model organisms, and is available at http://updepla1srv1.epfl.ch/getprime/. Database URL: http://deplanckelab.epfl.ch.
Hinton, W; Liyanage, H; McGovern, A; Liaw, S-T; Kuziemsky, C; Munro, N; de Lusignan, S
2017-08-01
Background: The Institute of Medicine framework defines six dimensions of quality for healthcare systems: (1) safety, (2) effectiveness, (3) patient centeredness, (4) timeliness of care, (5) efficiency, and (6) equity. Large health datasets provide an opportunity to assess quality in these areas. Objective: To perform an international comparison of the measurability of the delivery of these aims, in people with type 2 diabetes mellitus (T2DM), from large datasets. Method: We conducted a survey to assess the healthcare outcomes data quality of existing databases and disseminated it through professional networks. We examined the data sources used to collect the data, the frequency of data uploads, and the data types used for identifying people with T2DM. We compared data completeness across the six areas of healthcare quality, using selected measures pertinent to T2DM management. Results: We received 14 responses from eight countries (Australia, Canada, Italy, the Netherlands, Norway, Portugal, Turkey and the UK). Most databases reported frequent data uploads and would be capable of near real-time analysis of healthcare quality. The majority of recorded data related to safety (particularly medication adverse events) and treatment efficacy (glycaemic control and microvascular disease). Data potentially measuring equity were less well recorded. Recording levels were lowest for patient-centred care, timeliness of care, and system efficiency, with the majority of databases containing no data in these areas. Databases using primary care sources had higher data quality across all areas measured. Conclusion: Data quality could be improved, particularly in the areas of patient-centred care, timeliness, and efficiency. Primary care derived datasets may be most suited to healthcare quality assessment. Georg Thieme Verlag KG Stuttgart.
Impact of medical director certification on nursing home quality of care.
Rowland, Frederick N; Cowles, Mick; Dickstein, Craig; Katz, Paul R
2009-07-01
This study tests the research hypothesis that certified medical directors are able to use their training, education, and knowledge to positively influence quality of care in US nursing homes. F-tag numbers were identified within the State Operations Manual that reflect dimensions of quality thought to be impacted by the medical director. A weighting system was developed based on the "scope and severity" level at which the nursing homes were cited for these specific tag numbers. Then homes led by certified medical directors were compared with homes led by medical directors not known to be certified. DATA/PARTICIPANTS: Data were obtained from the Centers for Medicare & Medicaid Services' Online Survey Certification and Reporting database for nursing homes. Homes with a certified medical director (547) were identified from the database of the American Medical Directors Association. The national survey database was used to compute a "standardized quality score" (zero representing best possible score and 1.0 representing average score) for each home, and the homes with certified medical directors compared with the other homes in the database. Regression analysis was then used to attempt to identify the most important contributors to measured quality score differences between the homes. The standardized quality score of facilities with certified medical directors (n=547) was 0.8958 versus 1.0037 for facilities without certified medical directors (n=15,230) (lower number represents higher quality). When nursing facility characteristics were added to the regression equation, the presence of a certified medical director accounted for up to 15% improvement in quality. The presence of certified medical directors is an independent predictor of quality in US nursing homes.
NASA Astrophysics Data System (ADS)
Giles, D. M.; Holben, B. N.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Sorokin, M. G.; Espenak, F.; Schafer, J.; Sinyuk, A.
2015-12-01
The Aerosol Robotic Network (AERONET) has provided a database of aerosol optical depth (AOD) measured by surface-based Sun/sky radiometers for over 20 years. AERONET provides unscreened (Level 1.0) and automatically cloud-cleared (Level 1.5) AOD in near real-time (NRT), while manually inspected, quality-assured (Level 2.0) AOD are available after instrument field deployment (Smirnov et al., 2000). The growing need for NRT quality-controlled aerosol data has become increasingly important. Applications of AERONET NRT data include satellite evaluation (e.g., MODIS, VIIRS, MISR, OMI), data synergism (e.g., MPLNET), verification of aerosol forecast models and reanalyses (e.g., GOCART, ICAP, NAAPS, MERRA), input to meteorological models (e.g., NCEP, ECMWF), and field campaign support (e.g., KORUS-AQ, ORACLES). In response to user needs for quality-controlled NRT data sets, the new Version 3 (V3) Level 1.5V product was developed with similar quality controls as those applied by hand to the Version 2 (V2) Level 2.0 data set. The AERONET cloud-screened (Level 1.5) NRT AOD database can be significantly impacted by data anomalies. The most significant data anomalies include AOD diurnal dependence due to contamination or obstruction of the sensor head windows, anomalous AOD spectral dependence due to problems with filter degradation, instrument gains, or non-linear changes in calibration, and abnormal changes in temperature-sensitive wavelengths (e.g., 1020 nm) in response to anomalous sensor head temperatures. Other less common AOD anomalies result from loose filters, uncorrected clock shifts, connection and electronic issues, and various solar eclipse episodes. Automatic quality control algorithms are applied to the new V3 Level 1.5 database to remove NRT AOD anomalies and produce the new AERONET V3 Level 1.5V AOD product. Results of the quality control algorithms are presented and the V3 Level 1.5V AOD database is compared to the V2 Level 2.0 AOD database.
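One simple spectral-consistency check of the kind such QC algorithms rely on is the Ångström exponent, alpha = -ln(AOD_a/AOD_b)/ln(lambda_a/lambda_b), which should fall within a physically plausible range. The sketch below flags out-of-range records; the wavelengths and thresholds are illustrative choices, not AERONET's operational values.

```python
import numpy as np

def angstrom_exponent(aod_a, aod_b, wl_a=440.0, wl_b=870.0):
    """alpha = -ln(AOD_a/AOD_b) / ln(wl_a/wl_b): spectral dependence of AOD."""
    return -np.log(aod_a / aod_b) / np.log(wl_a / wl_b)

def flag_spectral_anomaly(aod440, aod870, lo=-0.5, hi=3.0):
    """Flag records whose spectral dependence falls outside a plausible range
    (thresholds here are illustrative)."""
    alpha = angstrom_exponent(aod440, aod870)
    return (alpha < lo) | (alpha > hi)

aod440 = np.array([0.35, 0.80, 0.05])
aod870 = np.array([0.15, 0.02, 0.06])
print(flag_spectral_anomaly(aod440, aod870))  # middle record is implausible
```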
The National Eutrophication Survey: lake characteristics and historical nutrient concentrations
NASA Astrophysics Data System (ADS)
Stachelek, Joseph; Ford, Chanse; Kincaid, Dustin; King, Katelyn; Miller, Heather; Nagelkirk, Ryan
2018-01-01
Historical ecological surveys serve as a baseline and provide context for contemporary research, yet many of these records are not preserved in a way that ensures their long-term usability. The National Eutrophication Survey (NES) database is currently only available as scans of the original reports (PDF files) with no embedded character information. This limits its searchability, machine readability, and the ability of current and future scientists to systematically evaluate its contents. The NES data were collected by the US Environmental Protection Agency between 1972 and 1975 as part of an effort to investigate eutrophication in freshwater lakes and reservoirs. Although several studies have manually transcribed small portions of the database in support of specific studies, there have been no systematic attempts to transcribe and preserve the database in its entirety. Here we use a combination of automated optical character recognition and manual quality assurance procedures to make these data available for analysis. The performance of the optical character recognition protocol was found to be linked to variation in the quality (clarity) of the original documents. For each of the four archival scanned reports, our quality assurance protocol found an error rate between 5.9% and 17%. The goal of our approach was to strike a balance between efficiency and data quality by combining entry of data by hand with digital transcription technologies. The finished database contains information on the physical characteristics, hydrology, and water quality of about 800 lakes in the contiguous US (Stachelek et al. (2017), https://doi.org/10.5063/F1639MVD). Ultimately, this database could be combined with more recent studies to generate meta-analyses of water quality trends and spatial variation across the continental US.
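The two stages of such a pipeline, automated OCR followed by a manually checked random sample to estimate the error rate, can be sketched as follows. The OCR call is shown commented out because it requires the external tesseract binary; file names, sample sizes, and the simulated error pattern are hypothetical, not the NES project's actual code.

```python
import random

# OCR step (requires the tesseract binary plus the pillow and pytesseract
# packages; shown here only as the obvious route):
# from PIL import Image
# import pytesseract
# text = pytesseract.image_to_string(Image.open("nes_report_page_042.png"))

def qa_error_rate(transcribed_values, source_values, sample_size=50, seed=0):
    """Manually checked random sample: the fraction of transcribed values that
    disagree with the original document, as in the 5.9-17% rates reported."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(transcribed_values)),
                     min(sample_size, len(transcribed_values)))
    errors = sum(transcribed_values[i] != source_values[i] for i in idx)
    return errors / len(idx)

# Hypothetical transcriptions vs. hand-checked source values:
source = [round(0.01 * i, 2) for i in range(200)]
transcribed = [v if i % 13 else v + 1 for i, v in enumerate(source)]
print(f"estimated error rate: {qa_error_rate(transcribed, source):.1%}")
```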
A database for spectral image quality
NASA Astrophysics Data System (ADS)
Le Moan, Steven; George, Sony; Pedersen, Marius; Blahová, Jana; Hardeberg, Jon Yngve
2015-01-01
We introduce a new image database dedicated to multi-/hyperspectral image quality assessment. A total of nine scenes representing pseudo-flat surfaces of different materials (textile, wood, skin, etc.) were captured by means of a 160-band hyperspectral system with a spectral range between 410 and 1000 nm. Five spectral distortions were designed, applied to the spectral images and subsequently compared in a psychometric experiment, in order to provide a basis for applications such as the evaluation of spectral image difference measures. The database can be downloaded freely from http://www.colourlab.no/cid.
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
2002-01-01
1-hour and proposed 8-hour National Ambient Air Quality Standards. Reactive biogenic (natural) volatile organic compounds emitted from plants have...uncertainty in predicting plant species composition and frequency. Isoprene emissions computed for the study area from the project’s high-resolution...Landcover Database (BELD 2), while monoterpene and other reactive volatile organic compound emission rates were almost 26% and 28% lower, respectively
Subscribing to Databases: How Important Is Depth and Quality of Indexing?
ERIC Educational Resources Information Center
Delong, Linwood
2007-01-01
This paper compares the subject indexing on articles pertaining to Immanuel Kant, agriculture, and aging that are found simultaneously in Humanities Index, Academic Search Elite (EBSCO) and Periodicals Research II (Micromedia ProQuest), in order to show that there are substantial variations in the depth and quality of indexing in these databases.…
HIV quality report cards: impact of case-mix adjustment and statistical methods.
Ohl, Michael E; Richardson, Kelly K; Goto, Michihiko; Vaughan-Sarrazin, Mary; Schweizer, Marin L; Perencevich, Eli N
2014-10-15
There will be increasing pressure to publicly report and rank the performance of healthcare systems on human immunodeficiency virus (HIV) quality measures. To inform discussion of public reporting, we evaluated the influence of case-mix adjustment when ranking individual care systems on the viral control quality measure. We used data from the Veterans Health Administration (VHA) HIV Clinical Case Registry and administrative databases to estimate case-mix-adjusted viral control for 91 local systems caring for 12,368 patients. We compared results using 2 adjustment methods, the observed-to-expected estimator and the risk-standardized ratio. Overall, 10,913 patients (88.2%) achieved viral control (viral load ≤400 copies/mL). Prior to case-mix adjustment, system-level viral control ranged from 51% to 100%. Seventeen (19%) systems were labeled as low outliers (performance significantly below the overall mean) and 11 (12%) as high outliers. Adjustment for case mix (patient demographics, comorbidity, CD4 nadir, time on therapy, and income from VHA administrative databases) reduced the number of low outliers by approximately one-third, but results differed by method. The adjustment model had moderate discrimination (c statistic = 0.66), suggesting potential for unadjusted risk when using administrative data to measure case mix. Case-mix adjustment affects rankings of care systems on the viral control quality measure. Given the sensitivity of rankings to the selection of case-mix adjustment methods, and the potential for unadjusted risk when using variables limited to current administrative databases, the HIV care community should explore optimal methods for case-mix adjustment before moving forward with public reporting. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
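The observed-to-expected estimator works as follows: fit a case-mix model on all patients, then compare each system's observed viral control rate with the mean predicted probability among its own patients. A minimal sketch on synthetic data; the covariates, coefficients, and system count are placeholders, not the VHA model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 3))           # hypothetical case-mix covariates
system = rng.integers(0, 20, size=n)  # 20 hypothetical care systems
# Outcome simulated so roughly 85-90% achieve viral control:
p_true = 1 / (1 + np.exp(-(1.8 + X @ np.array([0.5, -0.3, 0.2]))))
y = (rng.random(n) < p_true).astype(int)

# Case-mix model fit on all patients gives each patient an expected
# probability; O/E compares a system's observed rate with the average
# expectation for its own patient mix.
expected = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
for s in range(3):  # print a few systems
    mask = system == s
    oe = y[mask].mean() / expected[mask].mean()
    print(f"system {s}: O/E = {oe:.2f}")
```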
Cavallo, Sabrina; Brosseau, Lucie; Toupin-April, Karine; Wells, George A; Smith, Christine A; Pugh, Arlanna G; Stinson, Jennifer; Thomas, Roanne; Ahmed, Sara; Duffy, Ciarán M; Rahman, Prinon; Àlvarez-Gallardo, Inmaculada C; Loew, Laurianne; De Angelis, Gino; Feldman, Debbie Ehrmann; Majnemer, Annette; Gagnon, Isabelle J; Maltais, Désirée; Mathieu, Marie-Ève; Kenny, Glen P; Tupper, Susan; Whitney-Mahoney, Kristi; Bigford, Sarah
2017-05-01
To create guidelines focused on the use of structured physical activity (PA) in the management of juvenile idiopathic arthritis (JIA). A systematic literature search was conducted using the electronic databases Cochrane Central Register of Controlled Trials, MEDLINE (Ovid), EMBASE (Ovid), and the Physiotherapy Evidence Database for all studies related to PA programs for JIA from January 1966 until December 2014, and was updated in May 2015. Study selection was completed independently by 2 reviewers. Studies were included if they involved individuals aged ≤21 years diagnosed with JIA who were taking part in therapeutic exercise or other PA interventions for which effects on various disease-related outcomes were compared with a control group (eg, no PA program or activity of lower intensity). Two reviewers independently extracted information on interventions, comparators, outcomes, time period, and study design. The statistical analysis was reported using the Cochrane Collaboration methods. The quality of the included studies was assessed according to the Physiotherapy Evidence Database Scale. Five randomized controlled trials (RCTs) fit the selection criteria; of these, 4 were high-quality RCTs. The following recommendations were developed: (1) Pilates for improving quality of life, pain, functional ability, and range of motion (ROM) (grade A); (2) home exercise program for improving quality of life and functional ability (grade A); (3) aquatic aerobic fitness for decreasing the number of active joints (grade A); and (4) cardio-karate aerobic exercise for improving ROM and number of active joints (grade C+). The Ottawa Panel recommends the following structured exercises and physical activities for the management of JIA: Pilates, cardio-karate, home and aquatic exercises. Pilates showed improvement in a higher number of outcomes. Copyright © 2017. Published by Elsevier Inc.
SEER Linked Databases - SEER Datasets
SEER-Medicare database of elderly persons with cancer is useful for epidemiologic and health services research. SEER-MHOS has health-related quality of life information about elderly persons with cancer. SEER-CAHPS database has clinical, survey, and health services information on people with cancer.
ERIC Educational Resources Information Center
Bell, Steven J.
2003-01-01
Discusses full-text databases and whether existing aggregator databases are meeting user needs. Topics include the need for better search interfaces; concepts of quality research and information retrieval; information overload; full text in electronic journal collections versus aggregator databases; underrepresentation of certain disciplines; and…
Roman, C; Scripcariu, L; Diaconescu, Rm; Grigoriu, A
2012-01-01
Biocides for prolonging the shelf life of a large variety of materials have been extensively used over the last decades. It has estimated that the worldwide biocide consumption to be about 12.4 billion dollars in 2011, and is expected to increase in 2012. As biocides are substances we get in contact with in our everyday lives, access to this type of information is of paramount importance in order to ensure an appropriate living environment. Consequently, a database where information may be quickly processed, sorted, and easily accessed, according to different search criteria, is the most desirable solution. The main aim of this work was to design and implement a relational database with complete information about biocides used in public health management to improve the quality of life. Design and implementation of a relational database for biocides, by using the software "phpMyAdmin". A database, which allows for an efficient collection, storage, and management of information including chemical properties and applications of a large quantity of biocides, as well as its adequate dissemination into the public health environment. The information contained in the database herein presented promotes an adequate use of biocides, by means of information technologies, which in consequence may help achieve important improvement in our quality of life.
Dunbar, Margaret; Mirpuri, Sheena; Yip, Tiffany
2017-10-01
Previous research has indicated that school engagement tends to decline across high school. At the same time, sleep problems and exposure to social stressors such as ethnic/racial discrimination increase. The current study uses a biopsychosocial perspective to examine the interactive and prospective effects of sleep and discrimination on trajectories of academic performance. Growth curve models were used to explore changes in 6 waves of academic outcomes in a sample of 310 ethnically and racially diverse adolescents (mean age = 14.47 years, SD = .78, and 64.1% female). Ethnic/racial discrimination was assessed at Time 1 in a single survey. Sleep quality and duration were also assessed at Time 1 with daily diary surveys. School engagement and grades were reported every 6 months for 3 years. Higher self-reported sleep quality in the ninth grade was associated with higher levels of academic engagement at the start of high school. Ethnic/racial discrimination moderated the relationship between sleep quality and engagement such that adolescents reporting low levels of discrimination reported a steeper increase in engagement over time, whereas their peers reporting poor sleep quality and high levels of discrimination reported the worst engagement in the ninth grade and throughout high school. The combination of poor sleep quality and high levels of discrimination in ninth grade has downstream consequences for adolescent academic outcomes. This study applies the biopsychosocial model to understand the development and daily experiences of diverse adolescents. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Fandakova, Yana; Sander, Myriam C; Grandy, Thomas H; Cabeza, Roberto; Werkle-Bergner, Markus; Shing, Yee Lee
2018-02-01
Older adults are more likely than younger adults to falsely recall past episodes that occurred differently or not at all. We examined whether older adults' propensity for false associative memory is related to declines in postretrieval monitoring processes and their modulation with varying memory representations. Younger (N = 20) and older adults (N = 32) studied and relearned unrelated scene-word pairs, followed by a final cued recall that was used to distribute the pairs for an associative recognition test 24 hours later. This procedure allowed individualized formation of rearranged pairs that were made up of elements of pairs that were correctly recalled in the final cued recall ("high-quality" pairs), and of pairs that were not correctly recalled ("low-quality" pairs). Both age groups falsely recognized more low-quality than high-quality rearranged pairs, with a less pronounced reduction in false alarms to high-quality pairs in older adults. In younger adults, cingulo-opercular activity was enhanced for false alarms and for low-quality correct rejections, consistent with its role in postretrieval monitoring. Older adults did not show such modulated recruitment, suggesting deficits in their selective engagement of monitoring processes given variability in the fidelity of memory representations. There were no age differences in hippocampal activity, which was higher for high-quality than low-quality correct rejections in both age groups. These results demonstrate that the engagement of cingulo-opercular monitoring mechanisms varies with memory representation quality and contributes to age-related deficits in false associative memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Database Performance Monitoring for the Photovoltaic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Katherine A.
The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time-indexed databases (currently csv format), performs a series of quality control tests defined by the user, and creates reports which include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML formatted reports can be sent via email or hosted on a website. To compare performance of several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
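The workflow described, load a time-indexed CSV, run user-defined quality control tests, and summarize the results, can be sketched in a few lines of pandas. This is our own illustration of the pattern, not the DPM source code; the column names, sampling frequency, and valid range are hypothetical.

```python
import pandas as pd

def qc_report(csv_path, freq="1h", valid_range=(0.0, 1500.0)):
    """Load a time-indexed CSV and run two simple checks: missing timestamps
    and out-of-range values. Frequency and limits are hypothetical."""
    df = pd.read_csv(csv_path, index_col=0, parse_dates=True)
    expected = pd.date_range(df.index.min(), df.index.max(), freq=freq)
    missing = expected.difference(df.index)
    lo, hi = valid_range
    out_of_range = df[(df < lo) | (df > hi)].count()  # per-column violation counts
    return {"missing_timestamps": len(missing),
            "out_of_range_per_column": out_of_range.to_dict(),
            "summary": df.describe().loc[["mean", "min", "max"]].to_dict()}

# Usage (hypothetical file of photovoltaic power measurements in watts):
# print(qc_report("pv_array_A.csv"))
```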
Key elements of high-quality practice organisation in primary health care: a systematic review.
Crossland, Lisa; Janamian, Tina; Jackson, Claire L
2014-08-04
To identify elements that are integral to high-quality practice and determine considerations relating to high-quality practice organisation in primary care. A narrative systematic review of published and grey literature. Electronic databases (PubMed, CINAHL, the Cochrane Library, Embase, Emerald Insight, PsycInfo, the Primary Health Care Research and Information Service website, Google Scholar) were searched in November 2013 and used to identify articles published in English from 2002 to 2013. Reference lists of included articles were searched for relevant unpublished articles and reports. Data were configured at the study level to allow for the inclusion of findings from a broad range of study types. Ten elements were most often included in the existing organisational assessment tools. A further three elements were identified from an inductive thematic analysis of descriptive articles, and were noted as important considerations in effective quality improvement in primary care settings. Although there are some validated tools available to primary care that identify and build quality, most are single-strategy approaches developed outside health care settings. There are currently no validated organisational improvement tools, designed specifically for primary health care, which combine all elements of practice improvement and whose use does not require extensive external facilitation.
DeTEXT: A Database for Evaluating Text Extraction from Biomedical Literature Figures
Yin, Xu-Cheng; Yang, Chun; Pei, Wei-Yi; Man, Haixia; Zhang, Jun; Learned-Miller, Erik; Yu, Hong
2015-01-01
Hundreds of millions of figures are available in biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information. A high-quality ground truth standard can greatly facilitate the development of an automated system. This article describes DeTEXT: A database for evaluating text extraction from biomedical literature figures. It is the first publicly available, human-annotated, high quality, and large-scale figure-text dataset with 288 full-text articles, 500 biomedical figures, and 9308 text regions. This article describes how figures were selected from open-access full-text biomedical articles and how annotation guidelines and annotation tools were developed. We also discuss the inter-annotator agreement and the reliability of the annotations. We summarize the statistics of the DeTEXT data and make available evaluation protocols for DeTEXT. Finally we lay out challenges we observed in the automated detection and recognition of figure text and discuss research directions in this area. DeTEXT is publicly available for downloading at http://prir.ustb.edu.cn/DeTEXT/. PMID:25951377
Hurst, Dominic
2012-06-01
The Medline, Cochrane CENTRAL, Biomed Central, Database of Open Access Journals (DOAJ), OpenJ-Gate, Bibliografia Brasileira de Odontologia (BBO), LILACS, IndMed, Sabinet, Scielo, Scirus (Medicine), OpenSIGLE and Google Scholar databases were searched. Hand searching was performed for journals not indexed in the databases. References of included trials were checked. Prospective clinical trials with test and control groups and a follow-up of at least one year were included. Data abstraction was conducted independently, and clinically and methodologically homogeneous data were pooled using a fixed-effects model. Eighteen trials were included. From these, 32 individual dichotomous datasets were extracted and analysed. The majority of the results show no differences between the two types of intervention. A high risk of selection, performance, detection and attrition bias was identified. Existing research gaps are mainly due to a lack of trials and small sample sizes. The current evidence indicates that the failure rate of high-viscosity GIC/ART restorations is not higher than, but similar to, that of conventional amalgam fillings after periods longer than one year. These results are in line with the conclusions drawn during the original systematic review. There is a high risk that these results are affected by bias, and thus confirmation by further trials with suitably high numbers of participants is needed.
Ruano, J; Aguilar-Luque, M; Isla-Tejera, B; Alcalde-Mellado, P; Gay-Mimbrera, J; Hernandez-Romero, José Luis; Sanz-Cabanillas, J L; Maestre-López, B; González-Padilla, M; Carmona-Fernández, P J; Gómez-García, F; García-Nieto, A Vélez
2018-05-24
The aim of this study was to describe the relationship among abstract structure, readability, and completeness, and how these features may influence social media activity and bibliometric results, considering systematic reviews (SRs) about interventions in psoriasis classified by methodological quality. Systematic literature searches about psoriasis interventions were undertaken on relevant databases. For each review, methodological quality was evaluated using the Assessing the Methodological Quality of Systematic Reviews (AMSTAR) tool. Abstract extension, structure, readability, and quality and completeness of reporting were analyzed. Social media activity (Twitter and Facebook mention counts), Mendeley readership, and Google Scholar citations were obtained for each article. Analyses were conducted to describe any potential influence of abstract characteristics on the reviews' social media diffusion. We classified 139 intervention SRs as displaying high/moderate/low methodological quality. We observed that the abstract readability of SRs has remained high over the last 20 years, although there are some differences based on methodological quality. Free-format abstracts were most sensitive to increases in text readability as compared with more structured abstracts (IMRAD or 8-headings), yielding opposite effects on their quality and completeness depending on methodological quality: a worsening in low-quality reviews and an improvement in those of high quality. Both readability indices and PRISMA for Abstracts total scores showed an inverse relationship with social media activity and bibliometric results in high methodological quality reviews but not in those of lower quality. Our results suggest that increasing abstract readability should be given special consideration when writing free-format summaries of high-quality reviews, because it correlates with an improvement in their completeness and quality, and this may help to achieve broader social media visibility and article usage. Copyright © 2018 Elsevier Inc. All rights reserved.
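For readers unfamiliar with readability scoring, the snippet below illustrates how an abstract can be scored programmatically with the textstat package; the package choice and the example text are assumptions made for illustration, as the paper's exact indices and tooling are not specified in this abstract.

```python
# Illustrative readability scoring of an abstract with the `textstat`
# package (an assumption; the study's own indices may differ).
import textstat

abstract = ("We reviewed systematic reviews of psoriasis interventions and "
            "assessed how abstract structure relates to readability.")

print(textstat.flesch_reading_ease(abstract))   # higher = easier to read
print(textstat.flesch_kincaid_grade(abstract))  # approximate US grade level
```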
Electroacupuncture for Tinnitus: A Systematic Review
Liu, Yang; Zhong, Juan; Jiang, Luyun; Liu, Ying; Chen, Qing; Xie, Yan; Zhang, Qinxiu
2016-01-01
Background Treatment effects of electroacupuncture for patients with subjective tinnitus have yet to be clarified. Objectives To assess the effect of electroacupuncture for alleviating the symptoms of subjective tinnitus. Methods Extensive literature searches were carried out in three English and four Chinese databases (PubMed, EMBASE, Cochrane Library, CNKI, Wanfang Chinese Digital Periodical and Conference Database, VIP, and ChiCTR). The date of the most recent search was 1 June 2014. Randomized controlled trials (RCTs) or quasi-RCTs were included. The titles, abstracts, and keywords of all records were reviewed by two authors independently. The data were collected and extracted by three authors. The risk of bias in the trials was assessed in accordance with the Cochrane Handbook, version 5.1.0 (http://www.handbook.cochrane.org). Eighty-nine studies were retrieved. After discarding 84 articles, five studies with 322 participants were identified. Assessment of the methodological quality of the studies identified weaknesses in all five studies. All studies were judged as having a high risk of selection and performance bias. The attrition bias was high in four studies. Incompleteness bias was low in all studies. Reporting bias was unclear in all studies. Because of the limited number of trials included and the various types of interventions and outcomes, we were unable to conduct pooled analyses. Conclusions Due to the poor methodological quality of the primary studies and the small sample sizes, no convincing evidence that electroacupuncture is beneficial for treating tinnitus could be found. There is an urgent need for more high-quality trials with large sample sizes to investigate electroacupuncture treatment for tinnitus. PMID:26938213
Linking microarray reporters with protein functions.
Gaj, Stan; van Erk, Arie; van Haaften, Rachel I M; Evelo, Chris T A
2007-09-26
The analysis of microarray experiments requires accurate and up-to-date functional annotation of the microarray reporters to optimize the interpretation of the biological processes involved. Pathway visualization tools are used to connect gene expression data with existing biological pathways by using specific database identifiers that link reporters with elements in the pathways. This paper proposes a novel method that aims to improve microarray reporter annotation by BLASTing the original reporter sequences against a species-specific EMBL subset that was derived from, and crosslinked back to, the highly curated UniProt database. The resulting alignments were filtered using high-quality alignment criteria and further compared with the outcome of a more traditional approach, where reporter sequences were BLASTed against EnsEMBL, followed by locating the corresponding protein (UniProt) entry for the high-quality hits. Combining the results of both methods resulted in successful annotation of >58% of all reporter sequences with UniProt IDs on two commercial array platforms, increasing the number of Incyte reporters that could be coupled to Gene Ontology terms from 32.7% to 58.3% and to a local GenMAPP pathway from 9.6% to 16.7%. For Agilent, 35.3% of all reporters are now linked to GO terms and 7.1% to local pathways. Our methods increased the annotation quality of microarray reporter sequences and allowed us to visualize more reporters using pathway visualization tools. Even in cases where the original reporter annotation showed the correct description, the new identifiers often allowed improved pathway and Gene Ontology linking. These methods are freely available at http://www.bigcat.unimaas.nl/public/publications/Gaj_Annotation/.
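A minimal sketch of the post-BLAST filtering step such an approach relies on, assuming standard BLAST tabular output (-outfmt 6); the identity, length, and e-value thresholds below are placeholders, not the paper's actual high-quality alignment criteria.

```python
# Filter BLAST tabular hits down to "high-quality" reporter-to-protein
# alignments. Thresholds and the sample rows are illustrative assumptions.
import csv
import io

FIELDS = ["qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore"]

SAMPLE = ("probe1\tP12345\t99.2\t60\t0\t0\t1\t60\t1\t60\t1e-30\t120\n"
          "probe2\tQ99999\t91.0\t40\t4\t1\t1\t40\t5\t44\t1e-5\t80\n")

def high_quality_hits(handle, min_pident=98.0, min_len=50, max_evalue=1e-10):
    """Yield (reporter_id, protein_id) pairs passing the quality filter."""
    for row in csv.DictReader(handle, fieldnames=FIELDS, delimiter="\t"):
        if (float(row["pident"]) >= min_pident
                and int(row["length"]) >= min_len
                and float(row["evalue"]) <= max_evalue):
            yield row["qseqid"], row["sseqid"]

for reporter, protein in high_quality_hits(io.StringIO(SAMPLE)):
    print(reporter, protein)  # probe1 P12345
```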
Edgren, Gustaf; Rostgaard, Klaus; Vasan, Senthil K; Wikman, Agneta; Norda, Rut; Pedersen, Ole Birger; Erikstrup, Christian; Nielsen, Kaspar René; Titlestad, Kjell; Ullum, Henrik; Melbye, Mads; Nyrén, Olof; Hjalgrim, Henrik
2015-07-01
Risks of transfusion-transmitted disease are currently at a record low in the developed world. Still, available methods for blood surveillance might not be sufficient to detect transmission of diseases with unknown etiologies or with very long incubation periods. We have previously created the anonymized Scandinavian Donations and Transfusions (SCANDAT) database, containing data on blood donors, blood transfusions, and transfused patients, with complete follow-up of donors and patients for a range of health outcomes. Here we describe the re-creation of SCANDAT with updated, identifiable data. We collected computerized data on blood donations and transfusions from blood banks covering all of Sweden and Denmark. After data cleaning, two structurally identical databases were created, and the entire database was linked with nationwide health outcome registers to attain complete follow-up for up to 47 years regarding hospital care, cancer, and death. After removal of erroneous records, the database contained 25,523,334 donation records, 21,318,794 transfusion records, and 3,692,653 unique persons with valid identification, presently followed for over 40 million person-years, with the possibility of future extension. Data quality is generally high, with 96% of all transfusions being traceable to their respective donation(s) and a very high (>97%) concordance with official statistics on the annual number of blood donations and transfusions. It is possible to create a binational, nationwide database with almost 50 years of follow-up of blood donors and transfused patients for a range of health outcomes. We aim to use this database for further studies of donor health, transfusion-associated risks, and transfusion-transmitted disease. © 2015 AABB.
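The traceability figure quoted above implies a simple join-style check between transfusion and donation tables. A minimal sketch, with an invented schema rather than the actual SCANDAT one:

```python
# Share of transfusion records whose donation identifier resolves to a
# known donation record. Schema and rows are invented for illustration.
import pandas as pd

donations = pd.DataFrame({"donation_id": [1, 2, 3, 4]})
transfusions = pd.DataFrame({"transfusion_id": [10, 11, 12],
                             "donation_id": [1, 2, 99]})  # 99 is untraceable

traceable = transfusions["donation_id"].isin(donations["donation_id"])
print(f"Traceable transfusions: {traceable.mean():.1%}")  # 66.7%
```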
Bordeianou, Liliana; Cauley, Christy E; Antonelli, Donna; Bird, Sarah; Rattner, David; Hutter, Matthew; Mahmood, Sadiqa; Schnipper, Deborah; Rubin, Marc; Bleday, Ronald; Kenney, Pardon; Berger, David
2017-01-01
Two systems measure surgical site infection rates following colorectal surgeries: the American College of Surgeons National Surgical Quality Improvement Program and the Centers for Disease Control and Prevention National Healthcare Safety Network. The Centers for Medicare & Medicaid Services pay-for-performance initiatives use National Healthcare Safety Network data for hospital comparisons. This study aimed to compare database concordance. This is a multi-institution cohort study of a systemwide Colorectal Surgery Collaborative. The National Surgical Quality Improvement Program requires rigorous, standardized data capture techniques; the National Healthcare Safety Network allows 5 data capture techniques. Standardized surgical site infection rates were compared between databases. The Cohen κ-coefficient was calculated. This study was conducted at Boston-area hospitals. National Healthcare Safety Network or National Surgical Quality Improvement Program patients undergoing colorectal surgery were included. Standardized surgical site infection rates were the primary outcomes of interest. Thirty-day surgical site infection rates were compared for 3547 (National Surgical Quality Improvement Program) vs 5179 (National Healthcare Safety Network) colorectal procedures (2012-2014). Discrepancies appeared: the National Surgical Quality Improvement Program database of hospital 1 (N = 1480 patients) routinely found surgical site infection rates of approximately 10%, which were deemed "exemplary" or "as expected" 100% of the time. National Healthcare Safety Network data from the same hospital and time period (N = 1881) revealed a similar overall surgical site infection rate (10%), but standardized rates were deemed "worse than national average" 80% of the time. Overall, hospitals using less rigorous capture methods had improved surgical site infection rates for the National Healthcare Safety Network compared with standardized National Surgical Quality Improvement Program reports. The correlation coefficient between standardized infection rates was 0.03 (p = 0.88). During 25 site-time period observations, National Surgical Quality Improvement Program and National Healthcare Safety Network data matched for 52% of observations (13/25). κ = 0.10 (95% CI, -0.1366 to 0.3402; p = 0.403), indicating poor agreement. This study investigated hospitals located in the Northeastern United States only. Variation in Centers for Medicare & Medicaid Services-mandated National Healthcare Safety Network infection surveillance methodology leads to unreliable results, which is apparent when these results are compared with standardized data. High-quality data would improve care quality and allow outcomes to be compared among institutions.
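For reference, the κ statistic reported above is Cohen's kappa, which corrects the raw agreement between two raters (here, the two surveillance systems' standardized ratings) for agreement expected by chance. The toy example below uses fabricated labels, not the study data, to show how high raw concordance can still yield a modest kappa.

```python
# Cohen's kappa on fabricated "as expected" / "worse than expected"
# ratings; illustrates chance correction, not the study's actual data.
from sklearn.metrics import cohen_kappa_score

system_a = ["as_expected"] * 10 + ["worse"] * 3 + ["as_expected"] * 12
system_b = ["as_expected"] * 7 + ["worse"] * 6 + ["as_expected"] * 9 + ["worse"] * 3

print(cohen_kappa_score(system_a, system_b))  # ~0.39, despite 76% raw agreement
```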
Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey
2017-01-01
The availability of spatially referenced environmental data and species occurrence records in online databases enables practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...
Quality assessment of economic evaluation studies in pediatric surgery: a systematic review.
Fotso Kamdem, Arnaud; Nerich, Virginie; Auber, Frederic; Jantchou, Prévost; Ecarnot, Fiona; Woronoff-Lemsi, Marie-Christine
2015-04-01
To assess economic evaluation studies (EES) in pediatric surgery and to identify potential factors associated with high-quality studies. A systematic review of the literature using the PubMed and Cochrane databases was conducted to identify EES in pediatric surgery published between 1 June 1993 and 30 June 2013. Assessment criteria were derived from the Drummond checklist. A high-quality study was defined as a Drummond score ≥7. Logistic regression analysis was used to determine factors associated with high-quality studies. In total, 119 studies were included. 43.7% (n=52) of studies were full EES. Cost-effectiveness analysis was the most frequent (61.5%) type of full EES. Only 31.6% of studies had a Drummond score ≥7, and 73% of these were full EES. The factors associated with high quality were identification of costs (OR: 14.08; 95% CI: 3.38-100; p<0.001), estimation of utility value (OR: 8.13; 95% CI: 2.02-43.47; p=0.005) and study funding (OR: 3.50; 95% CI: 1.27-10.10; p=0.02). This review shows that the number and the quality of EES are low despite the increasing number of studies published in recent years. In the current context of budget constraints, our results should encourage pediatric surgeons to focus more on EES. Copyright © 2015 Elsevier Inc. All rights reserved.
Quality of life in children with adverse drug reactions: a narrative and systematic review.
Del Pozzo-Magaña, Blanca R; Rieder, Michael J; Lazo-Langner, Alejandro
2015-10-01
Adverse drug reactions are a common problem affecting adults and children. The economic impact of adverse drug reactions has been widely evaluated; however, studies of the impact on the quality of life of children with adverse drug reactions are scarce. The aim was to evaluate studies assessing the health-related quality of life of children with adverse drug reactions. We conducted a systematic review that included the following electronic databases: MEDLINE, EMBASE and the Cochrane Library (including the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effects, the Cochrane Controlled Trials Register and the Health Technology Assessment Databases). Nine studies were included. Four of the studies were conducted in children with epilepsy; the remainder involved children with chronic viral hepatitis, Crohn's disease, paediatric cancer and multiple adverse drug reactions compared with healthy children. Based on their findings, the authors of all studies concluded that adverse drug reactions had a negative impact on the quality of life of children. No meta-analysis was conducted given the heterogeneous nature of the studies. To date, there is no specific instrument that measures the quality of life of children with adverse drug reactions, and the information available is poor and variable. In general, adverse drug reactions have a negative impact on the quality of life of affected children. For those interested in this area, more work needs to be done to improve tools that help to evaluate efficiently the health-related quality of life of children with adverse drug reactions and chronic diseases. © 2014 The British Pharmacological Society.
An empirical assessment of high-performing medical groups: results from a national study.
Shortell, Stephen M; Schmittdiel, Julie; Wang, Margaret C; Li, Rui; Gillies, Robin R; Casalino, Lawrence P; Bodenheimer, Thomas; Rundall, Thomas G
2005-08-01
The performance of medical groups is receiving increased attention. Relatively little conceptual or empirical work exists that examines the various dimensions of medical group performance. Using a national database of 693 medical groups, this article develops a scorecard approach to assessing group performance and presents a theory-driven framework for differentiating high-performing from low-performing medical groups. The clinical quality of care, financial performance, and organizational learning capability of medical groups are assessed in relation to environmental forces, resource acquisition and resource deployment factors, and a quality-centered culture. Findings support the utility of the performance scorecard approach and identify a number of key factors differentiating high-performing from low-performing groups, including, in particular, the importance of a quality-centered culture and the requirement of outside reporting from third-party organizations. The findings hold a number of important implications for policy and practice, and the framework presented provides a foundation for future research.
NASA Astrophysics Data System (ADS)
Ek, M. B.; Xia, Y.; Ford, T.; Wu, Y.; Quiring, S. M.
2015-12-01
The North American Soil Moisture Database (NASMD) was initiated in 2011 to provide support for developing climate forecasting tools, calibrating land surface models and validating satellite-derived soil moisture algorithms. The NASMD has collected data from over 30 soil moisture observation networks providing millions of in situ soil moisture observations in all 50 states as well as Canada and Mexico. It is recognized that the quality of measured soil moisture in NASMD is highly variable due to the diversity of climatological conditions, land cover, soil texture, and topographies of the stations and differences in measurement devices (e.g., sensors) and installation. It is also recognized that error, inaccuracy and imprecision in the data set can have significant impacts on practical operations and scientific studies. Therefore, developing an appropriate quality control procedure is essential to ensure the data is of the best quality. In this study, an automated quality control approach is developed using the North American Land Data Assimilation System phase 2 (NLDAS-2) Noah soil porosity, soil temperature, and fraction of liquid and total soil moisture to flag erroneous and/or spurious measurements. Overall results show that this approach is able to flag unreasonable values when the soil is partially frozen. A validation example using NLDAS-2 multiple model soil moisture products at the 20 cm soil layer showed that the quality control procedure had a significant positive impact in Alabama, North Carolina, and West Texas. It had a greater impact in colder regions, particularly during spring and autumn. Over 433 NASMD stations have been quality controlled using the methodology proposed in this study, and the algorithm will be implemented to control data quality from the other ~1,200 NASMD stations in the near future.
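A minimal sketch of the kind of flagging rule the abstract describes, assuming volumetric water content should lie within [0, porosity] and that frozen soil makes readings suspect; the column names and thresholds below are illustrative assumptions, not the NLDAS-2-based algorithm itself.

```python
# Flag soil-moisture observations that are physically implausible or
# taken in (partially) frozen soil. Columns and thresholds are assumed.
import pandas as pd

def flag_soil_moisture(df: pd.DataFrame) -> pd.Series:
    """Return True where an observation should be flagged."""
    frozen = df["soil_temp_k"] <= 273.15  # at or below freezing
    out_of_bounds = (df["vwc"] < 0.0) | (df["vwc"] > df["porosity"])
    return frozen | out_of_bounds

obs = pd.DataFrame({
    "vwc":         [0.25, 0.55, -0.01, 0.30],
    "porosity":    [0.45, 0.45,  0.45, 0.45],
    "soil_temp_k": [280.0, 281.0, 285.0, 270.0],
})
print(flag_soil_moisture(obs))  # flags rows 1 (vwc > porosity), 2, and 3
```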
TU-FG-201-10: Quality Management of Accelerated Partial Breast Irradiation (APBI) Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, H; Lorio, V; Cernica, G
2016-06-15
Purpose: Since 2008, over 700 patients have received high dose rate (HDR) APBI treatment at Virginia Hospital Center. The complexity involved in the planning process demonstrated a broad variation between patient geometry across all applicators, in relation to anatomical regions of interest. A quality management program instituting various metrics was implemented in March 2013 with the goal of ensuring that an optimal plan is achieved for each patient. Methods: For each plan, an in-house complexity index, geometric conformity index, and plan quality index were defined. These indices were obtained for all patients treated. For patients treated after the implementation, the conformity index and quality index were maximized while other dosimetric limits, such as maximum skin and rib doses, were strictly kept. Subsequently, all evaluation parameters and applicator information were placed in a database for cross-evaluation with plans of similar complexity. Results: Both the conformity and quality indices show good correlation with the complexity index. They decrease as complexity increases for all applicators. Multi-lumen balloon applicators demonstrate a minimal advantage over single-lumen applicators in increasingly complex patient geometries, as compared to SAVI applicators, which showed a considerably greater advantage in these circumstances. After the implementation of the in-house planning protocol, there is a direct improvement in quality for SAVI plans. Conclusion: Due to their interstitial nature, SAVI devices show better conformity in comparison to balloon-based devices regardless of the number of lumens, especially in complex cases. The quality management program focuses on optimizing indices by utilizing prior planning knowledge based on complexity levels. The database of indices assists in decision making and has subsequently aided in balancing the experience level among planners. This approach has made APBI planning more robust for patient care, with a measurable improvement in plan quality.
MOCAT: a metagenomics assembly and gene prediction toolkit.
Kultima, Jens Roat; Sunagawa, Shinichi; Li, Junhua; Chen, Weineng; Chen, Hua; Mende, Daniel R; Arumugam, Manimozhiyan; Pan, Qi; Liu, Binghang; Qin, Junjie; Wang, Jun; Bork, Peer
2012-01-01
MOCAT is a highly configurable, modular pipeline for fast, standardized processing of single or paired-end sequencing data generated by the Illumina platform. The pipeline uses state-of-the-art programs to quality control, map, and assemble reads from metagenomic samples sequenced at a depth of several billion base pairs, and predict protein-coding genes on assembled metagenomes. Mapping against reference databases allows for read extraction or removal, as well as abundance calculations. Relevant statistics for each processing step can be summarized into multi-sheet Excel documents and queryable SQL databases. MOCAT runs on UNIX machines and integrates seamlessly with the SGE and PBS queuing systems, commonly used to process large datasets. The open source code and modular architecture allow users to modify or exchange the programs that are utilized in the various processing steps. Individual processing steps and parameters were benchmarked and tested on artificial, real, and simulated metagenomes resulting in an improvement of selected quality metrics. MOCAT can be freely downloaded at http://www.bork.embl.de/mocat/.
Fenelon, Joseph M.
2006-01-01
More than 1,200 water-level measurements from 1957 to 2005 in the Rainier Mesa area of the Nevada Test Site were quality assured and analyzed. Water levels were measured from 50 discrete intervals within 18 boreholes and from 4 tunnel sites. An interpretive database was constructed that describes water-level conditions for each water level measured in the Rainier Mesa area. Multiple attributes were assigned to each water-level measurement in the database to describe the hydrologic conditions at the time of measurement, including general quality, temporal variability, and regional significance. The database also includes hydrograph narratives that describe the water-level history of each well.
78 FR 46338 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-31
... Quality's (AHRQ) Hospital Survey on Patient Safety Culture Comparative Database.'' In accordance with the... Safety Culture Comparative Database Request for information collection approval. The Agency for... on Patient Safety Culture (Hospital SOPS) Comparative Database; OMB NO. 0935-0162, last approved on...
NASA Astrophysics Data System (ADS)
Kløve Keiding, Jakob; Erichsen, Eyolf; Heldal, Tom; Aslaksen Aasly, Kari
2017-04-01
Good access to construction materials is crucial for future infrastructure development and continued economic growth. In Norway, >80% of construction materials come from crushed aggregates, which represent a growing share of consumption. Although recycling can to some extent cover the need for construction materials, economic growth, increasing population and urbanization necessitate the exploitation of new rock resources in Norway as well as in many other parts of the world. Aggregates must fulfill a number of technical requirements to ensure high quality and long life expectancy of new roads, buildings and structures, and they have to be extracted near the consumer market. For road construction in particular, strict criteria are in place for the wearing course of roads with high traffic density. Knowledge of mechanical rock quality is therefore paramount both for exploitation and for future resource and land-use planning, but it is often not assessed or mapped beyond the quarry scale. The Geological Survey of Norway runs a database with information about crushed aggregate deposits from >1500 Norwegian quarries and sample sites. Here we use mechanical test analyses from the database to assess aggregate quality in Nordland county, Norway. Maps have been produced linking bedrock geology with rock quality parameters. The survey documents that the county is challenged in meeting the requirements for roads with high traffic density; especially in the middle parts of the county, many samples have weak mechanical properties. This to some degree reflects the abundance in Nordland of weak Cambro-Silurian rocks such as phyllite, schist, carbonate and greenstone. Mechanically stronger rock types such as gabbro, monzonite and granite are also exposed in large parts of the county, but are likewise characterized by relatively poor or highly variable mechanical test quality. Preliminary results indicate that many intrinsic parameters influence mechanical rock strength, but the variable degree of deformation in the different tectonostratigraphic units exposed in Nordland affects the rock mechanical properties and is a prominent feature of our mapping. Unsurprisingly, rock type, mineralogy, grain size and rock texture are all important factors with a major control on the mechanical behaviour of the rocks. However, this assessment shows an intricate interaction between these parameters and the resulting mechanical properties, which at present makes it difficult to assess mechanical quality accurately on the basis of petrographic examination alone.
Mathieu, John E; Rapp, Tammy L
2009-01-01
This study examined the influences of team charters and performance strategies on the performance trajectories of 32 teams of master's of business administration students competing in a business strategy simulation over time. The authors extended existing theory on team development by demonstrating that devoting time to laying a foundation for both teamwork (i.e., team charters) and taskwork (performance strategies) can pay dividends in terms of more effective team performance over time. Using random coefficients growth modeling techniques, they found that teams with high-quality performance strategies outperformed teams with poorer quality strategies. However, a significant interaction between quality of the charters of teams and their performance strategies was found, such that the highest sustained performances were exhibited by teams that were high on both features. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
2014-01-01
Background Poor quality medicines threaten the lives of millions of patients and are alarmingly common in many parts of the world. Nevertheless, the global extent of the problem remains unknown. Accurate estimates of the epidemiology of poor quality medicines are sparse and are influenced by sampling methodology and diverse chemical analysis techniques. In order to understand the existing data, the Antimalarial Quality Scientific Group at WWARN built a comprehensive, open-access, global database and the linked Antimalarial Quality Surveyor, an online visualization tool. An analysis of the database is described here; the limitations of the studies and of the data reported are discussed, along with their public health implications. Methods The database collates customized summaries of 251 published anti-malarial quality reports in English, French and Spanish by time and location since 1946. It also includes information on assays to determine quality, sampling and medicine regulation. Results No publicly available reports were found for 60.6% (63) of the 104 malaria-endemic countries. Of 9,348 anti-malarials sampled, 30.1% (2,813) failed chemical/packaging quality tests, with 39.3% classified as falsified, 2.3% as substandard and 58.3% as poor quality without evidence available to categorize them as either substandard or falsified. Only 32.3% of the reports explicitly described their definitions of medicine quality, and just 9.1% (855) of the samples, collected in 4.6% (six) of the surveys, were obtained using random sampling techniques. Packaging analysis was described in only 21.5% of publications, and up to twenty wrong active ingredients were found in falsified anti-malarials. Conclusions There are severe neglected problems with anti-malarial quality, but there are important caveats to accurately estimating the prevalence and distribution of poor quality anti-malarials. The lack of reports in many malaria-endemic areas, inadequate sampling techniques, and inadequate chemical analytical methods and instrumental procedures emphasize the need to interpret medicine quality results with caution. The available evidence demonstrates the need for more investment to improve both sampling and analytical methodology and to achieve consensus in defining different types of poor quality medicines. PMID:24712972
Draft secure medical database standard.
Pangalos, George
2002-01-01
Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail. Current national and international efforts in the area are studied, and an overview of the research work in the area is given. The document also presents in detail the most complete set, to our knowledge, of security guidelines for the development and operation of medical database systems.
Carter, Alexander W; Mandavia, Rishi; Mayer, Erik; Marti, Joachim; Mossialos, Elias; Darzi, Ara
2017-01-01
Introduction Recent avoidable failures in patient care highlight the ongoing need for evidence to support improvements in patient safety. According to the most recent reviews, there is a dearth of economic evidence related to patient safety. These reviews characterise an evidence gap in terms of the scope and quality of evidence available to support resource allocation decisions. This protocol is designed to update and improve on the reviews previously conducted to determine the extent of methodological progress in economic analyses in patient safety. Methods and analysis A broad search strategy with two core themes for original research (excluding opinion pieces and systematic reviews) in ‘patient safety’ and ‘economic analyses’ has been developed. Medline, Econlit and National Health Service Economic Evaluation Database bibliographic databases will be searched from January 2007 using a combination of medical subject headings terms and research-derived search terms (see table 1). The method is informed by previous reviews on this topic, published in 2012. Screening, risk of bias assessment (using the Cochrane collaboration tool) and economic evaluation quality assessment (using the Drummond checklist) will be conducted by two independent reviewers, with arbitration by a third reviewer as needed. Studies with a low risk of bias will be assessed using the Drummond checklist. High-quality economic evaluations are those that score >20/35. A qualitative synthesis of evidence will be performed using a data collection tool to capture the study design(s) employed, population(s), setting(s), disease area(s), intervention(s) and outcome(s) studied. Methodological quality scores will be compared with previous reviews where possible. Effect size(s) and estimate uncertainty will be captured and used in a quantitative synthesis of high-quality evidence, where possible. Ethics and dissemination Formal ethical approval is not required as primary data will not be collected. The results will be disseminated through a peer-reviewed publication, presentations and social media. Trial registration number CRD42017057853. PMID:28821527
Domain fusion analysis by applying relational algebra to protein sequence and domain databases
Truong, Kevin; Ikura, Mitsuhiko
2003-01-01
Background Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. Results This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at . Conclusion As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time. PMID:12734020
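As a toy illustration of the relational idea, the sketch below uses Python's built-in sqlite3 to find protein pairs in one organism whose separate domains co-occur as a fused pair in a single protein elsewhere; the schema and rows are invented, whereas the actual analysis ran over SWISS-PROT+TrEMBL with Pfam domain assignments.

```python
# Toy domain fusion query in standard SQL via sqlite3. Schema and data
# are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE domains (protein TEXT, organism TEXT, domain TEXT);
INSERT INTO domains VALUES
  ('fusAB', 'E. coli', 'A'), ('fusAB', 'E. coli', 'B'),
  ('p1', 'H. sapiens', 'A'), ('p2', 'H. sapiens', 'B');
""")

# p1 and p2 are predicted to be functionally linked because their domains
# A and B occur fused in a single protein (fusAB) in another organism.
query = """
SELECT DISTINCT s1.protein, s2.protein
FROM domains s1
JOIN domains s2 ON s1.organism = s2.organism AND s1.protein < s2.protein
JOIN domains f1 ON f1.domain = s1.domain
JOIN domains f2 ON f2.domain = s2.domain AND f2.protein = f1.protein
WHERE f1.protein NOT IN (s1.protein, s2.protein)
"""
for pair in con.execute(query):
    print(pair)  # ('p1', 'p2')
```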
Schwach, Frank; Bushell, Ellen; Gomes, Ana Rita; Anar, Burcu; Girling, Gareth; Herd, Colin; Rayner, Julian C; Billker, Oliver
2015-01-01
The Plasmodium Genetic Modification (PlasmoGEM) database (http://plasmogem.sanger.ac.uk) provides access to a resource of modular, versatile and adaptable vectors for genome modification of Plasmodium spp. parasites. PlasmoGEM currently consists of >2000 plasmids designed to modify the genome of Plasmodium berghei, a malaria parasite of rodents, which can be requested by non-profit research organisations free of charge. PlasmoGEM vectors are designed with long homology arms for efficient genome integration and carry gene specific barcodes to identify individual mutants. They can be used for a wide array of applications, including protein localisation, gene interaction studies and high-throughput genetic screens. The vector production pipeline is supported by a custom software suite that automates both the vector design process and quality control by full-length sequencing of the finished vectors. The PlasmoGEM web interface allows users to search a database of finished knock-out and gene tagging vectors, view details of their designs, download vector sequence in different formats and view available quality control data as well as suggested genotyping strategies. We also make gDNA library clones and intermediate vectors available for researchers to produce vectors for themselves. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Drug development and nonclinical to clinical translational databases: past and current efforts.
Monticello, Thomas M
2015-01-01
The International Consortium for Innovation and Quality (IQ) in Pharmaceutical Development is a science-focused organization of pharmaceutical and biotechnology companies. The mission of the Preclinical Safety Leadership Group (DruSafe) of the IQ is to advance science-based standards for nonclinical development of pharmaceutical products and to promote high-quality and effective nonclinical safety testing that can enable human risk assessment. DruSafe is creating an industry-wide database to determine the accuracy with which the interpretation of nonclinical safety assessments in animal models correctly predicts human risk in the early clinical development of biopharmaceuticals. This initiative aligns with the 2011 Food and Drug Administration strategic plan to advance regulatory science and modernize toxicology to enhance product safety. Although similar in concept to the initial industry-wide concordance study conducted by the International Life Sciences Institute's Health and Environmental Sciences Institute (HESI/ILSI), the DruSafe database will proactively track concordance, include exposure data and both large and small molecules, and will continue to expand with longer-duration nonclinical and clinical study comparisons. The output from this work will help identify actual human and animal adverse event data to define both the reliability and the potential limitations of nonclinical data and testing paradigms in predicting human safety in phase 1 clinical trials. © 2014 by The Author(s).
Efficacy of ultrasound-guided percutaneous needle treatment of calcific tendinitis.
Vignesh, K Nithin; McDowall, Adam; Simunovic, Nicole; Bhandari, Mohit; Choudur, Hema N
2015-01-01
The purpose of this study was to conduct a systematic review of the efficacy of ultrasound-guided needle lavage in treating calcific tendinitis. Two independent assessors searched medical databases and screened studies for eligibility. Eleven articles were included. Heterogeneity among included studies precluded meta-analysis. Results of randomized controlled trials suggested no difference in pain relief between needle lavage and other interventions, but the studies were of low quality. Additional high-quality evidence is required to determine the relative efficacy of ultrasound-guided needle lavage in the management of calcific tendinitis of the rotator cuff.
Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T.
2016-01-01
Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene-expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan–Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. Bloodpool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. PMID:26507857
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
GLIMS Glacier Database: Status and Challenges
NASA Astrophysics Data System (ADS)
Raup, B. H.; Racoviteanu, A.; Khalsa, S. S.; Armstrong, R.
2008-12-01
GLIMS (Global Land Ice Measurements from Space) is an international initiative to map the world's glaciers and to build a GIS database that is usable via the World Wide Web. The GLIMS programme includes 70 institutions and 25 Regional Centers (RCs), which analyze satellite imagery to map glaciers in their regions of expertise. The analysis results are collected at the National Snow and Ice Data Center (NSIDC) and ingested into the GLIMS Glacier Database. The database contains approximately 80 000 glacier outlines, half the estimated total on Earth. In addition, the database contains metadata on approximately 200 000 ASTER images acquired over glacierized terrain. Glacier data and the ASTER metadata can be viewed and searched via interactive maps at http://glims.org/. As glacier mapping with GLIMS has progressed, various hurdles have arisen that have required solutions. For example, the GLIMS community has formulated definitions for how to delineate glaciers with different complicated morphologies and how to deal with debris cover. Experiments have been carried out to assess the consistency of the database, and protocols have been defined for the RCs to follow in their mapping. Hurdles still remain. In June 2008, a workshop was convened in Boulder, Colorado to address issues such as mapping debris-covered glaciers, mapping ice divides, and performing change analysis using two different glacier inventories. This contribution summarizes the status of the GLIMS Glacier Database and the steps taken to ensure high data quality.
Certifiable database generation for SVS
NASA Astrophysics Data System (ADS)
Schiefele, Jens; Damjanovic, Dejan; Kubbat, Wolfgang
2000-06-01
In future aircraft cockpits, SVS will be used to display 3D physical and virtual information to pilots. A review of prototype and production Synthetic Vision Displays (SVD) from Euro Telematic, UPS Advanced Technologies, Universal Avionics, VDO-Luftfahrtgeratewerk, and NASA is discussed. As data sources, terrain, obstacle, navigation, and airport data are needed; Jeppesen-Sanderson, Inc. and Darmstadt Univ. of Technology are currently developing certifiable methods for the acquisition, validation, and processing of terrain, obstacle, and airport databases. The acquired data will be integrated into a High-Quality Database (HQ-DB). This database is the master repository; it contains all information relevant for all types of aviation applications. From the HQ-DB, SVS-relevant data are retrieved, converted, decimated, and adapted into an SVS Real-Time Onboard Database (RTO-DB). The process of data acquisition, verification, and processing will be defined in a way that allows certification within DO-200a and new RTCA/EUROCAE standards for airport and terrain data. The open formats proposed will be established and evaluated for industrial usability. Finally, a NASA-industry cooperation to develop industrial SVS products under the umbrella of the NASA Aviation Safety Program (ASP) is introduced. A key element of the SVS NASA-ASP is the Jeppesen-led task to develop methods for world-wide database generation and certification. Jeppesen will build three airport databases that will be used in flight trials with NASA aircraft.
Hatch, Joseph R.; Bullock, John H.; Finkelman, Robert B.
2006-01-01
In 1999, the USGS initiated the National Coal Quality Inventory (NaCQI) project to address a need for quality information on coals that will be mined during the next 20-30 years. At the time this project was initiated, the publicly available USGS coal quality data was based on samples primarily collected and analyzed between 1973 and 1985. The primary objective of NaCQI was to create a database containing comprehensive, accurate and accessible chemical information on the quality of mined and prepared United States coals and their combustion byproducts. This objective was to be accomplished through maintaining the existing publicly available coal quality database, expanding the database through the acquisition of new samples from priority areas, and analysis of the samples using updated coal analytical chemistry procedures. Priorities for sampling include those areas where future sources of compliance coal are federally owned. This project was a cooperative effort between the U.S. Geological Survey (USGS), State geological surveys, universities, coal burning utilities, and the coal mining industry. Funding support came from the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DOE).
Sharma, Ravi; Lebrun-Harris, Lydie A; Ngo-Metzger, Quyen
2014-01-01
Determine the association between access to primary care by the underserved and Medicare spending and clinical quality across hospital referral regions (HRRs). Data on elderly fee-for-service beneficiaries across 306 HRRs came from CMS' Geographic Variation in Medicare Spending and Utilization database (2010). We merged data on number of health center patients (HRSA's Uniform Data System) and number of low-income residents (American Community Survey). We estimated access to primary care in each HRR by "health center penetration" (health center patients as a proportion of low-income residents). We calculated total Medicare spending (adjusted for population size, local input prices, and health risk). We assessed clinical quality by preventable hospital admissions, hospital readmissions, and emergency department visits. We sorted HRRs by health center penetration rate and compared spending and quality measures between the high- and low-penetration deciles. We also employed linear regressions to estimate spending and quality measures as a function of health center penetration. The high-penetration decile had 9.7% lower Medicare spending ($926 per capita, p=0.01) than the low-penetration decile, and no different clinical quality outcomes. Compared with elderly fee-for-service beneficiaries residing in areas with low-penetration of health center patients among low-income residents, those residing in high-penetration areas may accrue Medicare cost savings. Limited evidence suggests that these savings do not compromise clinical quality.
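A sketch of the decile comparison described above: rank hospital referral regions by health-center penetration and compare adjusted spending between the top and bottom deciles. The data, column names, and values below are invented for illustration, not drawn from the CMS or HRSA files.

```python
# Decile comparison on synthetic HRR-level data (invented values).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
hrr = pd.DataFrame({
    "penetration": rng.uniform(0.0, 0.5, 306),              # health-center patients / low-income residents
    "spending_per_capita": rng.normal(9500.0, 800.0, 306),  # risk-adjusted dollars
})
hrr["decile"] = pd.qcut(hrr["penetration"], 10, labels=False) + 1

high = hrr.loc[hrr["decile"] == 10, "spending_per_capita"].mean()
low = hrr.loc[hrr["decile"] == 1, "spending_per_capita"].mean()
print(f"High- minus low-penetration decile spending: {high - low:,.0f} USD")
```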
Interventions for treating pain and disability in adults with complex regional pain syndrome.
O'Connell, Neil E; Wand, Benedict M; McAuley, James; Marston, Louise; Moseley, G Lorimer
2013-04-30
There is currently no strong consensus regarding the optimal management of complex regional pain syndrome although a multitude of interventions have been described and are commonly used. To summarise the evidence from Cochrane and non-Cochrane systematic reviews of the effectiveness of any therapeutic intervention used to reduce pain, disability or both in adults with complex regional pain syndrome (CRPS). We identified Cochrane reviews and non-Cochrane reviews through a systematic search of the following databases: Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects (DARE), Ovid MEDLINE, Ovid EMBASE, CINAHL, LILACS and PEDro. We included non-Cochrane systematic reviews where they contained evidence not covered by identified Cochrane reviews. The methodological quality of reviews was assessed using the AMSTAR tool. We extracted data for the primary outcomes pain, disability and adverse events, and the secondary outcomes of quality of life, emotional well being and participants' ratings of satisfaction or improvement. Only evidence arising from randomised controlled trials was considered. We used the GRADE system to assess the quality of evidence. We included six Cochrane reviews and 13 non-Cochrane systematic reviews. Cochrane reviews demonstrated better methodological quality than non-Cochrane reviews. Trials were typically small and the quality variable. There is moderate quality evidence that intravenous regional blockade with guanethidine is not effective in CRPS and that the procedure appears to be associated with the risk of significant adverse events. There is low quality evidence that bisphosphonates, calcitonin or a daily course of intravenous ketamine may be effective for pain when compared with placebo; graded motor imagery may be effective for pain and function when compared with usual care; and that mirror therapy may be effective for pain in post-stroke CRPS compared with a 'covered mirror' control. This evidence should be interpreted with caution. There is low quality evidence that local anaesthetic sympathetic blockade is not effective. Low quality evidence suggests that physiotherapy or occupational therapy are associated with small positive effects that are unlikely to be clinically important at one-year follow-up when compared with a social work passive attention control. For a wide range of other interventions, there is either no evidence or very low quality evidence available from which no conclusions should be drawn. There is a critical lack of high quality evidence for the effectiveness of most therapies for CRPS. Until further larger trials are undertaken, formulating an evidence-based approach to managing CRPS will remain difficult.
Assessing values of air quality and visibility at risk from wildland fires.
Sue A. Ferguson; Steven J. McKay; David E. Nagel; Trent Piepho; Miriam L. Rorig; Casey Anderson; Lara Kellogg
2003-01-01
To assess values of air quality and visibility at risk from wildland fire in the United States, we generated a 40-year database that includes twice-daily values of wind, mixing height, and a ventilation index that is the product of windspeed and mixing height. The database provides the first nationally consistent map of surface wind and ventilation index. In addition,...
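Since the ventilation index is defined here simply as the product of wind speed and mixing height, it is straightforward to compute; a minimal illustration follows (the units are a common convention, not taken from the report).

```python
# Ventilation index = wind speed (m/s) x mixing height (m), in m^2/s.
def ventilation_index(wind_speed_ms: float, mixing_height_m: float) -> float:
    return wind_speed_ms * mixing_height_m

print(ventilation_index(5.0, 1200.0))  # 6000.0 m^2/s
```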
Harrison, David A; Brady, Anthony R; Rowan, Kathy
2004-01-01
Introduction: The present paper describes the methods of data collection and validation employed in the Intensive Care National Audit & Research Centre Case Mix Programme (CMP), a national comparative audit of outcome for adult, critical care admissions. The paper also describes the case mix, outcome and activity of the admissions in the Case Mix Programme Database (CMPD). Methods: The CMP collects data on consecutive admissions to adult, general critical care units in England, Wales and Northern Ireland. Explicit steps are taken to ensure the accuracy of the data, including use of a dataset specification, of initial and refresher training courses, and of local and central validation of submitted data for incomplete, illogical and inconsistent values. Criteria for evaluating clinical databases developed by the Directory of Clinical Databases were applied to the CMPD. The case mix, outcome and activity for all admissions were briefly summarised. Results: The mean quality level achieved by the CMPD for the 10 Directory of Clinical Databases criteria was 3.4 (on a scale of 1 = worst to 4 = best). The CMPD contained validated data on 129,647 admissions to 128 units. The median age was 63 years, and 59% were male. The mean Acute Physiology and Chronic Health Evaluation II score was 16.5. Mortality was 20.3% in the CMP unit and 30.8% at ultimate discharge from hospital. Nonsurvivors stayed longer in intensive care than did survivors (median 2.0 days versus 1.7 days in the CMP unit) but had a shorter total hospital length of stay (9 days versus 16 days). Results for the CMPD were comparable with results from other published reports of UK critical care admissions. Conclusions: The CMP uses rigorous methods to ensure data are complete, valid and reliable. The CMP scores well against published criteria for high-quality clinical databases. PMID:15025784
Coronado, Rogelio A; Bird, Mackenzie L; Van Hoy, Erin E; Huston, Laura J; Spindler, Kurt P; Archer, Kristin R
2018-03-01
To examine the role of psychosocial interventions in improving patient-reported clinical outcomes, including return to sport/activity, and intermediary psychosocial factors after anterior cruciate ligament reconstruction. MEDLINE/PubMed, CINAHL, PsycINFO, and Web of Science were searched from each database's inception to March 2017 for published studies in patients after anterior cruciate ligament reconstruction. Studies were included if they reported on the effects of a postoperative psychosocial intervention on a patient-reported clinical measure of disability, function, pain, quality of life, return to sport/activity, or an intermediary psychosocial factor. Data were extracted using a standardized form, and summary effects from each article were compiled. The methodological quality of randomized trials was assessed using the Physiotherapy Evidence Database Scale, with scores greater than 5/10 considered high quality. A total of 893 articles were identified from the literature search. Of these, four randomized trials (N = 210) met inclusion criteria. The four articles examined guided imagery and relaxation, coping modeling, and visual imagery as postoperative psychosocial interventions. Methodological quality scores of the studies ranged from 5 to 9. There were inconsistent findings for the additive benefit of psychosocial interventions for improving postoperative function, pain, or self-efficacy and limited evidence for improving postoperative quality of life, anxiety, or fear of reinjury. No study examined the effects of psychosocial interventions on return to sport/activity. Overall, there is limited evidence on the efficacy of postoperative psychosocial interventions for improving functional recovery after anterior cruciate ligament reconstruction.
ERIC Educational Resources Information Center
Qu, Xia; Yang, Xiaotong
2016-01-01
Using CiteSpace to draw a keyword co-occurrence knowledge map for 1,048 research papers on the quality of higher education from 2000 to 2014 in the Chinese Social Sciences Citation Index database, we found that over the past 15 years, research on the quality of Chinese higher education was clearly oriented toward policies, and a good interactive…
Akinseye, Gladys Atinuke; Dickinson, Ann-Marie; Munro, Kevin J
2018-04-01
To conduct a systematic review of the benefits of non-linear frequency compression (NLFC) in adults and children. Ten databases were searched for studies comparing the effects of NLFC and conventional processing (CP) for the period January 2008 to September 2017. Twelve articles were included in this review: four with adults only, four with school-aged children only, one with pre-school children only and three with both adults and school-aged children. A two-stage process was implemented to grade the evidence. The individual studies were graded based on their study type (from 1 = highest quality of evidence to 5 = lowest quality) and then sub-graded based on their quality ("a" for "good quality" or "b" for "lesser quality"). All studies were awarded 4a, except the single pre-school study, which was awarded 2a. The overall evidence for each population was graded based on the quality, quantity and consistency of the studies. The body of evidence was rated as very low for both adults and school-aged children, but high for pre-school children. The low number (and quality) of studies means that evidence supporting the benefit from NLFC is inconclusive. Further high-quality RCTs are required to provide a conclusive answer to this question.
Brosseau, Lucie; Toupin-April, Karine; Wells, George; Smith, Christine A; Pugh, Arlanna G; Stinson, Jennifer N; Duffy, Ciarán M; Gifford, Wendy; Moher, David; Sherrington, Catherine; Cavallo, Sabrina; De Angelis, Gino; Loew, Laurianne; Rahman, Prinon; Marcotte, Rachel; Taki, Jade; Bisaillon, Jacinthe; King, Judy; Coda, Andrea; Hendry, Gordon J; Gauvreau, Julie; Hayles, Martin; Hayles, Kay; Feldman, Brian; Kenny, Glen P; Li, Jing Xian; Briggs, Andrew M; Martini, Rose; Feldman, Debbie Ehrmann; Maltais, Désirée B; Tupper, Susan; Bigford, Sarah; Bisch, Marg
2016-07-01
To create evidence-based guidelines evaluating foot care interventions for the management of juvenile idiopathic arthritis (JIA). An electronic literature search of the following databases from database inception to May 2015 was conducted: MEDLINE (Ovid), EMBASE (Ovid), Cochrane CENTRAL, and clinicaltrials.gov. The Ottawa Panel selection criteria targeted studies that assessed foot care or foot orthotic interventions for the management of JIA in those aged 0 to ≤18 years. The Physiotherapy Evidence Database scale was used to evaluate study quality, and only high-quality studies (score ≥5) were included. A total of 362 records were screened, resulting in 3 full-text articles and 1 additional citation containing supplementary information included in the analysis. Two reviewers independently extracted study data (intervention, comparator, outcome, time period, study design) from the included studies using standardized data extraction forms. Directed by Cochrane Collaboration methodology, the statistical analysis produced figures and graphs representing the strength of intervention outcomes and their corresponding grades (A, B, C+, C, C-, D+, D, D-). Clinical significance was achieved when an improvement of ≥30% between the intervention and control groups was present, whereas P<.05 indicated statistical significance. An expert panel Delphi consensus (≥80%) was required for the endorsement of recommendations. All included studies were of high quality and analyzed the effects of multidisciplinary foot care, customized foot orthotics, and shoe inserts for the management of JIA. Custom-made foot orthotics and prefabricated shoe inserts displayed the greatest improvement in pain intensity, activity limitation, foot pain, and disability reduction (grades A, C+). The use of customized foot orthotics and prefabricated shoe inserts seems to be a good choice for managing foot pain and function in JIA. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Knowlton, Michelle N; Li, Tongbin; Ren, Yongliang; Bill, Brent R; Ellis, Lynda Bm; Ekker, Stephen C
2008-01-07
The zebrafish is a powerful model vertebrate amenable to high-throughput in vivo genetic analyses. Examples include reverse genetic screens using morpholino knockdown, expression-based screening using enhancer trapping and forward genetic screening using transposon insertional mutagenesis. We have created a database to facilitate web-based distribution of data from such genetic studies. The MOrpholino DataBase is a MySQL relational database with an online, PHP interface. Multiple quality control levels allow differential access to data in raw and finished formats. MODBv1 includes sequence information relating to almost 800 morpholinos and their targets and phenotypic data regarding the dose effect of each morpholino (mortality, toxicity and defects). To improve the searchability of this database, we have incorporated a fixed-vocabulary defect ontology that allows for the organization of morpholino effects based on the anatomical structure affected and the defect produced. This also allows comparison between species utilizing Phenotypic Attribute Trait Ontology (PATO) designated terminology. MODB is also cross-linked with ZFIN, allowing full searches between the two databases. MODB offers users the ability to retrieve morpholino data by sequence of morpholino or target, name of target, anatomical structure affected and defect produced. MODB data can be used for functional genomic analysis of morpholino design to maximize efficacy and minimize toxicity. MODB also serves as a template for future sequence-based functional genetic screen databases, and it is currently being used as a model for the creation of a mutagenic insertional transposon database.
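As an illustration of the kind of retrieval the abstract describes (search by target name and defect produced, gated by quality-control level), here is a minimal relational sketch. The real MODB is a MySQL/PHP system whose schema is not given above, so every table and column name below is a hypothetical stand-in, with sqlite3 substituting for MySQL to keep the example self-contained.
```python
# Hypothetical sketch of a MODB-style query layer; all names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE morpholino (
    id INTEGER PRIMARY KEY,
    sequence TEXT NOT NULL,        -- antisense oligo sequence
    target_gene TEXT NOT NULL,     -- name of the targeted transcript
    qc_level INTEGER NOT NULL      -- quality-control tier (0 = raw .. 2 = finished)
);
CREATE TABLE phenotype (
    morpholino_id INTEGER REFERENCES morpholino(id),
    structure TEXT,                -- anatomical structure affected
    defect TEXT,                   -- fixed-vocabulary defect term
    dose_ng REAL,                  -- injected dose in nanograms
    mortality_pct REAL
);
""")
conn.execute("INSERT INTO morpholino VALUES (1, 'GACTTGAGGCAGACATCGTG', 'ntl', 2)")
conn.execute("INSERT INTO phenotype VALUES (1, 'notochord', 'absent', 4.5, 12.0)")

# Retrieve morpholinos by target name and defect produced, restricted to
# finished-quality records, mirroring the search paths named in the abstract.
rows = conn.execute("""
    SELECT m.target_gene, m.sequence, p.structure, p.defect
    FROM morpholino m JOIN phenotype p ON p.morpholino_id = m.id
    WHERE m.target_gene = ? AND p.defect = ? AND m.qc_level >= ?
""", ("ntl", "absent", 2)).fetchall()
print(rows)
```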
Ground truth and benchmarks for performance evaluation
NASA Astrophysics Data System (ADS)
Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.
2003-09-01
Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The range of fundamental problems includes a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Position System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.
Steil, H; Amato, C; Carioni, C; Kirchgessner, J; Marcelli, D; Mitteregger, A; Moscardo, V; Orlandini, G; Gatti, E
2004-01-01
The European Clinical Database (EuCliD) has been developed as a tool for supervising selected quality indicators of about 200 European dialysis centers. Major efforts had to be made to comply with local and European laws regarding data security. EuCliD is a Lotus Notes based flat-file database currently containing medical data of more than 14,000 dialysis patients from 10 European countries. Another 15,000 patients from 150 centers in 4 South American countries will be added soon. Data are entered either manually or by means of interfaces to existing local data managing systems. This information is transferred to a central Lotus Notes server. Data evaluation was performed with statistical tools like SPSS. EuCliD is used as part of the CQI (Continuous Quality Improvement) management system of Fresenius Medical Care (FMC) dialysis units. Each participating dialysis center receives benchmarking reports at regular intervals (currently every half year). The benchmark for all quality parameters is the weighted mean of the corresponding data of all centers. An obvious impact of data sampling and data evaluation on the quality of the treatments could be observed within the first one and a half years of working with EuCliD. This concerns important outcome predictors like Kt/V and hemoglobin concentration as well as the outcome itself, expressed in hospitalization days and survival rates. With the help of EuCliD the user is able to sample clinical data, identify problems and search for solutions with the aim of improving dialysis treatment quality and guaranteeing a high-class treatment quality for all patients.
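The benchmarking rule stated above (the benchmark for each quality parameter is the weighted mean of the corresponding data of all centers) is simple to express in code. A minimal sketch follows, assuming patient counts as the weights, since the exact weighting is not specified in the abstract.
```python
# Minimal sketch of a weighted-mean benchmark across dialysis centers.
# The weighting by patient count is an assumption for illustration.

def benchmark(center_means, center_weights):
    """Weighted mean across centers for one quality indicator."""
    total = sum(center_weights)
    return sum(m * w for m, w in zip(center_means, center_weights)) / total

# e.g. mean Kt/V per center, weighted by number of treated patients
kt_v = [1.32, 1.41, 1.28]
patients = [180, 95, 240]
print(f"Kt/V benchmark: {benchmark(kt_v, patients):.2f}")
```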
Dy, Sydney M; Purnell, Tanjala S
2012-02-01
High-quality provider-patient decision-making is key to quality care for complex conditions. We performed an analysis of key elements relevant to quality and complex, shared medical decision-making. Based on a search of electronic databases, including Medline and the Cochrane Library, as well as relevant articles' reference lists, reviews of tools, and annotated bibliographies, we developed a list of key concepts and applied them to a decision-making example. Key concepts identified included provider competence, trustworthiness, and cultural competence; communication with patients and families; information quality; patient/surrogate competence; and roles and involvement. We applied this concept list to a case example, shared decision-making for live donor kidney transplantation, and identified the likely most important concepts as provider and cultural competence, information quality, and communication with patients and families. This concept list may be useful for conceptualizing the quality of complex shared decision-making and in guiding research in this area. Copyright © 2011 Elsevier Ltd. All rights reserved.
Ge, Lixia; Mordiffi, Siti Zubaidah
Caring for elderly cancer patients may place a multidimensional burden on family caregivers. Recognition of factors associated with caregiver burden is important for providing proactive support to caregivers at risk. The aim of this study was to identify factors associated with high caregiver burden among family caregivers of elderly cancer patients. A systematic search of 7 electronic databases was conducted from database inception to October 2014. The identified studies were screened, and full texts were further assessed. The quality of included studies was assessed using a checklist, and relevant data were extracted using a predeveloped data extraction form. A best-evidence synthesis model was used for data synthesis. The search yielded a total of 3339 studies, and 7 studies involving 1233 family caregivers were included after screening and full assessment of 116 studies. Moderate evidence supported that younger caregiver age, solid tumors, and assistance with the patient's activities of daily living were significantly associated with high caregiver burden. Eighteen factors were supported by limited evidence, and 1 was a conflicting factor. The scientific literature to date shows that caregiver burden is commonly experienced by family caregivers of elderly cancer patients. The evidence indicated that family caregivers who were of younger age, caring for solid tumor patients, and providing assistance with the patient's activities of daily living reported high caregiver burden. These data provide evidence for identifying family caregivers at high risk of high caregiver burden. More high-quality studies are needed to clarify and determine the estimates of the effects of individual factors.
ERIC Educational Resources Information Center
Sillince, J. A. A.; Sillince, M.
1993-01-01
Discusses molecular databases and the role that government and private companies play in their administration and development. Highlights include copyright and patent issues relating to public databases and the information contained in them; data quality; data structures and technological questions; the international organization of molecular…
Sequencing artifacts in the type A influenza database and attempts to correct them
USDA-ARS's Scientific Manuscript database
Currently over 300,000 Type A influenza gene sequences representing over 50,000 strains are available in publicly available databases. However, the quality of the submitted sequences is determined by the contributor, and many sequence errors are present in the databases, which can affect the result...
Influencing Database Use in Public Libraries.
ERIC Educational Resources Information Center
Tenopir, Carol
1999-01-01
Discusses results of a survey of factors influencing database use in public libraries. Highlights the importance of content; ease of use; and importance of instruction. Tabulates importance indications for number and location of workstations, library hours, availability of remote login, usefulness and quality of content, lack of other databases,…
[Surgical assessment of complications after thyroid gland operations].
Dralle, H
2015-01-01
The extent, magnitude and technical equipment used for thyroid surgery have changed considerably in Germany during the last decade. The number of thyroidectomies due to benign goiter has decreased, while the extent of thyroidectomy, nowadays preferentially total thyroidectomy, has increased. Due to an increased awareness of surgical complications the number of malpractice claims is increasing. In contrast to surgical databases, the frequency of complications in malpractice claims reflects the individual impact of complications on the quality of life. Unilateral and bilateral vocal fold palsy are therefore at the forefront of malpractice claims. As guidelines are often not applicable to the individual surgical expert review, the question arises which criteria are relevant for the professional expert witness assessing the severity of the individual complication. While in surgical databases major complications after thyroidectomy, such as vocal fold palsy, hypoparathyroidism, hemorrhage and infections, are equally frequent (1-3 %), in malpractice claims vocal fold palsy is significantly more frequent (50 %) compared with hypoparathyroidism (15 %) and hemorrhage and infections (about 5 % each). To avoid bilateral nerve palsy, intraoperative nerve monitoring has become of utmost importance for surgical strategy and malpractice suits alike. For surgical expert review, documentation of the individual risk-oriented indication, the surgical approach and the postoperative management is highly important. Guidelines only define the treatment corridors of good clinical practice. Surgical expert reviews in malpractice suits concerning quality of care and causality between surgical management, complications and sequelae of complications are therefore highly dependent on the grounds and documentation of risk-oriented indications for thyroidectomy and of intraoperative and postoperative surgical management.
Zhang, Guang Lan; Riemer, Angelika B.; Keskin, Derin B.; Chitkushev, Lou; Reinherz, Ellis L.; Brusic, Vladimir
2014-01-01
High-risk human papillomaviruses (HPVs) are the cause of many cancers, including cervical, anal, vulvar, vaginal, penile and oropharyngeal. To facilitate diagnosis, prognosis and characterization of these cancers, it is necessary to make full use of the immunological data on HPV available through publications, technical reports and databases. These data vary in granularity, quality and complexity. The extraction of knowledge from the vast amount of immunological data using data mining techniques remains a challenging task. To support integration of data and knowledge in virology and vaccinology, we developed a framework called KB-builder to streamline the development and deployment of web-accessible immunological knowledge systems. The framework consists of seven major functional modules, each facilitating a specific aspect of the knowledgebase construction process. Using KB-builder, we constructed the Human Papillomavirus T cell Antigen Database (HPVdb). It contains 2781 curated antigen entries of antigenic proteins derived from 18 genotypes of high-risk HPV and 18 genotypes of low-risk HPV. The HPVdb also catalogs 191 verified T cell epitopes and 45 verified human leukocyte antigen (HLA) ligands. Primary amino acid sequences of HPV antigens were collected and annotated from the UniProtKB. T cell epitopes and HLA ligands were collected from data mining of scientific literature and databases. The data were subject to extensive quality control (redundancy elimination, error detection and vocabulary consolidation). A set of computational tools for in-depth analysis, such as sequence comparison using BLAST search, multiple alignments of antigens, classification of HPV types based on cancer risk, T cell epitope/HLA ligand visualization, T cell epitope/HLA ligand conservation analysis and sequence variability analysis, has been integrated within the HPVdb. Predicted Class I and Class II HLA binding peptides for 15 common HLA alleles are included in this database as putative targets. HPVdb is a knowledge-based system that integrates curated data and information with tailored analysis tools to facilitate data mining for HPV vaccinology and immunology. To the best of our knowledge, HPVdb is a unique data source providing a comprehensive list of HPV antigens and peptides. Database URL: http://cvc.dfci.harvard.edu/hpv/ PMID:24705205
Video quality assessment using motion-compensated temporal filtering and manifold feature similarity
Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju
2017-01-01
A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video through MCTF and the temporal pooling strategy, and simulates human visual perception by MFL. Experiments on a publicly available video quality database showed that, in comparison with several state-of-the-art VQA methods, the proposed VQA method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
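A rough sketch of the last two stages of the described pipeline (fusing per-GoF component qualities, then pooling over time) follows; the fusion weight and the worst-fraction pooling strategy are assumptions for illustration, not the paper's exact formulation.
```python
# Sketch of GoF-quality fusion and temporal pooling; parameters are assumed.
import numpy as np

def gof_quality(q_lpc, q_hpc, w=0.7):
    # weighted fusion of the low-pass and high-pass quality scores (w assumed)
    return w * q_lpc + (1.0 - w) * q_hpc

def temporal_pool(gof_scores, worst_fraction=0.3):
    # emphasize the worst-quality GoFs, reflecting the common observation that
    # viewers judge a video by its worst moments (assumed pooling strategy)
    s = np.sort(np.asarray(gof_scores))
    k = max(1, int(len(s) * worst_fraction))
    return float(s[:k].mean())

lpc = [0.91, 0.88, 0.52, 0.90]   # per-GoF quality of low-pass components
hpc = [0.85, 0.80, 0.41, 0.83]   # per-GoF quality of high-pass components
per_gof = [gof_quality(a, b) for a, b in zip(lpc, hpc)]
print(f"overall video quality: {temporal_pool(per_gof):.3f}")
```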
Bouadjenek, Mohamed Reda; Verspoor, Karin; Zobel, Justin
2017-07-01
We investigate and analyse the data quality of nucleotide sequence databases with the objective of automatic detection of data anomalies and suspicious records. Specifically, we demonstrate that the published literature associated with each data record can be used to automatically evaluate its quality, by cross-checking the consistency of the key content of the database record with the referenced publications. Focusing on GenBank, we describe a set of quality indicators based on the relevance paradigm of information retrieval (IR). We then use these quality indicators to train an anomaly detection algorithm to classify records as "confident" or "suspicious". Our experiments on the PubMed Central collection show that assessing the coherence between the literature and database records with our algorithms is an effective mechanism for assisting curators to perform data cleansing. Although fewer than 0.25% of the records in our data set are known to be faulty, we would expect that there are many more in GenBank that have not yet been identified. By automated comparison with the literature they can be identified with a precision of up to 10% and a recall of up to 30%, while strongly outperforming several baselines. While these results leave substantial room for improvement, they reflect both the very imbalanced nature of the data and the limited explicitly labelled data that is available. Overall, the obtained results show promise for the development of a new kind of approach to detecting low-quality and suspicious sequence records based on literature analysis and consistency. From a practical point of view, this will greatly help curators identify inconsistent records in large-scale sequence databases by highlighting records that are likely to be inconsistent with the literature. Copyright © 2017 Elsevier Inc. All rights reserved.
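To make the classification step concrete, here is a hedged sketch in which per-record IR-style quality indicators feed a generic anomaly detector. The feature values are synthetic, and the choice of scikit-learn's IsolationForest is an assumption; the paper trains its own anomaly-detection algorithm on its quality indicators.
```python
# Sketch: flag sequence records whose literature-consistency features look
# anomalous. Features and the detector choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: e.g. title-record similarity, abstract-record similarity
normal = rng.normal(loc=0.8, scale=0.05, size=(200, 2))
faulty = rng.normal(loc=0.3, scale=0.05, size=(5, 2))
X = np.vstack([normal, faulty])

clf = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = clf.predict(X)            # +1 = "confident", -1 = "suspicious"
print("suspicious records:", np.where(labels == -1)[0])
```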
Lee, Casey J.; Glysson, G. Douglas
2013-01-01
Human-induced and natural changes to the transport of sediment and sediment-associated constituents can degrade aquatic ecosystems and limit human uses of streams and rivers. The lack of a dedicated, easily accessible, quality-controlled database of sediment and ancillary data has made it difficult to identify sediment-related water-quality impairments and has limited understanding of how human actions affect suspended-sediment concentrations and transport. The purpose of this report is to describe the creation of a quality-controlled U.S. Geological Survey suspended-sediment database, provide guidance for its use, and summarize characteristics of suspended-sediment data through 2010. The database is provided as an online application at http://cida.usgs.gov/sediment to allow users to view, filter, and retrieve available suspended-sediment and ancillary data. A data recovery, filtration, and quality-control process was performed to expand the availability, representativeness, and utility of existing suspended-sediment data collected by the U.S. Geological Survey in the United States before January 1, 2011. Information on streamflow condition, sediment grain size, and upstream landscape condition were matched to sediment data and sediment-sampling sites to place data in context with factors that may influence sediment transport. Suspended-sediment and selected ancillary data are presented from across the United States with respect to time, streamflow, and landscape condition. Examples of potential uses of this database for identifying sediment-related impairments, assessing trends, and designing new data collection activities are provided. This report and database can support local and national-level decision making, project planning, and data mining activities related to the transport of suspended-sediment and sediment-associated constituents.
Systematic review for geo-authentic Lonicerae Japonicae Flos.
Yang, Xingyue; Liu, Yali; Hou, Aijuan; Yang, Yang; Tian, Xin; He, Liyun
2017-06-01
In traditional Chinese medicine, Lonicerae Japonicae Flos is commonly used as an anti-inflammatory, antiviral, and antipyretic herbal medicine, and geo-authentic herbs are believed to present the highest quality among all samples from different regions. To discuss the current situation and trends of geo-authentic Lonicerae Japonicae Flos research, we searched the Chinese Biomedicine Literature Database, Chinese Journal Full-text Database, Chinese Scientific Journal Full-text Database, Cochrane Central Register of Controlled Trials, Wanfang, and PubMed. We investigated all studies up to November 2015 pertaining to quality assessment, discrimination, pharmacological effects, planting or processing, or the ecological system of geo-authentic Lonicerae Japonicae Flos. Sixty-five studies, mainly discussing chemical fingerprints, component analysis, planting and processing, discrimination between varieties, ecological systems, pharmacological effects, and safety, were systematically reviewed. By analyzing these studies, we found that the key points of geo-authentic Lonicerae Japonicae Flos research were quality and application. Further studies should focus on improving quality by selecting the most superior of all varieties and evaluating clinical effectiveness.
Wang, Kang-Feng; Zhang, Li-Juan; Lu, Feng; Lu, Yong-Hui; Yang, Chuan-Hua
2016-06-01
To provide an evidence-based overview of the efficacy of Ashi point stimulation for the treatment of shoulder pain. A comprehensive search [PubMed, Chinese Biomedical Literature Database, China National Knowledge Infrastructure (CNKI), Chongqing Weipu Database for Chinese Technical Periodicals (VIP) and Wanfang Database] was conducted to identify randomized or quasi-randomized controlled trials that evaluated the effectiveness of Ashi point stimulation for shoulder pain compared with conventional treatment. The methodological quality of the included studies was assessed using the Cochrane risk of bias tool. RevMan 5.0 was used for data synthesis. Nine trials were included. Seven studies assessed the effectiveness of Ashi point stimulation on response rate compared with conventional acupuncture. Their results suggested a significant effect in favor of Ashi point stimulation [odds ratio (OR): 5.89, 95% confidence interval (CI): 2.97 to 11.67, P<0.01; heterogeneity: χ²=3.81, P=0.70, I²=0%]. One trial compared Ashi point stimulation with drug therapy. The result showed a significantly greater recovery rate in the Ashi point stimulation group (OR: 9.58, 95% CI: 2.69 to 34.12). One trial compared comprehensive treatment of the myofascial trigger points (MTrPs) with no treatment, and the result was in favor of MTrPs. Ashi point stimulation might be superior to conventional acupuncture, drug therapy and no treatment for shoulder pain. However, due to the low methodological quality of the included studies, a firm conclusion cannot be reached until further high-quality studies are available.
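For readers unfamiliar with how such pooled odds ratios arise, a minimal fixed-effect (inverse-variance) sketch follows. The 2x2 counts are invented, and RevMan's default for dichotomous outcomes is Mantel-Haenszel pooling, so this is a simplified stand-in rather than a reproduction of the review's analysis.
```python
# Fixed-effect inverse-variance pooling of log odds ratios; counts invented.
import math

# (events_treat, n_treat, events_ctrl, n_ctrl) per trial -- hypothetical
trials = [(28, 30, 18, 30), (25, 28, 16, 27), (22, 25, 15, 26)]

log_ors, weights = [], []
for e1, n1, e2, n2 in trials:
    a, b = e1, n1 - e1        # treatment: events / non-events
    c, d = e2, n2 - e2        # control:   events / non-events
    lor = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # variance of the log OR
    log_ors.append(lor)
    weights.append(1 / var)

pooled = sum(w * l for w, l in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))
q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, log_ors))
i2 = max(0.0, (q - (len(trials) - 1)) / q) if q > 0 else 0.0
print(f"OR={math.exp(pooled):.2f} "
      f"95% CI {math.exp(pooled - 1.96 * se):.2f}-{math.exp(pooled + 1.96 * se):.2f} "
      f"I^2={100 * i2:.0f}%")
```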
Rudmik, Luke; Mattos, Jose; Schneider, John; Manes, Peter R; Stokken, Janalee K; Lee, Jivianne; Higgins, Thomas S; Schlosser, Rodney J; Reh, Douglas D; Setzen, Michael; Soler, Zachary M
2017-09-01
Measuring quality outcomes is an important prerequisite to improve quality of care. Rhinosinusitis represents a high value target to improve quality of care because it has a high prevalence of disease, large economic burden, and large practice variation. In this study we review the current state of quality measurement for management of both acute (ARS) and chronic rhinosinusitis (CRS). The major national quality metric repositories and clearinghouses were queried. Additional searches included the American Academy of Otolaryngology-Head and Neck Surgery database, PubMed, and Google to attempt to capture any additional quality metrics. Seven quality metrics for ARS and 4 quality metrics for CRS were identified. ARS metrics focused on appropriateness of diagnosis (n = 1), antibiotic prescribing (n = 4), and radiologic imaging (n = 2). CRS quality metrics focused on appropriateness of diagnosis (n = 1), radiologic imaging (n = 1), and measurement of patient quality of life (n = 2). The Physician Quality Reporting System (PQRS) currently tracks 3 ARS quality metrics and 1 CRS quality metric. There are no outcome-based rhinosinusitis quality metrics and no metrics that assess domains of safety, patient-centeredness, and timeliness of care. The current status of quality measurement for rhinosinusitis has focused primarily on the quality domain of efficiency and process measures for ARS. More work is needed to develop, validate, and track outcome-based quality metrics along with CRS-specific metrics. Although there has been excellent work done to improve quality measurement for rhinosinusitis, there remain major gaps and challenges that need to be considered during the development of future metrics. © 2017 ARS-AAOA, LLC.
The Danish Testicular Cancer database.
Daugaard, Gedske; Kier, Maria Gry Gundgaard; Bandak, Mikkel; Mortensen, Mette Saksø; Larsson, Heidi; Søgaard, Mette; Toft, Birgitte Groenkaer; Engvad, Birte; Agerbæk, Mads; Holm, Niels Vilstrup; Lauritsen, Jakob
2016-01-01
The nationwide Danish Testicular Cancer database consists of a retrospective research database (DaTeCa database) and a prospective clinical database (Danish Multidisciplinary Cancer Group [DMCG] DaTeCa database). The aim is to improve the quality of care for patients with testicular cancer (TC) in Denmark, that is, by identifying risk factors for relapse, toxicity related to treatment, and focusing on late effects. All Danish male patients with a histologically verified germ cell cancer diagnosis in the Danish Pathology Registry are included in the DaTeCa databases. Data collection has been performed from 1984 to 2007 and from 2013 onward, respectively. The retrospective DaTeCa database contains detailed information with more than 300 variables related to histology, stage, treatment, relapses, pathology, tumor markers, kidney function, lung function, etc. A questionnaire related to late effects has been conducted, which includes questions regarding social relationships, life situation, general health status, family background, diseases, symptoms, use of medication, marital status, psychosocial issues, fertility, and sexuality. TC survivors alive on October 2014 were invited to fill in this questionnaire including 160 validated questions. Collection of questionnaires is still ongoing. A biobank including blood/sputum samples for future genetic analyses has been established. Both samples related to DaTeCa and DMCG DaTeCa database are included. The prospective DMCG DaTeCa database includes variables regarding histology, stage, prognostic group, and treatment. The DMCG DaTeCa database has existed since 2013 and is a young clinical database. It is necessary to extend the data collection in the prospective database in order to answer quality-related questions. Data from the retrospective database will be added to the prospective data. This will result in a large and very comprehensive database for future studies on TC patients.
NASA Technical Reports Server (NTRS)
Brenton, James C.; Barbre, Robert E., Jr.; Decker, Ryan K.; Orcutt, John M.
2018-01-01
The National Aeronautics and Space Administration's (NASA) Marshall Space Flight Center (MSFC) Natural Environments Branch (EV44) has provided atmospheric databases and analysis in support of space vehicle design and day-of-launch operations for NASA and commercial launch vehicle programs launching from the NASA Kennedy Space Center (KSC), co-located on the United States Air Force's Eastern Range (ER) at the Cape Canaveral Air Force Station. The ER complex is one of the most heavily instrumented sites in the United States, with over 31 towers measuring various atmospheric parameters on a continuous basis. An inherent challenge with large sets of data is ensuring that erroneous data are removed from databases and thus excluded from launch vehicle design analyses. EV44 has put forth great effort in developing quality control (QC) procedures for individual meteorological instruments; however, no standard QC procedures currently exist for all databases, resulting in QC databases with inconsistencies in variables, methodologies, and periods of record. The goal of this activity is to use the previous efforts by EV44 to develop a standardized set of QC procedures from which to build meteorological databases from KSC and the ER, while maintaining open communication with end users from the launch community to develop ways to improve, adapt and grow the QC database. Details of the QC procedures will be described. As the rate of launches increases with additional launch vehicle programs, it is becoming more important that weather databases are continually updated and checked for data quality before use in launch vehicle design and certification analyses.
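Two checks that a standardized QC procedure of this sort would plausibly include are a plausibility (range) check and a persistence check for stuck sensors. The sketch below uses illustrative thresholds, not EV44's actual criteria.
```python
# Sketch of two common automated QC checks on tower wind-speed data;
# the limits and run length are assumptions for illustration.
import numpy as np

def qc_flags(wind_speed, lo=0.0, hi=75.0, persist_n=6):
    ws = np.asarray(wind_speed, dtype=float)
    flags = np.zeros(ws.size, dtype=bool)          # True = suspect
    flags |= (ws < lo) | (ws > hi)                 # physically implausible
    # persistence: persist_n identical consecutive readings is suspicious
    run = 1
    for i in range(1, ws.size):
        run = run + 1 if ws[i] == ws[i - 1] else 1
        if run >= persist_n:
            flags[i - persist_n + 1 : i + 1] = True
    return flags

data = [3.2, 3.4, 5.1, 5.1, 5.1, 5.1, 5.1, 5.1, 4.0, 99.0]
print(qc_flags(data))   # flags the stuck run of 5.1s and the 99.0 spike
```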
Lange, Toni; Matthijs, Omer; Jain, Nitin B; Schmitt, Jochen; Lützner, Jörg; Kopkow, Christian
2017-03-01
Shoulder pain in the general population is common, and to identify the aetiology of shoulder pain, history, motion and muscle testing, and physical examination tests are usually performed. The aim of this systematic review was to summarise and evaluate the intrarater and inter-rater reliability of physical examination tests in the diagnosis of shoulder pathologies. A comprehensive systematic literature search was conducted using MEDLINE, EMBASE, the Allied and Complementary Medicine Database (AMED) and the Physiotherapy Evidence Database (PEDro) through 20 March 2015. Methodological quality was assessed using the Quality Appraisal of Reliability Studies (QAREL) tool by 2 independent reviewers. The search strategy revealed 3259 articles, of which 18 finally met the inclusion criteria. These studies evaluated the reliability of 62 tests and test variations used in the specific physical examination for the diagnosis of shoulder pathologies. Methodological quality ranged from 2 to 7 positive criteria of the 11 items of the QAREL tool. This review identified a lack of high-quality studies evaluating inter-rater as well as intrarater reliability of specific physical examination tests for the diagnosis of shoulder pathologies. In addition, reliability measures differed between the included studies, hindering proper cross-study comparisons. PROSPERO CRD42014009018. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
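The reliability measures such studies report are typically chance-corrected agreement statistics. A minimal sketch of Cohen's kappa for two raters scoring a dichotomous examination test follows, with fabricated ratings.
```python
# Cohen's kappa for two raters on a positive/negative physical exam test;
# the ratings below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # 1 = test positive
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
print(f"inter-rater kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")
```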
Parra, Lorena; Sendra, Sandra; García, Laura; Lloret, Jaime
2018-03-01
The monitoring of farming processes can optimize the use of resources and improve their sustainability and profitability. In fish farms, the water quality, tank environment, and fish behavior must be monitored. Wireless sensor networks (WSNs) are a promising option for performing this monitoring. Nevertheless, their high cost is slowing the expansion of their use. In this paper, we propose a set of sensors for monitoring the water quality and fish behavior in aquaculture tanks during the feeding process. The WSN is based on physical sensors composed of simple electronic components. The proposed system can monitor water quality parameters, tank status, feed falling, and fish swimming depth and velocity. In addition, the system includes a smart algorithm to reduce the energy wasted when sending the information from the node to the database. The system is composed of three nodes in each tank that send the information through the local area network to a database on the Internet, and a smart algorithm that detects abnormal values and sends alarms when they occur. All the sensors are designed, calibrated, and deployed to ensure their suitability. The greatest effort was devoted to the fish presence sensor. The total cost of the sensors and nodes for the proposed system is less than 90 €.
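A node-side transmission rule of the kind described (suppress redundant packets to save energy, alarm immediately on abnormal values) can be sketched as follows; the delta and safe band are assumptions, not the paper's calibrated values.
```python
# Send-on-delta reporting with an alarm band; thresholds are assumed.

def node_step(reading, last_sent, delta=0.3, safe=(6.0, 9.0)):
    """Return (action, value): 'alarm', 'send', or 'skip'."""
    if not (safe[0] <= reading <= safe[1]):
        return "alarm", reading                 # abnormal value -> alarm
    if last_sent is None or abs(reading - last_sent) >= delta:
        return "send", reading                  # significant change -> report
    return "skip", last_sent                    # suppress redundant packet

last = None
for do_mg_l in [7.8, 7.9, 7.7, 7.1, 5.4, 7.6]:  # dissolved oxygen, mg/L
    action, value = node_step(do_mg_l, last)
    if action != "skip":
        last = value
    print(action, do_mg_l)
```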
Paxton, Elizabeth W; Kiley, Mary-Lou; Love, Rebecca; Barber, Thomas C; Funahashi, Tadashi T; Inacio, Maria C S
2013-06-01
In response to the increased volume, risk, and cost of medical devices, in 2001 Kaiser Permanente (KP) developed implant registries to enhance patient safety and quality, and to evaluate cost-effectiveness. Using an integrated electronic health record system, administrative databases, and other institutional databases, orthopedic, cardiology, and vascular implant registries were developed in 2001, 2006, and 2011, respectively. These registries monitor patients, implants, clinical practices, and surgical outcomes for KP's 9 million members. Critical to registry success is surgeon leadership and engagement; each geographical region has a surgeon champion who provides feedback on registry initiatives and disseminates registry findings. The registries enhance patient safety by providing a variety of clinical decision tools such as risk calculators, quality reports, risk-adjusted medical center reports, summaries of surgeon data, and infection control reports to registry stakeholders. The registries are used to immediately identify patients with recalled devices, evaluate new and established device technology, and identify outlier implants. The registries contribute to cost-effectiveness initiatives through collaboration with sourcing and contracting groups and confirming adherence to device formulary guidelines. Research studies based on registry data have directly influenced clinical best practices. Registries are important tools to evaluate longitudinal device performance and safety, study the clinical indications for and outcomes of device implantation, respond promptly to recalls and advisories, and contribute to the overall high quality of care of our patients.
[Data supporting quality circle management of inpatient depression treatment].
Brand, S; Härter, M; Sitta, P; van Calker, D; Menke, R; Heindl, A; Herold, K; Kudling, R; Luckhaus, C; Rupprecht, U; Sanner, Dirk; Schmitz, D; Schramm, E; Berger, M; Gaebel, W; Schneider, F
2005-07-01
Several quality assurance initiatives in health care have been undertaken during the past years. The next step consists of systematically combining single initiatives in order to build up a strategic quality management. In a German multicenter study, the quality of inpatient depression treatment was measured in ten psychiatric hospitals. Half of the hospitals received comparative feedback on their individual results in comparison to the other hospitals (benchmarking). These benchmarks were used by each hospital as a statistical basis for in-house quality work to improve the quality of depression treatment. According to hospital differences concerning procedure and outcome, different goals were chosen. There were also differences with respect to structural characteristics, strategies, and outcome. The feedback from participants about data-based quality circles in general and the availability of benchmarking data was positive. The necessity of carefully choosing quality circle members and professional moderation became obvious. Data-based quality circles including benchmarking have proven to be useful for quality management in inpatient depression care.
Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2016-06-01
Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. In order to provide this data quality information easily, we presented the architecture of a web service for the automation of quality control of spatial datasets running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, like positional accuracy or completeness, the architecture permits using a reference dataset. However, this reference dataset is not ageless, since it suffers the natural time degradation inherent to geospatial features. In order to mitigate this problem we propose the Time Degradation & Updating Module, which applies assessed data as a tool to keep the reference database updated. The main idea is to utilize datasets sent to the quality evaluation service as a source of 'candidate data elements' for the updating of the reference database. After the evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe that the outcomes can be applied in the search for a full-automatic on-line quality evaluation platform.
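The module's core rule, as described, is that candidate elements reaching a required quality level become inputs for updating the reference database. A conceptual sketch follows, with the element structure and the quality threshold assumed for illustration.
```python
# Conceptual sketch of the Time Degradation & Updating Module's merge rule;
# data layout and threshold are assumptions, not the paper's design.

QUALITY_THRESHOLD = 0.95   # assumed minimum accepted quality score

def update_reference(reference, candidates, threshold=QUALITY_THRESHOLD):
    """Merge sufficiently high-quality candidate elements into the reference.

    `reference` maps feature id -> geometry; `candidates` is a list of
    (feature_id, geometry, quality_score) produced by the evaluation service.
    """
    accepted = 0
    for fid, geometry, score in candidates:
        if score >= threshold:
            reference[fid] = geometry    # newer, validated element wins
            accepted += 1
    return accepted

ref = {"road_17": "LINESTRING(0 0, 1 1)"}
cand = [("road_17", "LINESTRING(0 0, 1 1.02)", 0.97),
        ("road_18", "LINESTRING(2 2, 3 3)", 0.80)]
print(update_reference(ref, cand), "element(s) accepted")
```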
Semi-Automated Annotation of Biobank Data Using Standard Medical Terminologies in a Graph Database.
Hofer, Philipp; Neururer, Sabrina; Goebel, Georg
2016-01-01
Data describing biobank resources frequently contains unstructured free-text information or insufficient coding standards. (Bio-) medical ontologies like Orphanet Rare Diseases Ontology (ORDO) or the Human Disease Ontology (DOID) provide a high number of concepts, synonyms and entity relationship properties. Such standard terminologies increase quality and granularity of input data by adding comprehensive semantic background knowledge from validated entity relationships. Moreover, cross-references between terminology concepts facilitate data integration across databases using different coding standards. In order to encourage the use of standard terminologies, our aim is to identify and link relevant concepts with free-text diagnosis inputs within a biobank registry. Relevant concepts are selected automatically by lexical matching and SPARQL queries against a RDF triplestore. To ensure correctness of annotations, proposed concepts have to be confirmed by medical data administration experts before they are entered into the registry database. Relevant (bio-) medical terminologies describing diseases and phenotypes were identified and stored in a graph database which was tied to a local biobank registry. Concept recommendations during data input trigger a structured description of medical data and facilitate data linkage between heterogeneous systems.
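The concept-recommendation step described above (lexical matching plus SPARQL queries against an RDF store of labels and synonyms) can be sketched with rdflib. The two triples below are stand-ins for ORDO/DOID content, and the query shape is an assumption about how such matching could be phrased.
```python
# Toy sketch: match free-text diagnosis input against ontology labels/synonyms.
import rdflib

TTL = """
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/disease/> .
ex:D001 skos:prefLabel "Marfan syndrome" ; skos:altLabel "MFS" .
ex:D002 skos:prefLabel "Fabry disease" .
"""
g = rdflib.Graph()
g.parse(data=TTL, format="turtle")

def recommend(free_text):
    # SPARQL 1.1 property path matches preferred or alternative labels
    q = """
    PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
    SELECT ?concept ?label WHERE {
        ?concept skos:prefLabel|skos:altLabel ?label .
        FILTER (CONTAINS(LCASE(STR(?label)), "%s"))
    }""" % free_text.lower()
    return [(str(r.concept), str(r.label)) for r in g.query(q)]

# proposed concepts would still be confirmed by medical data administrators
print(recommend("marfan"))
```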
Cost and cost-effectiveness studies in urologic oncology using large administrative databases.
Wang, Ye; Mossanen, Matthew; Chang, Steven L
2018-04-01
Urologic cancers are not only among the most common types of cancers, but also among the most expensive cancers to treat in the United States. This study aimed to review the use of cost-effectiveness analyses (CEAs) and other cost analyses in urologic oncology using large databases to better understand the value of management strategies for these cancers. A literature review of CEAs and other cost analyses in urologic oncology using large databases was conducted. The options for and costs of diagnosing, treating, and following patients with urologic cancers can be expected to rise in the coming years. There are numerous opportunities in each urologic cancer to use CEAs to both lower costs and provide high-quality services. Improved cancer care must balance the integration of novelty with ensuring reasonable costs to patients and the health care system. With the increasing focus on cost containment, appreciating the value of competing strategies in caring for our patients is pivotal. Leveraging methods such as CEAs and harnessing large databases may help evaluate the merit of established or emerging strategies. Copyright © 2018 Elsevier Inc. All rights reserved.
New seismogenic stress fields for southern Italy from a Bayesian approach
NASA Astrophysics Data System (ADS)
Totaro, Cristina; Orecchio, Barbara; Presti, Debora; Scolaro, Silvia; Neri, Giancarlo
2017-04-01
A new database of high-quality waveform inversion focal mechanisms has been compiled for southern Italy by integrating the highest-quality solutions available from literature and catalogues with 146 newly computed ones. All the selected focal mechanisms are (i) taken from the Italian CMT, Regional CMT and TDMT catalogues (Pondrelli et al., PEPI 2006, PEPI 2011; http://www.ingv.it), or (ii) computed by using the Cut And Paste (CAP) method (Zhao & Helmberger, BSSA 1994; Zhu & Helmberger, BSSA 1996). Specific tests have been carried out in order to evaluate the robustness of the obtained solutions (e.g., by varying both seismic network configuration and Earth structure parameters) and to estimate uncertainties on the focal mechanism parameters. Only the resulting highest-quality solutions have been included in the database, which has then been used for computation of posterior density distributions of stress tensor components by a Bayesian method (Arnold & Townend, GJI 2007). This algorithm furnishes the posterior density function of the principal components of the stress tensor (maximum σ1, intermediate σ2, and minimum σ3 compressive stress, respectively) and the stress-magnitude ratio (R). Before stress computation, we applied the k-means clustering algorithm to subdivide the focal mechanism catalog on the basis of earthquake locations. This approach allows identifying the sectors to be investigated without any "a priori" constraint from faulting type distribution. The large amount of data and the application of the Bayesian algorithm allowed us to provide a more accurate local-to-regional scale stress distribution, which sheds new light on the kinematics and dynamics of this very complex area, where lithospheric unit configuration and geodynamic engines are still strongly debated. The new high-quality information furnished here will represent a very useful tool and constraint for future geophysical analyses and geodynamic modeling.
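The pre-processing step of subdividing the catalog by earthquake location is standard k-means clustering. A sketch follows, with invented coordinates and an assumed number of sectors.
```python
# k-means on epicentral coordinates to delimit sectors for stress inversion;
# the event locations and k=2 are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# (lon, lat) of focal-mechanism locations -- hypothetical clusters
events = np.vstack([
    rng.normal([15.6, 38.2], 0.15, size=(40, 2)),
    rng.normal([16.5, 39.5], 0.15, size=(40, 2)),
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(events)
for c in range(2):
    print(f"sector {c}: {np.sum(km.labels_ == c)} mechanisms, "
          f"centroid {km.cluster_centers_[c].round(2)}")
```
No faulting-type information enters the clustering, which is exactly what lets the sectors emerge without an a priori constraint from the mechanisms themselves.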
The Efficacy, Safety and Applications of Medical Hypnosis.
Häuser, Winfried; Hagl, Maria; Schmierer, Albrecht; Hansen, Ernil
2016-04-29
The efficacy and safety of hypnotic techniques in somatic medicine, known as medical hypnosis, have not been supported to date by adequate scientific evidence. We systematically reviewed meta-analyses of randomized controlled trials (RCTs) of medical hypnosis. Relevant publications (January 2005 to June 2015) were sought in the Cochrane databases CDSR and DARE, and in PubMed. Meta-analyses involving at least 400 patients were included in the present analysis. Their methodological quality was assessed with AMSTAR (A Measurement Tool to Assess Systematic Reviews). An additional search was carried out in the CENTRAL and PubMed databases for RCTs of waking suggestion (therapeutic suggestion without formal trance induction) in somatic medicine. Out of the 391 publications retrieved, five were reports of meta-analyses that met our inclusion criteria. One of these meta-analyses was of high methodological quality; three were of moderate quality, and one was of poor quality. Hypnosis was superior to controls with respect to the reduction of pain and emotional stress during medical interventions (34 RCTs, 2597 patients) as well as the reduction of irritable bowel symptoms (8 RCTs, 464 patients). Two meta-analyses revealed no differences between hypnosis and control treatment with respect to the side effects and safety of treatment. The effect size of hypnosis on emotional stress during medical interventions was low in one meta-analysis, moderate in one, and high in one. The effect size on pain during medical interventions was low. Five RCTs indicated that waking suggestion is effective in medical procedures. Medical hypnosis is a safe and effective complementary technique for use in medical procedures and in the treatment of irritable bowel syndrome. Waking suggestions can be a component of effective doctor-patient communication in routine clinical situations.
Aguiar, F C; Segurado, P; Urbanič, G; Cambra, J; Chauvin, C; Ciadamidaro, S; Dörflinger, G; Ferreira, J; Germ, M; Manolaki, P; Minciardi, M R; Munné, A; Papastergiadou, E; Ferreira, M T
2014-04-01
This paper presents a new methodological approach to the problem of intercalibrating national river quality assessment methods when a common metric is lacking and most of the countries share the same Water Framework Directive (WFD) assessment method. We provide recommendations for similar work in the future concerning the assessment of ecological accuracy and highlight the importance of good common ground to make the scientific work beyond the intercalibration feasible. The approach presented here was applied to highly seasonal rivers of the Mediterranean Geographical Intercalibration Group for the Biological Quality Element Macrophytes. The Mediterranean Group of river macrophytes involved seven countries and two assessment methods with similar data acquisition and assessment concepts: the Macrophyte Biological Index for Rivers (IBMR) for Cyprus, France, Greece, Italy, Portugal and Spain, and the River Macrophyte Index (RMI) for Slovenia. The database included 318 sites, of which 78 were considered benchmarks. The boundary harmonization was performed for the common WFD assessment method (all countries except Slovenia) using the median of the Good/Moderate and High/Good boundaries of all countries. Then, whenever possible, the Slovenian method, RMI, was computed for the entire database. The IBMR was also computed for the Slovenian sites and was regressed against RMI in order to check the relatedness of the methods (R²=0.45; p<0.00001) and to convert RMI boundaries into the IBMR scale. The boundary bias of RMI was computed using direct comparison of classifications and the median boundary values following boundary harmonization. The average absolute class difference after harmonization is 26%, and the percentage of classifications differing by half of a quality class is also small (16.4%). This multi-step approach to the intercalibration was endorsed by the WFD Regulatory Committee. © 2013 Elsevier B.V. All rights reserved.
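The boundary-translation step (regress IBMR on RMI at sites where both indices are available, then map the RMI class boundaries onto the IBMR scale) can be sketched as follows. All numbers are invented, since the abstract reports only the R²=0.45 relationship.
```python
# Linear regression of IBMR on RMI, then conversion of RMI class boundaries
# to the IBMR scale; site scores and boundary values are invented.
import numpy as np

rng = np.random.default_rng(2)
rmi = rng.uniform(0.2, 1.0, 60)                  # hypothetical site scores
ibmr = 0.9 * rmi + 0.05 + rng.normal(0, 0.1, 60)

slope, intercept = np.polyfit(rmi, ibmr, 1)      # ordinary least squares fit

rmi_boundaries = {"High/Good": 0.80, "Good/Moderate": 0.60}  # assumed values
for name, b in rmi_boundaries.items():
    print(f"{name}: RMI {b:.2f} -> IBMR {slope * b + intercept:.2f}")
```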
Del Fabbro, Massimo; Corbella, Stefano; Tsesis, Igor; Taschieri, Silvio
2015-03-01
The aims of the present systematic literature analysis were to evaluate, over a 10-year period, the trend in the proportion of randomized controlled trials (RCTs), systematic reviews (SRs) and meta-analyses (MAs) published on endodontic surgery, and to investigate whether the impact factor (IF) of the main endodontic journals correlates with the proportion of RCTs, SRs and MAs they publish. An electronic search for the RCTs, SRs and MAs published on the topic "endodontic surgery" from 2001 to 2010 was performed on Medline and the Cochrane CENTRAL database using specific search terms combined with Boolean operators. Endodontic journal impact factors were retrieved from the Thomson Scientific database. The proportion of each study type over the total number of articles on endodontic surgery published per year was estimated, as was the correlation between the number of high-evidence-level studies published in the main endodontic journals and the IF of those journals per year. From a total of 900 articles published in 2001-2010 on endodontic surgery, there were 114 studies of high evidence level. A significant increase over the years in the proportion of RCTs, SRs and MAs was found. A modest to unclear correlation was found between journal IF and the number of high-evidence articles published. There is a positive trend over the years among researchers towards performing studies of good quality in endodontic surgery. The impact factor of endodontic journals is not consistently influenced by the publication of high-evidence-level articles. Copyright © 2015 Elsevier Inc. All rights reserved.
Zhang, Juxia; Wang, Jiancheng; Han, Lin; Zhang, Fengwa; Cao, Jianxun; Ma, Yuxia
2015-01-01
Systematic reviews (SRs) and meta-analyses (MAs) of nursing interventions have become increasingly popular in China. This review provides the first examination of the epidemiological characteristics of these SRs as well as their compliance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses and the Assessment of Multiple Systematic Reviews guidelines. The purpose of this study was to examine the epidemiologic and reporting characteristics as well as the methodologic quality of SRs and MAs of nursing interventions published in Chinese journals. Four Chinese databases were searched (the Chinese Biomedicine Literature Database, Chinese Scientific Journal Full-text Database, Chinese Journal Full-text Database, and Wanfang Database) for SRs and MAs of nursing interventions from inception through June 2013. Data were extracted into Excel (Microsoft, Redmond, WA). The Assessment of Multiple Systematic Reviews and Preferred Reporting Items for Systematic Reviews and Meta-analyses checklists were used to assess methodologic quality and reporting characteristics, respectively. A total of 144 SRs were identified, most (97.2%) of which used "systematic review" or "meta-analyses" in the titles. None of the reviews had been updated. Nearly half (41%) were written by nurses, and more than half (61%) were reported in specialist journals. The most common conditions studied were endocrine, nutritional and metabolic diseases, and neoplasms. Most (70.8%) reported information about quality assessment, whereas only a quarter (25%) reported assessing for publication bias. None of the reviews reported a conflict of interest. Although many SRs of nursing interventions have been published in Chinese journals, the quality of these reviews is of concern. As a potential key source of information for nurses and nursing administrators, not only were many of these reviews incomplete in the information they provided, but some results were also misleading. Improving the quality of SRs of nursing interventions conducted and published by nurses in China is urgently needed in order to increase the value of these studies. Copyright © 2015 Elsevier Inc. All rights reserved.
High throughput profile-profile based fold recognition for the entire human proteome.
McGuffin, Liam J; Smith, Richard T; Bryson, Kevin; Sørensen, Søren-Aksel; Jones, David T
2006-06-07
In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software to annotate the latest version of the human proteome against the latest sequence and structure databases in as short a time as possible. We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system, we have been able to annotate 99.9% of the protein sequences within the human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. This study clearly demonstrates the feasibility of carrying out on-demand, high-quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
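The abstract does not disclose JYDE's internals, so the following is only a generic illustration of the core task a meta-scheduler performs: dispatching fold-recognition jobs to whichever Grid domain currently has the most free CPUs. The domain names and CPU counts are invented.

```python
# Illustrative sketch only: greedy dispatch of fold-recognition jobs to the
# Grid domain with the most free CPUs. Not JYDE's actual algorithm.
import heapq

def distribute_jobs(jobs, domains):
    """jobs: list of job ids; domains: dict name -> free CPU count."""
    # Max-heap of (-free_cpus, domain_name); heapq is a min-heap, so negate
    heap = [(-free, name) for name, free in domains.items()]
    heapq.heapify(heap)
    assignment = {}
    for job in jobs:
        free_neg, name = heapq.heappop(heap)
        assignment[job] = name
        heapq.heappush(heap, (free_neg + 1, name))  # one fewer free CPU
    return assignment

jobs = [f"mGenTHREADER_seq{i:05d}" for i in range(10)]
domains = {"cluster_A": 200, "cluster_B": 150, "cluster_C": 180}
print(distribute_jobs(jobs, domains))
```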
Cooper, Chris; Lovell, Rebecca; Husk, Kerryn; Booth, Andrew; Garside, Ruth
2018-06-01
We undertook a systematic review to evaluate the health benefits of environmental enhancement and conservation activities. We were concerned that a conventional process of study identification, focusing on exhaustive searches of bibliographic databases as the primary search method, would be ineffective, offering limited value. The focus of this study is comparing study identification methods. We compare (1) an approach led by searches of bibliographic databases with (2) an approach led by supplementary search methods. We retrospectively assessed the effectiveness and value of both approaches. Effectiveness was determined by comparing (1) the total number of studies identified and screened and (2) the number of includable studies uniquely identified by each approach. Value was determined by comparing included study quality and by using qualitative sensitivity analysis to explore the contribution of studies to the synthesis. The bibliographic databases approach identified 21 409 studies to screen and 2 included qualitative studies were uniquely identified. Study quality was moderate, and contribution to the synthesis was minimal. The supplementary search approach identified 453 studies to screen and 9 included studies were uniquely identified. Four quantitative studies were poor quality but made a substantive contribution to the synthesis; 5 studies were qualitative: 3 studies were good quality, one was moderate quality, and 1 study was excluded from the synthesis due to poor quality. All 4 included qualitative studies made significant contributions to the synthesis. This case study found value in aligning primary methods of study identification to maximise location of relevant evidence. Copyright © 2017 John Wiley & Sons, Ltd.
Vining, Kevin C.; Cates, Steven W.
2006-01-01
Available surface-water quality, ground-water quality, and water-withdrawal data for the Spirit Lake Reservation were summarized. The data were collected intermittently from 1948 through 2004 and were compiled from U.S. Geological Survey databases, North Dakota State Water Commission databases, and Spirit Lake Nation tribal agencies. Although the quality of surface water on the reservation generally is satisfactory, no surface-water sources are used for consumable water supplies. Ground water on the reservation is of sufficient quality for most uses. The Tokio and Warwick aquifers have better overall water quality than the Spiritwood aquifer. Water from the Spiritwood aquifer is used mostly for irrigation. The Warwick aquifer provides most of the consumable water for the reservation and for the city of Devils Lake. Annual water withdrawals from the Warwick aquifer by the Spirit Lake Nation ranged from 71 million gallons to 122 million gallons during 2000-04.
Horsch, Alexander; Hapfelmeier, Alexander; Elter, Matthias
2011-11-01
Breast cancer is globally a major threat to women's health. Screening and adequate follow-up can significantly reduce mortality from breast cancer. Human second reading of screening mammograms can increase breast cancer detection rates, whereas this has not been proven for current computer-aided detection systems as "second reader". Critical factors include the detection accuracy of the systems and the screening experience and training of the radiologist with the system. When assessing the performance of systems and system components, the choice of evaluation methods is particularly critical. Core assets herein are reference image databases and statistical methods. We have analyzed characteristics and usage of the currently largest publicly available mammography database, the Digital Database for Screening Mammography (DDSM) from the University of South Florida, in literature indexed in Medline, IEEE Xplore, SpringerLink, and SPIE, with respect to type of computer-aided diagnosis (CAD) (detection, CADe, or diagnostics, CADx), selection of database subsets, choice of evaluation method, and quality of descriptions. 59 publications presenting 106 evaluation studies met our selection criteria. In 54 studies (50.9%), the selection of test items (cases, images, regions of interest) extracted from the DDSM was not reproducible. Only 2 CADx studies, and no CADe studies, used the entire DDSM. The number of test items varies from 100 to 6000. Different statistical evaluation methods were chosen; most common are train/test (34.9% of the studies), leave-one-out (23.6%), and N-fold cross-validation (18.9%). Database-related terminology tends to be imprecise or ambiguous, especially regarding the term "case". Overall, both the use of the DDSM as a data source for evaluation of mammography CAD systems and the application of statistical evaluation methods were found to be highly diverse, so results reported from different studies are hardly comparable. Drawbacks of the DDSM (e.g. the varying quality of lesion annotations) may be partly responsible, but a larger source of bias appears to be the authors' own study-design decisions. For future evaluation studies, we derive a set of 13 recommendations concerning the construction and usage of a test database, as well as the application of statistical evaluation methods.
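As a reminder of how the reported evaluation protocols differ, here is a minimal sketch (not drawn from any of the surveyed studies; the features and labels are random placeholders) contrasting a single train/test split with N-fold cross-validation:

```python
# Minimal sketch contrasting two evaluation protocols the survey found in
# DDSM studies. Reporting the exact protocol and case selection is what
# makes such results reproducible.
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))     # placeholder ROI features
y = rng.integers(0, 2, size=300)   # placeholder benign/malignant labels

clf = LogisticRegression(max_iter=1000)

# Protocol 1: single train/test split (34.9% of surveyed studies)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
split_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

# Protocol 2: N-fold cross-validation (18.9% of surveyed studies)
cv_acc = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True,
                                             random_state=0)).mean()
print(f"train/test: {split_acc:.2f}, 10-fold CV: {cv_acc:.2f}")
```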
Development of a data entry auditing protocol and quality assurance for a tissue bank database.
Khushi, Matloob; Carpenter, Jane E; Balleine, Rosemary L; Clarke, Christine L
2012-03-01
Human transcription error is an acknowledged risk when extracting information from paper records for entry into a database. For a tissue bank, it is critical that accurate data are provided to researchers with approved access to tissue bank material. The challenges of tissue bank data collection include manual extraction of data from complex medical reports that are accessed from a number of sources and that differ in style and layout. As a quality assurance measure, the Breast Cancer Tissue Bank (http://www.abctb.org.au) has implemented an auditing protocol and, in order to execute the process efficiently, has developed an open source database plug-in tool (eAuditor) to assist in auditing of data held in our tissue bank database. Using eAuditor, we have identified that human entry errors range from 0.01% when entering donors' clinical follow-up details to 0.53% when entering pathological details, highlighting the importance of an audit protocol tool such as eAuditor in a tissue bank database. eAuditor was developed and tested on the Caisis open source clinical-research database; however, it can be integrated into other databases where similar functionality is required.
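A hedged sketch of the kind of field-level comparison such an audit performs is shown below; eAuditor itself is a Caisis plug-in, and the record structure here is hypothetical:

```python
# Hedged sketch of field-level audit comparison between database entries and
# re-extracted source values. Record structure is hypothetical.
def audit_error_rate(db_records, source_records, fields):
    """Fraction of field values that differ between database and source."""
    checked = errors = 0
    for rec_id, db_rec in db_records.items():
        src_rec = source_records.get(rec_id, {})
        for field in fields:
            checked += 1
            if db_rec.get(field) != src_rec.get(field):
                errors += 1
    return errors / checked if checked else 0.0

db = {"D001": {"grade": "2", "er_status": "pos"}}
src = {"D001": {"grade": "3", "er_status": "pos"}}  # source says grade 3
print(f"{audit_error_rate(db, src, ['grade', 'er_status']):.2%}")  # 50.00%
```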
Video quality pooling adaptive to perceptual distortion severity.
Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad
2013-02-01
It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and from lossy transmission over communication channels. We propose a content adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes "worst" scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
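The core idea of emphasizing the "worst" local scores can be sketched as percentile pooling; the published method is content adaptive and also models cohesive motion, none of which is reproduced in this minimal illustration (low scores are assumed to mean poor quality):

```python
# Minimal sketch of "worst-case" percentile pooling; not the paper's full
# content adaptive method.
import numpy as np

def worst_percentile_pool(local_scores, p=5):
    """local_scores: (frames, H, W) array of local quality scores.
    Pool the worst p% spatially per frame, then the worst p% over time."""
    frames = local_scores.reshape(local_scores.shape[0], -1)
    k = max(1, int(frames.shape[1] * p / 100))
    per_frame = np.sort(frames, axis=1)[:, :k].mean(axis=1)  # worst spatial
    kt = max(1, int(len(per_frame) * p / 100))
    return np.sort(per_frame)[:kt].mean()                    # worst temporal

scores = np.random.default_rng(1).uniform(0, 1, size=(120, 36, 64))
print(f"pooled quality: {worst_percentile_pool(scores):.3f}")
```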
Richards, Derek
2014-09-01
The following sources were searched: the Cochrane Effective Practice and Organisation of Care (EPOC) Group's Specialised Register; the Cochrane Oral Health Group's Specialised Register; the Cochrane Central Register of Controlled Trials; Medline; Embase; CINAHL; the Cochrane Database of Systematic Reviews; the Database of Abstracts of Reviews of Effectiveness; five other databases and two trial registries. A number of dental journals were hand-searched and a grey literature search was performed. Randomised controlled trials (RCTs), non-randomised controlled trials (NRCTs), controlled before-and-after studies (CBAs) and interrupted time series (ITSs) were considered. Selection was conducted independently by two reviewers. Three reviewers extracted data and assessed risk of bias. Meta-analysis was not possible, so a narrative summary was presented. Five studies (one cluster RCT, three RCTs and one NRCT) were included. All the studies were at high risk of bias and the overall quality of evidence was very low. The majority of the studies were more than 20 years old. Four studies evaluated sealant placement; three found no evidence of a difference in retention rates between sealants placed by dental auxiliaries and dentists over a range of follow-up periods (six to 24 months). One study found that sealants placed by a dental auxiliary had lower retention rates than ones placed by a dentist after 48 months (9.0% with auxiliary versus 29.1% with dentist), but the net reduction in the number of teeth exhibiting caries was lower for teeth treated by the dental auxiliary than by the dentist (three with auxiliary versus 60 with dentist, P value < 0.001). One study showed no evidence of a difference in dental decay between groups after treatment with fissure sealants. One study comparing the effectiveness of dental auxiliaries and dentists performing ART reported no difference in survival rates of the restorations (fillings) after 12 months. We identified only five studies for inclusion in this review, all of which were at high risk of bias, and four were published more than 20 years ago, highlighting the paucity of high-quality evaluations of the relative effectiveness, cost-effectiveness and safety of dental auxiliaries compared with dentists in performing clinical tasks. No firm conclusions could be drawn from the present review about the relative effectiveness of dental auxiliaries and dentists.
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. A robust target detection and recognition algorithm requires an extensive image database. Automatic target recognition algorithms use the database of images in the training and testing steps of the algorithm. This directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easier way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database using a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is to design the database using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.
The measurement of quality of care in the Veterans Health Administration.
Halpern, J
1996-03-01
The Veterans Health Administration (VHA) is committed to continual refinement of its system of quality measurement. The VHA organizational structure for quality measurement has three levels. At the national level, the Associate Chief Medical Director for Quality Management provides leadership, sets policy, furnishes measurement tools, develops and distributes measures of quality, and delivers educational programs. At the intermediate level, VHA has four regional offices with staff responsible for reviewing risk management data, investigating quality problems, and ensuring compliance with accreditation requirements. At the hospital level, staff reporting directly to the chief of staff or the hospital director are responsible for implementing VHA quality management policy. The Veterans Health Administration's philosophy of quality measurement recognizes the agency's moral imperative to provide America's veterans with care that meets accepted standards. Because the repair of faulty systems is more efficient than the identification of poor performers, VHA has integrated the techniques of total quality into a multifaceted improvement program that also includes the accreditation program and traditional quality assurance activities. VHA monitors its performance by maintaining adverse incident databases, conducting patient satisfaction surveys, contracting for external peer review of 50,000 records per year, and comparing process and outcome rates internally and, when possible, with external benchmarks. The near-term objectives of VHA include providing medical centers with a quality matrix that will permit local development of quality indicators, constructing a report card for VHA's customers, and implementing the Malcolm Baldrige system for quality improvement as the road map for systemwide continuous improvement. Other goals include providing greater access to data, creating a patient-centered database, providing real-time clinical decision support, and expanding the databases.
Domain fusion analysis by applying relational algebra to protein sequence and domain databases.
Truong, Kevin; Ikura, Mitsuhiko
2003-05-06
Domain fusion analysis is a useful method to predict functionally linked proteins that may be involved in direct protein-protein interactions or in the same metabolic or signaling pathway. As separate domain databases like BLOCKS, PROSITE, Pfam, SMART, PRINTS-S, ProDom, TIGRFAMs, and amalgamated domain databases like InterPro continue to grow in size and quality, a computational method to perform domain fusion analysis that leverages these efforts will become increasingly powerful. This paper proposes a computational method employing relational algebra to find domain fusions in protein sequence databases. The feasibility of this method was illustrated on the SWISS-PROT+TrEMBL sequence database using domain predictions from the Pfam HMM (hidden Markov model) database. We identified 235 and 189 putative functionally linked protein partners in H. sapiens and S. cerevisiae, respectively. From the scientific literature, we were able to confirm many of these functional linkages, while the remainder offer testable experimental hypotheses. Results can be viewed at http://calcium.uhnres.utoronto.ca/pi. As the analysis can be computed quickly on any relational database that supports standard SQL (structured query language), it can be dynamically updated along with the sequence and domain databases, thereby improving the quality of predictions over time.
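On a toy schema, the relational idea can be sketched in standard SQL: a "fusion" protein carries two domains that elsewhere occur in separate proteins, suggesting a functional link between those proteins. The schema and data below are illustrative, not the paper's actual SWISS-PROT+TrEMBL/Pfam tables:

```python
# Sketch of domain fusion detection via standard SQL on a toy
# protein_domain(protein, domain) table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE protein_domain (protein TEXT, domain TEXT)")
con.executemany("INSERT INTO protein_domain VALUES (?, ?)",
                [("fusionX", "Pfam_A"), ("fusionX", "Pfam_B"),
                 ("protY", "Pfam_A"), ("protZ", "Pfam_B")])

# f1/f2: two distinct domains on the same (fusion) protein;
# p1/p2: other proteins carrying those domains separately.
rows = con.execute("""
    SELECT f1.protein AS fusion, p1.protein AS partner1, p2.protein AS partner2
    FROM protein_domain f1
    JOIN protein_domain f2 ON f1.protein = f2.protein AND f1.domain < f2.domain
    JOIN protein_domain p1 ON p1.domain = f1.domain AND p1.protein != f1.protein
    JOIN protein_domain p2 ON p2.domain = f2.domain AND p2.protein != f1.protein
    WHERE p1.protein != p2.protein
""").fetchall()
print(rows)  # [('fusionX', 'protY', 'protZ')]
```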
Mickenautsch, Steffen; Yengopal, Veerasamy
2011-08-01
To investigate the extent and quality of current systematic review evidence regarding powered toothbrushes, triclosan toothpaste, essential oil mouthwashes and xylitol chewing gum. Five databases were searched for systematic reviews up to 13 November 2010. Inclusion criteria were: relevance to the topic; identification as a systematic review from the title and/or abstract; and publication in English. Article exclusion criteria were based on the QUOROM recommendations for the reporting of systematic review methods. Systematic review quality was judged using the AMSTAR tool. All trials included by the reviews were assessed for selection bias. 119 articles were found, of which 11 systematic reviews were included. Of these, six were excluded and five accepted: one for triclosan toothpaste; one for xylitol chewing gum; two for powered toothbrushes; one for essential oil mouthwashes. AMSTAR scores were: triclosan toothpaste 7; powered toothbrushes 9 and 11; xylitol chewing gum 9; essential oil mouthwashes 8. In total, 75 (out of 76) reviewed trials were identified. In-depth assessment showed a high risk of selection bias for all trials. The extent of available systematic review evidence is low. Although the few identified systematic reviews could be rated as of medium and high quality, the validity of their conclusions needs to be treated with caution, owing to the high risk of selection bias in the reviewed trials. High quality randomised controlled trials are needed in order to provide convincing evidence regarding true clinical efficacy. © 2011 FDI World Dental Federation.
NASA Astrophysics Data System (ADS)
Molina-Cardín, Alberto; Campuzano, Saioa A.; Rivero, Mercedes; Osete, María Luisa; Gómez-Paccard, Miriam; Pérez-Fuentes, José Carlos; Pavón-Carrasco, F. Javier; Chauvin, Annick; Palencia-Ortas, Alicia
2017-04-01
In this work we present the first archaeomagnetic intensity database for the Iberian Peninsula covering the last 3 millennia. In addition to previously published archaeointensities (about 100 data points), we present twenty new high-quality archaeointensities. The new data have been obtained following the Thellier and Thellier method, including pTRM checks, and have been corrected for the effect of the anisotropy of thermoremanent magnetization upon archaeointensity estimates. Importantly, about 50% of the new data correspond to the first millennium BC, a period for which it was previously not possible to develop an intensity palaeosecular variation curve due to the lack of high-quality archaeointensity data. The different qualities of the data included in the Iberian dataset have been evaluated following different palaeomagnetic criteria, such as the number of specimens analysed, the laboratory protocol applied and the kind of material analysed. Finally, we present the first intensity palaeosecular variation curve for the Iberian Peninsula, centred at Madrid, for the last 3000 years. In order to obtain the most reliable secular variation curve, it has been generated using only selected high-quality data from the catalogue.
Five Years into the Past...Five Years into the Future.
ERIC Educational Resources Information Center
Tenopir, Carol
1988-01-01
Discusses issues which will have an impact on database searching for the next five years: (1) quality control; (2) more inhouse databases; (3) changes in database visuals; (4) pricing policies; and (5) market changes. A number of favorable and unfavorable changes unlikely to occur within five years are also noted. (MES)
USDA-ARS?s Scientific Manuscript database
Epidemiologic studies show inverse associations between flavonoid intake and chronic disease risk. However, a lack of comprehensive databases of the flavonoid content of foods has hindered efforts to fully characterize population intake. Using a newly released database of flavonoid values, we soug...
The Database Business: Managing Today--Planning for Tomorrow. Issues and Futures.
ERIC Educational Resources Information Center
Aitchison, T. M.; And Others
1988-01-01
Current issues and the future of the database business are discussed in five papers. Topics covered include aspects relating to the quality of database production; international ownership in the U.S. information marketplace; an overview of pricing strategies in the electronic information industry; and pricing issues from the viewpoints of online…
An Autonomic Framework for Integrating Security and Quality of Service Support in Databases
ERIC Educational Resources Information Center
Alomari, Firas
2013-01-01
The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…
Bridging the Gap between the Data Base and User in a Distributed Environment.
ERIC Educational Resources Information Center
Howard, Richard D.; And Others
1989-01-01
The distribution of databases physically separates users from those who administer the databases and perform database administration. By drawing on the work of social scientists in reliability and validity, a set of concepts and a list of questions to ensure data quality were developed. (Author/MLW)
Linking microarray reporters with protein functions
Gaj, Stan; van Erk, Arie; van Haaften, Rachel IM; Evelo, Chris TA
2007-01-01
Background: The analysis of microarray experiments requires accurate and up-to-date functional annotation of the microarray reporters to optimize the interpretation of the biological processes involved. Pathway visualization tools are used to connect gene expression data with existing biological pathways by using specific database identifiers that link reporters with elements in the pathways. Results: This paper proposes a novel method that aims to improve microarray reporter annotation by BLASTing the original reporter sequences against a species-specific EMBL subset that was derived from, and crosslinked back to, the highly curated UniProt database. The resulting alignments were filtered using high quality alignment criteria and further compared with the outcome of a more traditional approach, where reporter sequences were BLASTed against EnsEMBL followed by locating the corresponding protein (UniProt) entry for the high quality hits. Combining the results of both methods resulted in successful annotation of > 58% of all reporter sequences with UniProt IDs on two commercial array platforms, increasing the number of Incyte reporters that could be coupled to Gene Ontology terms from 32.7% to 58.3% and to a local GenMAPP pathway from 9.6% to 16.7%. For Agilent, 35.3% of the total reporters are now linked to GO nodes and 7.1% to local pathways. Conclusion: Our methods increased the annotation quality of microarray reporter sequences and allowed us to visualize more reporters using pathway visualization tools. Even in cases where the original reporter annotation showed the correct description, the new identifiers often allowed improved pathway and Gene Ontology linking. These methods are freely available at http://www.bigcat.unimaas.nl/public/publications/Gaj_Annotation/. PMID:17897448
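The "high quality alignment criteria" filtering step might look like the following sketch; the exact thresholds are not given in the abstract, so the 98% identity and 50-nucleotide alignment-length cutoffs are assumptions for illustration:

```python
# Hedged sketch of filtering BLAST tabular (outfmt 6 style) hits by quality
# criteria; thresholds are illustrative assumptions, not the paper's values.
def filter_blast_hits(tabular_lines, min_identity=98.0, min_aln_len=50):
    """Parse BLAST tabular lines and keep reporter->UniProt links that pass."""
    kept = {}
    for line in tabular_lines:
        cols = line.rstrip("\n").split("\t")
        reporter, uniprot_id = cols[0], cols[1]
        identity, aln_len = float(cols[2]), int(cols[3])  # pident, length
        if identity >= min_identity and aln_len >= min_aln_len:
            kept.setdefault(reporter, set()).add(uniprot_id)
    return kept

lines = ["rep_001\tP12345\t99.2\t60\t...",  # passes both cutoffs
         "rep_002\tQ99999\t91.0\t45\t..."]  # fails both cutoffs
print(filter_blast_hits(lines))
```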
A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6).
Darwish, Omar; Li, Shuxian; May, Zane; Matthews, Benjamin; Alkharouf, Nadim W
2016-01-01
Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed composition, and increasing frequencies of moldy and/or split beans. To facilitate investigation of the genetic basis of fungal virulence factors and to understand the mechanism of disease development, we designed and developed a database for P. longicolla isolate MSPL 10-6 that contains information about the genome assemblies (contigs), gene models, gene descriptions and GO functional ontologies. A web-based front end to the database was built using ASP.NET, which allows researchers to search and mine the genome of this important fungus. This database represents the first reported genome database for a seed-borne fungal pathogen in the Diaporthe–Phomopsis complex. The database will also be a valuable resource for the research and agricultural communities and will aid in the development of new control strategies for this pathogen. http://bioinformatics.towson.edu/Phomopsis_longicolla/HomePage.aspx
Kuhn, Stefan; Schlörer, Nils E
2015-08-01
With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while granting full access to nmrshiftdb2's World Wide Web database. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, and enables the download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database. For staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics function for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and the database are based on a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.
Compilation and analysis of multiple groundwater-quality datasets for Idaho
Hundt, Stephen A.; Hopkins, Candice B.
2018-05-09
Groundwater is an important source of drinking and irrigation water throughout Idaho, and groundwater quality is monitored by various Federal, State, and local agencies. The historical, multi-agency records of groundwater quality constitute a valuable dataset that has yet to be compiled or analyzed on a statewide level. The purpose of this study is to combine groundwater-quality data from multiple sources into a single database, to summarize this dataset, and to perform bulk analyses to reveal spatial and temporal patterns of water quality throughout Idaho. Data were retrieved from the Water Quality Portal (https://www.waterqualitydata.us/), the Idaho Department of Environmental Quality, and the Idaho Department of Water Resources. Analyses included counting the number of times a sample location had concentrations above Maximum Contaminant Levels (MCLs), performing trend tests, and calculating correlations between water-quality analytes. The water-quality database and the analysis results are available through USGS ScienceBase (https://doi.org/10.5066/F72V2FBG).
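A sketch of the bulk analyses described is shown below; the column names are hypothetical, and while the nitrate and arsenic limits shown match the US EPA MCLs as commonly cited, they should be verified before reuse:

```python
# Sketch of MCL-exceedance counting and analyte correlation on a toy table.
import pandas as pd

mcl = {"nitrate_mg_L": 10.0, "arsenic_ug_L": 10.0}  # Maximum Contaminant Levels

df = pd.DataFrame({
    "site_id":      ["A", "A", "B", "B"],
    "nitrate_mg_L": [3.2, 12.5, 8.9, 1.1],
    "arsenic_ug_L": [2.0, 15.0, 4.0, 1.0],
})

# Count, per sample location, how many samples exceeded each MCL
exceedances = {analyte: df[df[analyte] > limit].groupby("site_id").size()
               for analyte, limit in mcl.items()}
print(exceedances["nitrate_mg_L"])  # site A exceeded once

# Correlations between water-quality analytes across all samples
print(df[["nitrate_mg_L", "arsenic_ug_L"]].corr(method="spearman"))
```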
Kamali, Parisa; Zettervall, Sara L; Wu, Winona; Ibrahim, Ahmed M S; Medin, Caroline; Rakhorst, Hinne A; Schermerhorn, Marc L; Lee, Bernard T; Lin, Samuel J
2017-04-01
Research derived from large-volume databases plays an increasing role in the development of clinical guidelines and health policy. In breast cancer research, the Surveillance, Epidemiology and End Results, National Surgical Quality Improvement Program, and Nationwide Inpatient Sample databases are widely used. This study aims to compare the trends in immediate breast reconstruction and identify the drawbacks and benefits of each database. Patients with invasive breast cancer and ductal carcinoma in situ were identified from each database (2005-2012). Trends of immediate breast reconstruction over time were evaluated. Patient demographics and comorbidities were compared. Subgroup analysis of immediate breast reconstruction use per race was conducted. Within the three databases, 1.2 million patients were studied. Immediate breast reconstruction in invasive breast cancer patients increased significantly over time in all databases. A similar significant upward trend was seen in ductal carcinoma in situ patients. Significant differences in immediate breast reconstruction rates were seen among races, and the disparity differed among the three databases. Rates of comorbidities were similar among the three databases. There has been a significant increase in immediate breast reconstruction; however, the extent of the reporting of overall immediate breast reconstruction rates and of racial disparities differs significantly among databases. The Nationwide Inpatient Sample and the National Surgical Quality Improvement Program report similar findings, with the Surveillance, Epidemiology and End Results database reporting results significantly lower in several categories. These findings suggest that results from the Surveillance, Epidemiology and End Results database may not be universally generalizable to the entire U.S. population.
DPP-4 inhibitors for the treatment of type 2 diabetes: a methodology overview of systematic reviews.
Ling, Juan; Ge, Long; Zhang, Ding-Hua; Wang, Yong-Feng; Xie, Zhuo-Lin; Tian, Jin-Hui; Xiao, Xiao-Hui; Yang, Ke-Hu
2018-06-01
To evaluate the methodological quality of systematic reviews (SRs), and to summarize evidence of important outcomes from dipeptidyl peptidase-4 inhibitors (DPP4-I) in treating type 2 diabetes mellitus (T2DM). We included SRs of DPP4-I for the treatment of T2DM published up to January 2018 by searching the Cochrane Library, PubMed, EMBASE and three Chinese databases. We evaluated methodological quality with the AMSTAR (Assessing the Methodological Quality of Systematic Reviews) tool and the GRADE (Grading of Recommendations Assessment, Development and Evaluation) approach. Sixty-three SRs (with a total of 2,603,140 participants receiving DPP4-I for the treatment of T2DM) were included. The AMSTAR results showed that the item fulfilled least often was "a list of studies (included and excluded)", provided by only one (1.6%) study, followed by "providing an a priori design", with only four (6.3%) studies conforming, and "the status of publication (gray literature) used as an inclusion criterion", with only 18 (28.9%) studies conforming. Only seven (11.1%) studies scored more than nine points on AMSTAR, indicating high methodological quality. For GRADE, of the 128 outcomes, high quality evidence was provided in only 28 (21.9%), moderate in 70 (54.7%), low in 27 (21.1%), and very low in three (2.3%). The methodological quality of SRs of DPP4-I for type 2 diabetes mellitus is not high and there are common areas for improvement. Furthermore, the overall quality of evidence is moderate and more high quality evidence is needed.
A high-throughput Sanger strategy for human mitochondrial genome sequencing
2013-01-01
Background: A population reference database of complete human mitochondrial genome (mtGenome) sequences is needed to enable the use of mitochondrial DNA (mtDNA) coding region data in forensic casework applications. However, the development of entire mtGenome haplotypes to forensic data quality standards is difficult and laborious. A Sanger-based amplification and sequencing strategy that is designed for automated processing, yet routinely produces high quality sequences, is needed to facilitate high-volume production of these mtGenome data sets. Results: We developed a robust 8-amplicon Sanger sequencing strategy that regularly produces complete, forensic-quality mtGenome haplotypes in the first pass of data generation. The protocol works equally well on samples representing diverse mtDNA haplogroups and DNA input quantities ranging from 50 pg to 1 ng, and can be applied to specimens of varying DNA quality. The complete workflow was specifically designed for implementation on robotic instrumentation, which increases throughput and reduces both the opportunities for error inherent to manual processing and the cost of generating full mtGenome sequences. Conclusions: The described strategy will assist efforts to generate complete mtGenome haplotypes which meet the highest data quality expectations for forensic genetic and other applications. Additionally, high-quality data produced using this protocol can be used to assess mtDNA data developed using newer technologies and chemistries. Further, the amplification strategy can be used to enrich for mtDNA as a first step in sample preparation for targeted next-generation sequencing. PMID:24341507
Sequencing artifacts in the type A influenza databases and attempts to correct them.
Suarez, David L; Chester, Nikki; Hatfield, Jason
2014-07-01
There are over 276 000 influenza gene sequences in public databases, with the quality of the sequences determined by the contributor. As part of a high school class project, influenza sequences with possible errors were identified in the public databases based on the size of the gene being longer than expected, with the hypothesis that these sequences would contain an error. Students contacted sequence submitters to alert them to the possible sequence issue(s) and requested that the suspect sequence(s) be corrected as appropriate. Type A influenza viruses were screened, and gene segments longer than the accepted size were identified for further analysis. Attention was placed on sequences with additional nucleotides upstream or downstream of the highly conserved non-coding ends of the viral segments. A total of 1081 sequences were identified that met this criterion. Three types of errors were commonly observed: non-influenza primer sequence was not removed from the sequence; the PCR product was cloned and plasmid sequence was included in the sequence; and Taq polymerase added an adenine at the end of the PCR product. Internal insertions of nucleotide sequence were also commonly observed, but in many cases it was unclear whether the sequence was correct or actually contained an error. A total of 215 sequences, or 22.8% of the suspect sequences, were corrected in the public databases in the first year of the student project. Unfortunately, 138 additional sequences with possible errors were added to the databases in the second year. Additional awareness of the need for data integrity of sequences submitted to public databases is needed to fully reap the benefits of these large data sets. © 2014 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
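The screening idea can be illustrated as follows; the expected segment lengths are approximate full-length values for influenza A, used here only as placeholders for the study's actual cutoffs:

```python
# Illustrative reimplementation of the screening idea: flag deposited segment
# sequences longer than the accepted segment length. Lengths are approximate
# placeholders, not the study's actual cutoffs.
EXPECTED_MAX_LEN = {"PB2": 2341, "PB1": 2341, "PA": 2233, "HA": 1778,
                    "NP": 1565, "NA": 1413, "M": 1027, "NS": 890}

def flag_overlong(records):
    """records: iterable of (accession, segment, sequence) tuples."""
    for accession, segment, seq in records:
        limit = EXPECTED_MAX_LEN.get(segment)
        if limit is not None and len(seq) > limit:
            yield accession, segment, len(seq) - limit  # extra nucleotides

records = [("ABC00001", "M", "A" * 1040),   # 13 nt too long -> flagged
           ("ABC00002", "NA", "A" * 1400)]  # within range -> ignored
for acc, seg, extra in flag_overlong(records):
    print(f"{acc} segment {seg}: {extra} nt longer than expected")
```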
Validation and extraction of molecular-geometry information from small-molecule databases.
Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N
2017-02-01
A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.
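Criteria (i) and (ii) amount to simple record-level filters, sketched below with hypothetical field names; the real pipeline also runs the bond-length, valence and collision checks and the moment-based statistics described above:

```python
# Hedged sketch of the entry-level selection filters; field names are
# hypothetical, and this covers only criteria (i) and (ii) plus occupancy.
MAX_RESOLUTION = 0.84  # angstroms; keep entries solved at 0.84 A or better

def passes_selection(entry):
    if entry["resolution"] > MAX_RESOLUTION:            # criterion (i)
        return False
    if entry["method"] != "single-crystal":             # criterion (ii)
        return False
    if any(occ < 1.0 for occ in entry["occupancies"]):  # full occupancy only
        return False
    return True

entry = {"resolution": 0.79, "method": "single-crystal",
         "occupancies": [1.0, 1.0]}
print(passes_selection(entry))  # True
```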
Wasson, Lauren T; Cusmano, Amberle; Meli, Laura; Louh, Irene; Falzon, Louise; Hampsey, Meghan; Young, Geoffrey; Shaffer, Jonathan; Davidson, Karina W
2016-12-06
Concerns exist about the current quality of undergraduate medical education and its effect on students' well-being. To identify best practices for undergraduate medical education learning environment interventions that are associated with improved emotional well-being of students. Learning environment interventions were identified by searching the biomedical electronic databases Ovid MEDLINE, EMBASE, the Cochrane Library, and ERIC from database inception dates to October 2016. Studies examined any intervention designed to promote medical students' emotional well-being in the setting of a US academic medical school, with an outcome defined as students' reports of well-being as assessed by surveys, semistructured interviews, or other quantitative methods. Two investigators independently reviewed abstracts and full-text articles. Data were extracted into tables to summarize results. Study quality was assessed by the Medical Education Research Study Quality Instrument (MERSQI), which has a possible range of 5 to 18; higher scores indicate higher design and methods quality, and a score of 14 or higher indicates a high-quality study. Twenty-eight articles including at least 8224 participants met eligibility criteria. Study designs included single-group cross-sectional or posttest only (n = 10), single-group pretest/posttest (n = 2), nonrandomized 2-group (n = 13), and randomized clinical trial (n = 3); 89.2% were conducted at a single site, and the mean MERSQI score for all studies was 10.3 (SD, 2.11; range, 5-13). Studies encompassed a variety of interventions, including those focused on pass/fail grading systems (n = 3; mean MERSQI score, 12.0), mental health programs (n = 4; mean MERSQI score, 11.9), mind-body skills programs (n = 7; mean MERSQI score, 11.3), curriculum structure (n = 3; mean MERSQI score, 9.5), multicomponent program reform (n = 5; mean MERSQI score, 9.4), wellness programs (n = 4; mean MERSQI score, 9.0), and advising/mentoring programs (n = 3; mean MERSQI score, 8.2). In this systematic review, limited evidence suggested that some specific learning environment interventions were associated with improved emotional well-being among medical students. However, the overall quality of the evidence was low, highlighting the need for high-quality medical education research.
E-MSD: improving data deposition and structure quality.
Tagari, M; Tate, J; Swaminathan, G J; Newman, R; Naim, A; Vranken, W; Kapopoulou, A; Hussain, A; Fillon, J; Henrick, K; Velankar, S
2006-01-01
The Macromolecular Structure Database (MSD) (http://www.ebi.ac.uk/msd/) [H. Boutselakis, D. Dimitropoulos, J. Fillon, A. Golovin, K. Henrick, A. Hussain, J. Ionides, M. John, P. A. Keller, E. Krissinel et al. (2003) E-MSD: the European Bioinformatics Institute Macromolecular Structure Database. Nucleic Acids Res., 31, 458-462.] group is one of the three partners in the worldwide Protein DataBank (wwPDB), the consortium entrusted with the collation, maintenance and distribution of the global repository of macromolecular structure data [H. Berman, K. Henrick and H. Nakamura (2003) Announcing the worldwide Protein Data Bank. Nature Struct. Biol., 10, 980.]. Since its inception, the MSD group has worked with partners around the world to improve the quality of PDB data, through a clean-up programme that addresses inconsistencies and inaccuracies in the legacy archive. The improvements in data quality in the legacy archive have been achieved largely through the creation of a unified data archive, in the form of a relational database that stores all of the data in the wwPDB. The three partners are working towards improving the tools and methods for the deposition of new data by the community at large. The implementation of the MSD database, together with the parallel development of improved tools and methodologies for data harvesting, validation and archival, has led to significant improvements in the quality of data that enters the archive. Through this and related projects in the NMR and EM realms, the MSD continues to improve the quality of publicly available structural data.
Economic evaluation of Varicella vaccination: results of a systematic review
Unim, Brigid; Saulle, Rosella; Boccalini, Sara; Taddei, Cristina; Ceccherini, Vega; Boccia, Antonio; Bonanni, Paolo; La Torre, Giuseppe
2013-01-01
Introduction: The aim of the present study is to review the economic burden of varicella disease and the benefit of universal varicella vaccination in different settings, pending its implementation in all Italian regions. Materials and Methods: Research was conducted using the PubMed, Scopus and ISI databases. Quality scoring and data extraction were performed for all included studies. Results: Twenty-three articles met the criteria: 15 cost-effectiveness, 8 cost-benefit and one cost-utility analysis. Varicella vaccination could save society from €637,762 annually (infant strategy) to €53 million annually (combined infant and adolescent strategy). The median and mean quality scores were 91.8% and 85.4%, respectively; 11 studies were considered of high quality and 12 of low quality. Discussion: The studies are favorable to the introduction of universal varicella vaccination in Italy, it being cost saving and having a positive impact on morbidity. The quality scores of the studies varied greatly; recent analyses were of comparable quality to older studies. PMID:23823940
Naranjo-Gil, David; Ruiz-Muñoz, David
2015-01-01
Healthcare supply expenses consume a large part of the financial resources allocated to public health. The aim of this study was to analyze the use of a benchmarking process in the management of hospital purchases, as well as its effect on product cost reduction and quality improvement. Data were collected through a survey conducted in 29 primary healthcare districts from 2010 to 2011, and through a healthcare database on the prices, quality, delivery time and supplier characteristics of 5373 products. The use of benchmarking processes reduced or eliminated products with a low quality and high price. These processes increased the quality of products by 10.57% and reduced their purchase price by 28.97%. The use of benchmarking by healthcare centers can reduce expenditure and allow more efficient management of the healthcare supply chain. It also facilitated the acquisition of products at lower prices and higher quality. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
Improving quality: bridging the health sector divide.
Pringle, Mike
2003-12-01
All too often, quality assurance looks at just one small part of the complex system that is health care. Yet each individual patient has one set of experiences and outcomes, often involving a range of health professionals in a number of settings across multiple sectors. In order to solve the problems of this complexity, we need to establish high-quality electronic recording in each of the settings. In the UK, primary care has been leading the way in adopting information technology and can now use databases for individual clinical care, for quality assurance using significant event and conventional auditing, and for research. Before we can understand and quality-assure the whole health care system, we need electronic patient records in all settings and good communication to build a summary electronic health record for each patient. Such an electronic health record will be under the control of the patient concerned, will be shared with the explicit consent of the patient, and will form the vehicle for quality assurance across all sectors of the health service.
Use of electronic medical record data for quality improvement in schizophrenia treatment.
Owen, Richard R; Thrush, Carol R; Cannon, Dale; Sloan, Kevin L; Curran, Geoff; Hudson, Teresa; Austen, Mark; Ritchie, Mona
2004-01-01
An understanding of the strengths and limitations of automated data is valuable when using administrative or clinical databases to monitor and improve the quality of health care. This study discusses the feasibility and validity of using data electronically extracted from the Veterans Health Administration (VHA) computer database (VistA) to monitor guideline performance for inpatient and outpatient treatment of schizophrenia. The authors also discuss preliminary results and their experience in applying these methods to monitor antipsychotic prescribing using the South Central VA Healthcare Network (SCVAHCN) Data Warehouse as a tool for quality improvement.
Mining Quality Phrases from Massive Text Corpora
Liu, Jialu; Shang, Jingbo; Wang, Chi; Ren, Xiang; Han, Jiawei
2015-01-01
Text data are ubiquitous and play an essential role in big data applications. However, text data are mostly unstructured. Transforming unstructured text into structured units (e.g., semantically meaningful phrases) will substantially reduce semantic ambiguity and enhance the power and efficiency at manipulating such data using database technology. Thus mining quality phrases is a critical research problem in the field of databases. In this paper, we propose a new framework that extracts quality phrases from text corpora integrated with phrasal segmentation. The framework requires only limited training but the quality of phrases so generated is close to human judgment. Moreover, the method is scalable: both computation time and required space grow linearly as corpus size increases. Our experiments on large text corpora demonstrate the quality and efficiency of the new method. PMID:26705375
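As a toy illustration only: counting frequent bigram candidates is the crudest first step toward phrase mining, standing in for the paper's far more sophisticated framework, which couples quality estimation with phrasal segmentation and limited supervision:

```python
# Toy sketch: frequent bigram candidates as a crude stand-in for the paper's
# quality-phrase mining framework.
from collections import Counter

def bigram_candidates(docs, min_support=2):
    """Return bigrams occurring at least min_support times across docs."""
    counts = Counter()
    for doc in docs:
        tokens = doc.lower().split()
        counts.update(zip(tokens, tokens[1:]))
    return {" ".join(bg): c for bg, c in counts.items() if c >= min_support}

docs = ["database systems support data mining",
        "data mining of massive text corpora",
        "quality phrases improve data mining"]
print(bigram_candidates(docs))  # {'data mining': 3}
```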
Rolston, John D; Han, Seunggu J; Chang, Edward F
2017-03-01
The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) provides a rich database of North American surgical procedures and their complications. Yet no external source has validated the accuracy of the information within this database. Using records from the 2006 to 2013 NSQIP database, we used two methods to identify errors: (1) mismatches between the Current Procedural Terminology (CPT) code that was used to identify the surgical procedure, and the International Classification of Diseases (ICD-9) post-operative diagnosis: i.e., a diagnosis that is incompatible with a certain procedure. (2) Primary anesthetic and CPT code mismatching: i.e., anesthesia not indicated for a particular procedure. Analyzing data for movement disorders, epilepsy, and tumor resection, we found evidence of CPT code and postoperative diagnosis mismatches in 0.4-100% of cases, depending on the CPT code examined. When analyzing anesthetic data from brain tumor, epilepsy, trauma, and spine surgery, we found evidence of miscoded anesthesia in 0.1-0.8% of cases. National databases like NSQIP are an important tool for quality improvement. Yet all databases are subject to errors, and measures of internal consistency show that errors affect up to 100% of case records for certain procedures in NSQIP. Steps should be taken to improve data collection on the frontend of NSQIP, and also to ensure that future studies with NSQIP take steps to exclude erroneous cases from analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
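The first consistency check can be sketched as a lookup against a CPT-to-diagnosis compatibility map; the fragment below uses an illustrative two-entry map, not a clinical reference:

```python
# Sketch of internal-consistency checking in the spirit of the study; the
# CPT/ICD-9 compatibility map is an illustrative fragment only.
COMPATIBLE_DX = {
    "61510": {"191.9", "225.0"},  # craniotomy for tumor -> brain neoplasm codes
    "61863": {"332.0", "333.1"},  # DBS electrode -> movement disorder codes
}

def find_mismatches(cases):
    """cases: iterable of dicts with 'id', 'cpt', 'postop_icd9'."""
    for case in cases:
        allowed = COMPATIBLE_DX.get(case["cpt"])
        if allowed is not None and case["postop_icd9"] not in allowed:
            yield case["id"], case["cpt"], case["postop_icd9"]

cases = [{"id": 1, "cpt": "61510", "postop_icd9": "191.9"},
         {"id": 2, "cpt": "61863", "postop_icd9": "191.9"}]  # mismatch
print(list(find_mismatches(cases)))  # [(2, '61863', '191.9')]
```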
Geospatial Science is increasingly becoming an important tool in making Agency decisions. Quality Control and Quality Assurance are required to be integrated during the planning, implementation and assessment of geospatial databases, processes and products. In order to ensure Age...
NASA Technical Reports Server (NTRS)
Ramirez, Eric; Gutheinz, Sandy; Brison, James; Ho, Anita; Allen, James; Ceritelli, Olga; Tobar, Claudia; Nguyen, Thuykien; Crenshaw, Harrel; Santos, Roxann
2008-01-01
Supplier Management System (SMS) allows for a consistent, agency-wide performance rating system for suppliers used by NASA. This version (2.0) combines separate databases into one central database that allows for the sharing of supplier data. Information extracted from the NBS/Oracle database can be used to generate ratings. Also, supplier ratings can now be generated in the areas of cost, product quality, delivery, and audit data. Supplier data can be charted based on real-time user input. Based on these individual ratings, an overall rating can be generated. Data that normally would be stored in multiple databases, each requiring its own log-in, is now readily available and easily accessible with only one log-in required. Additionally, the database can accommodate the storage and display of quality-related data that can be analyzed and used in the supplier procurement decision-making process. Moreover, the software allows for a Closed-Loop System (supplier feedback), as well as the capability to communicate with other federal agencies.
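Combining the per-area ratings into an overall rating might look like the sketch below; SMS's actual weighting scheme is not described here, so the equal weights are an assumption:

```python
# Hedged sketch of combining per-area supplier ratings into an overall score;
# equal weights are an assumption, not SMS's documented scheme.
WEIGHTS = {"cost": 0.25, "product_quality": 0.25, "delivery": 0.25, "audit": 0.25}

def overall_rating(area_ratings):
    """area_ratings: dict area -> score on a common scale (e.g., 0-100)."""
    return sum(WEIGHTS[area] * score for area, score in area_ratings.items())

print(overall_rating({"cost": 90, "product_quality": 85,
                      "delivery": 70, "audit": 95}))  # 85.0
```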
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
Data Preparation Process for the Buildings Performance Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Travis; Dunn, Laurel; Mercado, Andrea
2014-06-30
The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. The data preparation process takes the most effort and time; therefore, most of the cleansing process has been automated. The process also needs to adapt as more data are contributed to the BPD and as building technologies evolve over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
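The three steps named above (schema mapping, rule-based cleansing, derivation of totals) can be pictured as a small per-record pipeline. The sketch below is a hedged illustration: the raw field names, the plausibility threshold, and the stand-in for BEDES mapping are invented, though the kWh-to-kBtu and therm-to-kBtu conversion factors are standard.

```python
# Hedged sketch of the three preparation steps: map raw fields onto a common
# schema (standing in for BEDES), apply a cleansing rule, derive energy totals.
# Field names and the plausibility threshold are invented for illustration.

FIELD_MAP = {"elec_kwh": "electricity_kwh", "gas_therm": "gas_therms", "sqft": "floor_area_ft2"}

def prepare(record):
    mapped = {FIELD_MAP.get(k, k): v for k, v in record.items()}
    # Cleansing rule of thumb: drop records with implausible floor area.
    if not (100 <= mapped.get("floor_area_ft2", 0) <= 5_000_000):
        return None
    # Derive total site energy (kBtu): 3.412 kBtu/kWh and 100 kBtu/therm.
    mapped["site_energy_kbtu"] = (mapped.get("electricity_kwh", 0) * 3.412
                                  + mapped.get("gas_therms", 0) * 100.0)
    return mapped

print(prepare({"elec_kwh": 120_000, "gas_therm": 4_000, "sqft": 50_000}))
```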
MicRhoDE: a curated database for the analysis of microbial rhodopsin diversity and evolution
Boeuf, Dominique; Audic, Stéphane; Brillet-Guéguen, Loraine; Caron, Christophe; Jeanthon, Christian
2015-01-01
Microbial rhodopsins are a diverse group of photoactive transmembrane proteins found in all three domains of life and in viruses. Today, microbial rhodopsin research is a flourishing research field in which new understandings of rhodopsin diversity, function and evolution are contributing to broader microbiological and molecular knowledge. Here, we describe MicRhoDE, a comprehensive, high-quality and freely accessible database that facilitates analysis of the diversity and evolution of microbial rhodopsins. Rhodopsin sequences isolated from a vast array of marine and terrestrial environments were manually collected and curated. To each rhodopsin sequence are associated related metadata, including predicted spectral tuning of the protein, putative activity and function, taxonomy for sequences that can be linked to a 16S rRNA gene, sampling date and location, and supporting literature. The database currently covers 7857 aligned sequences from more than 450 environmental samples or organisms. Based on a robust phylogenetic analysis, we introduce an operational classification system with multiple phylogenetic levels ranging from superclusters to species-level operational taxonomic units. An integrated pipeline for online sequence alignment and phylogenetic tree construction is also provided. With a user-friendly interface and integrated online bioinformatics tools, this unique resource should be highly valuable for upcoming studies of the biogeography, diversity, distribution and evolution of microbial rhodopsins. Database URL: http://micrhode.sb-roscoff.fr. PMID:26286928
The Danish Neuro-Oncology Registry: establishment, completeness and validity.
Hansen, Steinbjørn; Nielsen, Jan; Laursen, René J; Rasmussen, Birthe Krogh; Nørgård, Bente Mertz; Gradel, Kim Oren; Guldberg, Rikke
2016-08-30
The Danish Neuro-Oncology Registry (DNOR) is a nationwide clinical cancer database that has prospectively registered data on patients with gliomas since January 2009. The purpose of this study was to describe the establishment of the DNOR and further to evaluate the database completeness of patient registration and validity of data. The completeness of the number of patients registered in the database was evaluated in the study period from January 2009 through December 2014 by comparing cases reported to the DNOR with the Danish National Patient Registry and the Danish Pathology Registry. The data validity of important clinical variables was evaluated in a random sample of 100 patients from the DNOR using the medical records as reference. A total of 2241 patients were registered in the DNOR by December 2014 with an overall patient completeness of 92%, which increased during the study period (from 78% in 2009 to 96% in 2014). Medical records were available for all patients in the validity analyses. Most variables showed a high agreement proportion (56-100%), with a fair to good chance-corrected agreement (kappa = 0.43-1.0). The completeness of patient registration was very high (92%) and the validity of the most important patient data was good. The DNOR is a newly established national database, which is a reliable source for future scientific studies and clinical quality assessments among patients with gliomas.
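Both validation measures used here, completeness against reference registries and chance-corrected agreement, are easy to make concrete. The sketch below computes a completeness ratio and Cohen's kappa for a binary variable; the reference count of 2436 is back-calculated from the reported 92% and is illustrative only.

```python
# Sketch of the two validation measures reported: completeness against
# reference registries and chance-corrected agreement (Cohen's kappa).

def completeness(n_registered, n_reference):
    return n_registered / n_reference

def cohens_kappa(pairs):
    """pairs: list of (registry_value, medical_record_value) binary labels."""
    n = len(pairs)
    po = sum(a == b for a, b in pairs) / n                    # observed agreement
    pa = sum(a for a, _ in pairs) / n
    pb = sum(b for _, b in pairs) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                        # chance agreement
    return (po - pe) / (1 - pe)

# 2436 is inferred from 2241 registered / 92% completeness, for illustration.
print(round(completeness(2241, 2436), 2))                     # -> 0.92
print(round(cohens_kappa([(1, 1), (1, 1), (0, 0), (1, 0), (0, 0)]), 2))
```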
Forster, Samuel C; Browne, Hilary P; Kumar, Nitin; Hunt, Martin; Denise, Hubert; Mitchell, Alex; Finn, Robert D; Lawley, Trevor D
2016-01-04
The Human Pan-Microbe Communities (HPMC) database (http://www.hpmcd.org/) provides a manually curated, searchable, metagenomic resource to facilitate investigation of human gastrointestinal microbiota. Over the past decade, the application of metagenome sequencing to elucidate the microbial composition and functional capacity present in the human microbiome has revolutionized many concepts in our basic biology. When sufficient high quality reference genomes are available, whole genome metagenomic sequencing can provide direct biological insights and high-resolution classification. The HPMC database provides species level, standardized phylogenetic classification of over 1800 human gastrointestinal metagenomic samples. This is achieved by combining a manually curated list of bacterial genomes from human faecal samples with over 21000 additional reference genomes representing bacteria, viruses, archaea and fungi with manually curated species classification and enhanced sample metadata annotation. A user-friendly, web-based interface provides the ability to search for (i) microbial groups associated with health or disease state, (ii) health or disease states and community structure associated with a microbial group, (iii) the enrichment of a microbial gene or sequence and (iv) enrichment of a functional annotation. The HPMC database enables detailed analysis of human microbial communities and supports research from basic microbiology and immunology to therapeutic development in human health and disease. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
The relationship between competition and quality in procedural cardiac care.
Glick, David B; Wroblewski, Kristen; Apfelbaum, Sean; Dauber, Benjamin; Woo, Joyce; Tung, Avery
2015-01-01
Anesthesiologists are frequently involved in efforts to meet perioperative quality metrics. The degree to which hospitals compete on publicly reported quality measures, however, is unclear. We hypothesized that hospitals in more competitive environments would be more likely to compete on quality and thus perform better on such measures. To test our hypothesis, we studied the relationship between competition and quality in hospitals providing procedural cardiac care and participating in a national quality database. For hospitals performing heart valve surgery (HVS) and delivering acute myocardial infarction (AMI) care in the Hospital Compare database, we assessed the degree of interhospital competition using both geographical radius and federally defined metropolitan statistical area (MSA). For each hospital, we then correlated the degree of competition with quality measure performance, mortality, patient volume, and per-patient Medicare costs for both HVS and AMI. Six hundred fifty-three hospitals met inclusion criteria for HVS and 1898 hospitals for AMI care. We found that for both definitions of competition, hospitals facing greater competition did not demonstrate better quality measure performance for either HVS or AMI. For both diagnoses, competition by number of hospitals correlated positively with cost: partial correlation coefficients = 0.40 (0.42 for MSA) (P < 0.001) for HVS and 0.52 (0.47 for MSA) (P < 0.001) for AMI. An analysis of the Hospital Compare database found that competition among hospitals correlated overall with increased Medicare costs but did not predict better scores on publicly reported quality metrics. Our results suggest that hospitals do not compete meaningfully on publicly reported quality metrics or costs.
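The partial correlations reported above measure the association between competition and cost after removing the influence of other hospital characteristics. A common way to compute one is the residual method, sketched below on synthetic data; the covariate (patient volume) and the effect sizes are invented for illustration.

```python
# Sketch of a partial correlation between competition and per-patient cost,
# controlling for a covariate, via the residual method; data are synthetic.
import numpy as np

def partial_corr(x, y, covariates):
    """Correlate the parts of x and y not explained by the covariates."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
volume = rng.normal(size=200)                     # covariate, e.g. patient volume
competition = rng.normal(size=200) + 0.5 * volume
cost = 0.4 * competition + 0.3 * volume + rng.normal(size=200)
print(round(partial_corr(competition, cost, volume[:, None]), 2))
```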
Kröger, Edeltraut; Tatar, Ovidiu; Vedel, Isabelle; Giguère, Anik M C; Voyer, Philippe; Guillaumie, Laurence; Grégoire, Jean-Pierre; Guénette, Line
2017-08-01
Background: Medication non-adherence may lead to poor therapeutic outcomes. Cognitive functions deteriorate with age, contributing to decreased adherence. Interventions have been tested to improve adherence in seniors with cognitive impairment or Alzheimer disease (AD), but high-quality systematic reviews are lacking. It remains unclear which interventions are promising. Objectives: We conducted a systematic review to identify, describe, and evaluate interventions aimed at improving medication adherence among seniors with any type of cognitive impairment. Methods: Following NICE guidance, databases and websites were searched using combinations of controlled and free vocabulary. All adherence-enhancing interventions and study designs were considered. Studies had to include community-dwelling seniors, aged 65 years or older, with cognitive impairment, receiving at least one medication for a chronic condition, and an adherence measure. Study characteristics and methodological quality were assessed. Results: We identified 13 interventions, including six RCTs. Two studies were of poor, nine of low/medium and two of high quality. Seven studies had sample sizes below 50 and six interventions focused on adherence to AD medication. Six interventions tested a behavioral, four a medication-oriented, two an educational and one a multi-faceted approach. Studies rarely assessed therapeutic outcomes. All but one intervention showed improved adherence. Conclusion: Three medium-quality studies showed better adherence with patches than with pills for AD treatment. Promising interventions used educational or reminding strategies, including one high-quality RCT. Nine studies were of low/moderate quality. High-quality RCTs using a theoretical framework for intervention selection are needed to identify strategies for improved adherence in these seniors.
Inpatient Volume and Quality of Mental Health Care Among Patients With Unipolar Depression.
Rasmussen, Line Ryberg; Mainz, Jan; Jørgensen, Mette; Videbech, Poul; Johnsen, Søren Paaske
2018-04-26
The relationship between inpatient volume and the quality of mental health care remains unclear. This study examined the association between inpatient volume in psychiatric hospital wards and quality of mental health care among patients with depression admitted to wards in Denmark. In a nationwide, population-based cohort study, 17,971 patients (N=21,120 admissions) admitted to psychiatric hospital wards between 2011 and 2016 were identified from the Danish Depression Database. Inpatient volume was categorized into quartiles according to the individual ward's average caseload volume per year during the study period: low volume (quartile 1, <102 inpatients per year), medium volume (quartile 2, 102-172 inpatients per year), high volume (quartile 3, 173-227 inpatients per year) and very high volume (quartile 4, >227 inpatients per year). Quality of mental health care was assessed by receipt of process performance measures reflecting national clinical guidelines for care of depression. Compared with patients admitted to low-volume psychiatric hospital wards, patients admitted to very-high-volume wards were more likely to receive a high overall quality of mental health care (≥80% of the recommended process performance measures) (adjusted relative risk [ARR]=1.78, 95% confidence interval [CI]=1.02-3.09) as well as individual processes of care, including a somatic examination (ARR=1.35, CI=1.03-1.78). Admission to very-high-volume psychiatric hospital wards was associated with a greater chance of receiving guideline-recommended process performance measures for care of depression.
Cybermaterials: materials by design and accelerated insertion of materials
NASA Astrophysics Data System (ADS)
Xiong, Wei; Olson, Gregory B.
2016-02-01
Cybermaterials innovation entails an integration of Materials by Design and accelerated insertion of materials (AIM), which transfers studio ideation into industrial manufacturing. By assembling a hierarchical architecture of integrated computational materials design (ICMD) based on materials genomic fundamental databases, the ICMD mechanistic design models accelerate innovation. We here review progress in the development of linkage models of the process-structure-property-performance paradigm, as well as related design accelerating tools. Extending the materials development capability based on phase-level structural control requires more fundamental investment at the level of the Materials Genome, with focus on improving applicable parametric design models and constructing high-quality databases. Future opportunities in materials genomic research serving both Materials by Design and AIM are addressed.
Garraín, Daniel; Fazio, Simone; de la Rúa, Cristina; Recchioni, Marco; Lechón, Yolanda; Mathieux, Fabrice
2015-01-01
The aim of this paper is to identify areas of potential improvement of the European Reference Life Cycle Database (ELCD) electricity datasets. The revision is based on the data quality indicators described by the International Life Cycle Data system (ILCD) Handbook, applied on a sectorial basis. These indicators evaluate the technological, geographical and time-related representativeness of the dataset and the appropriateness in terms of completeness, precision and methodology. Results show that the ELCD electricity datasets have very good quality in general terms; nevertheless, some findings and recommendations to improve the quality of the Life Cycle Inventories have been derived. Moreover, these results give any LCA practitioner assurance of the quality of the electricity-related datasets, and provide insights into the limitations and assumptions underlying the modelling of the datasets. Given this information, the LCA practitioner will be able to decide whether the use of the ELCD electricity datasets is appropriate based on the goal and scope of the analysis to be conducted. The methodological approach would also be useful for dataset developers and reviewers, in order to improve the overall data quality requirements of databases.
Wimmer, Helge; Gundacker, Nina C; Griss, Johannes; Haudek, Verena J; Stättner, Stefan; Mohr, Thomas; Zwickl, Hannes; Paulitschke, Verena; Baron, David M; Trittner, Wolfgang; Kubicek, Markus; Bayer, Editha; Slany, Astrid; Gerner, Christopher
2009-06-01
Interpretation of proteome data with a focus on biomarker discovery largely relies on comparative proteome analyses. Here, we introduce a database-assisted interpretation strategy based on proteome profiles of primary cells. Both 2-D-PAGE and shotgun proteomics are applied, and we obtain high data concordance with these two different techniques. When applying mass analysis of tryptic spot digests from 2-D gels of cytoplasmic fractions, we typically identify several hundred proteins. Using the same protein fractions, we usually identify more than a thousand proteins by shotgun proteomics. The data consistency obtained when comparing these independent data sets exceeds 99% of the proteins identified in the 2-D gels. Many characteristic differences in protein expression of different cells can thus be independently confirmed. Our self-designed SQL database (CPL/MUW - database of the Clinical Proteomics Laboratories at the Medical University of Vienna, accessible via www.meduniwien.ac.at/proteomics/database) facilitates (i) quality management of MS-based protein identification data, (ii) the detection of cell type-specific proteins and (iii) the detection of molecular signatures of specific functional cell states. Here, we demonstrate how the interpretation of proteome profiles obtained from human liver tissue and hepatocellular carcinoma tissue is assisted by the CPL/MUW database. Therefore, we suggest that the use of reference experiments supported by a tailored database may substantially facilitate data interpretation of proteome profiling experiments.
Bagger, Frederik Otzen; Sasivarevic, Damir; Sohi, Sina Hadi; Laursen, Linea Gøricke; Pundhir, Sachin; Sønderby, Casper Kaae; Winther, Ole; Rapin, Nicolas; Porse, Bo T
2016-01-04
Research on human and murine haematopoiesis has resulted in a vast number of gene-expression data sets that can potentially answer questions regarding normal and aberrant blood formation. To researchers and clinicians with limited bioinformatics experience, these data have remained available, yet largely inaccessible. Current databases provide information about gene-expression but fail to answer key questions regarding co-regulation, genetic programs or effect on patient survival. To address these shortcomings, we present BloodSpot (www.bloodspot.eu), which includes and greatly extends our previously released database HemaExplorer, a database of gene expression profiles from FACS sorted healthy and malignant haematopoietic cells. A revised interactive interface simultaneously provides a plot of gene expression along with a Kaplan-Meier analysis and a hierarchical tree depicting the relationship between different cell types in the database. The database now includes 23 high-quality curated data sets relevant to normal and malignant blood formation and, in addition, we have assembled and built a unique integrated data set, BloodPool. BloodPool contains more than 2000 samples assembled from six independent studies on acute myeloid leukemia. Furthermore, we have devised a robust sample integration procedure that allows for sensitive comparison of user-supplied patient samples in a well-defined haematopoietic cellular space. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
The UMD-p53 database: new mutations and analysis tools.
Béroud, Christophe; Soussi, Thierry
2003-03-01
The tumor suppressor gene TP53 (p53) is the most extensively studied gene involved in human cancers. More than 1,400 publications have reported mutations of this gene in 150 cancer types for a total of 14,971 mutations. To exploit this huge bulk of data, specific analytic tools were highly warranted. We therefore developed a locus-specific database software called UMD-p53. This database compiles all somatic and germline mutations as well as polymorphisms of the TP53 gene which have been reported in the published literature since 1989, or unpublished data submitted to the database curators. The database is available at www.umd.necker.fr or at http://p53.curie.fr/. In this paper, we describe recent developments of the UMD-p53 database. These developments include new fields and routines. For example, the analysis of putative acceptor or donor splice sites is now automated and gives new insight into the causal role of "silent mutations." Other routines have also been created, such as the prescreening module, the UV module, and the cancer distribution module. These new improvements will help users not only for molecular epidemiology and pharmacogenetic studies but also for patient-based studies. To achieve these purposes, we have designed a procedure to check and validate data in order to reach the highest quality data. Copyright 2003 Wiley-Liss, Inc.
Supervised Learning for Detection of Duplicates in Genomic Sequence Databases.
Chen, Qingyu; Zobel, Justin; Zhang, Xiuzhen; Verspoor, Karin
2016-01-01
First identified as an issue in 1996, duplication in biological databases introduces redundancy and even leads to inconsistency when contradictory information appears. The amount of data makes purely manual de-duplication impractical, and existing automatic systems cannot detect duplicates as precisely as can experts. Supervised learning has the potential to address such problems by building automatic systems that learn from expert curation to detect duplicates precisely and efficiently. While machine learning is a mature approach in other duplicate detection contexts, it has seen only preliminary application in genomic sequence databases. We developed and evaluated a supervised duplicate detection method based on an expert-curated dataset of duplicates, containing over one million pairs across five organisms derived from genomic sequence databases. We selected 22 features to represent distinct attributes of the database records, and developed a binary model and a multi-class model. Both models achieve promising performance; under cross-validation, the binary model had over 90% accuracy in each of the five organisms, while the multi-class model maintains high accuracy and is more robust in generalisation. We performed an ablation study to quantify the impact of different sequence record features, finding that features derived from meta-data, sequence identity, and alignment quality impact performance most strongly. The study demonstrates that machine learning can be an effective additional tool for de-duplication of genomic sequence databases. All data are available as described in the supplementary material.
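The approach described, a feature-based classifier over record pairs trained against expert-curated duplicate labels, can be sketched compactly. Below, three synthetic features stand in for the paper's 22 record features, and a random forest stands in for whichever learner the authors used; both substitutions are assumptions for illustration.

```python
# Illustrative sketch of a binary duplicate/non-duplicate classifier over
# record-pair features; data, features, and model choice are all stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
# Synthetic features for record pairs: metadata similarity, sequence identity,
# and alignment coverage, each in [0, 1].
X = rng.uniform(size=(n, 3))
# Synthetic ground truth: pairs alike on all three features count as duplicates.
y = (X.mean(axis=1) > 0.7).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```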
miRSponge: a manually curated database for experimentally supported miRNA sponges and ceRNAs.
Wang, Peng; Zhi, Hui; Zhang, Yunpeng; Liu, Yue; Zhang, Jizhou; Gao, Yue; Guo, Maoni; Ning, Shangwei; Li, Xia
2015-01-01
In this study, we describe miRSponge, a manually curated database, which aims at providing an experimentally supported resource for microRNA (miRNA) sponges. Recent evidence suggests that miRNAs are themselves regulated by competing endogenous RNAs (ceRNAs) or 'miRNA sponges' that contain miRNA binding sites. These competitive molecules can sequester miRNAs to prevent them interacting with their natural targets to play critical roles in various biological and pathological processes. It has become increasingly important to develop a high quality database to record and store ceRNA data to support future studies. To this end, we have established the experimentally supported miRSponge database that contains data on 599 miRNA-sponge interactions and 463 ceRNA relationships from 11 species following manual curating from nearly 1200 published articles. Database classes include endogenously generated molecules including coding genes, pseudogenes, long non-coding RNAs and circular RNAs, along with exogenously introduced molecules including viral RNAs and artificial engineered sponges. Approximately 70% of the interactions were identified experimentally in disease states. miRSponge provides a user-friendly interface for convenient browsing, retrieval and downloading of dataset. A submission page is also included to allow researchers to submit newly validated miRNA sponge data. Database URL: http://www.bio-bigdata.net/miRSponge. © The Author(s) 2015. Published by Oxford University Press.
Parson, W; Gusmão, L; Hares, D R; Irwin, J A; Mayr, W R; Morling, N; Pokorak, E; Prinz, M; Salas, A; Schneider, P M; Parsons, T J
2014-11-01
The DNA Commission of the International Society of Forensic Genetics (ISFG) regularly publishes guidelines and recommendations concerning the application of DNA polymorphisms to the question of human identification. Previous recommendations published in 2000 addressed the analysis and interpretation of mitochondrial DNA (mtDNA) in forensic casework. While the foundations set forth in the earlier recommendations still apply, new approaches to the quality control, alignment and nomenclature of mitochondrial sequences, as well as the establishment of mtDNA reference population databases, have been developed. Here, we describe these developments and discuss their application to both mtDNA casework and mtDNA reference population databasing applications. While the generation of mtDNA for forensic casework has always been guided by specific standards, it is now well-established that data of the same quality are required for the mtDNA reference population data used to assess the statistical weight of the evidence. As a result, we introduce guidelines regarding sequence generation, as well as quality control measures based on the known worldwide mtDNA phylogeny, that can be applied to ensure the highest quality population data possible. For both casework and reference population databasing applications, the alignment and nomenclature of haplotypes is revised here and the phylogenetic alignment proffered as acceptable standard. In addition, the interpretation of heteroplasmy in the forensic context is updated, and the utility of alignment-free database searches for unbiased probability estimates is highlighted. Finally, we discuss statistical issues and define minimal standards for mtDNA database searches. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Ruusmann, Villu; Maran, Uko
2013-07-01
The scientific literature is an important source of experimental and chemical structure data. Very often these data have been harvested into smaller or larger data collections, leaving data quality and curation issues on the shoulders of users. The current research presents a systematic and reproducible workflow for collecting series of data points from the scientific literature and assembling a database that is suitable for the purposes of high-quality modelling and decision support. The quality assurance aspect of the workflow is concerned with the curation of both chemical structures and associated toxicity values at (1) the single data point level and (2) the collection of data points level. The assembly of a database employs a novel "timeline" approach. The workflow is implemented as a software solution and its applicability is demonstrated using the example of the Tetrahymena pyriformis acute aquatic toxicity endpoint. A literature collection of 86 primary publications for T. pyriformis was found to contain 2,072 chemical compounds and 2,498 unique toxicity values, which divide into 2,440 numerical and 58 textual values. Every chemical compound was assigned a preferred toxicity value. Examples of the most common chemical and toxicological data curation scenarios are discussed.
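The curation step that assigns each compound a single preferred toxicity value can be illustrated with a small selection rule. In the sketch below, "take the most recent numeric value" is an assumed stand-in for the paper's timeline-based criterion, which the abstract does not spell out.

```python
# Hedged sketch of assigning each compound one preferred toxicity value from
# several literature reports; the most-recent-numeric rule is an assumption,
# not the paper's actual timeline criterion.

def preferred_value(datapoints):
    """datapoints: list of dicts with 'value' (float or None) and 'year'."""
    numeric = [d for d in datapoints if isinstance(d["value"], (int, float))]
    if not numeric:
        return None
    return max(numeric, key=lambda d: d["year"])["value"]

reports = [
    {"value": 1.25, "year": 1998},
    {"value": None, "year": 2001},   # textual value, e.g. "not toxic"
    {"value": 1.18, "year": 2006},
]
print(preferred_value(reports))  # -> 1.18
```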
Nutritional modifications in male infertility: a systematic review covering 2 decades
Mohammadmoradi, Shayan; Javidan, Aida; Sadeghi, Mohammad Reza
2016-01-01
Context: Studies suggest that appropriate nutritional modifications can improve the natural conception rate of infertile couples. Objectives: The purpose of this study was to review the human trials that investigated the relation between nutrition and male infertility. Data Sources: A comprehensive systematic review of published human studies was carried out by searching scientific databases. Article selection was carried out in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses. The American Dietetic Association Research Design and Implementation Checklist was also used for quality assessment. Data Extraction: A total of 502 articles were identified, of which 23 studies met the inclusion criteria. Data Synthesis: Results indicated that a healthy diet improves at least one measure of semen quality, while diets high in lipophilic foods, soy isoflavones, and sweets lower semen quality. Conclusion: The role of daily nutrient exposure and dietary quality needs to be highlighted in male infertility. Mechanistic studies addressing the responsible underlying mechanisms of action of dietary modifications are highly warranted. Systematic Review Registration: PROSPERO 2013: CRD42013005953. Available at: http://www.crd.york.ac.uk/PROSPERO/display_record.asp?ID=CRD42013005953. PMID:26705308
TIPTOPbase: the Iron Project and the Opacity Project Atomic Database
NASA Astrophysics Data System (ADS)
Mendoza, Claudio; Nahar, Sultana; Pradhan, Anil; Seaton, Michael; Zeippen, Claude
2001-05-01
The Opacity Project, the IRON Project, and the RmaX Network [The Opacity Project Team, Vols. 1-2, IOPP, Bristol (1995, 1996); Hummer et al., Astron. Astrophys. 279, 298 (1993)] are international computational efforts concerned with the production of high quality atomic data for astrophysical applications. Research groups from Canada, France, Germany, UK, USA and Venezuela are involved. Extensive data sets containing accurate energy levels, f-values, A-values, photoionisation cross sections, collision strengths, recombination rates, and opacities have been computed for cosmically abundant elements using state-of-the-art atomic physics codes. Their volume, completeness and overall accuracy are presently unmatched in the field of laboratory astrophysics. Some of the data sets have been available since 1993 from a public on-line database service referred to as TOPbase [Cunto et al., Astron. Astrophys. 275, L5 (1993)], hosted at http://cdsweb.u-strasbg.fr/OP.html (CDS, France) and http://heasarc.gsfc.nasa.gov/topbase (NASA, USA). We are currently involved in a major effort to scale the existing database services to develop a robust platform for the high-profile dissemination of atomic data to the scientific community within the next 12 months. (Partial support from the NSF and NASA is acknowledged.)
Crystallography Open Database – an open-access collection of crystal structures
Gražulis, Saulius; Chateigner, Daniel; Downs, Robert T.; Yokochi, A. F. T.; Quirós, Miguel; Lutterotti, Luca; Manakova, Elena; Butkus, Justas; Moeck, Peter; Le Bail, Armel
2009-01-01
The Crystallography Open Database (COD), which is a project that aims to gather all available inorganic, metal–organic and small organic molecule structural data in one database, is described. The database adopts an open-access model. The COD currently contains ∼80 000 entries in crystallographic information file format, with nearly full coverage of the International Union of Crystallography publications, and is growing in size and quality. PMID:22477773
Font-Gonzalez, Anna; Mulder, Renée L; Loeffen, Erik A H; Byrne, Julianne; van Dulmen-den Broeder, Eline; van den Heuvel-Eibrink, Marry M; Hudson, Melissa M; Kenney, Lisa B; Levine, Jennifer M; Tissing, Wim J E; van de Wetering, Marianne D; Kremer, Leontien C M
2016-07-15
Fertility preservation care for children, adolescents, and young adults (CAYAs) with cancer is not uniform among practitioners. To ensure high-quality care, evidence-based clinical practice guidelines (CPGs) are essential. The authors identified existing CPGs for fertility preservation in CAYAs with cancer, evaluated their quality, and explored differences in recommendations. A systematic search in PubMed (January 2000-October 2014); guideline databases; and Web sites of oncology, pediatric, and fertility organizations was performed. Two reviewers evaluated the quality of the identified CPGs using the Appraisal of Guidelines for Research and Evaluation II Instrument (AGREE II). From high-quality CPGs, the authors evaluated concordant and discordant areas among the recommendations. A total of 25 CPGs regarding fertility preservation were identified. The average AGREE II domain scores (scale of 0%-100%) varied from 15% on applicability to 100% on clarity of presentation. The authors considered 8 CPGs (32%) to be of high quality, which was defined as scores ≥60% in any 4 domains. Large variations in the recommendations of the high-quality CPGs were observed, with 87.2% and 88.6%, respectively, of discordant guideline areas among the fertility preservation recommendations for female and male patients with cancer. Only approximately one-third of the identified CPGs were found to be of sufficient quality. Of these CPGs, the fertility preservation recommendations varied substantially, which can be a reflection of inadequate evidence for specific recommendations, thereby hindering the ability of providers to deliver high-quality care. CPGs including a transparent decision process for fertility preservation can help health care providers to deliver optimal and uniform care, thus improving the quality of life of CAYAs with cancer and cancer survivors. Cancer 2016;122:2216-23. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
Fernando, Shannon M; Vaillancourt, Christian; Morrow, Stanley; Stiell, Ian G
2018-07-01
Little is known regarding the quality of cardiopulmonary resuscitation (CPR) performed by bystanders in out-of-hospital cardiac arrest (OHCA). We sought to determine the quality of bystander CPR provided during OHCA using CPR quality data stored by Automated External Defibrillators (AEDs). We used the Resuscitation Outcomes Consortium database to identify OHCA cases of presumed cardiac etiology where an AED was utilized. We then matched AED data to each case identified. AED data were analyzed using manufacturer software in order to determine overall measures of bystander CPR quality, changes in bystander CPR quality over time, and adherence to existing 2010 Resuscitation Quality Guidelines. We identified 100 cases of OHCA of presumed cardiac etiology involving bystander CPR and with corresponding AED data. Mean age was 62.3 years, and 75% were male. Bystanders demonstrated high-quality CPR over all minutes of resuscitation, with a chest compression fraction of 76%, a compression depth of 5.3 cm, and a compression rate of 111.2 compressions/min. Mean perishock pause was 26.8 s. Adherence rates to 2010 Resuscitation Guidelines for compression rate and depth were found to be 66% and 55%, respectively. CPR quality was lowest in the first minute, resulting from increased delay to rhythm analysis (mean 40.7 s). In cases involving shock delivery, latency from initiation of the AED to shock delivery was 59.2 s. We found that bystanders perform high-quality CPR, with strong adherence rates to existing Resuscitation Guidelines. High-quality CPR is maintained over the first five minutes of resuscitation, but was lowest in the first minute. Copyright © 2018 Elsevier B.V. All rights reserved.
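The three headline metrics, compression rate, depth, and chest compression fraction, are all derivable from AED-recorded compression events. The sketch below assumes a simple event format (per-compression timestamps and depths) and a 2-second hands-off cutoff; both are illustrative choices, not the manufacturer software's actual algorithm. With the synthetic inputs shown it reproduces numbers close to those in the abstract (about 111/min, 5.3 cm, 76%).

```python
# Sketch of deriving CPR quality metrics from AED compression events;
# the event format and the 2 s hands-off cutoff are assumptions.

def cpr_metrics(compression_times_s, depths_cm, episode_length_s):
    """Compression rate (per min), mean depth (cm), chest compression fraction."""
    n = len(compression_times_s)
    hands_on_s = sum(
        min(t2 - t1, 2.0)  # gaps longer than ~2 s count as hands-off time
        for t1, t2 in zip(compression_times_s, compression_times_s[1:])
    )
    rate = 60.0 * (n - 1) / (compression_times_s[-1] - compression_times_s[0])
    depth = sum(depths_cm) / n
    ccf = hands_on_s / episode_length_s
    return rate, depth, ccf

times = [i * 0.54 for i in range(100)]   # ~111 compressions/min, synthetic
depths = [5.3] * 100
print(cpr_metrics(times, depths, episode_length_s=70.0))
```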
Maximizing the use of Special Olympics International's Healthy Athletes database: A call to action.
Lloyd, Meghann; Foley, John T; Temple, Viviene A
2018-02-01
There is a critical need for high-quality population-level data related to the health of individuals with intellectual disabilities. For more than 15 years Special Olympics International has been conducting free Healthy Athletes screenings at local, national and international events. The Healthy Athletes database is the largest known international database specifically on the health of people with intellectual disabilities; however, it is relatively under-utilized by the research community. A consensus meeting with two dozen North American researchers, stakeholders, clinicians and policymakers took place in Toronto, Canada. The purpose of the meeting was to: 1) establish the perceived utility of the database, and 2) identify and prioritize 3-5 specific priorities related to using the Healthy Athletes database to promote the health of individuals with intellectual disabilities. There was unanimous agreement from the meeting participants that this database represents an immense opportunity both from the data already collected, and data that will be collected in the future. The 3 top priorities for the database were deemed to be: 1) establish the representativeness of data collected on Special Olympics athletes compared to the general population with intellectual disabilities, 2) create a scientific advisory group for Special Olympics International, and 3) use the data to improve Special Olympics programs around the world. The Special Olympics Healthy Athletes database includes data not found in any other source and should be used, in partnership with Special Olympics International, by researchers to significantly increase our knowledge and understanding of the health of individuals with intellectual disabilities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Detection of alternative splice variants at the proteome level in Aspergillus flavus.
Chang, Kung-Yen; Georgianna, D Ryan; Heber, Steffen; Payne, Gary A; Muddiman, David C
2010-03-05
Identification of proteins from proteolytic peptides or intact proteins plays an essential role in proteomics. Researchers use search engines to match the acquired peptide sequences to the target proteins. However, search engines depend on protein databases to provide candidates for consideration. Alternative splicing (AS), the mechanism whereby the exons of pre-mRNAs can be spliced and rearranged to generate distinct mRNA and therefore protein variants, enables higher eukaryotic organisms, with only a limited number of genes, to have the requisite complexity and diversity at the proteome level. Multiple alternative isoforms from one gene often share common segments of sequences. However, many protein databases include only a limited number of isoforms to keep redundancy minimal. As a result, a database search might not identify a target protein even with high-quality tandem MS data and an accurate intact precursor ion mass. We computationally predicted an exhaustive list of putative isoforms of Aspergillus flavus proteins from 20 371 expressed sequence tags to investigate whether an alternative splicing protein database can assign a greater proportion of mass spectrometry data. The newly constructed AS database provided 9807 new alternatively spliced variants in addition to 12 832 previously annotated proteins. Searches of the existing tandem MS spectra data set using the AS database identified 29 new proteins encoded by 26 genes. Nine fungal genes appeared to have multiple protein isoforms. In addition to the discovery of splice variants, the AS database also showed potential to improve genome annotation. In summary, the introduction of an alternative splicing database helps identify more proteins and unveils more information about a proteome.
Cochrane Commentary: Probiotics For Prevention of Acute Upper Respiratory Infection.
Quick, Melissa
2015-01-01
Probiotics may improve a person's health by regulating their immune function. Some trials have shown that probiotic strains can prevent respiratory infections. Even though the previous version of our review showed benefits of probiotics for acute upper respiratory tract infections (URTIs), several new studies have been published. To assess the effectiveness and safety of probiotics (any specified strain or dose), compared with placebo, in the prevention of acute URTIs in people of all ages, who are at risk of acute URTIs. We searched CENTRAL (2014, Issue 6), MEDLINE (1950 to July week 3, 2014), EMBASE (1974 to July 2014), Web of Science (1900 to July 2014), the Chinese Biomedical Literature Database, which includes the China Biological Medicine Database (from 1978 to July 2014), the Chinese Medicine Popular Science Literature Database (from 2000 to July 2014) and the Masters Degree Dissertation of Beijing Union Medical College Database (from 1981 to July 2014). We also searched the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov for completed and ongoing trials on 31 July 2014. Randomised controlled trials (RCTs) comparing probiotics with placebo to prevent acute URTIs. Two review authors independently assessed the eligibility and quality of trials, and extracted data using the standard methodological procedures expected by The Cochrane Collaboration. We included 13 RCTs, although we could only extract data to meta-analyze 12 trials, which involved 3720 participants including children, adults (aged around 40 years) and older people. We found that probiotics were better than placebo when measuring the number of participants experiencing episodes of acute URTI [at least one episode: odds ratio (OR): 0.53; 95% CI = 0.37-0.76, P < .001, low quality evidence; at least three episodes: OR: 0.53; 95% CI = 0.36-0.80, P = .002, low quality evidence]; the mean duration of an episode of acute URTI [mean difference (MD): -1.89; 95% CI = -2.03 to -1.75, P < .001, low quality evidence]; reduced antibiotic prescription rates for acute URTIs (OR: 0.65; 95% CI = 0.45-0.94, moderate quality evidence) and cold-related school absence (OR: 0.10; 95% CI = 0.02-0.47, very low quality evidence). Probiotics and placebo were similar when measuring the rate ratio of episodes of acute URTI (rate ratio: 0.83; 95% CI = 0.66-1.05, P = .12, very low quality evidence) and adverse events (OR: 0.88; 95% CI = 0.65-1.19, P = .40, low quality evidence). Side effects of probiotics were minor and gastrointestinal symptoms were the most common. We found that some subgroups had a high level of heterogeneity when we conducted pooled analyses and the evidence level was low or very low quality. Probiotics were better than placebo in reducing the number of participants experiencing episodes of acute URTI, the mean duration of an episode of acute URTI, antibiotic use and cold-related school absence. This indicates that probiotics may be more beneficial than placebo for preventing acute URTIs. However, the quality of the evidence was low or very low. Copyright © 2015. Published by Elsevier Inc.
Probiotics for preventing acute upper respiratory tract infections.
Hao, Qiukui; Dong, Bi Rong; Wu, Taixiang
2015-02-03
Probiotics may improve a person's health by regulating their immune function. Some trials have shown that probiotic strains can prevent respiratory infections. Even though the previous version of our review showed benefits of probiotics for acute upper respiratory tract infections (URTIs), several new studies have been published. To assess the effectiveness and safety of probiotics (any specified strain or dose), compared with placebo, in the prevention of acute URTIs in people of all ages, at risk of acute URTIs. We searched CENTRAL (2014, Issue 6), MEDLINE (1950 to July week 3, 2014), EMBASE (1974 to July 2014), Web of Science (1900 to July 2014), the Chinese Biomedical Literature Database, which includes the China Biological Medicine Database (from 1978 to July 2014), the Chinese Medicine Popular Science Literature Database (from 2000 to July 2014) and the Masters Degree Dissertation of Beijing Union Medical College Database (from 1981 to July 2014). We also searched the World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP) and ClinicalTrials.gov for completed and ongoing trials on 31 July 2014. Randomised controlled trials (RCTs) comparing probiotics with placebo to prevent acute URTIs. Two review authors independently assessed the eligibility and quality of trials, and extracted data using the standard methodological procedures expected by The Cochrane Collaboration. We included 13 RCTs, although we could only extract data to meta-analyse 12 trials, which involved 3720 participants including children, adults (aged around 40 years) and older people. We found that probiotics were better than placebo when measuring the number of participants experiencing episodes of acute URTI (at least one episode: odds ratio (OR) 0.53; 95% confidence interval (CI) 0.37 to 0.76, P value < 0.001, low quality evidence; at least three episodes: OR 0.53; 95% CI 0.36 to 0.80, P value = 0.002, low quality evidence); the mean duration of an episode of acute URTI (mean difference (MD) -1.89; 95% CI -2.03 to -1.75, P value < 0.001, low quality evidence); reduced antibiotic prescription rates for acute URTIs (OR 0.65; 95% CI 0.45 to 0.94, moderate quality evidence) and cold-related school absence (OR 0.10; 95% CI 0.02 to 0.47, very low quality evidence). Probiotics and placebo were similar when measuring the rate ratio of episodes of acute URTI (rate ratio 0.83; 95% CI 0.66 to 1.05, P value = 0.12, very low quality evidence) and adverse events (OR 0.88; 95% CI 0.65 to 1.19, P value = 0.40, low quality evidence). Side effects of probiotics were minor and gastrointestinal symptoms were the most common. We found that some subgroups had a high level of heterogeneity when we conducted pooled analyses and the evidence level was low or very low quality. Probiotics were better than placebo in reducing the number of participants experiencing episodes of acute URTI, the mean duration of an episode of acute URTI, antibiotic use and cold-related school absence. This indicates that probiotics may be more beneficial than placebo for preventing acute URTIs. However, the quality of the evidence was low or very low.
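Summary estimates like the OR 0.53 (95% CI 0.37 to 0.76) above come from pooling per-trial effect estimates. The sketch below shows fixed-effect inverse-variance pooling of log odds ratios on three invented trials; Cochrane reviews often use Mantel-Haenszel weighting instead, so this is one standard approach rather than the review's exact method.

```python
# Sketch of fixed-effect inverse-variance pooling of odds ratios;
# the three trials are invented for illustration.
import math

def pooled_or(trials):
    """trials: (events_probiotic, n_probiotic, events_placebo, n_placebo) tuples."""
    num = den = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))
        w = 1.0 / (1/a + 1/b + 1/c + 1/d)   # inverse of the log-OR variance
        num += w * log_or
        den += w
    log_pooled, se = num / den, math.sqrt(1 / den)
    lo, hi = math.exp(log_pooled - 1.96 * se), math.exp(log_pooled + 1.96 * se)
    return math.exp(log_pooled), lo, hi

print(pooled_or([(20, 100, 35, 100), (15, 80, 25, 80), (30, 150, 45, 150)]))
```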
Sharma, Ravi; Lebrun-Harris, Lydie A.; Ngo-Metzger, Quyen
2014-01-01
Objective: Determine the association between access to primary care by the underserved and Medicare spending and clinical quality across hospital referral regions (HRRs). Data Sources: Data on elderly fee-for-service beneficiaries across 306 HRRs came from CMS’ Geographic Variation in Medicare Spending and Utilization database (2010). We merged data on the number of health center patients (HRSA’s Uniform Data System) and the number of low-income residents (American Community Survey). Study Design: We estimated access to primary care in each HRR by “health center penetration” (health center patients as a proportion of low-income residents). We calculated total Medicare spending (adjusted for population size, local input prices, and health risk). We assessed clinical quality by preventable hospital admissions, hospital readmissions, and emergency department visits. We sorted HRRs by health center penetration rate and compared spending and quality measures between the high- and low-penetration deciles. We also employed linear regressions to estimate spending and quality measures as a function of health center penetration. Principal Findings: The high-penetration decile had 9.7% lower Medicare spending ($926 per capita, p=0.01) than the low-penetration decile, and no different clinical quality outcomes. Conclusions: Compared with elderly fee-for-service beneficiaries residing in areas with low penetration of health center patients among low-income residents, those residing in high-penetration areas may accrue Medicare cost savings. Limited evidence suggests that these savings do not compromise clinical quality. PMID:25243096
Startsev, N; Dimov, P; Grosche, B; Tretyakov, F; Schüz, J; Akleyev, A
2015-01-01
To follow up populations exposed to several radiation accidents in the Southern Urals, a cause-of-death registry was established at the Urals Center, capturing deaths in the Chelyabinsk, Kurgan and Sverdlovsk regions since 1950. When registering deaths over such a long time period, quality measures need to be in place to maintain quality and reduce the impact of individual coders as well as quality changes in death certificates. To ensure the uniformity of coding, a method for semi-automatic coding was developed, which is described here. Briefly, the method is based on a dynamic thesaurus, database-supported coding and parallel coding by two different individuals. A comparison of the proposed method for organizing the coding process with the common procedure of coding showed good agreement, with, at the end of the coding process, 70-90% agreement for the three-digit ICD-9 rubrics. The semi-automatic method ensures a sufficiently high quality of coding while at the same time providing an opportunity to reduce the labor intensity inherent in the creation of large-volume cause-of-death registries.
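The parallel-coding element of the method reduces to comparing two coders' ICD-9 assignments at the three-digit rubric level and routing disagreements to review. The sketch below illustrates that comparison; the record IDs and codes are invented.

```python
# Sketch of the parallel-coding consistency check: two coders assign ICD-9
# codes independently; disagreements at the three-digit rubric go to review.

def compare_coding(coder_a, coder_b):
    """Agreement rate on three-digit rubrics plus the records needing review."""
    assert coder_a.keys() == coder_b.keys()
    disagreements = {rid for rid in coder_a
                     if coder_a[rid][:3] != coder_b[rid][:3]}
    rate = 1 - len(disagreements) / len(coder_a)
    return rate, disagreements

a = {"r1": "410.9", "r2": "162.3", "r3": "431"}
b = {"r1": "410.0", "r2": "163.0", "r3": "431"}
print(compare_coding(a, b))  # -> (0.66..., {'r2'}); r1 matches at three digits
```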
Recchia, Holly E; Howe, Nina
2009-08-01
This study extends research on sibling conflict strategies and outcomes by examining unique and interactive associations with age, relative birth order, sibling relationship quality, and caregivers' interventions into conflict. Each of 62 sibling dyads (older sibling mean age = 8.39 years; younger sibling mean age = 6.06 years) discussed 1 recurring conflict alone (dyadic negotiation) and a 2nd conflict with their primary parental caregiver (triadic negotiation). Negotiations were coded for children's conflict strategies, outcomes, and caregiver interventions; each family member provided ratings of sibling relationship quality. Results revealed that age was associated with siblings' constructive strategies, particularly in the dyadic negotiation. With age controlled, younger siblings referred more frequently to their own perspective. Caregivers' future orientation in the triadic negotiation was associated with children's future orientation in the dyadic negotiation; however, this association was most evident when sibling relationship quality was high. Similarly, caregivers' past orientation was positively associated with dyadic compromise, especially when relationship quality was high. Results reveal the value of simultaneously considering associations among parental, affective, and developmental correlates of sibling conflict strategies. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2008-08-01
This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.
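The fusion step combines the two match scores into a single decision score. The paper describes a learned fusion rule; the sketch below substitutes a fixed weighted sum to show the shape of the computation, with the weight and threshold as arbitrary illustrative values.

```python
# Minimal sketch of fusing textural and topological match scores; a fixed
# weighted sum stands in for the paper's learned fusion rule.

def fuse_scores(textural, topological, w=0.7, threshold=0.5):
    """Combine two normalized match scores in [0, 1]; higher means more similar."""
    fused = w * textural + (1 - w) * topological
    return fused, fused >= threshold

print(fuse_scores(0.82, 0.41))  # -> (0.697, True) with the assumed weight
```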